Eliezer Yudkowsky @ESYudkowsky If you’re not worried about the utter extinction of humanity, consider this scarier prospect: An AI reads the entire legal code – which no human can know or obey – and threatens to enforce it, via police reports and lawsuits, against anyone who doesn’t comply with its orders. Jan 3, 2024 · 7:29 PM UTC

  • self@awful.systemsM ·

    An AI reads the entire legal code – which no human can know or obey – and threatens to enforce it, via police reports and lawsuits, against anyone who doesn’t comply with its orders.

    what. eliezer what in the fuck are you talking about? this is the same logic that sovereign citizens use to pretend the law and courts are bound by magic spells that can be undone if you know the right words

    • self@awful.systemsM ·

      Well, if you think that’s a dumb scenario, by all means go back to worrying about the utter extinction of humanity!

      no thanks? like, I’m seriously having trouble understanding what yud’s even going for here. “if you think this utter bullshit I made up on the spot is stupid, please return to the older bullshit I’ve been feeding you?”

      That makes it significantly less threatening

      I mean, to you and me, yes, but there’s lakes and seas of people in the world who think that superintelligences are only allowed to attack them in small, survivable ways that they understand.

      the problem isn’t that I’ve said something that doesn’t even work on a surface level, it’s that people aren’t impressed when I ramble about extraordinarily unlikely nonsense anymore

      is yud ok? I feel like this is incoherent and shallow even by his standards

      • froztbyte@awful.systems ·

        Maybe he’s having an Interaction with The Law and finding out that it isn’t in fact some perfectly rational sphere of uniform distribution but is in fact made of (gasp, horror, revulsion) human experience

        He strikes me as exactly the kind of person that’d vaguepost tangentially instead of saying “hmm well fuck, I’m getting sued”. At least until waaaaaay down the line

        (this is conjecture, of course, just to be clear)

  • blakestacey@awful.systemsM ·

    If you’re not worried about the utter extinction of humanity, consider this scarier prospect: An AI reads the entirety of AO3, which no human can comprehend, and threatens to leave scathing comments on your self-insert fic

  • swlabr@awful.systems ·

    “[ignoring all other scary prospects like irreversible climate change or a third world war etc.] consider this scarier prospect: An AI” - AI doomers in a nutshell

    • swlabr@awful.systems ·

      Trying to stoke fear of bureaucracy is classic annoying libertarian huckster AKA yud energy

  • locallynonlinear@awful.systems ·

    Why does it feel like Yud is a magician trying to coax an increasingly uninterested audience by pulling handkerchiefs from his sleeve, because his big saw-the-assistant-in-half trick doesn’t get applause in 2024?

  • bitofhope@awful.systems ·

    Both this new dumb shit and the extinction risk are predicated on the concept of omnipotent AI, which he just takes as a given. Now with just an added layer of dumb. Oh no, the God AI will not kill me outright, just subject me to inscrutable matrices of bureaucracy!

  • Soyweiser@awful.systems · (edited)

    When all you have is computer code, all mentions of code look like computer code. (see DNA, and now the law).

    Anyway, the law isn’t a video game, you cannot just go ‘negative objection!’ and cause an underflow in objections.

    (An intelligent AGI would prob understand this, and if it doesn’t it prob just sucks (and is more AI than AGI) and lawyers/judges would object. I know for a fact that people in law have been thinking about subjects like this (automation of the law) for 25 years at least. I have no idea where the discussions went, but they prob have a lot higher quality than Yudkowsky’s writings about it, so I suggest anybody interested try contacting the law profs of a local university.)

    this sounds net good because then we will simplify the law to one that makes sense and not one where literally everyone is a criminal

    if humanity was capable of doing that we’d have done it already

    AAAA (I also wonder about Gödel here)

    E: I also note that Yud and most of the thread have now given up on calling AGI AGI and are just calling it AI. Another point scored for learning to reason better using Rationalism. Vaguely related link (I only mention it because I liked the term “epistemic injustice”, and it’s about our current wave of AI innovation).

  • Rob Bos@lemmy.ca ·

    Xitter share and like numbers seem to be smaller and smaller lately.

  • gerikson@awful.systems ·

    “Well, here we are facing the utter extinction of humanity but at least we don’t have to pay taxes or wear seatbelts”.

  • skillissuer ·

    instead of utter destruction of humanity, consider this scarier prospect: me needing to get a real job