I didn’t think I’d sit and watch this whole thing, but it is a very interesting conversation. Near the end the author says something like, “I know people in the industry who work in these labs who act like they’ve seen a ghost. They come home from work, struggle with the work they do, and ask me what they should do. I tell them they should quit then, and then they stop asking me for advice.”

I do wonder at times if we would even believe a whistleblower should one come forward, telling us about the kinds of things they do behind closed doors. We only get to see the marketable end product, the one no one can quite explain how it does what it does. We don’t get to see the things left on the cutting room floor.

Also, it’s true: these models are more accurately described as grown, not built. Which, in a way, is a strange thing to consider, because we understand what it means to build something and to grow something. You can grow something without understanding how it grows. You can’t build something without understanding how you built it.

And when you are trying to control how things grow, you sometimes get things you didn’t intend to get, even if you also got the things you did intend to get.

  • Le_Wokisme [they/them, undecided]@hexbear.net · 2 months ago

    In principle we could accidentally make a non-human intelligence, because our way (animals’ too) of having brains made of meat and electrochemistry probably isn’t the only formulation for it, but the odds of that are incomprehensibly small, and LLMs aren’t any more of a precursor to that than any other tech thing we have.

    • darkmode [comrade/them]@hexbear.net · 2 months ago

      I couched that whole comment on the human thing because the current sales pitch is that this stuff is replacing people. Could more powerful computers and some kind of theoretical algorithm surpass LLMs? Why not? But I don’t think that animals operate based on pure statistics and linear algebra.

      • Le_Wokisme [they/them, undecided]@hexbear.net · 2 months ago

        Sorry, I meant that a machine intelligence doesn’t have to be a copy of us, which means we don’t strictly need a complete understanding of our own minds to create one, not that we could make a robot animal with pure math.