• SeizeTheBeans [comrade/them, they/them]@hexbear.net
    6 months ago

    “AI” was just a marketing term to hype LLMs anyway. The AI in your favorite computer game was no more likely to gain self-awareness than LLMs were or are, and anyone who looked seriously at what they were from the start, and wasn’t invested (literally financially, if not emotionally) in hyping these things up, knew that LLMs were not and never would be the road to AGI. They’re just glorified chatbots, to use a common but accurate phrase. It’s good to see some of the hypesters finally admitting this too, I suppose, now that the bubble popping is imminent.

    There are plenty of things to be concerned about as far as LLMs go, but they are all social concerns: how our capitalist overlords want to force reliance on them, and use them to control, punish, and replace labor. It was never a reasonable concern that they were taking us down the path to Skynet or the spooky Singularity.

    • Aceticon@lemmy.dbzer0.com
      6 months ago

      Personally, in all fairness, at one point I thought that human intelligence might turn out to be pattern matching (which is what neural networks, the technology underlying LLMs, do very well). I wasn’t cheering for LLMs, but I did dare hope.

      At this point I think LLMs have shown beyond doubt that detecting patterns and generating content according to said patterns doesn’t at all add up to intelligence.

      No doubt other techniques in the broader machine learning universe will keep being used for specific, well-defined situations where all you need is pattern detection, but by themselves they will never lead us to AGI, IMHO.
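      To make the “detecting patterns and generating content according to said patterns” point concrete, here is a minimal, deliberately toy sketch (my own illustration, not anything from the thread): a bigram model that counts which word follows which, then generates text by always emitting the most frequent continuation. Real LLMs learn vastly richer statistics with neural networks, but the generate-from-detected-patterns loop is the same in spirit.

      ```python
      from collections import Counter, defaultdict

      def train_bigram(text):
          """Pattern detection: count which word follows which."""
          words = text.split()
          follows = defaultdict(Counter)
          for a, b in zip(words, words[1:]):
              follows[a][b] += 1
          return follows

      def generate(follows, start, n):
          """Generation: repeatedly emit the most frequent continuation."""
          out = [start]
          for _ in range(n):
              nxt = follows.get(out[-1])
              if not nxt:
                  break
              out.append(nxt.most_common(1)[0][0])
          return " ".join(out)

      # Tiny toy corpus purely for illustration.
      model = train_bigram("the cat sat on the mat and the cat sat down")
      print(generate(model, "the", 2))  # "the cat sat"
      ```

      After “the”, the model has seen “cat” twice and “mat” once, so it picks “cat”; no understanding is involved, only frequency.
      
      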

    • Perspectivist@feddit.uk
      6 months ago

      AGI doesn’t imply consciousness or self-awareness, and the term artificial intelligence was coined decades before large language models even existed.

      • SeizeTheBeans [comrade/them, they/them]@hexbear.net
        6 months ago

        AGI doesn’t imply consciousness or self-awareness

        Technically no, but the fear being expressed in other comments is emblematic of the fear of AI gaining a conscious will to defy humanity and a desire to harm it. There are also strong philosophical arguments suggesting that the ability to “understand, learn, and perform any intellectual task a human being can” (the core attributes defining AGI) may require some form of genuine sentience or consciousness, and that remains an open question.

        and the term artificial intelligence was coined decades before large language models even existed

        I am well aware of that, which is why I pointed out that using it as a synonym for LLMs was a marketing scheme.

        • Perspectivist@feddit.uk
          6 months ago

          LLMs are AI, though. Not generally intelligent, but machine learning systems are AI by definition. “Plant” is not a synonym for “spruce,” but it’s not wrong to call a spruce a plant.

          The “fear expressed in other comments” was written by me, and it has nothing to do with AI becoming conscious. Humans are the most intelligent species on Earth, and our mere existence is dangerous to every other species - regardless of intent. We don’t wipe out anthills at construction sites because we want to harm ants; it just happens as a consequence of what we do.

    • CanadaPlus@lemmy.sdf.org
      6 months ago

      awareness than LLM’s were or are, and anyone who looked seriously at what they were from the start and wasn’t invested (literally financially, if not emotionally) in hyping these things up, knew it was obvious that LLM’s were not and never would be the road to AGI.

      It was a total black box that absorbed the kind of common-sense knowledge which had been coveted in AI for a half-century, and it kept passing new tests as it scaled up. It was not obvious it would stop getting smarter. I have no financial interest in LLMs, I don’t use them much, and I fully expect they’re about as good as they’ll get now.

      Comparing it to a video game AI is nonsense. How much do you know about the inner workings involved?