Am I the only one getting agitated by the word AI (Artificial Intelligence)?

Real AI does not exist yet,
atm we only have LLMs (Large Language Models),
which do not think on their own,
but pass Turing tests
(i.e. fool humans into believing that they can think).

Imo AI is just a marketing buzzword,
created by rich capitalistic a-holes,
who already invested in LLM stocks,
and now are looking for a profit.

  • Fedizen@lemmy.world

    On the other hand, calculators can do things more quickly than humans; this doesn’t mean they’re intelligent or even on the intelligence spectrum. They take an input and provide an output.

    The idea of applying intelligence to a calculator is kind of silly. This is why I still prefer words like “algorithms” to “AI”, as it’s not making a “decision”. It’s making a calculation; it’s just making it very fast, based on a model, and it’s prompt driven.

    Actual intelligence doesn’t just shut off the moment its prompted response ends - it keeps going.

    • 0ops@lemm.ee

      I personally wouldn’t consider a neural network an algorithm, as chance is a huge factor: whether you’re training or evaluating, you’ll never get quite the same results twice.
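      (As a sketch of that stochasticity — this is a toy illustration, not any particular framework’s training loop: even a one-weight “network” fitted twice to the same noisy data from different random seeds lands on different final weights.)

```python
import random

def train_tiny_net(seed, steps=500):
    """Fit y = w * x to noisy samples of y = 2x, from a random start."""
    rng = random.Random(seed)
    w = rng.uniform(-1.0, 1.0)            # random initialization
    for _ in range(steps):
        x = rng.choice([1, 2, 3, 4, 5])   # stochastic sample order
        y = 2 * x + rng.gauss(0, 0.5)     # noisy training target
        w -= 0.01 * (w * x - y) * x       # SGD step on squared error
    return w

# Both runs land near w = 2, but never on exactly the same value.
w_a, w_b = train_tiny_net(seed=1), train_tiny_net(seed=2)
```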

    • PrinceWith999Enemies@lemmy.world

      I think we’re misaligned on two things. First, I’m not saying doing something quicker than a human can is what comprises “intelligence.” There’s an uncountable number of things that can do some function faster than a human brain, including components of human physiology.

      My point is that intelligence as I define it involves adaptation for problem solving on the part of a complex system in a complex environment. The speed isn’t really relevant, although it’s obviously an important factor in artificial intelligence, which has practical and economic incentives.

      So I again return to my question of whether we consider a dog or a dolphin to be “intelligent,” or whether only humans are intelligent. If it’s the latter, then we need to be much more specific than I’ve been in my definition.

      • Fedizen@lemmy.world

        What I’m saying is current computer “AI” isn’t on the spectrum of intelligence while a dog or grasshopper is.

        • PrinceWith999Enemies@lemmy.world

          Got it. As someone who has developed computational models of complex biological systems, I’d like to know specifically what you believe the differences to be.

          • Fedizen@lemmy.world

            It’s the ‘why’. A robot will only teach itself to walk because a human predefined that outcome. A human learning to walk is maybe not even intelligence — motor functions even operate in a separate area of the brain from executive function — and I’d argue that defining the tasks to accomplish and weighing the risks is the intelligent part. Humans do all of that for the robot.

            Everything we call “AI” now should be called “EI”, or “extended intelligence”, because humans are defining both the goals and the resources in play to achieve them. Intelligence requires a degree of autonomy.

            • PrinceWith999Enemies@lemmy.world

              Okay, I think I understand where we disagree. There isn’t a “why” either in biology or in the types of AI I’m talking about. In a more removed sense, a CS team at MIT said “I want this robot to walk. Let’s try letting it learn by sensor feedback” whereas in the biological case we have systems that say “Everyone who can’t walk will die, so use sensor feedback.”

              But going further - do you think a gazelle isn’t weighing risks while grazing? Do you think an ant colony’s complex behaviors aren’t weighing risks when it decides to migrate or to send off additional colonies? They’re indistinguishable mathematically - it’s just that one is learning evolutionarily and the other, at least in principle, is able to learn within its own lifetime.
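              (That mathematical indistinguishability can be sketched — toy numbers and function names are mine, purely for illustration: a mutate-and-select “evolutionary” learner and a feedback-following “individual” learner optimizing the same objective arrive at the same optimum by different routes.)

```python
import random

def fitness(w):
    """One shared objective for both learners: peak at w = 3."""
    return -(w - 3.0) ** 2

def evolve(generations=200, seed=0):
    """Evolutionary learner: random mutation, and the worst half 'die'."""
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(20)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:10]                  # selection
        pop = survivors + [w + rng.gauss(0, 0.1) for w in survivors]  # mutation
    return max(pop, key=fitness)

def learn(steps=200, w=-10.0, lr=0.1):
    """Individual learner: follow the feedback signal (gradient) directly."""
    for _ in range(steps):
        w += lr * (-2.0 * (w - 3.0))          # gradient of fitness at w
    return w

# Different routes, same destination: both end up near the optimum w = 3.
```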

              Is the goal of reproductive survival not externally imposed? I can’t think of any example of something more externally imposed, in all honesty. I as a computer scientist might want to write a chatbot that can carry on a conversation, but I, as a human, also need to learn how to carry on a conversation. Can we honestly say that the latter is self-directed when all of society is dictating how and why it needs to occur?

              Things like risk assessment are already well mathematically characterized. The adaptive processes we write to learn and adapt to these environmental factors are directly analogous to what’s happening in neurons and genes. I’m really just not seeing the distinction.
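              (The standard characterization is expected utility — the payoffs and probabilities below are assumed toy numbers, just to show the shape of the calculation a grazing animal implicitly performs.)

```python
def expected_utility(lottery):
    """lottery: list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in lottery)

# Toy numbers, assumed purely for illustration:
open_field = [(0.9, 10.0), (0.1, -100.0)]  # rich grass, 10% predator risk
cover      = [(1.0, 4.0)]                  # sparse grass, but safe

# EU(open_field) = 0.9*10 - 0.1*100 = -1, EU(cover) = 4 -> graze in cover.
best = max([open_field, cover], key=expected_utility)
```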