Computer pioneer Alan Turing’s remarks in 1950 on the question, “Can machines think?” were misquoted, misinterpreted, and morphed into the so-called “Turing Test”. The modern version says that if you can’t tell the difference between communicating with a machine and communicating with a human, the machine is intelligent. What Turing actually said was that by the year 2000 people would be using words like “thinking” and “intelligent” to describe computers, because interacting with them would be so similar to interacting with people. Computer scientists do not sit down and say, alrighty, let’s put this new software to the Turing Test - by Grabthar’s Hammer, it passed! We’ve achieved Artificial Intelligence!

  • Kazumara

    If anything, passing the Turing test would be a necessary condition, but never a sufficient one.

  • deranger@sh.itjust.works

    I think the Chinese room argument published in 1980 gives a pretty convincing reason why the Turing test doesn’t demonstrate intelligence.

    The thought experiment starts by placing a computer that can converse perfectly in Chinese in one room, and a human who knows only English in another, with a door separating them. Chinese characters are written on a piece of paper and slipped underneath the door, and the computer replies fluently, slipping its reply back underneath the door. The human is then given English instructions that replicate the instructions and function of the computer program for conversing in Chinese. The human follows the instructions, and both rooms can converse perfectly in Chinese, but the human still does not actually understand the characters; they are merely following instructions to converse. Searle states that the computer and the human are doing identical tasks: following instructions without truly understanding or “thinking”.

    Searle asserts that there is no essential difference between the roles of the computer and the human in the experiment. Each simply follows a program, step-by-step, producing behavior that makes them appear to understand. However, the human would not be able to understand the conversation. Therefore, he argues, it follows that the computer would not be able to understand the conversation either.
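
    As a rough illustration of the kind of rule-following the room involves (a toy sketch with hypothetical placeholder rules, not Searle’s actual formulation), the operator does something like this: match incoming symbols against a table and copy out the paired reply, without ever consulting meaning.

    ```python
    # A toy "Chinese room": the operator mechanically matches incoming
    # symbols against a rulebook and copies out the paired reply.
    # The entries are hypothetical placeholders, not real dialogue rules.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice."
    }

    def operate_room(symbols: str) -> str:
        # The operator only compares the *shapes* of the symbols;
        # at no point is their meaning consulted.
        return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "please say that again"

    print(operate_room("你好吗？"))  # a fluent reply, zero understanding
    ```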

    • 8baanknexer@lemmy.world

      I am sceptical of this thought experiment as it seems to imply that what goes on within the human brain is not computable. For reference: every single physical effect that we have thus far discovered can be computed/simulated on a Turing machine.

      The argument itself is also riddled with vagueness and handwaving: it gives no definition of understanding, yet presumes it is something with a definite location. It may also well be that taking the time to run the program inevitably produces an understanding of Chinese by the time even the first word is returned. Remember: executing these instructions could take billions of years for the presumably immortal human in the room, and we expect the human to be so thorough that they execute each of the trillions of instructions without error.

      Indeed, the Turing test is insufficient to test for intelligence, but the statement that the Chinese room argument tries to support is much, much stronger than that. It essentially argues that computers can’t be intelligent at all.

    • kromem@lemmy.world

      The problem with the experiment is that there exist sets of instructions that cannot be completed without understanding, because each iteration depends conditionally on the state built up in earlier iterations.

      In that case, only agents that actually understand the state as described in the Chinese text would be able to continue successfully.

      So it’s a great experiment for the solipsism of understanding as it relates to following pure functional operations, but not for functions with state-changing side effects, where future results depend on understanding the current state.

      There’s a pretty significant body of evidence by now that transformers can in fact ‘understand’ in this sense: interpretability research around neural network features in sparse autoencoder (SAE) work, linear representations of world models starting with the Othello-GPT work, and the Skill-Mix work, where GPT-4 and later models combine different skills at a level of complexity that is beyond reasonable statistical chance unless they understand them.

      If the models were just Markov chains (where only the most recent state matters, not the history), the Chinese room would be very applicable. But pretty much by definition, transformer self-attention violates the Markov property.
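
      A minimal numpy sketch of that distinction (toy sizes and random weights, nothing from a real model): a Markov chain’s output is fixed entirely by the most recent token, while a self-attention layer’s output at the last position depends on the whole prefix.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      V, d = 5, 4                                # toy vocabulary and embedding sizes
      emb = rng.normal(size=(V, d))              # random token embeddings
      Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
      transition = rng.random((V, V))            # Markov transition table

      def markov_next(tokens):
          # Markov property: only the current token matters.
          return transition[tokens[-1]]

      def attention_next(tokens):
          # Single-head self-attention: every prefix position contributes.
          x = emb[tokens]                        # (T, d)
          q, k, v = x @ Wq, x @ Wk, x @ Wv
          scores = q @ k.T / np.sqrt(d)
          w = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
          return (w @ v)[-1]                     # last position attends to all

      # Same last token, different histories:
      print(np.allclose(markov_next([0, 1, 2]), markov_next([3, 4, 2])))        # True: history ignored
      print(np.allclose(attention_next([0, 1, 2]), attention_next([3, 4, 2])))  # False: full context matters
      ```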

      TL;DR: It’s a very obsolete thought experiment whose continued misapplication flies in the face of empirical evidence at least since around early 2023.

      • Blue_Morpho@lemmy.world

        It was invalid when he originally proposed it, because it assumes a unique mystical ability for the atoms that make up our brains. For Searle, the atoms in our brain have a quality that cannot be duplicated by other atoms, simply because they aren’t in what he recognizes as a human being.

        That’s why he claims the machine translation system is incapable of understanding: because the claim that it understands assumes such understanding is possible.

        It’s self-contradictory. He won’t consider it possible because it hasn’t been shown to be possible.

      • deranger@sh.itjust.works

        The Chinese room experiment only demonstrates how the Turing test isn’t valid. It’s got nothing to do with LLMs.

        I would be curious about that significant body of research though, if you’ve got a link to some papers.

        • DragonTypeWyvern@midwest.social

          No, it doesn’t render the Turing Test invalid, because the premise of the test is not to prove that machines are intelligent but to point out that if you can’t tell the difference, you must either assume they are or risk becoming a monster.

          • CheeseNoodle@lemmy.world

            Okay, but in casual conversation I probably couldn’t spot a really good LLM on a thread like this. On the back end, though, that LLM is completely incapable of learning or changing in any meaningful way. It’s not quite a Chinese room, as previously mentioned, but it’s still a fixed model that can’t learn or understand context; even with infinite context memory it could still only interact with that data within the confines of the original model.

            E.g., I can train the model to understand a spoon and a fork, but it will never come up with the idea of a spork unless I re-train it to include the concept of sporks or directly tell it. Even after I tell it what a spork is, it can’t infer the properties of a spork from those of a fork or a spoon without additional leading prompts from me.

            • Blue_Morpho@lemmy.world

              even with infinite context memory

              Interestingly, infinite context memory is functionally identical to learning.

              It seems wildly different, but it’s the same as if you had already learned absolutely everything there is to know. There is absolutely nothing you could do or ask for which the infinite context memory doesn’t already have a stored response ready to go.
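
              A toy way to see the equivalence (a hypothetical, necessarily finite example): an exhaustive lookup table is behaviorally indistinguishable from a system that actually “learned” the same mapping.

              ```python
              # A system that "learned" a rule:
              def learned(n: int) -> int:
                  return n * n

              # A system that merely stored every answer up front
              # (pretend the table covers all possible inputs):
              lookup = {n: n * n for n in range(10_000)}

              def memorized(n: int) -> int:
                  return lookup[n]  # no rule, just retrieval

              # From the outside, the two are indistinguishable:
              print(all(learned(n) == memorized(n) for n in range(10_000)))  # True
              ```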

          • deranger@sh.itjust.works

            The premise of the test is to determine if machines can think. The opening line of Turing’s paper is:

            I propose to consider the question, ‘Can machines think?’

            I believe the Chinese room argument demonstrates that the Turing test is not valid for determining if a machine has intelligence. The human in the Chinese room experiment is not thinking to generate their replies, they’re just following instructions - just like the computer. There is no comprehension of what’s being said.

    • eggymachus@sh.itjust.works

      That just shows a fundamental misunderstanding of levels. Neither the computer nor the human understands Chinese. Both programs, however, do.

      • taladar@sh.itjust.works

        The programs don’t really understand Chinese either. They are just filled with an understanding that is provided to them up-front. I mean that they do not derive that understanding from something they perceive, where there was no understanding before; they don’t draw conclusions, don’t learn words from context… the way an intelligent being would when learning a language.

        • eggymachus@sh.itjust.works

          Others have provided better answers than mine, pointing out that the Chinese room argument only makes sense if your premise is that a “program” is qualitatively different from what goes on in a human brain/mind.

        • iopq@lemmy.world

          Programs clearly understand words from context. Try giving one translation tasks: it can properly translate “tear” as either 泪水 (tears from crying) or 撕破 (to rend), based on context.
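
          A quick way to check this yourself (a sketch using the Hugging Face transformers library and the public Helsinki-NLP/opus-mt-en-zh model; the exact characters returned will vary by model and version):

          ```python
          # Context-dependent translation of the ambiguous word "tear".
          from transformers import pipeline

          translate = pipeline("translation", model="Helsinki-NLP/opus-mt-en-zh")

          for sentence in [
              "A tear rolled down her cheek.",     # expect a 泪-style rendering
              "Be careful not to tear the page.",  # expect a 撕-style rendering
          ]:
              print(translate(sentence)[0]["translation_text"])
          ```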

    • Blue_Morpho@lemmy.world

      Searle argued from his personal truth that a mystic soul is responsible for sapience.

      His argument against a computer system having consciousness is this:

      " In order for this reply to be remotely plausible, one must take it for granted that consciousness can be the product of an information processing “system”, and does not require anything resembling the actual biology of the brain."

      -Searle

      https://en.m.wikipedia.org/wiki/Chinese_room

    • LovableSidekick@lemmy.worldOP

      Brilliant thought experiment. I never heard of it before. It does seem to describe what’s happening - if only there were a way to turn it into a meme so modern audiences could understand it.

      • Aatube@kbin.melroy.org

        I mean it was featured in Zero Escape VLR, which is a pretty popular visual novel escape room, and used to help explain a major character.

  • gandalf_der_12te

    oh come on

    people are in denial that their way of life - getting paid for intellectual output - is coming to an end. it’s not the case that AI just produces slop. sure, it does, but so do a lot of humans. you know all the memes about human workers having imposter syndrome - feeling as if they don’t even really know what they’re doing? AI only has to produce higher-quality output than them. and it definitely can.

    the reason why people shit on AI so hard is that they’re afraid - afraid that AI will “out-compete” them. in that sense, you could also call it “jealousy”, like a woman fearing she’s being replaced by another woman.

    people need to respect themselves and others enough to agree to survive - and even thrive - in the absence of productive output. in other words, only if you can allow your fellow men (and women) a living income without work are you truly in a position to live comfortably in the future.

    • tb_@lemmy.world

      I don’t entirely disagree with the comic at the end, but given the current systems in place, I doubt the robots will be used to support the masses; more likely they’ll enrich the few.

    • AngryRobot@lemmy.world

      My dude, our billionaire overlords are pushing AI to save themselves money. They won’t be willing to pay for something like UBI. They spent a fuckton of money in this last election to hand the presidency to someone who only cares about billionaires and their profits.

      • daddy32@lemmy.world

        You are both right. But the parent is exhibiting too much techno-optimism when they should be focusing on capitalism-pessimism instead.

    • UnderpantsWeevil@lemmy.world

      No idea why this is getting downvoted. You can argue over the exact practicality of the current iteration of AI, but this is a proven good take on automation generally speaking.

      • kungen@feddit.nu

        Because they’re saying that people are afraid of AI taking their jobs - as if the majority of people enjoy their jobs? People don’t want to be without an income. And as if our benevolent oligarchs will suddenly give us even the smallest chance of getting some kind of basic income?

        • UnderpantsWeevil@lemmy.world

          as if the majority of people enjoy their jobs?

          The enshittification of employment isn’t necessary. And having a role in how your society functions is necessary for any kind of democratic control of the economy. You can’t just be a consumer, on the outside looking in.

          Automating away drudgery is generally good for an economy. Automating away control is what sucks.

          As if our benevolent oligarchs will suddenly give us even the smallest chance of getting some kind of basic income?

          The structures of basic income are already in place. We have social security. We have pensions. We have annuities. The struggle is in if and how we continue to fund them.

          Since Reagan, the answer to funding basic income schemes has been to displace the cost from higher income earners to younger workers. Now that we’ve drained that well, there’s definitely a push to simply dissolve these systems entirely.

          But it’s hardly a given, any more than the Reagan Era was some historical inevitability. Americans can change course if enough of them can unify around an opposition.

  • br3d@lemmy.world

    I can’t remember who said it, but somebody pointed out that the version of the Turing Test as we all remember it is ridiculous: it’s basically saying that the test of intelligence is “Can a chatbot fool one idiot?”

  • shalafi@lemmy.world

    Y’all might enjoy reading Blindsight. It really digs into questions of sapience, intelligence, etc. Is the evolutionary cost worth it? I’ve read it 15+ times. Because I’m a psycho.

    “You think we’re nothing but a Chinese Room,” Rorschach sneered. “Your mistake, Theseus.”

    And suddenly Rorschach snapped into view—no refractory composites, no profiles or simulations in false color. There it was at last, naked even to Human eyes.

    Imagine a crown of thorns, twisted, dark and unreflective, grown too thickly tangled to ever rest on any human head. Put it in orbit around a failed star whose own reflected half-light does little more than throw its satellites into silhouette. Occasional bloody highlights glinted like dim embers from its twists and crannies; they only emphasized the darkness everywhere else.

    Imagine an artefact that embodies the very notion of torture, something so wrenched and disfigured that even across uncounted lightyears and unimaginable differences in biology and outlook, you can’t help but feel that somehow, the structure itself is in pain.

    Now make it the size of a city.

    • inconel@lemmy.ca

      Glad to see mentions of Peter Watts. His view of humanity is dry and his take on the real world is even grimmer, but it’s intriguing and backed by science. Also, I’m one of the people dying to know what he said at the end of his lecture.

    • audaxdreik@pawb.social

      This is the book that introduced me to the Chinese Room thought experiment and is the first thing I began to think of when the recent AI trend started to make a splash.

      Peter Watts is great and though it’s not related to the topic at hand, I cannot recommend Starfish enough. Dark, haunting, and psychological. (It’s apparently part of a series but I never carried on)

  • seven_phone@lemmy.world

    I always saw it more as pragmatism about other humans, possibly extended to machine intelligence by association. When you talk with another person, you have no real way of knowing that they are a separate conscious entity, intelligent and self-aware in the way you perceive yourself to be. But if they talk and act in a way that suggests it, the best and simplest working practice is to assume it. This same practicality should extend to artificial intelligence as applicable.

    • LovableSidekick@lemmy.worldOP

      Yes, I think that’s generally what Alan Turing meant - he was careful not to define what “intelligence” means, and was discussing the practical perception of machine behavior.

  • cyd@lemmy.world

    The Turing Test codified the very real fact that, until a few years ago, computer AI systems couldn’t hold a conversation (outside of special conversational tricks like Eliza and Cleverbot). Deep neural networks and the attention mechanism changed the situation; it’s not a completely solved problem, but the improvement is undeniably dramatic. It’s now possible to treat chatbots as rudimentary research assistants, for example.

    It’s just something we have to take in stride, like computers becoming capable of playing Chess or Go. There is no need to get hung up on the word “intelligence”.

    • LovableSidekick@lemmy.worldOP

      Not sure how you define getting “hung up” but there are tons of poorly informed people who believe/fear that AI is about to take over/conquer/destroy/whatever the world because they think LLMs are as smart as humans - or just a few tweaks away. It’s less about the word “intelligence” than about jumping from there to collateral issues, like thinking LLMs are “persons” that deserve rights, that using them without their consent is slavery, and other nonsense. Manipulative people take advantage of this kind of ignorance. Knowledge is good, modern superstition is bad.

      • VintageGenious@sh.itjust.works

        They are going to destroy the world - not because they are superintelligent, but because LLMs will be linked to lethal weapons and critical machines, since an LLM is easier to train than a human. And since they are very unreliable (prompt injection, purposeful lying, etc.), this will lead to deaths.

    • lemmeBe@sh.itjust.works

      It’s just something we have to take in stride, like computers becoming capable of playing Chess or Go. There is no need to get hung up on the word “intelligence”.

      Nicely said.

  • dual_sport_dork 🐧🗡️@lemmy.world

    The Turing Test as it is popularly conceptualized is really more of a test of human intelligence (or stupidity, more likely) rather than that of the machine.

    If you put a big enough idiot in front of the screen, Dr. Sbaitso could conceivably “pass.” Well, maybe if you muted it, anyway.

  • ristoril_zip@lemmy.zip

    Well, back when computers were being developed and improved, there was a pretty strong commitment throughout the Western nations to advancing and expanding education for everyone.

    In that paradigm, people would become more educated and better at critical thinking at a steady pace, probably on par with the rate at which computer programs advanced in their capacity to mimic human behavior.

    So, “can it fool more people into believing it’s a human” would’ve been a great test of whether the program was super advanced.

    Instead we’ve had 50 years of attacks on public education by Republicans that has been tolerated - or at least not fought hard enough - by Democrats. So not particularly advanced programs can fool a great many people. That does make the Turing Test moot, I think.

    • LovableSidekick@lemmy.worldOP

      My point was only that the Turing Test was not invented by Alan Turing; it was made up based on misunderstood remarks he made. But more than that, the principle is the same as saying a convincing sales pitch means a good product.

  • Nougat@fedia.io

    It’s on par with a thought experiment. The notion is still one worth considering, even if it is impractical or unsuitable to create a literal test based on it.