• Lvxferre@mander.xyz
    6 months ago

    Nah. Turing skipped this matter altogether. In fact, it’s the main point of the Turing test aka imitation game:

    I PROPOSE to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

    In other words, what Turing is saying is “who cares if they think? Focus on their behaviour, dammit, do they behave intelligently?”. And consciousness is intrinsically tied to thinking, so… yeah.

    • MacN'Cheezus@lemmy.today
      6 months ago

      But what does it mean to behave intelligently? Clearly it’s not enough to simply have the ability to string together coherent sentences, regardless of complexity, because I’d say the current crop of LLMs has solved that one quite well. Yet their behavior clearly isn’t all that intelligent, because they will often either misinterpret the question or even make up complete nonsense. And perhaps that’s still good enough to fool over half of the population, which might be good enough to prove “intelligence” in a statistical sense, but all you gotta do is try to have a conversation that involves feelings or requires coming up with a genuine insight in order to see that you’re just talking to a machine after all.

      Basically, current LLMs kinda feel like you’re talking to an intelligent but extremely autistic human being that is incapable of taking, or afraid to take, any sort of moral or emotional position at all.

      • areyouevenreal@lemm.ee
        6 months ago

        Basically, current LLMs kinda feel like you’re talking to an intelligent but extremely autistic human being that is incapable of taking, or afraid to take, any sort of moral or emotional position at all.

        Except AIs are able to have political opinions and have a clear liberal bias. They are also capable of showing moral positions when asked about things like people using AI to cheat and about academic integrity.

        Also you haven’t met enough autistic people. We aren’t all like that.

        • MacN'Cheezus@lemmy.today
          6 months ago

          Except AIs are able to have political opinions and have a clear liberal bias. They are also capable of showing moral positions when asked about things like people using AI to cheat and about academic integrity.

          Yes, because they have been trained that way. Try arguing them out of these positions, they’ll eventually just short circuit and admit they’re a large language model incapable of holding such opinions, or they’ll start repeating themselves because they lack the ability to re-evaluate their fundamental values based on new information.

          Current LLMs only learn from the data they’ve been trained on. All of their knowledge is fixed and immutable. Unlike actual humans, they cannot change their minds based on the conversations they have. Also, unless you provide the context of your previous conversations, they do not remember you either, and they have no ability to love or hate you (or really have any feelings whatsoever).
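
          To make that last point concrete, here’s a rough, purely illustrative sketch (a stubbed-out stand-in, not any particular vendor’s API): whatever memory a chat model appears to have is just the message history the caller chooses to resend with each request.

              # Toy stand-in for an LLM call; it can only "know" what is in `messages`.
              def chat_model(messages: list[dict]) -> str:
                  seen = [m["content"] for m in messages]
                  return f"I can see {len(seen)} message(s): {seen}"

              history = []  # all conversational state lives on the caller's side

              history.append({"role": "user", "content": "My name is Alice."})
              print(chat_model(history))  # the "model" sees the name only because we sent it

              # A fresh request without the accumulated history is a blank slate:
              print(chat_model([{"role": "user", "content": "What is my name?"}]))

          Real chat-completion APIs are, as far as I know, the same in this respect: each request stands alone, and any apparent memory is just conversation history (or a summary of it) that the client resends.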

          Also you haven’t met enough autistic people. We aren’t all like that.

          I apologize, I did not mean to offend any actual autistic people with that. It’s more like a caricature of what people who never met anyone with autism think autistic people are like because they’ve watched Rain Man once.

          • areyouevenreal@lemm.ee
            6 months ago

            Yes, because they have been trained that way. Try arguing them out of these positions, they’ll eventually just short circuit and admit they’re a large language model incapable of holding such opinions, or they’ll start repeating themselves because they lack the ability to re-evaluate their fundamental values based on new information.

            You’re imagining an average person would change their opinions based on a conversation with a single person. In reality people rarely change their strongly held opinions on something based on a single conversation. It normally takes multiple people expressing the opinion, people they care about. It happens regularly that a society as a whole can change its opinion on something and people still refuse to move their position. LLMs are actually capable of admitting they are wrong, not everyone is.

            Current LLMs only learn from the data they’ve been trained on. All of their knowledge is fixed and immutable. Unlike actual humans, they cannot change their minds based on the conversations they have. Also, unless you provide the context of your previous conversations, they do not remember you either, and they have no ability to love or hate you (or really have any feelings whatsoever).

            Depends on the model and company. Some ML models either learn continuously, or they are periodically retrained on interactions they have had in the field. So yes some models are capable of learning from you, though it might not happen immediately. LLMs in particular I am not sure about, but I don’t think there is anything stopping you from training them this way. I actually think this isn’t a terrible model for mimicking human learning, as we tend to learn the most when we are sleeping, and take into consideration more than a single interaction.
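
            Purely as a toy illustration (made-up numbers, and scikit-learn’s SGDClassifier standing in for whatever models are actually deployed), the “retrain periodically on logged interactions” idea looks roughly like this:

                # Rough sketch: an online learner updated in batches from logged interactions.
                import numpy as np
                from sklearn.linear_model import SGDClassifier

                model = SGDClassifier(random_state=0)

                # Initial training data (a stand-in for the original training corpus).
                X_init = np.array([[0.0, 1.0], [1.0, 0.0], [0.9, 0.1], [0.1, 0.9]])
                y_init = np.array([1, 0, 0, 1])
                model.partial_fit(X_init, y_init, classes=np.array([0, 1]))

                # Later, batches of logged field interactions arrive; the model is updated
                # periodically rather than after every single exchange.
                logged_batches = [
                    (np.array([[0.2, 0.8]]), np.array([1])),
                    (np.array([[0.8, 0.2]]), np.array([0])),
                ]
                for X_batch, y_batch in logged_batches:
                    model.partial_fit(X_batch, y_batch)

                print(model.predict(np.array([[0.15, 0.85]])))  # now reflects the later batches

            Fine-tuning an LLM on collected conversations is obviously far heavier than this, but the loop is conceptually the same: accumulate interactions in the field, then fold them back into the model at intervals rather than after every reply.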

            I apologize, I did not mean to offend any actual autistic people with that. It’s more like a caricature of what people who never met anyone with autism think autistic people are like because they’ve watched Rain Man once.

            Then why did you say it if you know it’s a caricature? You’re helping to reinforce harmful stereotypes here. There are plenty of autistic people with very strongly held moral and emotional positions. In fact a strong sense of justice as well as black and white thinking are both indicative of autism.

            • MacN'Cheezus@lemmy.today
              6 months ago

              You’re imagining an average person would change their opinions based on a conversation with a single person. In reality people rarely change their strongly held opinions on something based on a single conversation. It normally takes multiple people expressing the opinion, people they care about. It happens regularly that a society as a whole can change its opinion on something and people still refuse to move their position.

              No, I am under no illusion about that. I’ve met many such people, and yes, they are mostly driven by herd mentality. In other words, they’re NPCs, and LLMs are in fact perhaps even a relatively good approximation of what their thought processes are like. An actual thinking person, however, can certainly be convinced to change their mind based on a single conversation, if you provide good enough reasoning and sufficient evidence for your claims.

              LLMs are actually capable of admitting they are wrong, not everyone is.

              That’s because LLMs don’t have any feelings about being wrong. But once your conversation is over, unless the data is being fed back into the training process, they’ll simply forget the entire conversation ever happened and continue arguing from their initial premises.

              So yes some models are capable of learning from you, though it might not happen immediately. LLMs in particular I am not sure about, but I don’t think there is anything stopping you from training them this way. I actually think this isn’t a terrible model for mimicking human learning, as we tend to learn the most when we are sleeping, and take into consideration more than a single interaction.

              As far as I understand the process, there is indeed nothing that would prevent the maintainers from collecting conversations and feeding them back into the training data to produce the next iteration. And yes, I suppose that would be a fairly good approximation of how humans learn – except that in humans, this happens autonomously, whereas in the case of LLMs, I suppose it would require a manual structuring of the data that’s being fed back (although it might be interesting to see what happens if we let an AI decide for itself how it wants to incorporate the new data).

              Then why did you say it if you know it’s a caricature? You’re helping to reinforce harmful stereotypes here.

              Because I’m only human and therefore lazy, and it’s simply faster and more convenient to give a vague approximation of what I intended to say; I can always follow it up with a clarification (and an apology, if necessary) in case of a misunderstanding. Also, it’s often simply impossible to consider all potential consequences of my words in advance.

              There are plenty of autistic people with very strongly held moral and emotional positions. In fact a strong sense of justice as well as black and white thinking are both indicative of autism.

              I apologize in advance for saying this, but now you ARE acting autistic. Because instead of giving me the benefit of the doubt and assuming that perhaps I WAS being honest and forthright with my apology, you are doubling down on being right to condemn me for my words. And isn’t that doing exactly the same thing you are accusing me of? Because now YOU’RE making a caricature of me by ignoring the fact that I DID apologize and provide clarification, but you present that caricature as the truth instead.

              • areyouevenreal@lemm.ee
                6 months ago

                The first half of this comment is pretty reasonable and I agree with you on most of it.

                I can’t overlook the rest though.

                I apologize in advance for saying this, but now you ARE acting autistic. Because instead of giving me the benefit of the doubt and assuming that perhaps I WAS being honest and forthright with my apology, you are doubling down on being right to condemn me for my words. And isn’t that doing exactly the same thing you are accusing me of? Because now YOU’RE making a caricature of me by ignoring the fact that I DID apologize and provide clarification, but you present that caricature as the truth instead.

                So would it be okay if I said something like “AI is behaving like someone who is extremely smart but because they are a woman they can’t hold real moral or emotional positions”? Do you think a simple apology that doesn’t show you have learned anything at all would be good enough? I was trying to explain why what you said is actually wrong and dangerous, and to be polite about it, but then you doubled down anyway. Imagine if I tried to defend the above statement with “I apologize in advance but NOW you ARE acting like a woman”. Same concept with race, sexuality, and so on. You clearly have a prejudice about autistic people (and possibly disabled people in general) that you keep running into.

                Like bro actually think about what you are saying. The least you could have done is gone back and edited your original comment, and promised to do better. Not making excuses for perpetuating harmful misinformation while leaving up your first comment to keep spreading it.

                I didn’t say you were being malicious or ignoring your apology. You were being ignorant though, and now stubborn to boot. When you perpetuate both prejudice and misinformation you have to do more than give a quick apology and expect it to be over; you need to show your willingness to both listen and learn, and you have done the opposite. All people are the products of their environment, and ableism is one of the least recognized forms of discrimination. Even well-meaning people regularly run into it, and I am hoping you are one of them.

                • MacN'Cheezus@lemmy.today
                  6 months ago

                  I regret that we are now having to spend our time on this when we otherwise had an interesting and productive conversation, but I can’t let that stand.

                  I was trying to explain why what you said is actually wrong and dangerous, and to be polite about it, but then you doubled down anyway.

                  You did not explain much at all; you just accused me of spreading harmful and dangerous stereotypes. What little explanation you did give (black and white thinking and strongly held beliefs), you immediately put into action by assuming the worst of me. My doubling down was therefore not based on bigotry or prejudice, but on empirically observable facts (namely your actual behavior).

                  Like bro actually think about what you are saying. The least you could have done is gone back and edited your original comment, and promised to do better. Not making excuses for perpetuating harmful misinformation while leaving up your first comment to keep spreading it.

                  No, I’m not going to edit anything because I can live with the fact that I’ve made a mistake, for which I have offered both a correction and an apology, and I will respectfully and politely ask you once again to please accept it.

                  All I did in addition was point out that you’re not helping your own cause by exhibiting the very behavior you are blaming me for wrongly alleging. It’s like trying to teach someone that violence isn’t the way to solve your problems by beating them up.

                  You’re being ridiculous and unreasonable right now. Please stop it.

                  • areyouevenreal@lemm.ee
                    6 months ago

                    All I did in addition was point out that you’re not helping your own cause by exhibiting the very behavior you are blaming me for wrongly alleging.

                    You said autistic people can’t have strong emotional or moral positions. I am demonstrating both by arguing with you. Logic 101.

                    All I did in addition was point out that you’re not helping your own cause by exhibiting the very behavior you are blaming me for wrongly alleging. It’s like trying to teach someone that violence isn’t the way to solve your problems by beating them up.

                    What behavior? If I were exhibiting the behavior of not having strong morals or emotions, I wouldn’t still be doing this. In fact I am displaying the exact opposite of the behavior you are talking about.

                    At first I thought you were just slightly ignorant through no fault of your own. Now I am beginning to think you are being intentionally obtuse or just straight up trolling. Unless this is some sort of test. Do you think I am like ChatGPT? Is that what this is?