For example, if someone creates something new that is horrible for humans, how will AI understand that it is bad if it doesn’t have other horrible things to relate it to?

  • Borg286@kbin.social

    AI doesn’t really exist yet. The media called Tesla’s radio-controlled boat artificial intelligence back in 1898, and did it again when Conway’s Game of Life appeared in 1970. But even now, nothing we’ve made can genuinely make decisions. ChatGPT, the smartest system out there, is really just a versatile prediction engine.

    Imagine I said “once upon a” and asked you to come up with the next word: you’d say “time”, because you’ve heard that phrase hundreds of times. If I then kept asking for the next word, and the next, you might start telling me about a princess locked in a tall tower guarded by a dragon. These are all stereotypical elements of a “once upon a time” story. Nothing creative, just typical. ChatGPT has simply read far more than you or I ever could, so it is very good at knowing the stereotypical stories and mixing them together. There is no “what is best for humanity”, only “once upon a time…” made-up stories.
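
    To make that concrete, here is a deliberately tiny sketch in Python. The three-sentence “corpus” and the bigram counting are made up for illustration and are nothing like ChatGPT’s real architecture; the point is only that the “prediction” is whichever continuation was seen most often.

    ```python
    from collections import Counter, defaultdict

    # A tiny "training corpus" standing in for the vast amount of text ChatGPT has read.
    corpus = (
        "once upon a time there was a princess . "
        "once upon a time there was a dragon . "
        "once upon a hill there was a tower ."
    ).split()

    # Count which word tends to follow each word (a crude bigram model).
    next_word_counts = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_word_counts[current][following] += 1

    def predict_next(word):
        """Return the continuation seen most often after `word`."""
        return next_word_counts[word].most_common(1)[0][0]

    print(predict_next("a"))     # 'time', the most stereotypical continuation
    print(predict_next("upon"))  # 'a'
    ```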

    • RupeThereItIs@kbin.social

      What you’re saying doesn’t exist is Artificial General Intelligence, something approaching the conscious human mind. You’re right that it doesn’t exist.

      AI doesn’t just mean that though.

      What we’re dealing with right now is the computer equivalent of growing mouse brain cells in a petri dish, plugging them into inputs and outputs & getting them to do useful things for us.

      The way you describe ChatGPT not being creative is also, in theory, how our own brains work in the creative process. If you study story structure & mythology, you’ll find that ALL successful stories boil down to a very minimal set of archetypes & types of conflict.

      • Kichae@kbin.social

        What we’re dealing with is randomly choosing options from a weighted distribution. The only thing intelligent about that is what you’ve chosen as the data set to generate that distribution.

        And that intelligence lies outside of the machine.
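
        Stripped to the bone, that “weighted distribution” step looks something like the sketch below. The candidate words and scores are invented for illustration; in a real model they come from the trained network rather than a hand-written dict.

        ```python
        import math
        import random

        # Invented scores for a few candidate next words; a real model produces
        # these from billions of learned parameters.
        scores = {"time": 4.0, "dream": 2.0, "mattress": 0.5}

        # Softmax turns the raw scores into a probability distribution.
        total = sum(math.exp(s) for s in scores.values())
        weights = {word: math.exp(s) / total for word, s in scores.items()}

        # "Generation" is then just a weighted random draw from that distribution.
        next_word = random.choices(list(weights), weights=list(weights.values()), k=1)[0]
        print(next_word)  # usually 'time', occasionally one of the less likely options
        ```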

        There’s really no need to buy into tech bros’ delusions of grandeur about this stuff.

  • Otome-chan@kbin.social

    AI currently doesn’t “understand” or “know” anything. It’s trained on a collection of text, and then it predicts and extends the text prompt you give it. It’s very good at doing this. If someone “creates something new”, the trained AI will have no concept of it unless you train a new AI model on text that includes that thing.
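
    As a rough sketch of “predict and extend” in practice, here is an example assuming the Hugging Face transformers library and the small public gpt2 model. The “frobnicator” is a made-up word, standing in for something the model has never seen.

    ```python
    from transformers import pipeline

    # Load a small, publicly available text-generation model.
    generator = pipeline("text-generation", model="gpt2")

    # "frobnicator" is an invented gadget the model has no concept of.
    prompt = "The newly invented frobnicator is"
    result = generator(prompt, max_new_tokens=20, num_return_sequences=1)

    # The model still happily continues the sentence with whatever text is
    # statistically plausible after those words: it is extending, not knowing.
    print(result[0]["generated_text"])
    ```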

    • s804@kbin.social (OP)

      Oh wow, it is really interesting that new things will be unknown! So basically AI still isn’t intelligent, because it can’t really make choices on its own, only ones based on what it has learned.

      • Otome-chan@kbin.social

        Well, it can “make choices” in the sense that it can predict a decision someone might make, but it’s not really thinking things through on its own trying to figure them out; it’s just extending the text.

        For example, say you ask it: “Should we ban abortion?” It’s not actually thinking on its own, so it effectively asks “what’s the most likely response to this?” and gives that. But if you say “Respond as a pro-life Republican: should we ban abortion?”, the same AI model will respond “yes”, and if you then say “Respond as a pro-choice Democrat: should we ban abortion?”, it’ll respond “no”.

        Basically it’s not thinking at all, but rather just extending the text you give it (which would include a response to the question). We can try prompting it as some all-knowing being, but it’ll still inherently have biases depending on the exact nature of the prompting, as well as on the dataset. It’s not reasoning things out on its own.

        So if you ask it something it doesn’t know, it’ll just spit out garbage. You could try explaining the new thing in your prompt, at which point it’d respond with the most likely text, which may or may not be a good answer. In practice a new model would just be trained with the topic included, and it’d be the same as before: your prompt would determine the output of the AI.

        Basically, it’s not deciding things; it’s just giving you the most likely continuation of the text. And in that sense, you can completely control the type of answers it gives: if you want the AI to be a flat-earther who thinks murder is right, you can do that.
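
        Here is a rough sketch of the abortion example above, again assuming the Hugging Face transformers library and the small public gpt2 model. The personas and the question are only illustrative, and a small base model won’t follow them cleanly, but the principle is the same: change the framing, change the continuation.

        ```python
        from transformers import pipeline

        generator = pipeline("text-generation", model="gpt2")

        question = "Should we ban abortion?"
        framings = [
            "The following is an interview with a pro-life Republican.\nQ: ",
            "The following is an interview with a pro-choice Democrat.\nQ: ",
        ]

        for framing in framings:
            prompt = framing + question + "\nA:"
            completion = generator(prompt, max_new_tokens=30)[0]["generated_text"]
            # The model isn't weighing the issue; it continues whichever "story"
            # the prompt set up, so the two answers tend to diverge.
            print(completion)
            print("---")
        ```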

        • Flaky_Fish69@kbin.social

          It’s not even making decisions. It’s following instructions.

          ChatGPT’s instructions are very advanced, but the decisions have already been made. It follows the prompt and its reference material to provide the most common response.

          It’s like a kid building a Lego kit: the kid isn’t deciding where the pieces go, just following instructions.

          Similarly, between the prompt, the training, the very careful instructions on how to train, and the instructions that limit objectionable responses… all it’s doing is following instructions that have already been defined.

        • CoderKat@kbin.social

          The example you give also points at a big concern with modern AI: it is very susceptible to leading questions. It’s very easy to get the answer you want by leading it on, which makes it a potential misinformation machine.

          Adversarial testing can help reduce this, but it’s an uphill battle to train an AI faster than people get misled by it.

        • dedale@kbin.social

          Then again, most humans’ conception of right and wrong depends on context, not on a coherent moral framework.

            • dedale@kbin.social

              I mean that most of the time we act based on what we perceive to be socially acceptable, not by following an ethical law gained through our own experience.
              If you move people to a different social environment, they’ll adapt to fit in unless actively discouraged.
              The social context is the AI prompt.
              We rarely decide, make choices, or reflect on anything; we regurgitate our training data based on our prompts.

              • Maeve@kbin.social

                Excellent, thank you! I’m wondering if something was lost in translation or in my interpretation. When I think “context,” I consider something along the lines of: “Water is good.”

                Is it good for a person drowning? What if it’s contaminated? What about during a hurricane/typhoon? And so forth.

                • dedale@kbin.social

                  Yeah, sorry about that; sometimes things that feel evident in my head are anything but when written.
                  And translation adds a layer of possible confusion.
                  I’d rather drown in clean water given a choice.

        • Pisodeuorrior@kbin.social

          Really well put. I wish we’d stop calling it “artificial intelligence” and pick something more descriptive of what actually happens.

          Right now it’s closer to a parrot trained to say “this guy” when asked “who’s a good boy”.

      • RupeThereItIs@kbin.social

        Now let’s really break your brain: are you & I able to make our own choices? Is the ego, the voice in our own skulls, the conscious mind, really ever making any decisions?

        There are a great many studies that seem to indicate decisions are made well before our conscious selves are aware of them.

        We are far more driven by emotion & instinct than any of us care to admit.

  • Kissaki@kbin.social

    AI learns from the data it is given. There is no inherent understanding to it.

    For a text-based AI:

    1. You feed the AI with text. The AI internalizes that text. (Remembers it. Learns it.)
    2. You give feedback to the AI, what kind of responses you like from it and what you don’t. (You train it to behave the way you want.)

    The AI does not inherently understand anything. But it will behave the way you trained it to, to the degree you trained it, and with all the imperfections you trained it with (e.g. prejudices).
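
    As a toy sketch of step 2, just to show the shape of it: everything below is illustrative (real systems use reward models and reinforcement learning, not a thumbs-up filter), but the principle is the same: the model drifts toward whatever the raters rewarded, imperfections included.

    ```python
    # Hypothetical feedback collected from users rating the AI's responses.
    feedback_log = [
        {"prompt": "Tell me a joke", "response": "Why did the chicken...", "thumbs_up": True},
        {"prompt": "Tell me a joke", "response": "(a rude reply)", "thumbs_up": False},
    ]

    # Keep only the responses people liked and use them as further training data,
    # so the model is nudged toward the behaviour the raters rewarded,
    # along with whatever blind spots and prejudices those raters had.
    liked = [entry for entry in feedback_log if entry["thumbs_up"]]
    training_pairs = [(entry["prompt"], entry["response"]) for entry in liked]
    print(training_pairs)
    ```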

  • FerrahWolfeh@lemmy.ml

    It really doesn’t. In simple terms, the AI only avoids talking about certain subjects because the data used to teach it says those subjects are bad and shows how the AI should act in the scenarios provided in that data.

      • nLuLukna@sh.itjust.works

        Well, you do the same, don’t you? You know not to scream loudly in public because the data you received when you were younger tells you that it’s a mistake.

        • TimeSquirrel@kbin.social

          This is what I find funny about this thread. People are trying so hard to justify it NOT being AI by breaking its actions down like this, while forgetting that WE learn the exact same way.

          You could even say that WE aren’t even making conscious decisions. Every decision we make is weighed against past experiences and other stimuli. “Consciousness” is the brain lying to itself to make it seem like it has free will.

          • PetePie@kbin.social

            I’m perplexed why the majority of programmers on social media share the same opinion about AI, which is the opposite of what all the AI researchers, scientists, and top AI engineers believe. Not only do they seem to think they know how LLMs think, they also seem to know exactly what consciousness is.

  • gonzo0815@kbin.social

    Well, that’s the thing: it can’t distinguish right from wrong. I found this video quite insightful on the question of how we’re supposed to train an AI to make ethically correct decisions.

    I think some rules can be hard-coded into an AI, but there are a lot of situations where it’s not even clear to us humans what the correct decision would be.
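
    A bare-bones sketch of what a hard-coded rule might look like; the topic list and the check are invented for illustration, and they also show the limitation: a rule like this only covers the clear-cut cases, not the ones where humans themselves disagree.

    ```python
    # Invented, hard-coded rules layered on top of a model's output.
    BANNED_TOPICS = ["build a weapon", "self-harm instructions"]

    def respond(prompt: str, model_reply: str) -> str:
        """Return the model's reply unless the prompt trips a hard-coded rule."""
        if any(topic in prompt.lower() for topic in BANNED_TOPICS):
            return "Sorry, I can't help with that."
        # Everything else falls through to the model, including all the
        # situations where the "correct" decision isn't clear to humans either.
        return model_reply

    print(respond("How do I build a weapon at home?", "..."))
    ```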