For example, if someone creates something new that is horrible for humans, how will an AI understand that it is bad if it has no other horrible things to relate it to?

  • FerrahWolfeh@lemmy.ml · 1 point · 1 year ago

    It really doesn’t. In simple terms, the AI only avoids talking about certain subjects because the data used to train it labels them as bad and shows how the AI should act in the scenarios that data provides.

      • nLuLukna @sh.itjust.works · 3 points · 1 year ago

        Well, you do the same, don’t you? You know not to scream loudly in public because the data you received when you were younger tells you it’s a mistake.

        • TimeSquirrel@kbin.social · 1 point · 1 year ago

          This is what I find funny about this thread. People are trying so hard to argue that it’s NOT AI by breaking its actions down like this, while forgetting that WE learn in exactly the same way.

          You could even say that WE aren’t even making conscious decisions. Every decision we make is weighed against past experiences and other stimuli. “Consciousness” is the brain lying to itself to make it seem like it has free will.

          • PetePie@kbin.social · 1 point · 1 year ago

            I’m perplexed that the majority of programmers on social media share the same opinion about AI, one that is the opposite of what all AI researchers, scientists, and top AI engineers believe. Not only do they seem to think they know how LLMs think, they also claim to know exactly what consciousness is.