When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years, Bernklau had served as a courts reporter, and the AI chatbot had falsely blamed him for the crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted.

But why did Copilot hallucinate these terrible and false accusations?

  • vrighter · 4 hours ago

    also, what you described has already been studied. Training an LLM on its own output completely destroys it; it doesn’t make it better.

    • linearchaos@lemmy.world · 2 hours ago

      This is incorrect, or perhaps the research has since been updated. Generating new data, using a different AI method to tag that data, and then training on that data is definitely a thing.

      • vrighter · edited · 10 minutes ago

        yes it is, and it doesn’t work.

        edit: to expand, if you’re generating data, it’s an estimate. The network will learn the same biases and make the same mistakes and assumptions you did when generating the data. Also, outliers won’t be in the set (because you didn’t know about them, so the network never sees any).
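A toy sketch of the point above (not from the thread; using a single Gaussian as a stand-in for the "model" is an assumption made here for illustration): each generation is fit only to samples drawn from the previous generation's fit. The rare outlier cluster in the original data disappears after the first generation, because the model can only reproduce what it captured, and every later generation inherits that loss.

    # Toy illustration of training a model only on its own generated output.
    # A single Gaussian stands in for the "model"; this is a sketch, not the
    # setup from any particular study. Watch the rare outliers near 8 vanish.
    import numpy as np

    rng = np.random.default_rng(0)

    # "Real" data: mostly ordinary values plus a small cluster of rare outliers.
    real = np.concatenate([rng.normal(0.0, 1.0, 990), rng.normal(8.0, 0.5, 10)])

    mu, sigma = real.mean(), real.std()  # generation 0 is fit to the real data
    for gen in range(1, 6):
        synthetic = rng.normal(mu, sigma, real.size)   # data generated by the current model
        mu, sigma = synthetic.mean(), synthetic.std()  # retrain only on that synthetic data
        outliers = int((synthetic > 5.0).sum())
        print(f"gen {gen}: std={sigma:.2f}, samples above 5.0: {outliers}")

Run as-is, the count of samples above 5.0 falls to essentially zero from the first generation on: the fitted estimate can only echo its own biases, never the parts of the real distribution it missed.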