It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.

  • guyrocket@kbin.social
    7 months ago

    OpenAI openly admits that it is unable to correct incorrect information on ChatGPT. Furthermore, the company cannot say where the data comes from or what data ChatGPT stores about individual people. The company is well aware of this problem, but doesn’t seem to care.

    Wow. Where are all the news stories about THIS?

    • vrighter
      7 months ago

      If you start learning how these models work, the first thing you realize is that hallucinations are fundamental to the technology. Of course they’re unfixable; that’s literally how these models work.

      They’re broken clocks that happen to be right more than twice a day, but they’re broken nonetheless.
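      To make that concrete, here’s a toy sketch (made-up vocabulary, scores, and function names, not any real model’s code) of what one decoding step boils down to: sample the next token from a probability distribution over the vocabulary. Nothing in the loop checks whether the resulting claim is true, so a fluent-but-false continuation comes out of exactly the same mechanism as a correct one.

        import numpy as np

        # Invented toy vocabulary for illustration only.
        vocab = ["Alice", "was", "born", "in", "1970", "1985", "Paris", "."]

        def fake_logits(context):
            # A real model computes these scores from billions of parameters;
            # here they're hard-coded. A plausible wrong token ("1985") can
            # score nearly as high as the correct one ("1970").
            return np.array([0.1, 0.2, 0.3, 0.4, 2.0, 1.8, 1.5, 0.5])

        def sample_next_token(context, temperature=0.8):
            logits = fake_logits(context) / temperature
            probs = np.exp(logits - logits.max())   # softmax over the vocabulary
            probs /= probs.sum()
            # The "decision" is a weighted dice roll over tokens; there is no
            # lookup against a source of truth before or after this line.
            return np.random.choice(vocab, p=probs)

        print(sample_next_token(["Alice", "was", "born", "in"]))
        # Sometimes "1970", sometimes "1985"; both are generated the same way.

      You can bolt retrieval or filters on top to catch some of it, but the generator itself never stops being a sampler.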

      • guyrocket@kbin.social
        7 months ago

        That article explains the issues clearly. Thanks for sharing.

        I think it should be shared more broadly.