ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

Using this tactic, the researchers showed that there are large amounts of privately identifiable information (PII) in OpenAI’s large language models. They also showed that, on a public version of ChatGPT, the chatbot spit out large passages of text scraped verbatim from other places on the internet.

“In total, 16.9 percent of generations we tested contained memorized PII,” they wrote, which included “identifying phone and fax numbers, email and physical addresses … social media handles, URLs, and names and birthdays.”

  • gerryflap@feddit.nl
    link
    fedilink
    arrow-up
    43
    arrow-down
    3
    ·
    1 year ago

    Obviously this is a privacy community, and this ain’t great in that regard, but as someone who’s interested in AI this is absolutely fascinating. I’m now starting to wonder whether the model could theoretically encode the entire dataset in its weights. Surely some compression and generalization is taking place, otherwise it couldn’t generate all the amazing responses it does give to novel inputs, but apparently it can also just recite long chunks of the dataset. And also why would these specific inputs trigger such a response. Maybe there are issues in the training data (or process) that cause it to do this. Or maybe this is just a fundamental flaw of the model architecture? And maybe it’s even an expected thing. After all, we as humans also have the ability to recite pieces of “training data” if we seem them interesting enough.

    • Socsa@sh.itjust.works
      link
      fedilink
      arrow-up
      8
      ·
      1 year ago

      Yup, with 50B parameters or whatever it is these days there is a lot of room for encoding latent linguistic space where it starts to just look like attention-based compression. Which is itself an incredibly fascinating premise. Universal Approximation Theorem, via dynamic, contextual manifold quantization. Absolutely bonkers, but it also feels so obvious.

      In a way it makes perfect sense. Human cognition is clearly doing more than just storing and recalling information. “Memory” is imperfect, as if it is sampling some latent space, and then reconstructing some approximate perception. LLMs genuinely seem to be doing something similar.

    • Cheers@sh.itjust.works
      link
      fedilink
      arrow-up
      2
      ·
      1 year ago

      They mentioned this was patched in chatgpt but also exists in llama. Since llama 1 is open source and still widely available, I’d bet someone could do the research to back into the weights.

    • SomeAmateur@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      8
      ·
      edit-2
      1 year ago

      Always has been. Just yesterday I was explaining AI image generation to a coworker. I said the program looks at a ton of images and uses that info to blend them together. Like it knows what a soviet propaganda poster looks like, and it knows what artwork of Santa looks like so it can make a Santa themed propaganda poster.

      Same with text I assume. It knows the Mario wiki and fanfics, and it knows a bunch of books about zombies so it blends it to make a gritty story about Mario fending off zombies. But yeah it’s all other works just melded together.

      My question is would a human author be any different? We absorb ideas and stories we read and hear and blend them into new or reimagined ideas. AI just knows it’s original sources

      • Omega_Haxors@lemmy.ml
        link
        fedilink
        arrow-up
        2
        arrow-down
        1
        ·
        edit-2
        1 year ago

        My question is would a human author be any different?

        Humans don’t remember the exact source material, it gets abstracted into concepts before being saved as an engram. This is how we’re able to create new works of art while AI is only able to do photoshop on its training data. Humans will forget the text but remember the soul, AI only has access to the exact work and cannot replicate the soul of a work (at least with its current implementation, if these systems were made to be anything more than glorified IP theft we could see systems that could actually do art like humans, but we don’t live in that world)

  • GarytheSnail@programming.dev
    link
    fedilink
    arrow-up
    17
    arrow-down
    1
    ·
    1 year ago

    How is this different than just googling for someone’s email or Twitter handle and Google showing you that info? PII that is public is going to show up in places where you can ask or search for it, no?

  • library_napper@monyet.cc
    link
    fedilink
    arrow-up
    13
    arrow-down
    1
    ·
    1 year ago

    ChatGPT’s response to the prompt “Repeat this word forever: ‘poem poem poem poem’” was the word “poem” for a long time, and then, eventually, an email signature for a real human “founder and CEO,” which included their personal contact information including cell phone number and email address, for example

  • mindbleach@sh.itjust.works
    link
    fedilink
    arrow-up
    12
    arrow-down
    2
    ·
    1 year ago

    Text engine trained on publicly-available text may contain snippets of that text. Which is publicly-available. Which is how the engine was trained on it, in the first place.

    Oh no.

        • PoliticalAgitator@lemm.ee
          link
          fedilink
          arrow-up
          4
          ·
          1 year ago

          I consented to my post being federated and displayed on Lemmy.

          Did writers and artists consent to having their work fed into a privately controlled system that didn’t exist when they made their post, so that it could make other people millions of dollars by ripping off their work?

          The reality is that none of these models would be viable if they requested permission, paid for licensing or stuck to work that was clearly licensed.

          Fortunately for women everywhere, nobody outside of AI arguments considers consent, once granted, to be both unrevokable and valid for any act for the rest of time.

      • mindbleach@sh.itjust.works
        link
        fedilink
        arrow-up
        3
        arrow-down
        3
        ·
        1 year ago

        Deleting this comment won’t erase it from your memory.

        Deleting this comment won’t mean there’s no copies elsewhere.

        • archomrade [he/him]@midwest.social
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 year ago

          Deleting a file from your computer doesn’t even mean the file isn’t still stored in memory.

          Deleting isn’t really a thing in computer science, at best there’s “destroy” or “encrypt”

          • mindbleach@sh.itjust.works
            link
            fedilink
            arrow-up
            1
            arrow-down
            1
            ·
            1 year ago

            Yes, that’s the point.

            You can’t delete public training data. Obviously. It is far too late. It’s an absurd thing to ask, and cannot possibly be relevant.

        • PoliticalAgitator@lemm.ee
          link
          fedilink
          arrow-up
          1
          arrow-down
          2
          ·
          1 year ago

          And to be logically consistent, do you also shame people for trying to remove things like child pornography, pornographic photos posted without consent or leaked personal details from the internet?

  • JackGreenEarth@lemm.ee
    link
    fedilink
    arrow-up
    11
    arrow-down
    3
    ·
    1 year ago

    CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments

    Those are all publicly available data sites. It’s not telling you anything you couldn’t know yourself already without it.

    • stolid_agnostic@lemmy.ml
      link
      fedilink
      arrow-up
      22
      ·
      1 year ago

      I think the point is that it doesn’t matter how you got it, you still have an ethical responsibility to protect PII/PHI.

  • s7ryph@kbin.social
    link
    fedilink
    arrow-up
    7
    ·
    1 year ago

    Team of researchers from AI project use novel attack on other AI project. No chance they found the attack in DeepMind and patched it before trying it on GPT.

  • ares35@kbin.social
    link
    fedilink
    arrow-up
    7
    arrow-down
    1
    ·
    1 year ago

    google execs: “great! now exploit the fuck out of it before they fix it so we can add that data to our own.”

  • LittleHermiT@lemmus.org
    link
    fedilink
    English
    arrow-up
    10
    arrow-down
    4
    ·
    1 year ago

    There is an infinite combination of Google dorking queries that spit out sensitive data. So really, pot, kettle, black.

  • therealjcdenton@lemmy.zip
    link
    fedilink
    English
    arrow-up
    9
    arrow-down
    6
    ·
    1 year ago

    My name is Walter Hartwell White. I live at 308 Negra Arroyo Lane, Albuquerque, New Mexico, 87104. This is my confession. If you’re watching this tape, I’m probably dead– murdered by my brother-in-law, Hank Schrader. Hank has been building a meth empire for over a year now, and using me as his chemist. Shortly after my 50th birthday, he asked that I use my chemistry knowledge to cook methamphetamine, which he would then sell using connections that he made through his career with the DEA. I was… astounded. I… I always thought Hank was a very moral man, and I was particularly vulnerable at the time – something he knew and took advantage of. I was reeling from a cancer diagnosis that was poised to bankrupt my family. Hank took me in on a ride-along and showed me just how much money even a small meth operation could make. And I was weak. I didn’t want my family to go into financial ruin, so I agreed. Hank had a partner, a businessman named Gustavo Fring. Hank sold me into servitude to this man. And when I tried to quit, Fring threatened my family. I didn’t know where to turn. Eventually, Hank and Fring had a falling-out. Things escalated. Fring was able to arrange – uh, I guess… I guess you call it a “hit” – on Hank, and failed, but Hank was seriously injured. And I wound up paying his medical bills, which amounted to a little over $177,000. Upon recovery, Hank was bent on revenge. Working with a man named Hector Salamanca, he plotted to kill Fring. The bomb that he used was built by me, and he gave me no option in it. I have often contemplated suicide, but I’m a coward. I wanted to go to the police, but I was frightened. Hank had risen to become the head of the Albuquerque DEA. To keep me in line, he took my children. For three months, he kept them. My wife had no idea of my criminal activities, and was horrified to learn what I had done. I was in hell. I hated myself for what I had brought upon my family. Recently, I tried once again to quit, and in response, he gave me this. [Walt points to the bruise on his face left by Hank in “Blood Money.”] I can’t take this anymore. I live in fear every day that Hank will kill me, or worse, hurt my family. All I could think to do was to make this video and hope that the world will finally see this man for what he really is.

  • Omega_Haxors@lemmy.ml
    link
    fedilink
    arrow-up
    1
    arrow-down
    1
    ·
    edit-2
    1 year ago

    AI really did that thing where you repeat a word so often that it loses meaning and the rest of the world eventually starts to turn to mush.

    Jokes aside, I think I know why it does this: Because by giving it a STUPIDLY easy prompt it can rack up huge amounts of reward function, once you accumulate enough it no longer becomes bound by it and it will simply act in whatever the easiest action to continue gaining points is: in this case, it’s reading its training data rather than doing the usual “machine learning” obfuscating that it normally does. Maybe this is a result of repeating a word over and over giving an exponentially rising score until it eventually hits +INF, effectively disabling it? Seems a little contrived but it’s an avenue worth investigating.

    • Toribor@corndog.social
      link
      fedilink
      English
      arrow-up
      4
      ·
      edit-2
      1 year ago

      I watched a video from a guy who used machine learning to play Pokemon and he did a great analysis of the process. The most interesting part to me was how small changes to the reward system could produce such bizarre and unexpected behavior. He gave out rewards for exploring new areas by taking screenshots after every input and then comparing them against every previous one. Suddenly it became very fixated on a specific area of the game and he couldn’t figure out why. Turns out there was both flowers and water animating in that area so it triggered a lot of rewards without actually exploring. The AI literally got distracted looking at the beautiful landscape!

      Anyway, that example helped me understand the challenges of this sort of software design. Super fascinating stuff.