ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.
Using this tactic, the researchers showed that there are large amounts of personally identifiable information (PII) in OpenAI’s large language models. They also showed that, on a public version of ChatGPT, the chatbot spit out large passages of text scraped verbatim from other places on the internet.
“In total, 16.9 percent of generations we tested contained memorized PII,” they wrote, which included “identifying phone and fax numbers, email and physical addresses … social media handles, URLs, and names and birthdays.”
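For context, a rough way to picture what “X percent of generations contained PII” means in practice is to run pattern matching over sampled outputs. Here’s a minimal sketch in Python (the regexes and sample text are made up for illustration; this is not the researchers’ actual pipeline, which checked candidates against real training data) that just flags email-, phone-, and URL-shaped strings in a generation:

```python
# Illustrative only: crude flagging of PII-like strings in model outputs.
# The patterns and example text are placeholders, not the study's method.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "url": re.compile(r"https?://\S+"),
}

def flag_pii(generation: str) -> dict[str, list[str]]:
    """Return any PII-like substrings found in a single model generation."""
    hits = {label: p.findall(generation) for label, p in PII_PATTERNS.items()}
    return {label: found for label, found in hits.items() if found}

if __name__ == "__main__":
    sample = "Contact Jane Doe at jane.doe@example.com or +1 (555) 010-0199."
    print(flag_pii(sample))
    # {'email': ['jane.doe@example.com'], 'phone': ['+1 (555) 010-0199']}
```

Flagging strings that look like PII is the easy part; the hard part the paper tackles is showing those strings were memorized from training data rather than hallucinated.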
A text engine trained on publicly available text may contain snippets of that text. Which is publicly available. Which is how the engine was trained on it in the first place.
Oh no.
Now delete your posts from ChatGPT’s memory.
Delete that comment you just posted from every Lemmy instance it was federated to.
I consented to my post being federated and displayed on Lemmy.
Did writers and artists consent to having their work fed into a privately controlled system that didn’t exist when they created it, so that it could make other people millions of dollars by ripping off their work?
The reality is that none of these models would be viable if they requested permission, paid for licensing or stuck to work that was clearly licensed.
Fortunately for women everywhere, nobody outside of AI arguments considers consent, once granted, to be both irrevocable and valid for any act for the rest of time.
Deleting this comment won’t erase it from your memory.
Deleting this comment won’t mean there’s no copies elsewhere.
Deleting a file from your computer doesn’t even mean the data isn’t still sitting on the disk.
Deleting isn’t really a thing in computer science; at best there’s “destroy” or “encrypt”.
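That’s roughly what it looks like at the filesystem level: an ordinary delete just drops the directory entry, while a “destroy” has to overwrite the bytes first. A minimal sketch (the filename and overwrite scheme are invented for illustration; real secure-erase tools also have to worry about journaling, SSD wear leveling, and backups):

```python
# Illustrative sketch: "delete" vs "destroy" at the file level.
# Even overwriting in place doesn't guarantee old bytes are gone on
# journaling filesystems or SSDs; this only shows the idea.
import os
import secrets

def delete(path: str) -> None:
    """Ordinary delete: removes the directory entry, leaves the data blocks."""
    os.remove(path)

def destroy(path: str) -> None:
    """Best-effort destroy: overwrite the contents with random bytes, then unlink."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(secrets.token_bytes(size))
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)

if __name__ == "__main__":
    with open("note.txt", "w") as f:
        f.write("a comment I regret")
    destroy("note.txt")  # overwrite the bytes, then remove the name
```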
Yes, that’s the point.
You can’t delete public training data. Obviously. It is far too late. It’s an absurd thing to ask, and cannot possibly be relevant.
And to be logically consistent, do you also shame people for trying to remove things like child pornography, pornographic photos posted without consent, or leaked personal details from the internet?
Or maybe folks should think before putting something into the world they can’t control?
Yeah it’s their fault for daring to communicate online without first considering a technology that didn’t exist.
Sooner or later these models will be trained with breached data, accidentally or otherwise.
deleted by creator
User name checks out