With the rapid advances we’re currently seeing in generative AI, we’re also seeing a lot of concern for large scale misinformation. Any individual with sufficient technical knowledge can now spam a forum with lots of organic looking voices and generate photos to back them up. Has anyone given some thought on how we can combat this? If so, how do you think the solution should/could look? How do you personally decide whether you’re looking at a trustworthy source of information? Do you think your approach works, or are there still problems with it?
I think the basic solution is education: we need to teach our children critical thinking. Generative AI is just one more source of misinformation, like pseudoscience disguised as real science (fake papers, manipulated data, etc.). It’s not good that teenagers believe something is true just because it’s on the internet (blogs, YouTube, etc.).
I’m having trouble understanding why disinformation produced by an AI is more of a problem than disinformation produced by a person. Sure, theoretically it can be made to scale a lot more, though I would point out that AI is not, at the moment, light on resources either. But it’s unclear to me to what extent that makes a difference.
I don’t believe the content itself will be any more of an issue than human-generated misinformation. The main issue I see is that a single person can now achieve this at scale without ever leaving their mom’s basement, and at a much lower cost. It’s the concentration of power that I find concerning.