Images depicting war-torn Ukraine are being generated by AI services, sold on stock photo websites and used in media coverage of the conflict.
I don’t think this is a realistic proposal - this is a technological advancement. You might be able to force companies to embed invisible steganographic signatures in their services’ output, or maybe have them hash every output so there’s a way to check whether an image was generated by them…
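To make the hashing idea concrete, here’s a toy sketch of what a provider-side registry might look like, under the assumption that the provider records a hash of every image it generates and later answers “did you produce this exact file?”. All names here are hypothetical, and the obvious weakness is visible immediately: any re-encode or crop changes the hash, so a real system would need perceptual hashing at minimum.

```python
import hashlib

# Hypothetical provider-side registry: store a SHA-256 of every
# generated image, then answer membership queries later.
registry = set()

def record_output(image_bytes: bytes) -> None:
    """Called by the service whenever it generates an image."""
    registry.add(hashlib.sha256(image_bytes).hexdigest())

def was_generated_here(image_bytes: bytes) -> bool:
    """Check whether this exact byte sequence was ever generated."""
    return hashlib.sha256(image_bytes).hexdigest() in registry

record_output(b"fake-image-data")
print(was_generated_here(b"fake-image-data"))    # True
print(was_generated_here(b"edited-image-data"))  # False: one edit defeats it
```

Which is exactly the problem with this kind of scheme: an exact-match hash only catches unmodified copies, so anyone willing to make a trivial change slips through.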
But what’s stopping them from using the underlying model on the side, off the books? They could sell or leak the model to external entities. If those entities just generate outputs without any watermarks, detection systems won’t be able to flag them, potentially lending even more legitimacy to the fakes.
And, ultimately, nothing’s stopping independent organizations from developing their own models capable of generating such fakes. What good is limiting the big companies if the technology needed to generate these images is already known, and might be easily reproducible by anybody sooner rather than later?
That said, individual instances of such illegal/immoral services should be dealt with - it’s horrible, but I believe they are inevitable. Pandora’s box was opened when the technology was created; this was going to happen sooner or later, and we have to deal with the results.
Yeah, I tried to get that across with my phrasing… I’m not saying we need to change the technology. I mean it’s out there and it’s too late anyways. Plus it’s a tool, and tools can be used for various purposes, and that’s not the tool’s fault. I’m also not arguing to change how kitchen knives, axes, etc. work, despite their potential to do harm…
But: it doesn’t need to be 100% watertight, or else we can’t do anything. I’m also not keeping my knife collection on the living room table when a toddler is around. But at the same time I don’t need to lock them in a vault… I think we can go 90% of the way, help 90% of people, and that’s better than doing nothing because we strive for total perfection… I’m keeping the bleach and knives somewhere kids can’t reach. And we could say the AI services need to filter images of children. (I think the big ones already do.) And put invisible watermarks in place for all AI-generated content. If anyone decides to circumvent that, that’s on them. But at least we’ve addressed the majority of easy misuse.
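For a sense of what an invisible watermark even is, here’s a minimal sketch of least-significant-bit (LSB) steganography, assuming a grayscale image represented as a flat list of 0-255 pixel values. This is a deliberately naive toy - real services would use schemes robust to compression and editing - but it shows the basic “90% solution” idea: the mark is invisible to a casual viewer, yet trivially stripped by anyone who knows it’s there.

```python
def embed_watermark(pixels, mark: bytes):
    """Hide the bits of `mark` in the lowest bit of the first len(mark)*8 pixels."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    out = list(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit  # overwrite the lowest bit only
    return out

def extract_watermark(pixels, length: int) -> bytes:
    """Read `length` bytes back out of the pixel LSBs."""
    mark = bytearray()
    for i in range(length):
        byte = 0
        for pixel in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (pixel & 1)
        mark.append(byte)
    return bytes(mark)

pixels = [128] * 64                       # 64 dummy mid-gray pixels
tagged = embed_watermark(pixels, b"AI")   # each pixel changes by at most 1
print(extract_watermark(tagged, 2))       # b'AI'
```

Each pixel value shifts by at most 1, so the change is imperceptible - but any re-save through lossy compression destroys it, which is why “if anyone decides to circumvent that, that’s on them” is doing a lot of work in the argument above.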
And I mean that’s already how we do things. For example, spam filters aren’t 100% accurate, and we use them nonetheless.
(And I’m just arguing about service providers. That’s what the majority of people use, and I think those should be forced to do it. But the models themselves should be free. Otherwise, we put a very disruptive technology solely in the hands of some big companies… And if AI is going to change the world as much as people claim, that’s bound to lead us into some sci-fi dystopia where the world revolves around the interests of a few big corporations… And we don’t want that. So we need AI tech to be shaped not just by Meta and OpenAI. IMO that means giving the public access to the technology.)