• mo_ztt ✅@lemmy.world · 10 months ago

    The SynthID watermark is meant to be impossible for you to see in an image but easy for the detection tool to spot. Google’s ready and willing for it to get tested and broken.

    Well this sounds promising!

    That’s as technical as Hassabis and Google DeepMind want to be for now. Even the launch blog post is sparse on details because SynthID is still a new system. “The more you reveal about the way it works, the easier it’ll be for hackers and nefarious entities to get around it,” Hassabis says.

    Oh. Never mind.
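    For anyone curious what “invisible to you, obvious to the detector” even means in principle: here’s a toy sketch using least-significant-bit embedding. To be clear, this is NOT how SynthID works — Google hasn’t published details, and the function names and pixel data here are made up for illustration. It just shows how a mark can be imperceptible to the eye yet trivially machine-checkable.

```python
# Toy invisible-watermark sketch (illustrative only, not SynthID's method).
# Idea: hide a known bit pattern in the least-significant bits of pixel
# values. Changing a pixel by at most 1 out of 255 is imperceptible, but a
# detector that knows the pattern can check for it exactly.

def embed(pixels, pattern):
    """Hide `pattern` (list of 0/1 bits) in the LSBs of `pixels` (0-255 ints)."""
    return [(p & ~1) | bit for p, bit in zip(pixels, pattern)]

def detect(pixels, pattern):
    """Return True if the LSBs of `pixels` match `pattern`."""
    return all((p & 1) == bit for p, bit in zip(pixels, pattern))

pixels = [120, 33, 250, 7, 88, 199, 64, 15]   # fake "image" data
pattern = [1, 0, 1, 1, 0, 0, 1, 0]            # the secret mark

marked = embed(pixels, pattern)
print(detect(marked, pattern))                          # detector finds the mark
print(max(abs(a - b) for a, b in zip(pixels, marked)))  # max pixel change: 1
```

    It also shows why the “processed out” worry is real: re-encoding or resizing an image scrambles LSBs instantly, which is presumably why Google claims SynthID survives common transformations — and why they won’t say how.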

    Also, on an unrelated note: I actually think the danger of deepfakes manufacturing evidence for things that never happened is, as a political problem, a little overblown. I say that because Fox News and Donald Trump have already built a whole alternate reality for their fans to inhabit, and all it took was bald-faced lying.

    Maybe I’m wrong, but it might even be counterproductive for them to base that alternate reality on cunning fakes that stand up to scrutiny (e.g. fakes that pass a SynthID check because the watermark was cleverly processed out). The “big lie” strategy is working fine, and I don’t think they’d want to lead people down the path of “verify the evidence I’m presenting and make sure for yourself that it’s genuine”… it’s easier and safer to present bullshit, swear it’s gold, and have the followers call it gold because that’s what they were told.

    Like I say, I do support trying to address the is-it-real problem (e.g. if a video is presented in court, it would be nice if the court had a way to verify that it’s genuine and not AI-generated), but the “fake news” problem is, unfortunately, a totally separate class of problem.