• GeneralRetreat@beehaw.org · 1 year ago

    since C2PA relies on creators to opt in, the protocol doesn’t really address the problem of bad actors using AI-generated content. And it’s not yet clear just how helpful the provision of metadata will be when it comes to media fluency of the public. Provenance labels do not necessarily mention whether the content is true or accurate.

    Interesting approach, but I can’t help but feel the actual utility is fairly limited. For example, I could see it being useful for large corporate creative studios that have contractual / union agreements that govern AI content usage.

    If they’re using enterprise tools that build in C2PA, it’d give them a metadata audit trail showing exactly when and where AI was used.
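
    As a rough illustration (my own sketch, not anything from the article): in JPEGs, C2PA stores its manifest in APP11 JUMBF segments, so you can at least detect that a manifest is present without the full SDK. Real verification should go through `c2patool` or the C2PA SDK; the function name here is made up.

    ```python
    # Heuristic check for a C2PA manifest in a JPEG: scan APP11 (0xFFEB)
    # segments, which carry the JUMBF manifest store, for the "c2pa" label.
    # Detects presence only; it does NOT validate signatures.
    import struct
    import sys

    def has_c2pa_manifest(path: str) -> bool:
        with open(path, "rb") as f:
            data = f.read()
        if not data.startswith(b"\xff\xd8"):           # SOI marker: not a JPEG
            return False
        i = 2
        while i + 4 <= len(data):
            if data[i] != 0xFF:                        # lost marker sync; give up
                break
            marker = data[i + 1]
            if marker == 0xDA:                         # SOS: compressed data follows
                break
            seg_len = struct.unpack(">H", data[i + 2:i + 4])[0]
            payload = data[i + 4:i + 2 + seg_len]
            if marker == 0xEB and b"c2pa" in payload:  # APP11 JUMBF segment
                return True
            i += 2 + seg_len
        return False

    if __name__ == "__main__":
        print(has_c2pa_manifest(sys.argv[1]))
    ```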

    That’s of little use in the contexts where flagging AI content matters most, though. As the quoted passage notes, provenance data is applied at the point of creation, and in a world with open-source forks of generation models there’s no way to ensure provenance tagging is built in.
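
    And even where tagging is present, stripping it is trivial. A minimal sketch (my example, hypothetical filenames): re-encoding the pixels with Pillow writes a fresh JPEG and, as far as I know, drops the APP segments that hold the C2PA manifest.

    ```python
    # Re-encode an image with Pillow; unknown APP segments (including the
    # APP11 JUMBF blocks C2PA uses) are not carried over to the new file.
    from PIL import Image

    def strip_provenance(src: str, dst: str) -> None:
        img = Image.open(src)
        img.save(dst, "JPEG", quality=95)  # fresh file, metadata left behind

    strip_provenance("tagged.jpg", "untagged.jpg")
    ```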

    This technology is most needed to combat AI-powered misinformation campaigns, yet that is the use case it is least able to address.

  • vhstape@beehaw.org · 1 year ago

    I really like this idea, but I don’t think it should be opt-in. Generative AI tools have such a high potential for misuse that some form of provenance should be baked into the network architecture.
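
    For what “baked in” might mean in practice, here’s my sketch (hypothetical service code; Ed25519 via the real `cryptography` package): a hosted generation endpoint could sign every output before it leaves the server, so honest services can’t simply forget to tag. Of course, as GeneralRetreat points out, self-hosted open-source models still escape this entirely.

    ```python
    # Sketch: a generation service that signs every output it returns.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    SERVICE_KEY = Ed25519PrivateKey.generate()   # in practice: a long-lived, HSM-held key

    def generate_image(prompt: str) -> bytes:
        return b"\x89PNG\r\n" + prompt.encode()  # stand-in for the actual model call

    def serve(prompt: str) -> dict:
        image = generate_image(prompt)
        sig = SERVICE_KEY.sign(image)            # 64-byte Ed25519 signature over the bytes
        return {"image": image, "signature": sig.hex()}

    # Anyone holding the service's public key can check an image's origin;
    # verify() raises InvalidSignature if the bytes were altered.
    result = serve("a watercolor fox")
    SERVICE_KEY.public_key().verify(bytes.fromhex(result["signature"]), result["image"])
    ```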