• The Doctor@beehaw.org · 11 months ago

    I think the best PR strategy there would be to just say nothing, just like they did when they were mirroring folks’ sites to collect images and text to train their models on. By the time folks realized that their stuff had been used for training without their consent, it was far too late, and here we are.

    In other words, it won’t stop AI companies, because they’re already bad actors, and they’re acting just like bad actors do anyway. “We’re not evil” isn’t even a thing they bother saying anymore, because everybody already knows it’s only meaningless mouth noises (case in point, the Big G).

    • frog 🐸@beehaw.org · 11 months ago

      I think they got away with a lot of stuff because they were largely operating outside of public awareness. I fully agree that they’re bad actors, but they’re definitely trying to pull the “we’re not evil” thing. Nobody who keeps up with technology news believes it, but a lot of the general public seem to.

      What I think these tools mostly achieve is letting artists make it very explicit that they don’t give consent for their work to be used. It takes the work out of the grey area of “anything on the internet can be used”, because when an artist runs their photos/artworks through tools like this, that’s a deliberate act to deny consent. It won’t stop the AI developers from trying to get around it (and there’s going to be an arms race between them and the developers of the tools), but it cuts off any “consent is assumed by posting it online” arguments (which were weak to start with, but still). Honestly, these tools are a short-term measure until the law catches up. And it seems very unlikely that the concept of copyright is going to be abolished for AIs.