Police in 20 countries have taken down a network that distributed images of child sexual abuse entirely generated by artificial intelligence.

The operation – which spanned European countries including the United Kingdom, as well as Canada, Australia and New Zealand – is “one of the first cases” involving AI-generated child sexual abuse material, according to a press release from Europol, Europe’s law enforcement agency, which supported the action.

Danish authorities led the operation, which resulted in 25 arrests, 33 house searches and 173 devices being seized.

  • Opinionhaver@feddit.uk · 1 day ago

    Even then it’s not “just as problematic”, because unlike the production of the training data itself, the images it outputs don’t require any further abuse. It’s definitely a moral gray zone, but I don’t think anyone would seriously argue that producing AI pictures of this sort is just as bad as taking pictures of actual abuse.

    However, I believe the logic here is that these people have likely trained their own models on their own database of images. So while the authorities are going after people based on AI pictures, it’s not really the AI pictures they’re after.