A trial program conducted by Pornhub in collaboration with UK-based child protection organizations aimed to deter users from searching for child sexual abuse material (CSAM) on its website. Whenever CSAM-related terms were searched, a warning message and a chatbot appeared, directing users to support services. The trial reported a significant reduction in CSAM searches and an increase in users seeking help. Despite limitations in the data and the difficulty of measuring deterrence, the chatbot showed promise in discouraging illegal behavior online. While the trial has ended, the chatbot and warnings remain active on Pornhub’s UK site, with hopes that similar measures will be adopted across other platforms to create a safer internet environment.

  • HonoraryMancunian@lemmy.world · 9 months ago (+5/−1)

    Another question is, how will the authorities know the difference? An actual CSAM-haver can just claim it’s AI-generated.

        • cumming_normi@yiffit.net · 8 months ago (+4/−4)

          Because “CSAM” states abuse as the third word in the acronym. A machine-learning model could, in theory (I lack knowledge of the current implementations), be trained without any children being abused (in any traditional sense, anyway) and then used to produce such content without any real children being involved (setting aside the training data).

          The downvotes likely come from a difference in definition between abuse and CP: images of nonexistent people cannot realistically harm anyone.

          • FilthyHookerSpit@lemmy.world · 8 months ago (+5/−1)

            Personally, I don’t think it’s arbitrary. A child in a sexual scenario is a depiction of abuse. Normal, healthy children don’t engage in such behaviors.