• Rimu@piefed.social · 4 months ago

    or they could just comply with the law:

    sites will have to provide a reason to users when their content or account has been moderated, and offer them a way of complaining and challenging the decision. There are also rules around giving users the ability to flag illegal goods and services found on a platform.

    Doesn’t seem like a big deal to me.
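
    As a rough illustration of what complying with that could look like in practice, here is a minimal sketch of a moderation notice and a user report; every name and field is hypothetical and not taken from the law's actual text:

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ModerationNotice:
        """Hypothetical notice sent to a user whose content or account was moderated."""
        content_id: str
        action_taken: str   # e.g. "post removed", "account suspended"
        reason: str         # which rule the content was found to break
        appeal_url: str     # where the user can complain and challenge the decision

    @dataclass
    class IllegalGoodsReport:
        """Hypothetical report a user files to flag illegal goods or services."""
        content_id: str
        reporter_id: str
        description: Optional[str] = None
    ```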

    • tuhriel · 4 months ago

      It is, at the scale they’re working at. There’s a reason you can’t get an actual person to contact you… it’s too expensive to have actual people working these cases.

      • Zworf@beehaw.org · 4 months ago

        It’s mostly actual people. I know some of them at different platforms (for some reason this city has become a bit of a moderation hub). Most of these companies take moderation very seriously, and where AI is involved, it’s so far only in an advisory capacity. Twitter is the exception because… well, Elon.

        But their work is strictly internally regulated, based on a myriad of policies (most of which are not made public, specifically to prevent bad actors from working around them). There usually isn’t much to discuss with a user, nor could such a discussion really go anywhere. Before a ban is issued, the case has already been reviewed by at least two people, and the reviewers’ ‘accuracy’ is constantly monitored by QA staff.
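
        As a loose sketch of that review gate (the names, the two-reviewer threshold and the accuracy check are assumptions taken from this comment, not any platform's actual system), it could look roughly like this:

        ```python
        from dataclasses import dataclass, field

        @dataclass
        class Review:
            reviewer_id: str
            upheld: bool     # reviewer agrees the content breaks policy
            policy_ref: str  # internal policy the decision cites

        @dataclass
        class ModerationCase:
            case_id: str
            reviews: list[Review] = field(default_factory=list)

            def ban_can_be_issued(self, min_reviewers: int = 2) -> bool:
                # A ban only goes out once at least `min_reviewers` distinct
                # reviewers have looked at the case and upheld the violation.
                upholding = {r.reviewer_id for r in self.reviews if r.upheld}
                return len(upholding) >= min_reviewers

        def reviewer_accuracy(qa_agreements: list[bool]) -> float:
            # QA-style metric: share of a reviewer's sampled decisions that QA agreed with.
            return sum(qa_agreements) / len(qa_agreements) if qa_agreements else 0.0
        ```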

        Most are also very strict with their employees: no remote work, no phones on the work floor, strong oversight, etc., to make sure cases are handled personally and employees don’t share screenshots of private data.

        And most of them have a psychologist on site 24/7. It’s not much fun watching the stuff these people have to deal with on a daily basis. I don’t envy them.