Why YSK: Beehaw defederated from Lemmy.World and Sh.itjust.works, effectively shadowbanning anyone from those instances. You will not be able to interact with their users or posts.

Edit: A lot of people are asking why Beehaw did this. I want to keep this post informational and not color it with my personal opinion. I am adding a link to the Beehaw announcement; if you are interested in reading it, you can form your own views. https://beehaw.org/post/567170

  • SpaceCowboy@lemmy.ca · 1 year ago

    > But how do you proceduralize moderation?

    You don’t. That’s something you need a person to do.

    All the big corporations have been spending ridiculous amounts of money on algorithms to solve these problems and what have they come up with? Does it feel like the algorithms on the corporate social media sites have been working well?

    You can’t come up with an algorithm that can solve human interaction. People will just constantly probe any algorithm to discover its weaknesses and exploit them. They’ll come up with systems of code words, stochastic terrorism, and implied threats of violence that an algorithm won’t notice but the recipient of the message will understand.

    One of the effects of social media has been that it’s convinced everyone that people shouldn’t be trusted. That may be true, but it seems we can’t trust algorithms either. We just have to accept that no system humans are involved in can ever be perfect. The best we can do is try to identify people who are intelligent, responsible, and exercise good judgment to do the job of moderation. Sure, people will make mistakes, but so do algorithms. Unlike algorithms, though, people are capable of empathy. There are certainly bad people out there, but there are more good people than bad people. And the bad people will exploit an algorithm more easily than they can manipulate an intelligent person with good judgment.

    > Is it good? Or rather, is it good enough?

    I think good enough is all that’s possible in any system that involves humans. And social media is going to involve humans, no way around that. But that’s fine, isn’t it? It’s good enough.

    > If we start to see a high degree of polarization among the instances of lemmy, what is the right thing to do about that?

    Well, everyone has a right to say what they want. But everyone else has the right to ignore people they aren’t interested in listening to. I don’t see things like defederation as a bug; it’s a feature. I think it can be improved by making it clearer to users what’s happening. Maybe there should be an in-between state where instances aren’t completely defederated, but the admin can flag some servers as having questionable content that users on their server have to opt in to see.
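    As a rough illustration of that in-between state, here’s a minimal sketch of a three-way federation policy with per-user opt-in. Everything here is hypothetical — the names (FederationPolicy, Instance, User, can_see) and the model itself are made up for illustration, not Lemmy’s actual code or API:

```rust
// Hypothetical three-state federation policy with per-user opt-in.
// All names and structures are illustrative, not Lemmy's implementation.
use std::collections::{HashMap, HashSet};

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum FederationPolicy {
    Federated,   // content flows normally
    Restricted,  // content hidden unless the user opts in
    Defederated, // no interaction at all
}

struct Instance {
    // Admin-set policy per remote instance, keyed by domain.
    policies: HashMap<String, FederationPolicy>,
}

struct User {
    // Remote instances this user has explicitly opted in to see.
    opted_in: HashSet<String>,
}

impl Instance {
    fn can_see(&self, user: &User, remote_domain: &str) -> bool {
        match self
            .policies
            .get(remote_domain)
            .copied()
            .unwrap_or(FederationPolicy::Federated)
        {
            FederationPolicy::Federated => true,
            FederationPolicy::Restricted => user.opted_in.contains(remote_domain),
            FederationPolicy::Defederated => false,
        }
    }
}

fn main() {
    let mut policies = HashMap::new();
    policies.insert("example.social".to_string(), FederationPolicy::Restricted);
    let instance = Instance { policies };

    let cautious = User { opted_in: HashSet::new() };
    let curious = User { opted_in: HashSet::from(["example.social".to_string()]) };

    assert!(!instance.can_see(&cautious, "example.social")); // hidden by default
    assert!(instance.can_see(&curious, "example.social"));   // visible after opt-in
    assert!(instance.can_see(&cautious, "other.example"));   // unlisted servers federate normally
}
```

    The point of the Restricted state is that nobody loses access outright: the admin signals a warning, and each user makes the final call.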

    The key here is to get away from the idea of controlling content and controlling the users. Maximize choice. People choose their server. The admin can choose to ban them. The user can then choose another server (or even set up their own). Users choose a server based on its moderation policies and which servers it’s federated with. Admins choose which servers to federate with. Users can choose not to view content from certain servers. Mods choose which server their communities are hosted on and can also choose to ban users. Users choose communities.
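    To make the “maximize choice” idea concrete, here’s a hedged sketch of how those layered choices could stack when deciding whether a single post is shown: the admin’s defederation list, the mods’ bans, and the user’s own blocked servers each get a veto. Again, every name and field below is illustrative, not Lemmy’s schema:

```rust
// Hypothetical layering of admin, mod, and user choices for post visibility.
struct Post<'a> {
    home_instance: &'a str, // where the community is hosted (the mods' choice)
    author: &'a str,
}

struct Filters<'a> {
    admin_defederated: &'a [&'a str],      // admin's choice of which servers to cut off
    mod_banned_users: &'a [&'a str],       // mods' per-community bans
    user_blocked_instances: &'a [&'a str], // the user's own "don't show me this server" list
}

fn visible(post: &Post, f: &Filters) -> bool {
    // Any layer can hide the post; none of them can force it on anyone.
    !f.admin_defederated.contains(&post.home_instance)
        && !f.mod_banned_users.contains(&post.author)
        && !f.user_blocked_instances.contains(&post.home_instance)
}

fn main() {
    let post = Post {
        home_instance: "example.social",
        author: "someone@example.social",
    };
    let filters = Filters {
        admin_defederated: &[],
        mod_banned_users: &[],
        user_blocked_instances: &["example.social"],
    };
    // The admin and the mods allow it, but this user chose not to see that server.
    assert!(!visible(&post, &filters));
}
```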

    Yup. It’s all one big mess. But any system with humans making choices is always going to be a mess.

    We’ve tried the corporate model with algorithmic control over everything. It was a failure. So let’s get messy!

    > Of course, if the bots do end up more convincingly human than humans can ever be, who am I to say they don’t deserve a larger cut of our power?

    I’m a fan of Philip K. Dick’s work. Also Robocop. What’s the difference between a human mind and an algorithm? Turing was wrong about it being intelligence, because humans are dumb as fuck. It’s empathy. That’s the difference.

    The corporations didn’t just take away Alex Murphy’s humanity, they were taking away everyone’s humanity. Very few people in Robocop have any empathy for anyone else.

    Why would you flip over a tortoise in a desert? You wouldn’t. Because you’re a human and you have empathy.

    The only way an AI would be indistinguishable from a human is if it had empathy. But if the AI has empathy, it would be on our side, not on the side of an evil corporation.

    Anyway I’m tired, not sure if this makes sense.

    Good night!