Why YSK: Beehaw has defederated from Lemmy.World and Sh.itjust.works, effectively shadowbanning anyone from those instances. You will not be able to interact with their users or posts.

Edit: A lot of people are asking why Beehaw did this. I want to keep this post informational and not color it with my personal opinion, so I am adding a link to the Beehaw announcement; if you are interested in reading it, you can form your own views. https://beehaw.org/post/567170

  • SpaceCowboy@lemmy.ca · 1 year ago

    Yeah, lemmy.world has a more open sign-up process, which is a double-edged sword. It’s good in that it’s easier to set up an account and start talking.

    But the other side of it is that it’s also easier for shitty people to sign up. The kind of people that will say shitty things to the LGBTQ+ communities on beehaw.

    So yeah, you might want to consider signing up for an account on an instance that’s a little more selective. You’ll probably have to write up a few paragraphs introducing yourself, and it might take a little time for it to be reviewed.

    • t0e@lemmy.world · 1 year ago

      AI is going to mess with that process so fast I’d be surprised if it hasn’t happened already. While that seems unavoidable, it’s still probably a good idea to have the personal-question text box for now. But it seems like only a stopgap. We’ll need something better.

      But how do you proceduralize moderation? Even though it will raise operating costs, it might be necessary to host our own AI on the back end of each opted-in instance, and provide the tools to train it on content that the admins of that instance find objectionable.

      There would be growing pains of course, where some of our comments are held to be reviewed by participating moderators, who are themselves selected by an AI trained on content the admins of the instance find to be exceptional. And it would help to label and share the tensors we mine from this, so a new instance could gain access to a common model and quickly select a few things they don’t want in their instance, even giving them the ability to automatically generate a set of rules based on the options they selected when building the AI for their instance.
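The per-instance classifier idea could be sketched as a toy example. Everything below is hypothetical: a from-scratch Naive Bayes text model standing in for whatever real model an instance would actually train, with made-up training snippets and the labels "ok"/"flag" chosen purely for illustration.

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

class TinyModerationModel:
    """Hypothetical sketch: Naive Bayes classifier trained per instance."""

    def __init__(self):
        self.word_counts = {"ok": Counter(), "flag": Counter()}
        self.doc_counts = {"ok": 0, "flag": 0}

    def train(self, text, label):
        # Admins label examples of acceptable and objectionable content.
        self.doc_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def score(self, text, label):
        # Log prior plus Laplace-smoothed log likelihood of each token.
        prior = math.log(self.doc_counts[label] / sum(self.doc_counts.values()))
        vocab = set(self.word_counts["ok"]) | set(self.word_counts["flag"])
        total = sum(self.word_counts[label].values())
        return prior + sum(
            math.log((self.word_counts[label][w] + 1) / (total + len(vocab)))
            for w in tokenize(text)
        )

    def classify(self, text):
        return max(("ok", "flag"), key=lambda lbl: self.score(text, lbl))

# Made-up training data; a real instance would use its own moderation log.
model = TinyModerationModel()
model.train("welcome friend great post", "ok")
model.train("thanks for sharing this", "ok")
model.train("you people are awful slurs", "flag")
model.train("awful hateful slurs here", "flag")
print(model.classify("awful slurs"))        # flag
print(model.classify("great post friend"))  # ok
```

Sharing the learned counts (or, for a real model, the weights) between opted-in instances is what would let a new instance start from a common baseline instead of from zero.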

      It would take some time for all the instances to figure out which groups they do and don’t want to connect with, both in terms of internal users and external instances. I think you’d end up with two distinct clumps of communities that openly communicate within their clump, with a bigger, blurrier clump of centrists between them, with whom most communities communicate. But on either side there would almost certainly be tiny factions clumped together who don’t communicate with most of the centrist groups, on the basis that the centrists communicate with the other side. And there will always be private groups as well, some of which may choose their privacy on the basis that they refuse to communicate with any group that communicates with the centrist cloud.

      And in most of our minds, the two groups in question are probably political, but I think a similar pattern will play out in any sufficiently large network of loosely federated instances, even if the spectrum is what side of a sports rivalry you’re on. If we get to the point where there’s an instance or more in almost every household, we may be able to see these kinds of networks form in realtime.
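The clumping described here is essentially connected components in the federation graph: treat instances as nodes and mutual federation as edges, and the clumps fall out. A minimal sketch with entirely made-up instance names:

```python
from collections import deque

# Hypothetical federation map: instance -> set of instances it federates with.
federation = {
    "alpha.example": {"beta.example"},
    "beta.example": {"alpha.example", "centrist.example"},
    "centrist.example": {"beta.example", "gamma.example"},
    "gamma.example": {"centrist.example", "delta.example"},
    "delta.example": {"gamma.example"},
    "island.example": set(),  # defederated from everyone
}

def clumps(graph):
    """Group instances into connected 'clumps' via breadth-first search."""
    seen, groups = set(), []
    for start in graph:
        if start in seen:
            continue
        group, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in seen:
                continue
            seen.add(node)
            group.add(node)
            queue.extend(graph.get(node, ()))
        groups.append(group)
    return groups

print([sorted(g) for g in clumps(federation)])
```

Run periodically over a real federation map, this would let you watch the predicted polarization form (or not) as instances defederate.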

      But the question I can’t seem to answer: Is it good? Or rather, is it good enough?

      People always think of what they would do if they had a time machine and could go back and “change things.” But in terms of federated social media, we already are back, almost at the start. So, if we’re going to think of a better way, now would be a good time.

      If we start to see a high degree of polarization among the instances of lemmy, what is the right thing to do about that? To all turn our backs, take our content and go home, make sure they have to have accounts on our side to see it, and if they ever make a subversive comment on our side of the fence, it’s removed before a human can ever see it, only spot-checked occasionally to make sure the bot is not being too harsh? Because that is one way of doing it, and maybe it’s the right way. If we train the AI well enough. Which depends on many of us doing that well enough across many instances. Maybe that is how you defeat Nazis, to make sure they can only talk about Nazi things in a boring wasteland of their own design.

      But I worry. Once instances are better networked, becoming more about quantity than size, and billionaires are able to set up “instance farms” where AI bots try to influence the rest of the fediverse en masse, will we be ready to head it off? Or, similar to how we can’t see the Nazis crawling out from their wasteland to get higher-quality memes, will we end up palling around with bots designed to make our society trend toward slavery while their energy consumption raises the cost of the electricity we have to work for? Of course, if the bots do end up more convincingly human than humans can ever be, who am I to say they don’t deserve a larger cut of our power?

      • SpaceCowboy@lemmy.ca · 1 year ago

        But how do you proceduralize moderation?

        You don’t. That’s something you need a person to do.

        All the big corporations have been spending ridiculous amounts of money on algorithms to solve these problems and what have they come up with? Does it feel like the algorithms on the corporate social media sites have been working well?

        You can’t come up with an algorithm that can solve human interaction. People will just constantly probe any algorithm to discover its weaknesses and exploit them. They’ll come up with systems of code words, stochastic terrorism, and implied threats of violence that an algorithm won’t notice but the recipient of the message will understand.

        One of the effects of social media has been that it’s convinced everyone that people shouldn’t be trusted. That may be true, but it seems we can’t trust algorithms either. We just have to accept that no system that humans are involved in can ever be perfect. The best we can do is try to identify people who are intelligent, responsible, and exercise good judgment to do the job of moderation. Sure, people will make mistakes, but so do algorithms. But unlike algorithms, people are capable of empathy. There are certainly bad people out there, but there are more good people than bad people. And the bad people will exploit an algorithm more easily than they can manipulate an intelligent person with good judgment.

        Is it good? Or rather, is it good enough?

        I think good enough is all that’s possible in any system that involves humans. And social media is going to involve humans, no way around that. But that’s fine isn’t it? It’s good enough.

        If we start to see a high degree of polarization among the instances of lemmy, what is the right thing to do about that?

        Well, everyone has a right to say what they want. But everyone else has the right to ignore people they aren’t interested in listening to. I don’t see things like defederation as a bug; it’s a feature. I think it can be improved by making it clearer to users what’s happening. Maybe there should be an in-between state where instances aren’t completely defederated, but the admin can indicate that some servers have questionable content which users on their server have to opt in to see.
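The in-between state suggested here could be modeled as a three-valued federation policy plus a per-user opt-in set. A sketch under assumed names (none of this is real Lemmy API; the policy values and instance names are made up):

```python
from enum import Enum

class FederationPolicy(Enum):
    FEDERATED = "federated"      # content shown to everyone
    OPT_IN = "opt-in"            # hidden unless the user opts in
    DEFEDERATED = "defederated"  # never shown

# Hypothetical admin-set policies for remote instances.
instance_policy = {
    "friendly.example": FederationPolicy.FEDERATED,
    "edgy.example": FederationPolicy.OPT_IN,
    "hostile.example": FederationPolicy.DEFEDERATED,
}

def visible(instance, user_opt_ins):
    """Decide whether a user sees content from a remote instance."""
    policy = instance_policy.get(instance, FederationPolicy.FEDERATED)
    if policy is FederationPolicy.DEFEDERATED:
        return False  # admin decision overrides the user
    if policy is FederationPolicy.OPT_IN:
        return instance in user_opt_ins  # user decision
    return True

print(visible("edgy.example", set()))                   # False
print(visible("edgy.example", {"edgy.example"}))        # True
print(visible("hostile.example", {"hostile.example"}))  # False
```

The design choice this encodes is exactly the comment’s point: the admin sets the outer bound, but inside that bound the user keeps the choice.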

        The key here is to get away from the idea of controlling content and controlling the users. Maximize choice. People choose their server. The admin can choose to ban them. The user can then choose another server (or even set up their own). The users choose the server based on its moderation policies and which servers it’s federated with. Admins choose which servers to federate with. Users can choose not to view content from certain servers. Mods choose which server their communities are hosted on and can also choose to ban users. Users choose communities.

        Yup. It’s all one big mess. But any system with humans making choices is always going to be a mess.

        We’ve tried the corporate model with algorithmic control over everything. It was a failure. So let’s get messy!

        Of course, if the bots do end up more convincingly human than humans can ever be, who am I to say they don’t deserve a larger cut of our power?

        I’m a fan of Philip K. Dick’s work. Also RoboCop. What’s the difference between a human mind and an algorithm? Turing was wrong about it being intelligence, because humans are dumb as fuck. It’s empathy. That’s the difference.

        The corporations didn’t just take away Alex Murphy’s humanity, they were taking away everyone’s humanity. Very few people in RoboCop have any empathy for anyone else.

        Why would you flip over a tortoise in a desert? You wouldn’t. Because you’re a human and you have empathy.

        The only way an AI would be indistinguishable from a human is if it had empathy. But if the AI has empathy, it would be on our side, not on the side of an evil corporation.

        Anyway I’m tired, not sure if this makes sense.

        Good night!