There is clearly a problem: most of the politics and news communities on Lemmy are unpleasant places to take part in discussion. People yell at each other. Disagreements tend to consist of stating your opinion and insulting anyone who doesn’t share it, or of a bunch of quick one-off statements like “well I think it’s this way” or “no, you’re wrong,” which add nothing. I’ve heard more than one person say that they simply don’t participate in politics or news communities because of it.

Well, behold:

I have made some technology which takes a much heavier-handed approach to moderation: it detects assholes, and people who aren’t really contributing to the conversation, in other communities, and bans them preemptively en masse. In its current form, it bans about half of hexbear and lemmygrad, and almost all of the users who post a nonstop stream of obnoxiously partisan content. You know the ones.

In practice it’s basically a whitelist for posting that’s easy to get on: Just don’t be a dick.

I’d like to try the experiment of having a political community with this software running the banlist, and see how it works in practice, and maybe expand it to a news community that runs the same way. There’s nothing partisan about the filtering. You can have whatever opinion you want. You just can’t be unproductive or an asshole about the way you say your opinion. And the bans aren’t permanent, they are transient based on the user’s recent past behavior.
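To make the “transient ban” idea concrete, here is a purely hypothetical sketch; none of these names, scores, or thresholds come from the actual bot, whose internals aren’t public. The point it illustrates is that only recent history counts, so a user’s standing recovers on its own as old behavior ages out of the window.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical illustration only: the real bot's inputs and scoring
# are deliberately not public, so everything here is invented.

@dataclass
class Comment:
    created: datetime
    score: float  # some per-comment "productiveness" rating, however derived

def is_whitelisted(history: list[Comment], window_days: int = 30,
                   threshold: float = 0.0) -> bool:
    """Bans are transient: only comments inside the recent window count."""
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = [c.score for c in history if c.created >= cutoff]
    if not recent:
        return True  # no recent activity: nothing to hold against the user
    return sum(recent) / len(recent) >= threshold
```

Under this toy model, a user banned for a bad stretch gets back on the whitelist automatically once that stretch falls outside the window, with no appeal or manual unban required.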

(Edit: I think making a general news community might fit better with slrpnk than a politics one. Thinking about it and talking with people, I believe electoral politics just doesn’t belong in the slrpnk feed, but general news specifically, with the political bickering that comes along with it muted, could be a positive for the instance at the same time as I get to test out my little software project.)

I don’t want to explain in too much detail how the tech works, because I think some segment of assholes will want to evade it to come into the community and be assholes again. But I’d also want to set up a meta community where anyone who did get banned can ask questions or make complaints about it. (As long as that offering doesn’t turn into too much of a shit show, that is.)

Is slrpnk a place where a little experiment like this could find a good home? What does everyone think of the idea?

  • I have some questions:

    1. Will there be discussion before banning or only after banning?

    2. Will the ban system be reviewed regularly and by whom?

    3. Are you open to discussing the technology you claim to have built for this? Denying transparency and relying on the security-by-obscurity of a closed-source algorithm makes me question it, and also reminds me of moderation on Meta and YouTube.

    4. Have you attempted this method of tone policing with manual moderation in any communities first? If so, how did it go?

    5. Is this post satire?

    • auk@slrpnk.netOP
      26 days ago

      My vision is that if some person is unable to post and wants to ask why, I can give them some sort of answer (similar to what I said to Alice in another message here). The ban decision is never permanent, either; it’s based on the user’s recent and overall posting history. If you want to be on the whitelist, there’s specific guidance on what you “did wrong,” so to speak, and if you decide the whole thing is some mod-overreach, one-viewpoint whitewash and you want no part of it, that’s okay too. My hope is that it winds up being a pleasant place to discuss politics without being oppressive to anyone’s freedom of speech or coming across as arbitrary, but that is why I want to try the experiment. Maybe the bot turns out in practice to be a capricious asshole and people decide that it (and I) are not worth dealing with.

      The whole thing is more of a private-club model (we’ll let you in, but you have to be nice), different from the current moderation model. The current implementation would want to exclude about 200 users altogether. Most are from hexbear or lemmygrad. (And 3 from slrpnk. I haven’t investigated what those people did that it didn’t like.)

      Specific answers to your questions:

      1. Only after. The scale means it would be unworkable to try to talk to every single person beforehand. Transparency afterwards, talking with people who wanted to post and found out they couldn’t, is I think an important part.
      2. I think necessarily yes. I envision a community specifically for ban complaints and explanations for people who want them, although maybe that would develop into a big time sink and anger magnet. I would hope that after a while people will trust that it’s not just me secretly making a list of people I don’t like, or something, and then that type of thing will quiet down, but in the beginning it has to be that way for there to be any level of trust, if I’m trying to keep the algorithm a secret.
      3. It’s a fair question. Explaining how the current model works exposes some ways to game the system and post obnoxious content without the bot keeping you out. But I like the current model’s performance at this difficult task, so I want to keep the way it works now and keep it secret. I realize that’s unsatisfying, of course. I’m not categorically opposed to publishing the full details, even making it open source, so people can have transparency; then, if people put in the effort to dodge around it, we deal with that as it comes.
      4. None.
      5. Not at all.

      I thought about calling the bot “unfairbot,” just to prime people for the idea that it’s going to make unfair decisions sometimes. Part of the idea is that, because it’s not a person making personal decisions, it can be much more heavy-handed at tone policing than any human moderator could be without being a total raging oppressive jerk.

      • Can you please comment on:

        1. What programming and/or scripting languages are used in your tool
        2. Whether it uses an LLM
        3. How the algorithm functions from a high level
        4. What user data is stored on your machine
        5. If user data is stored (per 4), any measures taken to secure that data and maintain privacy

        My intention is not to be pedantic, but to learn more about your proposed solution. I do appreciate your thoughtful answers in the comments here.

        • auk@slrpnk.netOP
          24 days ago

          I don’t want to go into any detail on how it works. Your message did inspire me, though, to offer to explain and demonstrate it for one of the admins so there isn’t this air of secrecy. The point is that I don’t want the details to be public and make it easier to develop ways around it, not that I’m the only one who is allowed to know what it is doing.

          I’ll say that it draws all its data from the live database of a normal instance, so it’s not fetching or storing any data other than what every other Lemmy instance does anyway. It doesn’t even keep its own data aside from a little stored scratch pad of its judgements, and it doesn’t feed comment data to any public APIs in a way that would give users’ comments over to be used as training data by God knows who.
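          A purely hypothetical sketch of what such a “scratch pad of judgements” could look like, assuming SQLite for illustration. The table and column names here are invented, not taken from the bot; the point is only that it stores its own revisable verdicts and nothing about users beyond what the instance database already holds.

```python
import sqlite3

# Hypothetical illustration: the bot reads from the instance's existing
# database and writes only its own judgements to a small scratch table.
# Schema and names are invented for this sketch.

def record_judgement(db: sqlite3.Connection, actor_id: str, allowed: bool) -> None:
    # Upsert so a fresh judgement replaces the old one: nothing is permanent.
    db.execute(
        "INSERT INTO judgements (actor_id, allowed) VALUES (?, ?) "
        "ON CONFLICT(actor_id) DO UPDATE SET allowed = excluded.allowed",
        (actor_id, allowed),
    )

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE judgements (actor_id TEXT PRIMARY KEY, allowed INTEGER)")
record_judgement(db, "alice@example.com", True)
record_judgement(db, "alice@example.com", False)  # judgements are revisable
rows = db.execute("SELECT actor_id, allowed FROM judgements").fetchall()
```

Because the scratch pad keeps one current verdict per user rather than an audit log of their content, deleting a row (or the whole table) leaves no trace of anyone’s comments.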