Please. Captcha by default. Email domain filters. Auto-block federation from servers that don’t comply. By default. Urgent.

meme not so funny

And yes, to refute some comments: this post is being upvoted by bots. It only took a single computer, not “thousands of dollars” of spending.

  • jollyroger@lemmy.dbzer0.com
    43 ↑ · 2 ↓ · edited · 1 year ago

    The admin https://lemmy.dbzer0.com/u/db0 of the lemmy.dbzer0.com instance may have built a solution: a chain-of-trust system in which instances whitelist each other and build up larger whitelists to contain the spam/bot problem, instead of constantly blacklisting. Admins and mods might want to take a look at their blog post explaining it in more detail: https://dbzer0.com/blog/overseer-a-fediverse-chain-of-trust/

    • star_boar@lemmy.ml
      16 ↑ · 1 year ago

      db0 probably knows what they’re talking about, but the idea that there would be an “Overseer Control Plane” managed by one single person sounds like a recipe for disaster

      • jollyroger@lemmy.dbzer0.com
        7 ↑ · 1 ↓ · edited · 1 year ago

        I hear you. For what it’s worth, it’s mentioned at the end of the blog post: the project is open source, people can run their own Overseer API and create stricter or looser whitelists, and instances can be registered to multiple chains. Don’t mistake my enthusiasm for self-run, open social media platforms for trying to promote a single tool as the be-all and end-all solution. Under the swiss-cheese security model, this could be another tool in the toolbox to curb the annoyance to the point where spam or bots become less effective.

        • prlang@lemmy.world
          6 ↑ · 1 year ago

          Couldn’t agree more. I gotta say though, I kinda find it funny that the pirate server is the one coming up with practical solutions for dealing with spam in the fediverse. I guess it shouldn’t surprise me though, y’all have been dealing with this distributed trust thing for a while now, eh?

          • FlowerTree@pawb.social
            2 ↑ · 1 year ago

            When you’re swashbuckling pirates on the lawless seven seas, you gotta come up with clever ways to enforce your ship’s code of conduct.

    • Ech@lemmy.world
      18 ↑ · 3 ↓ · 1 year ago

      So, defeating the point of Lemmy? Nah, that’s a terrible “solution” that will only serve to empower big servers imposing on smaller or even personal ones.

      • prlang@lemmy.world
        12 ↑ · 1 ↓ · 1 year ago

        It’s probably the opposite. I’d say that right now, the incentive for a larger server with an actual active user base is to move to a whitelist-only model, given the insane number of small servers with no activity but incredibly high account registrations happening right now. When the people controlling all of those bot accounts start flexing their muscle and flooding the fediverse with spam, it’ll become clear that new and unproven servers have to be cut off. This post just straight-up proves that. It’s the most upvoted Lemmy post I’ve ever seen.

        If I’m right and the flood of spam cometh, then a chain of trust is literally the only way a smaller instance will ever get to integrate with the wider ecosystem. Reaching out to someone and having to register to be included isn’t too much of an ask for me. Hell, most instances require an email for a user account, and some even do questionnaires.

        • Ech@lemmy.world
          1 ↑ · 1 year ago

          When those "someone"s are reasonable, sure, it won’t be bad, but when they’re not? Give the power of federation to a few instances, and that’s not just a possibility, but an inevitability.

          We already know Meta is planning to add themselves to the Fediverse. Set down this path and the someone deciding who gets access and how will end up being Zuck, or someone like him. That sound like a good future to you?

          • prlang@lemmy.world
            1 ↑ · 1 year ago

            Sorry for the late response, I fell asleep.

            Yeah, I’m concerned about that too. It really doesn’t matter what anyone does if a group the size of Meta joins the fediverse, though. They have tens of thousands of engineers working for them and billions of users; they can do whatever the hell they want, and it’ll completely swamp anyone else’s efforts.

            Zuck wanting to embrace, extend, and extinguish the ActivityPub protocol is a separate issue though. The way a chain of trust works, when you grant trust to a third party, they can then extend trust to anyone they want. So for instance, if the root authority “A” grants trust to a second party “B”, then “B” can grant trust to “C”, “D”, and “E”. If “A” has a problem with the users of “E”, the only recourse they have is to talk to “B” and try to get them to remove “E”, or to ban “B” through “E” altogether. I think we can both agree that the latter is super drastic; it mirrors what Beehaw did, and it will piss a lot of people off. (There’s a toy sketch of this delegation idea at the end of this comment.)

            So if you run that experiment, where any particular group can become a “root” authority for the network, I’d speculate that the most moderate administrators will likely end up being the most widely used over time. It’s kinda playing out like that at a small scale right now with the Beehaw/Lemmy.world split: Lemmy.world is becoming the larger instance, while Beehaw is still there, just smaller and more heavily moderated.

            People can pick the whitelists they want to subscribe to. Who gets to participate in a network really just comes down to the values of the people running and participating in it. A chain of trust is just a way to scale people’s values in a formal way.
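
            To make the A-through-E example concrete, here’s a toy sketch of how that kind of delegation could work. To be clear, this is my own back-of-the-napkin model, not the actual Overseer code, and all the names are made up:

            ```python
            # Hypothetical sketch, not the real Overseer implementation: model the chain
            # as "who vouched for whom" and derive the whitelist from it.
            class TrustChain:
                def __init__(self, root: str):
                    self.vouched_by = {root: None}  # instance -> the instance that endorsed it

                def grant(self, voucher: str, newcomer: str) -> None:
                    if voucher not in self.vouched_by:
                        raise ValueError(f"{voucher} is not in the chain")
                    self.vouched_by[newcomer] = voucher

                def revoke(self, instance: str) -> None:
                    # Cutting an instance off also cuts off everything it vouched for.
                    for child in [i for i, v in self.vouched_by.items() if v == instance]:
                        self.revoke(child)
                    self.vouched_by.pop(instance, None)

                def whitelist(self) -> set[str]:
                    return set(self.vouched_by)

            chain = TrustChain("A")
            chain.grant("A", "B")
            for inst in ("C", "D", "E"):
                chain.grant("B", inst)

            chain.revoke("E")         # surgical: only E gets defederated
            print(chain.whitelist())  # {'A', 'B', 'C', 'D'}
            # chain.revoke("B")       # the drastic option: B and everything it vouched for drop out
            ```

            The important property is that revoking “B” is one action with chain-wide consequences, which is exactly why it feels so drastic.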

    • lukas@lemmy.haigner.me
      11 ↑ · 1 ↓ · 1 year ago

      Neat, but I appreciate the email model of spam protection more than simple dumb whitelists. I won’t list my domain on any whitelist as whitelists discourage what Lemmy needs the most: People who run their own instances. At the end of the day, spammers will automate the process of listing themselves, and the person who runs their own instance has to go around doing everything manually.

      • prlang@lemmy.world
        9 ↑ · edited · 1 year ago

        The blog post dives into how it’s hard for spammers to automate adding themselves onto the whitelist, because it’s a chain of trust. You need an existing instance owner to vouch for you, and they can revoke that vouch at any time. A spammer couldn’t do things like run a “clean” instance and then whitelist off of it, because presumably someone would contact the owner of the supposedly “clean” instance to get them to remove the spam. If they don’t respond, or only partially address the issue, it’s possible to pull rank and contact the person further up the chain of trust (toy sketch of that escalation walk at the end of this comment).

        In short, it’s real people talking to each other about spam issues, but in a way that scales, so the owner of one instance doesn’t need to personally know and trust every other instance owner. It should let small, single-user instances get set up about as easily as any other instance; everyone just has to know and talk to someone along the chain.

        The real downside of the system is that people are human, and cliques are going to form that may defederate swathes of the fediverse from each other. I kinda think that’s going to happen anyways though.

        A chain of trust is the best proposal I’ve seen for addressing the scaling issues associated with the fediverse. I’m not associated with that guy at all, just saying I like his idea.

        – edit

        On second thought, getting your instance added to the chain of trust is literally no more difficult than signing up for an instance with a questionnaire. It’s basically that but at the instance level instead of the user level.
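
        As a toy illustration of the “pull rank” part (again, my own sketch under the assumption that each instance records who vouched for it, not how Overseer actually exposes this), finding who to escalate to is just a walk toward the root:

        ```python
        # Hypothetical sketch: every whitelisted instance remembers who vouched for it.
        vouched_by = {
            "root.example": None,
            "goodhost.example": "root.example",
            "spammy.example": "goodhost.example",
        }

        def escalation_path(instance: str) -> list[str]:
            """Who to contact about `instance`, nearest voucher first, up to the root."""
            path = []
            voucher = vouched_by.get(instance)
            while voucher is not None:
                path.append(voucher)
                voucher = vouched_by.get(voucher)
            return path

        print(escalation_path("spammy.example"))
        # ['goodhost.example', 'root.example'] -> ask goodhost.example first, then go over their head
        ```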

        • star_boar@lemmy.ml
          4 ↑ · 1 year ago

          Regarding your edit, it can’t be that easy since spammers could just generate thousands of AI-written responses to questionnaires

          • prlang@lemmy.world
            2 ↑ · edited · 1 year ago

            Right, but an instance owner has to endorse another on an ongoing basis. So, for example, if an instance owner named Bob initially trusts a spammer based on a questionnaire, and that guy immediately generates 100 bot accounts to start spamming with, then Bob can revoke the trust and the spammer’s instance gets defederated.

            You also need to own a domain to run a Lemmy instance. The cheapest ones are only a few dollars a year, which isn’t much, but it does put at least some floor on people’s ability to churn out instances that’ll just get banned.

            • jarfil@lemmy.ml
              2 ↑ · 1 year ago

              Could it be a subdomain, though? What if a spammer started a “Lemmy instance as a service” on “legit.ml”, and started creating instances on “lemmy.u<number>.legit.ml”? What if some of the instances were actually legitimate, while thousands of others weren’t? What if… oh well, the rabbit hole goes deep on this one.

    • mlaga97@lemmy.mlaga97.space
      9 ↑ · 1 year ago

      Obviously biased, but I’m really concerned this will lead to it becoming infeasible to self-host with working federation and result in further centralization of the network.

      Mastodon has a ton more users, and I’m not aware of it having had to resort to IRC-style federation whitelists.

      I’m wondering if this is just another instance of kbin/lemmy moderation tools being insufficient for the task and if that needs to be fixed before considering breaking federation for small/individual instances.

      • Raiden11X@programming.dev
        7 ↑ · 1 year ago

        He explained it already. It looks at the ratio of users to posts. If your “small” instance has 5000 users and 2 posts, it would probably assume a lot of those users are spam bots. If your instance has 2 users and 3 posts, it would assume your users are real. The admin of each server that uses it can control the threshold at which it assumes a server is overrun by spam accounts.
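
        Roughly something like this, as a back-of-the-napkin sketch of the idea; the threshold value here is made up, not db0’s actual number:

        ```python
        # Toy version of the activity-ratio check: flag instances whose registered user
        # count is wildly out of proportion to the content they actually produce.
        def looks_suspicious(users: int, posts_and_comments: int,
                             max_users_per_post: float = 20.0) -> bool:
            activity = max(posts_and_comments, 1)  # avoid dividing by zero
            return users / activity > max_users_per_post

        print(looks_suspicious(users=5000, posts_and_comments=2))  # True  -> probably a bot farm
        print(looks_suspicious(users=2, posts_and_comments=3))     # False -> looks like real people
        ```

        The admin-tunable part is just that max_users_per_post threshold.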

        • Irisos@lemmy.umainfo.live
          2 ↑ · edited · 1 year ago

          The issue is that it could still be abused against small instances.

          For example, I had a bit less than 10 bots trying to sign up to my instance today (I had registration with approval on), and those accounts are reported as instance users even though I refused their registration. Because of this, my comment/post-per-user ratio took a big hit, and there was nothing I could do about it (other than deleting those accounts directly from the db).

          So even if you don’t let spam accounts into your instance, you can easily get excluded from that whitelist, because creating a few dozen thousand account registration requests isn’t that hard, even against an instance protected by a captcha.

        • mlaga97@lemmy.mlaga97.space
          2 ↑ · 1 year ago

          Okay, so how do you bootstrap a new server in that system?

          What do you do when you just created a server and can’t get new users because you aren’t whitelisted yet?

          But what if you do have a handful of users to start out, or it’s just yourself? How do you become ‘active’ without being able to federate with any other servers? Talk to yourself?

      • prlang@lemmy.world
        5 ↑ · 1 year ago

        It’s been answered further below. Yeah, it’s that one bloke at https://lemmy.dbzer0.com/u/db0 who did it. The project’s also open source though, so anyone can run their own Overseer Control server with their own chain-of-trust whitelist. I suspect many whitelists will pop up as the fediverse evolves.