I created an account on mastodon.social a few days ago. A day after creation, the account was suspended. My appeal was denied and no reason was given. Assuming mastodon.social simply wasn’t accepting new accounts, I moved over to mastodon.online and created an account there. Today that account was suspended as well, again with no reason given. I didn’t post anything from either account; my only action was to follow a few people in tech.

Looking at previous posts here, people laugh at complaints about how hard it is to join Mastodon and dismiss it as a simple task. I have now attempted to join two of the most commonly recommended Mastodon servers and been suspended from both. I’m not interested in shotgunning servers until I find one that doesn’t suspend me without reason.

How is Mastodon’s onboarding process supposed to work if the top suggested servers suspend new accounts without warning or reason?

  • RT Redréovič@feddit.ch · 1 year ago

    That is the YouTube video ID for Rick Astley’s ‘Never Gonna Give You Up’. It’s understandable if moderators unknowingly suspend an account with such a username, thinking it’s a bot. But the lack of clarification, or of any reason for the suspension, is incompetence on their part.

    • ilinamorato@lemmy.world · 1 year ago

      Think about an actual bot farm trying to infiltrate a website. They’re creating dozens of new accounts a minute using random strings of characters as usernames, and eventually, to reduce the load on its admins, the website implements algorithmic username screening: if a username doesn’t follow at least some rules of a known language, it’s kicked out and the account banned.
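
      To make that concrete: I obviously don’t know what mastodon.social actually runs, but a first-pass screen could be as dumb as a few hand-tuned heuristics, something like this sketch (thresholds entirely made up for illustration):

      ```python
      # Toy username screen: flag names that don't look like any known language.
      # Thresholds are invented for illustration; a real screen would be tuned on data.
      import re

      VOWELS = set("aeiouy")

      def looks_like_keyboard_mash(username: str) -> bool:
          letters = [c for c in username.lower() if c.isalpha()]
          if len(letters) < 4:
              return False  # too short to judge
          vowel_ratio = sum(c in VOWELS for c in letters) / len(letters)
          longest_consonant_run = max(
              (len(run) for run in re.findall(r"[bcdfghjklmnpqrstvwxz]+", "".join(letters))),
              default=0,
          )
          mixed_case_mid_word = bool(re.search(r"[a-z][A-Z]", username))
          # Random strings tend to be vowel-poor, have long consonant runs,
          # and flip case mid-word (e.g. "dQw4w9WgXcQ").
          return vowel_ratio < 0.2 or longest_consonant_run >= 5 or mixed_case_mid_word

      print(looks_like_keyboard_mash("dQw4w9WgXcQ"))  # True: flagged
      print(looks_like_keyboard_mash("ilinamorato"))  # False: passes
      ```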

      So now the owner of the bot farm realizes that essentially none of their bot usernames are getting through, but maybe this is an opportunity! They could use the bots to try to overwhelm the admins with reports/appeals; tie them up handling those, and maybe they can sneak some human-generated usernames past the goalie and wreak some havoc while the admins are working through the garbage in the appeal backlog.

      At this point, the admins could either turn off new sign-ups for a while, until the bot farm owner gets bored and tries somewhere else (unfortunately this would probably mean a bunch of real human users also getting bored and trying Bluesky or Threads), or they could use the information the bot farm is helpfully providing against it: snag the IP addresses from every nonsense username that tries to sign up and add them to a global report/appeal ban list. Extra bonus: if you run two instances, sync your global ban list so that a bot that tries at one of them doesn’t also get to waste time at the other.
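
      The sharing part doesn’t have to be fancy, either. Again purely hypothetical (the sibling endpoint and payload here are made up, not any real Mastodon API), but roughly this shape:

      ```python
      # Toy shared ban list: when one instance flags a nonsense signup, it records
      # the IP locally and pushes it to a sibling instance it trusts.
      import json
      import urllib.request

      BANNED_IPS = set()  # IPs flagged from nonsense-username signups
      SIBLING_BANLIST_URLS = ["https://other-instance.example/api/banlist"]  # hypothetical

      def ban_signup_ip(ip: str, reason: str = "nonsense username") -> None:
          if ip in BANNED_IPS:
              return
          BANNED_IPS.add(ip)
          payload = json.dumps({"ip": ip, "reason": reason}).encode()
          for url in SIBLING_BANLIST_URLS:
              req = urllib.request.Request(
                  url, data=payload, headers={"Content-Type": "application/json"}
              )
              try:
                  urllib.request.urlopen(req, timeout=5)  # fire-and-forget sync
              except OSError:
                  pass  # sibling unreachable; retries omitted for brevity

      def is_banned(ip: str) -> bool:
          return ip in BANNED_IPS
      ```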

      Unfortunately, I think OP just got false-positived.

      To be sure, all of this is fan fiction at best. I don’t know for sure if any of this is how Mastodon is running things. It’s all just educated conjecture at this point.

      • dQw4w9WgXcQ@lemm.ee (OP) · 1 year ago

        If I were a bot farm owner, I would likely just generate more “realistic”-looking usernames. Generating a unique username that doesn’t look like random letters is trivial, and I don’t really think that obstacle is a real hindrance to anyone.
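
        For example, something as simple as this sketch would sail past any “does it look like a language” check (the wordlist path is just the one most Linux systems ship; any wordlist would do):

        ```python
        # Toy "realistic" username generator: two dictionary words plus a number.
        import random

        with open("/usr/share/dict/words") as f:  # common Linux wordlist; path may vary
            WORDS = [w.strip().lower() for w in f
                     if w.strip().isalpha() and 3 <= len(w.strip()) <= 8]

        def realistic_username() -> str:
            return f"{random.choice(WORDS)}_{random.choice(WORDS)}{random.randint(1, 99)}"

        print(realistic_username())  # e.g. "maple_otter42"
        ```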

        • ilinamorato@lemmy.world · 1 year ago

          Yes, but when creating a new system you can’t just defend against new attacks. You have to defend against all the old attack vectors too.

          • dQw4w9WgXcQ@lemm.ee (OP) · 1 year ago

            I just don’t see how the username is an attack vector. The sign-up has email verification and CAPTCHA. Requiring the username to be something sensible seems excessive.

            But honestly, I don’t know. Maybe this stops a lot more bot farms than I’d expect.

              • dQw4w9WgXcQ@lemm.ee (OP) · 1 year ago

                Emails are easy to automate, sure. CAPTCHAs require a fair bit of elbow grease. Generating a random username which looks fine is nothing in the landscape of bot protection.

                • ilinamorato@lemmy.world · 1 year ago

                  Bot farmers could find an exploit in reCAPTCHA. Or they could train up a neural network to accurately defeat it (I saw someone demonstrating a GPT-4 prompt that could handle it quickly and flawlessly with just a little bit of prompt engineering). When (not if) they find a way to defeat CAPTCHAs, those lower-level protections become way more important and relevant.

                  It’s an ever-moving set of problems; fixing it today is no guarantee that it’ll still be fixed tomorrow, so everything has to stay in place until it’s proven to no longer be effective or to cause more problems than it fixes.

                  • dQw4w9WgXcQ@lemm.ee (OP) · 1 year ago

                    It just seems like the perspective is off. A bot farm already has to implement a script that captures the CAPTCHA image from the page and sends it to some AI solution that succeeds some percentage of the time, hook that into something that can interact with the website (I’m not sure whether you’d need to act indirectly through something like Selenium or whether you can make direct web calls), and make sure the CAPTCHA isn’t fed other suspicious data.

                    If you go to that trouble, I would be amazed if combining two or three dictionary words into a username were the kryptonite of your bot farm.

                    Again, I don’t know, and it might be a far more effective preventative measure than I realize, but it feels like strange security by obscurity.

                • dragnucs@lemmy.ml · 1 year ago

                  That doesn’t mean suspicious random usernames aren’t spam, though. They generally are spam accounts.

                  The most recent spam I received, just five days ago, was from @oyPhFrxPx0@mastodon.social.