We have paused all crawling as of Feb 6th, 2025 until we implement robots.txt support. Stats will not update during this period.

  • WhoLooksHere@lemmy.world · 8 hours ago

    Robots.txt started in 1994.

    It’s been a consensus for decades.

    Why throw it out and replace it with implied consent to scrape?

    That’s why I said legally there’s nothing they can do. If people want to scrape it they can and will.

    This is strictly about consent. Just because you can doesn’t mean you should, yes?

    I guess I haven’t read a convincing argument yet why robots.txt should be ignored.

    • Rimu@piefed.social · 6 hours ago

      “It’s been a consensus for decades”

      Let’s see about that.

      Wikipedia lists http://www.robotstxt.org/ as the official homepage of robots.txt and the “Robots Exclusion Protocol”. In the FAQ at http://www.robotstxt.org/faq.html the first entry is “What is a WWW robot?” http://www.robotstxt.org/faq/what.html. It says:

      A robot is a program that automatically traverses the Web’s hypertext structure by retrieving a document, and recursively retrieving all documents that are referenced.

      That’s not FediDB. That’s not even nodeinfo.
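
      For context: a nodeinfo lookup is two fixed requests per instance, with no recursive retrieval of referenced documents. A rough sketch of the idea (hypothetical code, not FediDB’s actual implementation; only the well-known discovery URL is standardized):

      ```python
      import json
      import urllib.request

      def fetch_nodeinfo(host: str) -> dict:
          """Fetch one instance's nodeinfo stats: two fixed GETs, no link-following."""
          # Request 1: the well-known discovery document.
          with urllib.request.urlopen(f"https://{host}/.well-known/nodeinfo") as resp:
              discovery = json.load(resp)
          # Request 2: the single nodeinfo document it advertises. No links are
          # extracted from the result and no further pages are retrieved.
          with urllib.request.urlopen(discovery["links"][0]["href"]) as resp:
              return json.load(resp)

      print(fetch_nodeinfo("lemmy.world").get("usage"))
      ```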

      • WhoLooksHere@lemmy.world · 2 hours ago

        From your own wiki link:

        robots.txt is the filename used for implementing the Robots Exclusion Protocol, a standard used by websites to indicate to visiting web crawlers and other web robots which portions of the website they are allowed to visit.

        How is FediDB not an “other web robot”?
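
        And honoring it is hardly a burden. A minimal sketch with Python’s standard-library parser (the “FediDB” user-agent string is just a guess on my part, and the URLs are examples):

        ```python
        from urllib.robotparser import RobotFileParser

        rp = RobotFileParser("https://lemmy.world/robots.txt")
        rp.read()  # fetch and parse the file

        # "FediDB" is a hypothetical user-agent token; the real one may differ.
        allowed = rp.can_fetch("FediDB", "https://lemmy.world/.well-known/nodeinfo")
        print(allowed)
        ```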

    • hendrik@palaver.p3x.de · 7 hours ago

      I just think you’re making it way simpler than it is… Why not implement 20 other standards that have been around for 30 years? Why not make software perfect and without issues? Why not anticipate what other people will do with your public API endpoints in the future? Why not all have the same opinions?

      There could be many reasons. They forgot, they didn’t bother, they didn’t consider themselves to be the same as a commercial Google or Yandex crawler… That’s why I keep pushing for information and refuse to give a simple answer. Could be an honest mistake. Could be honest and correct to do it, and the other side is wrong, since it’s not a crawler like Google or the AI copyright thieves… Could be done maliciously. In my opinion, it’s likely that it just hadn’t been an issue before; the situation changed and now it is. And we’re getting a solution after some pushing. It seems at least FediDB took it offline and they’re working on robots.txt support. They did not refuse to do it. So it’s fine. And I can’t comment on why it hadn’t been in place. I’m not involved with that project or the history of its development.

      And keep in mind, Fediverse discoverability tools aren’t the same as a content-stealing bot. They’re there to aid the users, and they’re part of the platform in the broader picture. Mastodon, for example, isn’t very useful unless it provides a few additional tools so you can actually find people and connect with them. So it’d be wrong to apply the exact same standards to it as to some AI training crawler or Google. There is a lot of nuance to it. And did people in 1994 anticipate our current world and provide robots.txt with the nuanced distinctions so it’s just straightforward and easy to implement? I think we agree that it’s wrong to violate the other users’ demands/wishes now that they’re well known. Other than that, I just think it’s not very clear who’s at fault here, if any.

      Plus, I’d argue it isn’t even clear whether robots.txt applies to a statistics page, or to a part of a microblogging platform. Those certainly don’t crawl any content; or it’s part of what the platform is designed to do. The term “crawler” isn’t well defined in RFC 9309, so maybe it’s debatable whether it even applies.
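
      That said, an admin doesn’t need the definition settled to opt out. Something like this in robots.txt would state the wish explicitly (the “FediDB” token is a guess at its user-agent, and paths vary per platform):

      ```
      # Hypothetical: block the stats crawler entirely…
      User-agent: FediDB
      Disallow: /

      # …or keep just the nodeinfo endpoint off-limits to everyone.
      User-agent: *
      Disallow: /.well-known/nodeinfo
      ```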