A pseudonymous coder has created and released an open source “tar pit” to indefinitely trap AI training web crawlers in an infinite series of randomly generated pages, wasting their time and computing power. The program, called Nepenthes after the genus of carnivorous pitcher plants which trap and consume their prey, can be deployed by website owners to protect their own content from being scraped, or deployed “offensively” as a honeypot trap to waste AI companies’ resources.

Registration bypass: https://archive.is/3tEl0
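
For a sense of how simple the underlying trick is, here is a minimal sketch of the tarpit idea in Python (illustrative only, not Nepenthes’ actual code; the port and page shape are arbitrary): every request returns a slowly served page of random words and random links that point back into the same endless URL space.

```python
# Minimal tarpit sketch: every URL returns a slow page of random words
# and random links that lead back into the same infinite URL space.
import random
import string
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def random_word(length=8):
    return "".join(random.choices(string.ascii_lowercase, k=length))

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        # Drip the page out slowly to tie up the crawler's connection.
        for _ in range(20):
            words = " ".join(random_word() for _ in range(10))
            link = f'<a href="/{random_word()}/{random_word()}">{random_word()}</a>'
            self.wfile.write(f"<p>{words} {link}</p>\n".encode())
            self.wfile.flush()
            time.sleep(1)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TarpitHandler).serve_forever()
```

The pages cost the server almost nothing to invent, while each one costs a crawler a connection, bandwidth, and storage.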

  • ExcessShiv@lemmy.dbzer0.com · +44 · 2 days ago

    From the description, it sounds like this would have only a limited, short-lived effect until guardrails are implemented in crawlers.

      • SkaveRat · +11 · 2 days ago

        I mean… not really. This isn’t even a defence. Any web crawler worth its salt will just stop after a while, and they’ve been doing so for literally decades already.

        • FaceDeer@fedia.io · +3 · 1 day ago

          Indeed. And any modern AI training system is going to be extensively curating any training data that ends up being fed into the AI, probably processing it through other AIs to generate synthetic data from it. The days of early ChatGPT, when LLMs were trained by just dumping giant piles of random text on them and hoping they’d figure it out somehow, are long past.

          This reminds me of Nightshade, the supposed anti-art-AI technique that could be defeated by resizing the image (which all art AI training systems do as a matter of course). It may make people “feel better” but it’s not going to have any real impact on anything.

          • ToxicWaste@lemm.ee · +1 · 18 hours ago

            Sure, it is easy to detect, and they will eventually. However, at the moment they don’t seem to be doing it. The author said this after deploying a proof of concept:

            Aaron B told 404 Media, “If that’s true, I’ve several million lines of access log that says even Google Almighty didn’t graduate” to avoiding the trap.

            So no, it is not a silver bullet, but it is a defense strategy that seems to work at the moment.

            • FaceDeer@fedia.io · +1 · 15 hours ago

              No, a few million hits from bots is routine for anything that’s facing the public at all. Others have posted on this thread (or others like it, this article’s been making the rounds a lot in the past few days) that even the most basic of sites can get that sort of bot traffic, and that it’s just a simple recursion depth limit setting to avoid the “infinite maze” aspect.

              As for AI training, the access log says nothing about that. As I said, AI training sets are not made by just dumping giant piles of randomly scraped text on AIs any more. If a trainer scraped one of those “infinite maze” sites the quality of the resulting data would be checked, and if it was generated by anything remotely economical for the site to be running it’d almost certainly be discarded as junk.
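
              The recursion depth limit mentioned above is essentially a one-line check in an ordinary crawl loop; a hedged sketch (hypothetical crawler code, with fetch and extract_links standing in for whatever a real crawler uses):

              ```python
              # Hypothetical crawl loop: tracking how many links deep each URL was
              # found and capping that depth is enough to escape an "infinite maze".
              MAX_DEPTH = 10

              def crawl(seed_url, fetch, extract_links):
                  frontier = [(seed_url, 0)]
                  seen = set()
                  while frontier:
                      url, depth = frontier.pop()
                      if url in seen or depth > MAX_DEPTH:
                          continue  # stop once a site gets suspiciously deep
                      seen.add(url)
                      page = fetch(url)
                      for link in extract_links(page, base=url):
                          frontier.append((link, depth + 1))
              ```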

              • ToxicWaste@lemm.ee · +1 · 14 hours ago

                The main angle is not to ‘poison’ the training set; it is to waste time, energy, and resources. The site loads deliberately slowly and produces garbage, which has to be filtered out.

                As I said: not a silver bullet, but at least some threads were tied up collecting garbage painfully slowly. Since the data is useless, whatever their cleanup process is has more work to do, or it might even be tricked into discarding the whole website because the signal-to-noise ratio is so bad.

                So I would still say the author achieved his goal.

                • FaceDeer@fedia.io · +1 · 13 hours ago

                  The site producing the nonsense has to produce lots of it every time a bot comes along; the trainers only have to filter it once. As others have pointed out, it’s likely easy for an automated filter to spot. I don’t see it as being a clear win.

  • AbouBenAdhem@lemmy.world · +16 / -3 · 2 days ago

    The typical web crawler doesn’t appear to have a lot of logic. It downloads a URL, and if it sees links to other URLs, it downloads those too.

    So it has nothing to do with “AI training” in the usual sense.
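
    As a rough illustration of that kind of crawler (a sketch, not any particular crawler): fetch a page, pull out every href, queue them all, and repeat, with no notion of what the pages are for.

    ```python
    # Naive "download it, then download everything it links to" crawler.
    # With no depth or page budget, an endlessly self-linking site keeps it busy.
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def naive_crawl(start_url):
        queue = [start_url]
        seen = set()
        while queue:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            html = urlopen(url).read().decode("utf-8", errors="replace")
            parser = LinkParser()
            parser.feed(html)
            queue.extend(urljoin(url, link) for link in parser.links)
    ```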

      • HexadecimalSky@lemmy.world · +6 / -1 · 1 day ago

        I think the point is that it doesn’t specifically target “AI trainers” but web crawlers in general, which are used by more than just AI trainers, for example search engines.

        • xigoi@lemmy.sdf.org · +3 · edited · 13 hours ago

          Search engine crawlers generally respect robots.txt, so if you add a robots.txt entry to disallow all crawlers from getting into the maze, effectively only AI crawlers will go there.
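
          For example, a robots.txt along these lines (with /maze/ as a hypothetical path for the tarpit) keeps compliant crawlers out, so only crawlers that ignore robots.txt end up wandering in:

          ```
          # robots.txt at the site root: well-behaved crawlers will not follow
          # anything under the tarpit path.
          User-agent: *
          Disallow: /maze/
          ```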

        • finley@lemm.ee · +1 / -4 · edited · 1 day ago

          Actually, it does specifically target AI trainers, as it poisons their training data. The web crawlers are just a means to an end.

          • HexadecimalSky@lemmy.world · +6 / -2 · 1 day ago

            It affects them, yes, but it doesn’t only affect them. It’s just a poison-the-well tactic that happens to affect them, and because it isn’t specific, even more companies will work to “fix” it. Also, while it can waste resources, it doesn’t stop AI training in most cases or render the models incompetent.

            For example, if I added rat poison to all the local waterways, it would also get rid of the pigeon problem; does that mean it targets pigeons?

            • finley@lemm.ee · +2 / -3 · edited · 1 day ago

              The first part of what you said contradicts itself, and the second part is a terrible metaphor, especially considering that the web crawlers that crawl for AI training data only target that. And this specifically targets AI training web crawlers.

              So, it’s more like putting a very specific rat poison in the waterways that is only poisonous to rats.

              It seems like you don’t understand how this works.

              • nyan@lemmy.cafe · +6 / -1 · 1 day ago

                And this specifically targets AI training web crawlers.

                There’s no way to distinguish between an AI training crawler and any other crawler. Per https://zadzmo.org/code/nepenthes/ :

                “This is a tarpit intended to catch web crawlers. Specifically, it’s targetting crawlers that scrape data for LLM’s - but really, like the plants it is named after, *it’ll eat just about anything that finds it’s way inside*.”

                Emphasis mine. Even the person who coded this thing knows that it can’t tell what a given crawler’s purpose is. They’re just willing to throw the baby out with the bathwater in this case, and mess with legitimate crawlers in order to bog down the ones gathering data for LLM training.

                (In general, there is no way to tell for certain what is requesting a webpage. The User-Agent header that (usually) arrives with an HTTP(S) request isn’t regulated and can contain any arbitrary string. Crawlers habitually claim to be old versions of Firefox, and there isn’t much the server can do to identify what they actually are.)
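
                For instance, any client can present whatever User-Agent it likes; a trivial sketch using Python’s standard library (the URL and the Firefox-style string are placeholders):

                ```python
                # A scraper announcing itself as an ordinary desktop Firefox.
                from urllib.request import Request, urlopen

                req = Request(
                    "https://example.com/",
                    headers={
                        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) "
                                      "Gecko/20100101 Firefox/60.0"
                    },
                )
                html = urlopen(req).read()
                ```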

                • nef@slrpnk.net · +1 · 14 hours ago

                  You can specifically target crawlers that ignore robots.txt, which will catch practically every LLM scraper.
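
                  One hedged sketch of that idea: disallow a trap path in robots.txt, then list the clients that requested it anyway (the log path and the combined log format here are assumptions):

                  ```python
                  # List client IPs that fetched a path robots.txt disallowed.
                  # Assumes a combined/common log format: the client IP is the first
                  # field and the request line is the first quoted section,
                  # e.g. "GET /maze/abc HTTP/1.1".
                  TRAP_PREFIX = "/maze/"

                  offenders = set()
                  with open("/var/log/nginx/access.log") as log:
                      for line in log:
                          parts = line.split('"')
                          if len(parts) < 2:
                              continue
                          request = parts[1].split()
                          if len(request) >= 2 and request[1].startswith(TRAP_PREFIX):
                              offenders.add(line.split()[0])

                  for ip in sorted(offenders):
                      print(ip)
                  ```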

              • HexadecimalSky@lemmy.world · +5 / -1 · 1 day ago

                From reading the article, it just seems to target web crawlers in general by having an infinitely looping set of URLs. How does it target AI training web crawlers specifically?

    • jungle@lemmy.world · +5 / -1 · 1 day ago

      It also has nothing to do with real web crawlers. Maybe the first crawlers when the web was a couple million pages were that dumb, but that’s ancient history.