Why would people be doing this? Is it sabotage or just misguided?
Hanlon’s razor seems to work well here. I wouldn’t be surprised if it were a mix of people who want some real or imagined benefit from bug reports without doing or understanding the work, and people who just think LLM output is gospel—a gospel that must be spread.
Agreed, but also: if it works and is merged, you get credited, and your GitHub account gains reputation. That makes it easier to mount attacks like the xz one, since you have a track record of merges.
Also, plain vandalism, because people are like that. Edit: probably also bug bounty attempts. If you’ve ever been on the receiving side of a Responsible Disclosure program, you’ll know what I mean.
Edit edit: it’s all in the article, darnit. Sorry.
It is? I must have missed it but I can’t find any discussion of motivation even on a second read-through.
I meant that it’s all about security vulnerability submissions, and although the article isn’t explicit about it, those submissions are therefore very likely:
- meant to up the reputation for xz-like attacks
- meant to annoy/bully the devs
- denial of service by delaying triage and therefore delaying creating patches
- submitted by boatloads in the hope of cashing in on bug bounties
Yeah, I’d count that credibility as a real benefit from helping with bugs.
As far as xz scenarios go though, the AI slop seems to be a really bad strategy.
I agree, it isn’t a great tactic, but with enough attempts you’ll probably hit a few times.
Yeah, I don’t disagree. And if you hit something small or relatively insignificant but common, that’s all you need.
I ran an RD program years ago. Lots of bored and/or poor, greedy devs submitted metric shit tons of pseudo-vulnerabilities (“if I do ctrl-u I can see source code on your web site!” No shit, Sherlock.). I can only imagine how much easier this has become with the help of generative AI…