I wonder what’s actually going on; I doubt it’s about “scraping” and “manipulation”
This is second-hand, so take it with a grain of salt, but I’ve seen mention of a bug that sometimes causes the same graphql query to be executed in an infinite loop (presumably they’re async requests, so the browser wouldn’t lock and the user wouldn’t even notice).
So they may essentially be getting DDoS'd by their own users due to a bug on their end.
Edit: better info: https://sfba.social/@sysop408/110639435788921057
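For what it's worth, a self-inflicted flood like that is usually a retry loop with no attempt cap and no backoff: as soon as the server starts returning errors, every client refires the same query forever. A minimal sketch of the guards such a loop needs (purely illustrative, not Twitter's actual code; `get` is a stand-in for whatever issues the GraphQL request):

```python
import time

def fetch_with_retry(get, max_attempts=5, base_delay=0.01):
    """Retry a request with an attempt cap and exponential backoff.

    The bug described above is what you get when these guards are missing:
    a bare `while True: get()` that hammers the same endpoint indefinitely
    once the server starts erroring (e.g. returning 429s).
    """
    for attempt in range(max_attempts):
        ok, data = get()  # `get` returns (success, payload)
        if ok:
            return data
        time.sleep(base_delay * (2 ** attempt))  # back off before retrying
    raise RuntimeError(f"giving up after {max_attempts} attempts")
```

With a server that recovers, this returns normally; with one that never does, each client stops after a handful of attempts instead of contributing to the pile-on.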
Ha, that’s hilarious. Absolutely not a surprise, though
If it is, he’s even dumber than I thought. You stop scraping by setting a rate limit to something comfortable for humans but painfully slow for scrapers. Something like 60 tweets per minute would all but ensure that humans aren’t affected and that scrapers won’t get anywhere.
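A per-user ceiling like that is typically a token bucket: tokens refill at the human-friendly rate, with a small burst allowance so a normal page load isn't throttled. A rough sketch, using the 60-per-minute figure above (the numbers and class are just for illustration, nothing official):

```python
import time

class TokenBucket:
    """Token-bucket limiter: refills `rate` tokens/sec, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Top up tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 60 tweets/minute = 1 token/sec; a burst of 10 covers an initial page load.
bucket = TokenBucket(rate=1.0, capacity=10)
```

A human reader never notices the 1/sec steady rate, while a scraper burning through the burst gets stonewalled almost immediately.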
For my money I would bet the issue stems from abandoning Google server hosting, either from arrogance or being unable to afford it.
That’s my bet too. They weren’t hosting the site itself on GCP, but they were using Google for trust and safety services, and I bet one of those services was anti-scraping protection with things like IP blocking and captchas, which would explain why scraping suddenly became a problem for them the day their contract ended. It can’t be a coincidence.
Oh yeah I completely forgot about that particular idiocy, Elmo gets up to so much stupid shit that it’s hard to keep track.
But I’d also be willing to bet money on this being somehow at least partially tied to ditching GCP, likely due to not being able to pay (that’s what’s implied by them refusing to pay the bill). I guess Elmo thought “how hard can running some servers be? I’m a rokit skientist” and decided to just skip paying the bill as a power move instead of trying to make a deal with Google, and now the remaining developers, ops people etc. – those poor bastards – are paying the price.