

New case popped up in medical literature: A Case of Bromism Influenced by Use of Artificial Intelligence, about a near-fatal case of bromine poisoning caused by someone using AI for medical advice.
Discovered new manmade horrors beyond my comprehension today (recommend reading the whole thread, it goes into a lot of depth on this shit):
They're allegedly useful for visually impaired users, but I strongly doubt Zuckerberg's thought of that particular untapped market, or is interested in taking advantage of it.
I've already predicted Meta Rayban wearers would be assaulted in the street, but I wouldn't be shocked if visually impaired peeps kept away from them as well - whatever accessibility boons they may grant, they aren't worth getting called a creepshotter and/or getting your ass beaten.
(Sidenote: This is the second time I've seen a tech accessibility related shitshow so far - the first was some alt-text drama I ran into three days ago.)
Found an AI bro making an incoherent defense of AI slop today (fitting that he previously shilled NFTs):
Needless to say, he's getting dunked on in the replies and QRTs, because people like him are fundamentally incapable of being punk.
Wikipedia also just upped their standards in another area - they've updated their speedy deletion policy, enabling the admins to bypass standard Wikipedia bureaucracy and swiftly nuke AI slop articles which meet one of two conditions:
"Communication intended for the userā, referring to sentences directly aimed at the promptfondler using the LLM (e.g. "Here is your Wikipedia article onā¦,ā āUp to my last training update ā¦,ā and "as a large language model.ā)
Blatantly incorrect citations (examples given are external links to papers/books which don't exist, and links which lead to something completely unrelated)
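For the curious, here's roughly what catching that first condition could look like in practice - a hypothetical Python snippet, not Wikipedia's actual tooling, and the phrase list is just the examples quoted above:

```python
# Hypothetical sketch, NOT Wikipedia's actual tooling: flag the kind of
# "communication intended for the user" boilerplate the new criterion targets.
import re

# Illustrative phrase list, taken from the examples quoted above.
LLM_TELLS = [
    r"here is your wikipedia article on",
    r"up to my last training update",
    r"as a large language model",
]

def flag_llm_boilerplate(article_text: str) -> list[str]:
    """Return every tell-tale phrase found in the article text."""
    lowered = article_text.lower()
    return [tell for tell in LLM_TELLS if re.search(tell, lowered)]

if __name__ == "__main__":
    sample = "As a large language model, I cannot verify these citations."
    print(flag_llm_boilerplate(sample))  # -> ['as a large language model']
```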
Ilyas Lebleu, who contributed to the policy update, has described this as a "band-aid" that leaves Wikipedia in a better position than before, but not a perfect one. Personally, I expect this solution will be sufficient to permanently stop the influx of AI slop articles. Between promptfondlers' utter inability to recognise low-quality/incorrect citations, and their severe laziness and lack of care for their """work""", the risk of an AI slop article being sufficiently subtle to avoid speedy deletion is virtually zero.
Cloudflare has publicly announced the obvious about Perplexity stealing people's data to run their plagiarism, and responded by de-listing them as a verified bot and adding heuristics specifically to block their crawling attempts.
Personally, I'm expecting this will significantly hamper Perplexity going forward, considering Cloudflare's just cut them off from roughly a fifth of the Internet.
Found someone trying to fire back at the widespread sneering against promptfondlers:
Emphasis on "trying" here - they're getting cooked in the replies and QRTs. Here's a couple of highlights - one from someone running an escape room, and one which allegedly ended in someone meeting a baseballer:
I'm sure you can think of hypothetical use cases for Google Glass and Meta AI RayBans. But these alleged non-creepshot use cases already failed to keep Google Glass alive. I predict they won't be enough to keep Meta AI RayBans alive.
It's not an intentional use case, but it's an easy way to identify people I should keep far, far away from.
It turns out normal people really do not like this stuff, and I doubt the public image of tech bros has improved between 2014 and 2025. So have fun out there with your public pariah glasses. If you just get strong words, count yourself lucky.
On a wider note, I wouldn't be shocked if we heard of Rapist RayBan wearers getting beaten up or shot in the street - if the torching of Waymos in anti-ICE protests, the widespread vandalism of Cybertrucks, and Luigi Mangione's status as a folk hero are anything to go by, I'd say the conditions are right for cases of outright violence against anyone viewed as supporting the techbros.
EDIT: Un-fucked the finishing sentence, and added a nod to Cybertrucks getting fucked up.
I haven't seen STRANGE ÆONS' video about HPMOR here, so I'd strongly recommend checking that out.
Took me a second to realise this was LW.
Ran across a pretty solid sneer: Every Reason Why I Hate AI and You Should Too.
Found a particularly notable paragraph near the end, focusing on the people fixated on "prompt engineering":
In fear of being replaced by the hypothetical "AI-accelerated employee", people are forgoing acquiring essential skills and deep knowledge, instead choosing to focus on "prompt engineering". It's somewhat ironic, because if AGI happens there will be no need for "prompt-engineers". And if it doesn't, the people with only surface level knowledge who cannot perform tasks without the help of AI will be extremely abundant, and thus extremely replaceable.
If you want my take, I'd personally go further and say the people who can't perform tasks without AI will wind up borderline-unemployable once this bubble bursts - they're gonna need a highly expensive chatbot to do anything at all, they're gonna be less productive than AI-abstaining workers whilst falsely believing they're more productive, they're gonna be hated by their coworkers for using AI, and they're gonna flounder if forced to come up with a novel/creative idea.
All in all, any promptfondlers still existing after the bubble will likely be fired swiftly and struggle to find new work, as they end up becoming significant drags on any company's bottom line.
Starting this off with a new Blood in the Machine: The AI bubble is so big it's propping up the US economy (for now)
I don't need that, in fact it would be vastly superior to just "steal" from one particularly good implementation that has a compatible license you can just comply with. (And better yet to try to avoid copying the code and to find a library if at all possible). Why in the fuck even do the copyright laundering on code that is under MIT or similar license? The authors literally tell you that you can just use it.
I'd say it's a combo of them feeling entitled to plagiarise people's work and fundamentally not respecting the work of others (a point OpenAI's Studio Ghibli abomination machine demonstrated at humanity's expense).
On a wider front, I expect this AI bubble's gonna cripple the popularity of FOSS licenses - the expectation of properly credited work was a major pillar of the FOSS ecosystem, and now that the automated plagiarism machines have kneecapped it, programmers are likely gonna be much stingier with sharing their work.
Coming back to this, because someone else made a better dunk on Saatchi's Shit Generator than I could:
I urge you to watch the proof of concept video this links to, it's astonishing. finally ai has reached the level of a 12 year old on newgrounds who gets every submission blammed.
Ran across a notable post on Bluesky recently - seems there's some alt-text drama that's managed to slip me by:
On a wider note, I wouldn't be shocked if the AI bubble dealt some setbacks to accessibility in tech - given the post I've mentioned earlier, there's signs it's stigmatised alt-text as being an AI Bro Thing™.
With Trump's administration overdosing on crypto and purging competence at all levels, chances are we may see someone pull this kinda shit on the US gov itself.
Okay, complete shot in the dark here - the "humanoid robot" part is an attempt to convince investors they're making AI more humanlike or some shit like that.
Saatchi says you can type in a few words and the AI will generate scenes - or even a whole show. There are two test shows. One is Exit Valley, which is a copy of South Park set in Silicon Valley. Here's an excerpt. [Vimeo]
For anyone who decides not to click, you're not missing out - the "episode" was equivalent to one of those millions of shitty GoAnimate "grounded" animations that you can find on YouTube. (In retrospect, GoAnimate/Vyond was basically AI slop before AI slop was a thing.)
The closest that has to a use case is the guys who will do obnoxious parodies because the rights holders won't like them. Let's get Mickey Mouse doing something edgy!
Considering Tay AI was deliberately derailed into becoming a Hitler-loving sex robot, and the first wave of AI slop featured deliberately offensive Pixar-styled "posters", I can absolutely see this happening. (At least until The Mouse starts threatening Showrunner with getting sued into the ground.)
New article from Matthew Hughes, about the sheer stupidity of everyone propping up the AI bubble.
Orange site is whining about it, to Matthew's delight: