Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. Also, hope you had a wonderful Valentine's Day!)
The phrase "ambient AI listening in our hospital" makes me hear the "Dies Irae" in my head.
A longread on AI greenwashing begins thusly:
The expansion of data centres - which is driven in large part by AI growth - is creating a shocking new demand for fossil fuels. The tech companies driving AI expansion try to downplay AI's proven climate impacts by claiming that AI will eventually help solve climate change. Our analysis of these claims suggests that rather than relying on credible and substantiated data, these companies are writing themselves a blank cheque to pollute on the empty promise of future salvation. While the current negative effects of AI on the climate are clear, proven and growing, the promise of large-scale solutions is often based on wishful thinking, and almost always presented with scant evidence.
(Via.)
i've collided with an article* https://harshanu.space/en/tech/ccc-vs-gcc/
you might be wondering why it doesn't highlight that it fails to compile the linux kernel, or why it states that using pieces of gcc where vibecc fails is "fair", or why it neglects to say that a failing linker means it's not useful in any way, or why just relying on "no errors" isn't enough when it's already known that vibecc will happily eat invalid c. it's explained by:
Disclaimer
Part of this work was assisted by AI. The Python scripts used to generate benchmark results and graphs were written with AI assistance. The benchmark design, test execution, analysis and writing were done by a human with AI helping where needed.
even with all this slant, by their own vibecoded benchmark, vibecc is still complete dogshit, with sqlite compiled with it being up to 150,000x slower in some cases
This is why CCC being able to compile real C code at all is noteworthy. But it also explains why the output quality is far from what GCC produces. Building a compiler that parses C correctly is one thing. Building one that produces fast and efficient machine code is a completely different challenge.
Every single one of these failures is waved away because supposedly it's impressive that the AI can do this at all. Do they not realize the obvious problem with this argument? The AI has been trained on all the source code that Anthropic could get their grubby hands on! This includes GCC and clang and everything remotely resembling a C compiler! If I took every C compiler in existence, shoved them in a blender, and spent $20k on electricity blending them until the resulting slurry passed my test cases, should I be surprised or impressed that I got a shitty C compiler? If an actual person wrote this code, they would be justifiably mocked (or they're a student trying to learn by doing, and LLMs do not learn by doing). But AI gets a free pass because it's impressive that the slop can come in larger quantities now, I guess. These Models Will Improve. These Issues Will Get Fixed.
Building a compiler that parses C correctly is one thing. Building one that produces fast and efficient machine code is a completely different challenge.
Ye, the former can be done in a month of non-full-time work by an undergrad who took Compilers 101 this semester or in literally a single day by a professional, and the latter is an actual useful product.
So of course AI will excel at doing the first one worse (vibecc doesn't even reject invalid C) and at an insane resource cost.
spent $20k on electricity blending them
They would probably be even more impressed that you only spent $20k
Scott Shambaugh mulls over an AI alignment issue following his run-in with a bot last week
See also https://awful.systems/post/7311930
AI bros do new experiments in making themselves even stupider. Going from "explain what you did but dumb it down for me and my degraded attention span" to "just make a simplified cartoon out of it".
Proud of not understanding what is going on. None of these people could hack the Gibson.
E: If they all hate programming so much, perhaps a change of job is in order; sure, it might not pay as much, but it might make them happier.
E: If they all hate programming so much, perhaps a change of job is in order; sure, it might not pay as much, but it might make them happier.
Surely at least a few of them have worked up enough seed capital to try their hand at used-car dealerships. I can attest that the juicier markets just outside the Bay Area are fairly saturated, but maybe they could push into lesser-served locales like Lost Hills or Weaverville.
my current favorite trick for reducing "cognitive debt" (h/t @simonw ) is to ask the LLM to write two versions of the plan:
- The version for it (highly technical and detailed)
- The version for me (an entertaining essay designed to build my intuition)
I don't know about them, but I would be offended if I was planning something with a collaborator and they decided to give me a dumbed-down, entertaining, children's-storybook version of their plan while keeping all the technical details to themselves.
Also, this is absolutely not what "cognitive debt" means. I've heard "technical debt" refer to bad design decisions in software, where one does something cheap and easy now but has to constantly deal with the maintenance headaches afterwards. But the very act of working through technical details? That's what we call "thinking". These people want to avoid the burden of thinking.
Eh, one might say that going by the broad strokes version while letting the expert do their thing is basically what management is all about, especially if they ignore the part where he wants his version to be light and entertaining.
This isn't about managing subordinates though, this is about devising ways to be complacent about not double-checking what the LLM generates in your name.
@Soyweiser @BlueMonday1984 I like* how the structure of the boat changes from moment to moment. I like* how the radio dishes just beam from some random place between the transmitter and the dish. I like* that the original person who was waiting for a live stream doesn't get it (because it goes to a different group of people) and is just eating popcorn watching the mess unfold. I like* how the "audience" have their backs to the "live stream" screen and are excited to be looking away from it.
I think I understand it. Think of an alcoholic that's trying every sort of miracle hangover "cure" instead of drinking less.
@Soyweiser @BlueMonday1984 tag yourself I'm emailing HTML
I'm teleporting in real time!
in today's news about magical prompts that super totes give you superpowers:
We introduced SKILLSBENCH, the first benchmark to systematically evaluate Agent Skills as first-class artifacts. Across 84 tasks, 7 agent-model configurations, and 7,308 trajectories under three conditions (no Skills, curated Skills, self-generated Skills), our evaluation yields four key findings: (1) curated Skills provide substantial but variable benefit (+16.2 percentage points average, with high variance across domains and configurations); (2) self-generated Skills provide negligible or negative benefit (−1.3pp average), demonstrating that effective Skills require human-curated domain expertise
I am jack's surprised face
…and given I have other yaks, I shall not step on my "software and tools don't have to suck" soapbox right now
This reminds me of when Steve Jobs would introduce every new Mac release by talking about how fast it could render in Photoshop. I wonder how he would do in our brave new era of completely ass-pulling your own bespoke benchmark frameworks.
Somebody on bsky talking about various Ben Goertzel Epstein file emails.
Goertzel discusses with Epstein a future scenario for "an AGI economy":
"for the AGI parasite to overcome to regular-human-economy host (so it can grow to be more than a parasite and eventually gain its own superhuman autonomy) it first needs to suck off the resources of the host"
The Rationalists!

who up continvoucly morging they branches
That slopped-out "diagram" plagiarised Vincent Driessen's "A successful Git branching model", BTW.
It's funny that such a thing is rare enough in the corpus to come out so recognizably in the output.
Timn
yes, I certainly do know how to handle software development over Timn
it is actually kinda incredible that this shit has invented a way to be terrible that we can't easily riff on with what's expressible in unicode. an unholy clusterfuck of what would otherwise be joked about as keming (but isn't, because it's straight-up an artefact of the process used to encode visual data from source data, badly), a mindless automaton outputting garbage, and then also the shitty model
and people keep telling me this shit is good
and people keep telling me this shit is good
I mean, this one is really good, I got like half an hour of jokes with my friend off it
okay I can't argue with that outcome
wait, was this brain-rotting cognitive hazard posted at the linked page on microsoft dot com documentation? if so they have already removed it
edit: archive caught it
I checked yesterday and it was there, can confirm

what I'm thinking about is how many years now they have been promising that just one more datacenter will fix the "hallucinations", yet this mess is indistinguishable from nonsense output from three years ago. I see "AI" is going well
you can count on microslop to always be behind the curve
It's morgin' time
*morgin' timn
morgin' all muh featues
Two thoughts:
That this is not just some random AI-generated graphic, but from an official Microsoft tutorial, is unpleasantly unsurprising.
I think the tinm (timn?) axis goes the wrong way.
Microsoft is really putting the "git" in GitHub thanks to copilot.
Mighty Morginā Power-Sloppers
You have to wonder about that Tim traveler; Merlin?
Tim traveller? The YouTube channel?
A little exchange on the EA forums I thought was notable: https://forum.effectivealtruism.org/posts/EDBQPT65XJsgszwmL/long-term-risks-from-ideological-fanaticism?commentId=b5pZi5JjoMixQtRgh
tl;dr: a super long essay lumping together Nazism, Communism and religious fundamentalism (I didn't read it, just the comments). The comment I linked notes how liberal democracies have also killed a huge number of people (in the commenter's home country, in the name of purging communism):
The United States presented liberal democracy as a universal emancipatory framework while materially supporting anti-communist purges in my country during what is often called the "Jakarta Method". Between 500,000 and 1 million people were killed in 1965–66, with encouragement and intelligence support from Western powers. Variations of this model were later replicated in parts of Latin America.
The OP's response is to try to explain how that wasn't real "liberal democracy" and to try to reframe the discussion. Another commenter is even more direct: they complain half the sources listed are Marxist.
A bit bold to unqualifiedly recommend a list of thinkers of which ~half were Marxists, on the topic of ideological fanaticism causing great harms.
I think it's a bit bold of this commenter to ignore the cited empirical facts about how many people "liberal democracies" have killed, and to exclude sources simply for challenging their ideology.
Just another reminder of how the EA movement is full of right wing thinking and how most of it hasn't considered even the most basic of leftist thought.
Just another reminder of how the EA movement is full of right wing thinking and how most of it hasn't considered even the most basic of leftist thought.
I continue to maintain that EA boils down to high-dollar consumerism focused on intangible goods. I'm sure that statement won't fly on LW or any other EA forum, but my thoughts on psychiatry don't fly at a Scientologist convention either.
@scruiser @BlueMonday1984 funny how they mock left wingers for «that wasn't real communism» and then come up with the same excuses for liberal democracies and capitalism whenever one points out all the shit that came out of that. It's really ALWAYS projection with them, isn't it?
Claudio Nastruzzi of The Reg chimes in on the inherent shittiness of AI writing, coining the term "semantic ablation" to describe its capacity to destroy whatever unique voice a text has.
https://softcurrency.substack.com/p/the-dangerous-economics-of-walk-away
- Anthropic (Medium Risk) Until mid-February of 2026, Anthropic appeared to be happy, talent-retaining. When an AI Safety Leader publicly resigns with a dramatic letter stating "the world is in peril," the facade of stability cracks. Anthropic is a delayed fuse, just earlier on the vesting curve than OpenAI. The equity is massive ($300B+ valuation) but largely illiquid. As soon as a liquidity event occurs, the safety researchers will have the capital to fund their own, even safer labs.
WTF is "even safer"??? how bout we like just don't create the torment nexus.
Wonder if the 50% attrition prediction comes to pass though…
the capital to fund their own, even safer labs.
I wonder, is this a theory of "safety" analogous to what's driven the increased gigantism of vehicles in the US? Sure seems like it.
"even safer" in this case means some combination of two things:
- The new organization is more ideologically aligned with the transhumanist doom cult that apparently managed to eat the brains of the people with money to burn.
- The new organization, largely as a result of this, is capable of sinking an unending amount of capital into buying compute time and Nvidia chips but, due to its commitments to safety, is even less inclined to actually deliver anything.
new interview with Dario Amodei dropped https://youtu.be/n1E9IZfvGMA basically exponential curve real soon, nice skepticism from both the interviewer and the comment section
On a related note, I really gotta stop browsing r/singularity man, some of the AI hype in there is just painful. though it is funny to see people with "AGI 2024/2025/2026" flairs
EDIT: this is also the same podcast where Dario said we could have AGI in 2-3 years back in 2023. So lol
Last megathread I posted about some LWer writing about "demographic collapse" in Japan (indistinguishable from anyone with any ambition leaving for greener pastures). I've read the comments to see if there's any pushback, and found this absolute doozy
yawn, i diagnose that LWer with weeb. this is something happening across the entire industrialized world, causes being high-performance mechanization of agriculture, old people being stubborn in regards to moving, lack of specialized work in the countryside and a couple of other factors. germany has patched their hospice staff shortage (not sure how effectively) with migrants, but the japanese are way too racist for that. same thing happens in moldova, but you never hear sob stories about retired moldovans because they're broke and nobody cares, while the moldovan govt can't really do much about it (because broke) to the degree that it has not just economic and demographic, but even strategic effects. whole lotta drs strangelove in there
What in tarnation is happening at adafruit?
https://blog.adafruit.com/2026/02/14/heres-our-first-gemini-deep-think-llm-assisted-hardware-design/
@o7___o7 @techtakes It was Arduino who emitted the llm-generated circuit, but I was tired and conflated two companies with similar names (whose products I don't use) before going to bed. Now they're trying to start a flame war. Pay no attention.
You handled that whole unhinged situation like a champ.
Seeing once respectable folks get trapped in this bullshit feels like watching my college pal smoke his first clove cigarette.
It seems like, for a certain kind of geek, code LLMs sit somewhere between "The One Ring but rubbish" and "mechanical meth." The users certainly feel powerful and productive, and god help anyone who hints otherwise.
You can always blame an LLM for confusing the 2 companies ;)
Limor Fried and I had a class together at MIT in 2001. This has no bearing on the present circumstances and offers me no real insight (anything I could say about our extremely limited interactions would amount to confirmation bias). It's just the odd little factoid that comes to mind whenever adafruit Does Something Online.
they've jumped the shark a while back. ptorrone is relatively widely documented to go around harassing people (often using his partner as a shield/excuse for doing so, just like this time), and a couple months back they sold to qualcomm
This person "Charlie Stross"
ah yes, that absolutely must be a pseudonym, and absolutely couldn't be a real person
such a fucking weird bit
anyway yeah fuck adafruit. don't buy their shit, there are other options (and often cheaper)
e: I derped on the who-sold, conflated the two for dipshittery. see subthread
Men on Mastodon who never snark on male founders, or CEOs, or engineers, or guys sharing their work, have decided to target Limor since she shared it.
hey mr. adafruit watch this, I can weaponise oppression olympics too, I'm a trans woman from the third world: fuck off with profiteering from your selling out to the planet-destroying plagiarism machine that's proudly empowering ICE and the IDF. it's not your country that will pay the price for your meaningless carbon output to generate nonsense until it looks right. and you brag about it. take your fascism-assisted hardware design and shove it up your ass. happy never to be an adafruit customer again. assholes.