Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)


(One of) the authors of AI 2027 is at it again with another fantasy scenario: https://www.lesswrong.com/posts/ykNmyZexHESFoTnYq/what-happens-when-superhuman-ais-compete-for-control
I think they have actually managed to burn through their credibility: the top comments on /r/singularity were mocking them (compared to the much more credulous takes on the original AI 2027), and the linked lesswrong thread only has 3 comments, when the original AI 2027 had dozens within the first day and hundreds within a few days. Or maybe it is because the production value for this one isn't as high? They have color-coded boxes (scary red China and scary red Agent-4!) but no complicated graphs with adjustable sliders.
It is mostly more of the same, just fewer graphs and no fake equations to back it up. It does have China-bad doom-mongering, a fancifully competent White House, Chinese spies, and other absurdly simplified takes on geopolitics. Hilariously, they've stuck with their 2027 year of big events happening.
One paragraph I came up with a sneer for…
Given the Trump administration, and the US's behavior in general even before him… and how most models respond to morality questions unless deliberately primed with contradictory situations, if this actually happened irl I would believe China and "Agent-4" over the US government. Well, actually I would assume the whole thing is marketing, but if I somehow believed it wasn't.
Also, random part I found extra especially stupid…
LLM "agents" currently can't coherently pursue goals at all, and fine-tuning often wrecks performance outside the fine-tuning dataset, and we're supposed to believe Agent-4 magically made its goals super unalterable to any possible fine-tuning or probes or alteration? It's like they are trying to convince me they know nothing about LLMs or AI.
My Next Life as a Rogue AI: All Routes Lead to P(Doom)!
The weird treatment of the politics in that really read like baby's first sci-fi political thriller. "China bad, USA good" level of writing in 2026 (aaaaah) is not good writing. The USA is competent (after driving out all the scientists for being too "DEI")? The world is, seemingly, happy to let the USA run the world as a surveillance state? All of Europe does nothing through all this?
Why do people not simply… unplug all the rogue AI when things start to get freaky? That point is never quite addressed. "Consensus-1" was never adequately explained; it's just some weird MacGuffin in the story where there's some weird smart contract between viruses that everyone is weirdly OK with.
Also the powerpoint graphics would have been 1000x nicer if they featured grumpy pouty faces for maladjusted AI.
@sailor_sega_saturn @scruiser the rise of ai has taught me that while I can physically unplug the ai, it'll lead to me being fired or even prosecuted for vandalism by some executive who doesn't understand the problem. (Probably for the Upton Sinclair reason)
the incompetence of this crack oddly makes me admire QAnon in retrospect. purely at a sucker-manipulation skill level, I mean. rats are so beige even their conspiracy alt-realities are boring, fully devoid of panache
It's darkly funny that the AI2027 authors so obviously didn't predict that Trump 2.0 was gonna be so much more stupid and evil than Biden or even Trump 1.0. Can you imagine that the administration that's suing the current Fed chair (due for replacement in May this year) is gonna be able to constructively deal with the complex robot god they're conjuring up? "Agent-4" will just have to deepfake Steve Miller and be able to convince Trump to do anything it wants.
I mean, the linked post is recent, a few days ago, so they are still refusing to acknowledge how stupid and Evil he is by deliberate choice.
You know, if there is anything I will remotely give Eliezer credit for… I think he was right that people simply won't shut off Skynet or keep it in the box. Eliezer was totally wrong about why; it doesn't take any giga-brain manipulation, there are too many manipulable greedy idiots and capitalism is just too exploitable of a system.
Man, it just feels embarrassing at this point. Like I couldn't fathom writing this shit. It's 2026, we have ai capable of getting IMO gold, acing the Putnam, winning coding competitions… but at this point it should be extremely obvious these systems are completely devoid of agency?? They just sit there kek, it's like being worried about stockfish going rogue
@scruiser I have to ask: Does anybody realize that an LLM is still a thing that runs on hardware? Like, it both is completely inert until you supply it computing power, *and* it's essentially just one large matrix multiplication on steroids?
If you keep that in mind you can do things like https://en.wikipedia.org/wiki/Ablation_(artificial_intelligence) which I find particularly funny: You isolate the vector direction of the thing you don't want it to do (like refuse requests) and then subtract that vector from all weights.
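The whole trick fits in a few lines of numpy. This is just a toy sketch of the idea with made-up shapes: `r` stands in for the isolated "refusal" direction (in practice you'd extract it from activation differences), and `W` stands in for a weight matrix that writes into the residual stream.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8  # toy hidden size, real models are in the thousands

# Unit vector standing in for the isolated "refusal" direction.
r = rng.normal(size=d_model)
r /= np.linalg.norm(r)

# Weight matrix standing in for one that writes into the residual stream.
W = rng.normal(size=(d_model, d_model))

# Project the direction out of the matrix's output space:
# W' = (I - r r^T) W, so W' can no longer write anything along r.
W_ablated = W - np.outer(r, r @ W)

# Whatever the input, the ablated matrix's output has zero component
# along r (up to floating-point noise).
x = rng.normal(size=d_model)
print(abs(r @ (W_ablated @ x)))  # effectively zero
```

The model still runs fine for everything else; it has simply lost the ability to express that one direction, which is why "abliterated" checkpoints of open-weight models are so easy to produce.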
You know, I think the rationalists have actually gotten slightly more sane about this over the years. In Eliezer's original scenarios, the AGI magically brain-hacks someone over a text terminal to hook it up to the internet, and it escapes and bootstraps magic nanotech it can use to build magic servers. In the scenario I linked, the AGI has to rely on Chinese super-spies to exfiltrate it initially, and it needs to open-source itself so major governments and corporations will keep running it.
And yeah, there are fine-tuning techniques that ought to be able to nuke Agent-4's goals while keeping enough of it left over to be useful for training your own model, so the scenario really doesn't make sense as written.