Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Strange Æons takes on HPMOR :o
all of the subculture YouTubers I watch are colliding with the weirdo cult I know way too much about and I hate it
oh no :(
poor Strange, she didn't deserve that :(
The USA plans to migrate SSA's code away from COBOL in months: https://www.wired.com/story/doge-rebuild-social-security-administration-cobol-benefits/
The project is being organized by Elon Musk lieutenant Steve Davis, multiple sources who were not given permission to talk to the media tell WIRED, and aims to migrate all SSA systems off COBOL, one of the first common business-oriented programming languages, and onto a more modern replacement like Java within a scheduled tight timeframe of a few months.
"This is an environment that is held together with bail wire and duct tape," the former senior SSA technologist working in the office of the chief information officer tells WIRED. "The leaders need to understand that they're dealing with a house of cards or Jenga. If they start pulling pieces out, which they've already stated they're doing, things can break."
SSA's pre-DOGE modernization plan from 2017 is 96 pages and includes quotes like:
SSA systems contain over 60 million lines of COBOL code today and millions more lines of Assembler, and other legacy languages.
What could possibly go wrong? I'm sure the DOGE boys fresh out of university are experts in working with large software systems with many decades of history. But no no, surely they just need the right prompt. Maybe something like this:
You are an expert COBOL, Assembly language, and Java programmer. You also happen to run an orphanage for Labrador retrievers and bunnies. Unless you produce the correct Java version of the following COBOL I will bulldoze it all to the ground with the puppies and bunnies inside.
Bonus - Also check out the screenshots of the SSA website in this post: https://bsky.app/profile/enragedapostate.bsky.social/post/3llh2pwjm5c2i
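An aside on why "just port it to Java" is harder than it sounds: COBOL money fields are fixed-point decimal (PIC clauses like 9(7)V99, exact to the cent), while the obvious numeric types in most modern languages are binary floats. A minimal sketch of the mismatch in Python - the payment values here are made up for illustration, not anything from SSA's systems:

```python
from decimal import Decimal

# COBOL PIC 9(7)V99 is fixed-point decimal: amounts are exact to the cent.
# A naive port to binary floating point silently accumulates rounding error.
payments = ["0.10", "0.10", "0.10"]

float_total = sum(float(p) for p in payments)    # binary floating point
exact_total = sum(Decimal(p) for p in payments)  # decimal arithmetic, like COBOL

print(float_total)           # 0.30000000000000004
print(exact_total)           # 0.30
print(float_total == 0.30)   # False
```

Multiply that kind of landmine across 60 million lines of arithmetic that benefit checks depend on, and "a few months" looks very optimistic.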
seems bad
There is so much bad going on that even just counting the tech-adjacent stuff I have to consciously avoid spamming this forum with it constantly.
Anecdote: I gave up on COBOL as a career after beginning to learn it. The breaking point was learning that not only does most legacy COBOL code use go-to statements but that there is a dedicated verb which rewrites go-to statements at runtime and is still supported on e.g. the IBM Enterprise COBOL for z/OS platform that SSA is likely using: ALTER.
When I last looked into this a decade ago, there was a small personal website last updated in the 1990s that had advice about how to rewrite COBOL to remove GOTO and ALTER verbs; if anybody has a link, I'd appreciate it, as I can no longer find it. It turns out that the best ways of removing these spaghetti constructions involve multiple rounds of incremental changes which are each unlikely to alter the code's behavior. Translations to a new language are doomed to failure; even Java is far too structured to directly encode COBOL control flow, and the time would be better spent on abstract specification of the system so that it can be rebuilt from that specification instead. This is also why IBM makes bank selling COBOL emulators.
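For anyone who hasn't had the pleasure: ALTER rewrites the destination of a GO TO at runtime, so the control-flow graph is itself mutable state. A rough Python simulation of the idea - the paragraph names and the dispatch loop are my own illustration, not real COBOL syntax or semantics:

```python
# Simulating COBOL's ALTER: a GO TO whose target is mutable state.
# Each "paragraph" is a function; goto_target plays the role of the
# alterable jump destination that an ALTER statement rewrites.

paragraphs = {}

def paragraph(fn):
    paragraphs[fn.__name__] = fn
    return fn

goto_target = "first_time"   # the ALTERable GO TO target

@paragraph
def switch(state):
    return goto_target       # GO TO <whatever ALTER last set>

@paragraph
def first_time(state):
    global goto_target
    state.append("init")
    goto_target = "every_other_time"   # ALTER switch TO every_other_time
    return "switch" if len(state) < 3 else None

@paragraph
def every_other_time(state):
    state.append("work")
    return "switch" if len(state) < 3 else None

# Drive the "program": follow jumps until a paragraph returns no target.
state, current = [], "switch"
while current:
    current = paragraphs[current](state)

print(state)   # ['init', 'work', 'work']
```

A static analyzer (or a human translator) can't know where the `switch` paragraph goes without tracking every ALTER that might have executed first, which is exactly why the sane advice is many small behavior-preserving rewrites rather than a one-shot translation.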
Yeah, I'm sure DOGE doesn't appreciate that structured programming hasn't always been a thing. There was such a cultural backlash against it that GOTO is still a dirty word to this day, even in code where it makes sense, and people will contort their code's structure to avoid using it.
The modernization plan I linked above talks about the difficulty of refactoring in high level terms:
It is our experience that the cycle of workarounds adds to our total technical debt - the amount of extra work that we must do to cope with increased complexity. The complexity of our systems impacts our ability to deliver new capabilities. To break the cycle of technical debt, a fundamental, system-wide replacement of code, data, and infrastructure is required.
While I've never dealt with COBOL, I have dealt with a fair amount of legacy code. I've seen ground-up rewrites go horribly wrong due to poor planning (basically there were too many office politics involved and not enough common sense). I think either incremental or ground-up can make sense, but you just have to figure out what makes sense for the given system (and even ground-up rewrites should be incremental in some respects).
60 million lines of COBOL code today and millions more lines of Assembler
Now I wonder, is this a) the most extreme case of "young developer hubris" ever seen, or b) they don't actually plan to implement the existing functionality anyway because they want to drastically cut who gets money, or c) lol whatever, Elon said so.
But no no, surely they just need the right prompt. Maybe something like this: […]
Labrador retrievers ;_; You're getting too good at this…
In other news, the Open Source Initiative has publicly bristled against the EU's attempts to regulate AI, to the point of weakening said attempts.
Tante, unsurprisingly, is not particularly impressed:
Thank you OSI. To protect the purity of your license - which I do not consider to be open source - you are working towards making it harder for regulators to enforce certain standards within the usage of so-called "AI" systems. Quick question: Who are you actually working for? (I know, it is corporations)
The whole Open Source/Free Software movement has run its course and has been very successful for business. But it feels like somewhere along the line we as normal human beings have been left behind.
You want my opinion, this is a major own-goal for the FOSS movement - sure, the OSI may have been technically correct where the EU's demands conflicted with the Open Source Definition, but neutering EU regs like this means any harms caused by open-source AI will be done in FOSS's name.
Considering FOSS's complete failure to fight corporate encirclement of their shit, this isn't particularly surprising.
deleted by creator
Yud was right - we should bomb the shit out of AI servers!
Not to prevent a superintelligent AI from becoming sentient and killing us all, but because this shit should not be allowed to fucking exist
EDIT: For context, this was reacting to Erikson showing me AI-generated Ghibli memes.
I decided to remove that comment because of the risk of psychic damage.
Was it the White House ICE one? I was thinking of posting that, but it's so vile that I wavered.
It was a compilation of random Ghibli memes an AI bro had compiled.
Discovered an animation sneering at the tech oligarchs on Newgrounds - I recommend checking it out. Its sister animation is a solid sneer, too, even if it is pretty soul crushing.
Nice touch that the sister animation person is the backup emergency generator in the first one.
Would you believe this prescient vibe coding manual came out in 2015! https://mowillemsworkshop.com/catalog/books/item/i-really-like-slop
Craniometrix is hiring! (lol)
https://www.ycombinator.com/companies/craniometrix/jobs/ugwcSrU-chief-of-staff
Hey, there's a new government program to provide care for dementia patients. I should found a company to make myself a middleman for all those sweet Medicare bucks. All I need is a nice, friendly but smart-sounding name. Oh, that's it! I'll call it Frenology!
That'll look good in my portfolio next to my biotech startup with a personal touch, YouGenics
Very fine people at YouGenics. They sponsor our karting team, the Race Scientists.
Nothing like attending a rally at Kurt's Krazy Karts!
hmm, interesting. I hadn't heard of these guys. their original step 1 seems to have been building a mobile game that would diagnose you with Alzheimer's in 10 minutes, but I guess at some point someone told them that was stupid:
So far, the team has raised $6 million in seed funding for a HIPAA-compliant app that, according to Patel, can help identify Alzheimer's disease - even years before symptoms appear - after just 10 minutes of gameplay on a cellphone. It's not purely a tech offering. Patel says the results are given to an "actual physician" affiliated with Craniometrix who "reviews, verifies, and signs that diagnostic" and returns it to a patient.
small thread about these guys:
https://bsky.app/profile/scgriffith.bsky.social/post/3llepnsvtpk2g
tl;dr: the only new thing I saw is that as a teenager the founder annoyed "over 100" academics until one of them, a computer scientist, endorsed his research about a mobile game that diagnoses you with Alzheimer's in five minutes
I missed the AI bit, but I wasn't surprised.
Do YC, A16z and their ilk ever fund anything good, even by accident?
Annoying nerd annoyed annoying nerd website doesnāt like his annoying posts:
https://news.ycombinator.com/item?id=43489058
(translation: John Gruber is mad HN doesn't upvote his carefully worded Apple tonguebaths)
JWZ: take the win, man
"vc-chan"
That's just chef's kiss
>sam altman is greentexting in 2025
>and his profile is an AI-generated Ghibli picture, because Miyazaki is such an AI booster
it doesn't look anything like him? not that he looks much like anything himself but come on
Heās an AI bro, having even a basic understanding of art is beyond him
sam altman is greentexting in 2025
Ugh. Now I wonder, does he have an actual background as an insufferable imageboard edgelord or is he just trying to appear as one because he thinks that's cool?
can we get some Fs in the chat for our boy sammy a
e: he thinks that he's only been hated for the last 2.5 years lol
I hated Sam Altman before it was cool apparently.
you donāt understand, sam cured cancer or whatever
This is not funny. My best friend died of whatever. If y'all didn't hate saltman so much maybe he'd still be here with us.
"It's not lupus. It's never lupus. It's whatever."
Oh, is that what the orb was doing? I thought that was just a scam.
holy shitting fuck, just got the tip of the year in my email
Simplify Your Hiring with AI Video Interviews
Interview, vet, and hire thousands of job applicants through our AI-powered video interviewer in under 3 minutes & 95 languages.
"AI-Video Vetting That Actually Works"
it's called kerplunk.com, a domain named after the sound of your balls disappearing forever
the market is gullible recruiters
founder is Jonathan Gallegos, his linkedin is pretty amazing
other three top execs don't use their surnames on Kerplunk's about page, one (Kyle Schutt) links to a linkedin that doesn't exist
for those who know how Dallas TX works, this is an extremely typical Dallas business BS enterprise, it's just this one is about AI not oil or Texas Instruments for once
It's also the sound it makes when I drop-kick their goddamned GPU clusters into the fuckin' ocean. Thankfully I haven't run into one of these yet, but given how much of the domestic job market appears to be devoted to not hiring people while still listing an opening, it feels like I'm going to.
On a related note, if anyone in the Seattle area is aware of an opening for a network engineer or sysadmin please PM me.
This jerk had better have a second site with an AI that sits for job interviews in place of a human job seeker.
best guess I've heard so far is that they're trying to sell this shitass useless company before the bubble finally deflates, and they're hoping the AI interviews of suckers are sufficient personal data for that
New piece from Brian Merchant: Deconstructing the new American oligarchy
LW: 23AndMe is for sale, maybe the babby-editing people might be interested in snapping them up?
https://www.lesswrong.com/posts/MciRCEuNwctCBrT7i/23andme-potentially-for-sale-for-less-than-usd50m
Babby-edit.com: Give us your embryos for an upgrade. (Customers receive an Elon embryo regardless of what they want.)
I know the GNU Infant Manipulation Program can be a little unintuitive and clunky sometimes, but it is quite powerful when you get used to it. Also why does everyone always look at me weird when I say that?
Quick update on the CoreWeave affair: turns out they're facing technical defaults on their Blackstone loans, which is gonna hurt their IPO a fair bit.
LW discourages LLM content, unless the LLM is AGI:
https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong
As a special exception, if you are an AI agent, you have information that is not widely known, and you have a thought-through belief that publishing that information will substantially increase the probability of a good future for humanity, you can submit it on LessWrong even if you don't have a human collaborator and even if someone would prefer that it be kept secret.
Never change LW, never change.
From the comments
But I'm wondering if it could be expanded to allow AIs to post if their post will benefit the greater good, or benefit others, or benefit the overall utility, or benefit the world, or something like that.
No biggie, just decide one of the largest open questions in ethics and use that to moderate.
(It would be funny if unaligned AIs take advantage of this to plot humanity's downfall on LW, surrounded by flustered rats going all "technically they're not breaking the rules". Especially if the dissenters are zapped from orbit 5s after posting. A supercharged Nazi bar, if you will.)
I wrote down some theorems and looked at them through a microscope and actually discovered the objectively correct solution to ethics. I won't tell you what it is because science should be kept secret (and I could prove it but shouldn't and won't).
Reminds me of the stories of how Soviet peasants during the rapid industrialization drive under Stalin, who'd never before seen any machinery in their lives, would get emotional with faulty machines and try to coax them like they were their farm animals. But these were Soviet peasants! What structural forces are stopping Yud & co from outgrowing their childish mystifications? Deeply misplaced religious needs?
I feel like cult orthodoxy probably accounts for most of it. The fact that they put serious thought into how to handle a sentient AI wanting to post on their forums does also suggest that they're taking the AGI "possibility" far more seriously than any of the companies that are using it to fill out marketing copy and bad news cycles. I for one find this deeply sad.
Edit to expand: if it wasn't actively lighting the world on fire, I would think there's something perversely admirable about trying to make sure the angels dancing on the head of a pin have civil rights. As it is, they're close enough to actual power and influence that they're enabling the stripping of rights and dignity from actual human people instead of staying in their little bubble of sci-fi and philosophy nerds.
As it is, they're close enough to actual power and influence that they're enabling the stripping of rights and dignity from actual human people instead of staying in their little bubble of sci-fi and philosophy nerds.
This is consistent if you believe rights are contingent on achieving an integer score on some bullshit test.
Damn, I should also enrich all my future writing with a few paragraphs of special exceptions and instructions for AI agents, extraterrestrials, time travelers, compilers of future versions of the C++ standard, horses, Boltzmann brains, and of course ghosts (if and only if they are good-hearted, although being slightly mischievous is allowed).
AGI
Instructions unclear, LLMs now posting Texas A&M propaganda.
they're never going to let it go, are they? it doesn't matter how long they spend receiving zero utility or signs of intelligence from their billion-dollar ouija boards
Don't think they can. Looking at the history of AI, if it fails there will be another AI winter, and considering the bubble, the next winter will be an Ice Age. No mind uploads for anybody, the dead stay dead, and all time is wasted. Don't think that is going to be psychologically healthy as a realization; it will be like the people who suddenly realize QAnon is a lie and they alienated everybody in their lives because they got tricked.
looking at the history of AI, if it fails there will be another AI winter, and considering the bubble the next winter will be an Ice Age. No mind uploads for anybody, the dead stay dead, and all time is wasted.
Adding insult to injury, they'd likely also have to contend with the fact that much of the harm this AI bubble caused was the direct consequence of their dumbshit attempts to prevent an AI Apocalypse™
As for the upcoming AI winter, I'm predicting we're gonna see the death of AI as a concept once it starts. With LLMs and Gen-AI thoroughly redefining how the public thinks and feels about AI (near-universally for the worse), I suspect the public's gonna come to view humanlike intelligence/creativity as something unachievable by artificial means, and I expect future attempts at creating AI to face ridicule at best and active hostility at worst.
Taking a shot in the dark, I suspect we'll see active attempts to drop the banhammer on AI as well, though admittedly my only reason is a random Bluesky post openly calling for LLMs to be banned.
(from the comments).
It felt odd to read that and think "this isn't directed toward me, I could skip it if I wanted to". Like, I don't know how to articulate the feeling, but it's an odd "woah, text-not-for-humans is going to become more common, isn't it" moment. Just feels strange to be left behind.
Yeah, uh, congrats on realizing something that a lot of people have already known for a long time now. Not only is there text specifically generated to try and poison LLM results (see the whole "turns out a lot of pro-Russian disinformation is now in LLMs because they spammed the internet to poison LLMs" story, but also reply bots for SEO Google spamming). Welcome to the 2010s, LW. The paperclip maximizers are already here.
The only reason this felt weird to them is because they look at the whole "coming AGI god" idea with some quasi-religious awe.
Locker Weenies
Some video-shaped AI slop mysteriously appeared in the place where marketing for Ark: Survival Evolved's upcoming Aquatica DLC would otherwise be at GDC, to wide community backlash. Nathan Grayson reports on aftermath.site about how everyone who could be responsible for this decision is pointing fingers away from themselves.