And the real maddening part is that search engines have been so enshittified to make way for AI that’s wrong like 9 times out of 10, so you’re forced to rely on it for answers, because if you try Google, the snake wraps around and eats its own tail by giving you an AI answer!
I’m absolutely baffled by it as someone who started their college career in computer science before switching majors. I was never the best programmer, yet it seems so ass-backwards to me that modern programmers aren’t writing pseudo-code and working things out on paper. I wasn’t in school that long ago. Did things really change that fast? Are people not doing formal logic anymore? Do they even learn binary and hex? Just what the fuck is happening to this field?
deleted by creator
this. llm code is the silver bullet for “idea guys”
I’m imagining a comedy with this dialog…
“Am I a programmer? A lowly programmer? Of course not! I’m an ideas guy.” As the plot unfolds, it turns out the guy has no idea how to do anything. All he does is enter AI prompts and then lie that he has yet another fantastic idea.
I was never the best programmer, yet it seems so ass-backwards to me that modern programmers aren’t writing pseudo-code and working things out on paper
Not a programmer, but as someone whose master’s degree was filled with “write 30 pages’ worth of documentation before starting a project”: when you’re actually working in the real world, half that shit goes out the window. So I can definitely see how a lot of people skip pseudocode and just brute-force things instead.
I was self-taught in programming before I started my college career, similarly (also switched majors, except I dropped out), and I don’t usually do pseudo-code. I guess I kinda do it in my head, or write out a plan for how it should work. I also don’t usually do very big projects. I’ve tried OS dev, but I have a hard time expanding beyond the tutorials on the wiki and keeping things organized and actually working. Mostly now I just write (I switched majors to literary studies).
I never went to school for programming, but I’ve worked as a dev for almost a decade. I do know binary, hex, and formal logic, but almost never use them. I’ve never written code on paper, but I’ve written informal flow diagrams on a whiteboard and in excalidraw. And my pseudo-code is usually just writing out actual code where the method doesn’t exist yet. I’ve never written out pseudo-English to plan what I was doing, though I’ve talked in pseudo-code speak, mostly when other people are piloting on screen share.
The thing is, our entire field is bad at what we do. For most software, the cost of an error is very low, and for a long time it was a very lucrative field that attracted a lot of people who were really bad at coding. So coding with AI is not significantly different from coding without it; there’s just now a much faster, and much less ethically acceptable, way of producing code.
50% of developers have less than 5 years of experience, and the number of new developers just keeps growing. We’re a profession of amateurs, with companies poaching the oldheads out from underneath each other.
The free models are much worse than the $500 per user/month enterprise ones. I have seen these be able to generate working features first hand at work, and I cannot deny that certain models are capable of implementing features when appropriate requirements are provided. To claim anything else would be to deny what I have seen with my own eyes.
However, therein lies the trap. Just because it is capable of achieving the provided task in one instance, doesn’t mean that it always provides an appropriate answer or solution in all cases.
But those who initially use it successfully tend to start believing its output uncritically. I noticed this in myself when I tried it at work, and I think this is a basic human, heck, even animal, condition: you are naturally inclined to trust an entity that initially provides you with beneficial output. You become less critical, as the output often sounds informed and convincing, and in many cases provably works as well (especially when a robust testing framework exists inside the project; it’s only through unit and integration tests that these AIs can even reliably implement features).
But this leads to an increasing reliance on the tech, and you stop being capable of arguing why the solution it generated works. You have to put in active effort to question what it’s doing, and you have no way of knowing whether it’s telling you the truth or lies, because it has no motive, and researching the facts can take so long that it completely defeats the point of automation. So it ends up being rather self-defeating in many cases, and can leave you less capable of solving problems yourself.
I think the most useful application for it, personally, is debugging: feed it a cryptic error message, and it will usually generate an answer that, while not necessarily accurate, gives you more pointers toward the true answer, much better than most search engines can.
I mean deepseek will make you working programs for 20 cents of tokens sometimes if the requirements are straightforward and it’s nothing too exotic.
I have a very close friend who is an engineer for programming (idk what the title is rn) at a very large company.
He says he has managed to keep one or two codebases “AI free” but when I asked if he has to review any AI code he said it’s completely unavoidable and everyone uses it now. He’s proud of the fact that they still require the coder to actually review the AI generated slop before passing it off to him.
It’s bleak
This is such a key point you make: the quality of search results, and of the information available to solve a problem, has degraded so far that you almost have to rely on web-search-enabled AI to do what you used to be able to do on your own, and in both cases you now have to put in a lot of extra effort to discern whether the information is at all useful.
And like you say, the situation will only recursively get worse as the two feed on each other further destroying informational value.
Very much this. I used to rely a lot on tutorials, devlogs, etc. to learn new patterns, but now search is so bad that LLMs are basically the only game in town.
If it’s not new tech (never use new libs), just add “before:2022” or such to the search
People are going to learn about Socialism from these tools. Having websites with easily laid out information debunking the common talking points is more important than ever.
With coding it’s easier to deceive yourself that the AI is doing a good job. There are tons of tools out there that can detect various kinds of problems in code and the AI can call those tools and change stuff until the warnings go away. So the code might look alright on first glance. Then half the time people don’t even understand the code they wrote themselves so they just look at changes across 50 different files and be like: fuck it, how much do I really care if this company goes up in flames?
It’s fine for boilerplate, simple programs. However, it will often make mistakes even for those, so you have to know what you’re looking at. Still saves time, though idk if, factoring in the actual energy usage etc., it’s actually saving you time and money without free money existing.
However, I have seen people write big programs with it and then be surprised that they don’t work. Even more worrying is when they do work, but then I walk through the code with whoever wrote it and they cannot explain how or why it is working.
It’s real engineering logic.
though idk if, factoring in the actual energy usage etc., it’s actually saving you time and money without free money existing.
llm end-user energy consumption is pretty low. probably depends on the provider rates and your dev salaries.
Yeah but inference cannot exist without the prohibitively expensive up-front cost of training. And of course the larger the model the more costly the inference. That’s why you read stories like “new trend in SV: pay in tokens.” Opus 4.6 is gonna mop the floor with a 2B param model designed to run on an edge PC, but the cost of getting to the point that it can be used, and actually using it, is still very high.
Inference is not that cheap. It is cheap when compared with training. Try running LLMs on a laptop and watch how quickly your battery is sucked dry. This is still the case when you have a GPU.
i’m probably using more power to microwave my pasta dinner
I’ve used it to create some simple scripts to do some tedious shit that I didn’t feel like coding myself but nothing serious or professional. For example:
“Here is a big file that has a bunch of data in it but I only need points X,Y,Z, formatted in a JSON which I have provided an example of. Write me a simple python script to do that.”
Works okay for that stuff. Always desk-check it with edge cases.
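For context, a minimal sketch of the kind of throwaway extraction script described above. The field names (“x”, “y”, “z”) and the target JSON shape are made up for illustration; a real run would read the actual data file instead of the inline sample.

```python
import json

def extract_points(records, keys=("x", "y", "z")):
    """Keep only the wanted keys from each record, skipping records
    that are missing any of them -- exactly the kind of edge case the
    comment above suggests desk-checking by hand."""
    out = []
    for rec in records:
        if all(k in rec for k in keys):
            out.append({k: rec[k] for k in keys})
    return out

if __name__ == "__main__":
    # Inline sample standing in for the "big file" of data
    data = [
        {"x": 1, "y": 2, "z": 3, "junk": "ignored"},
        {"x": 4, "y": 5},  # missing "z": silently dropped
    ]
    print(json.dumps(extract_points(data), indent=2))
```

The silent-drop behavior is itself a judgment call an LLM might make differently (e.g. raising an error instead), which is why checking the output against edge cases matters.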
LLMs do really well on short bash scripts, but often presume a lot about your system, which results in having to rewrite them anyway.
It can help with tedious but relatively non-complex work, or maybe speed up some exploratory work; anything else and it’s going to make ridiculous mistakes. It’s a useful tool occasionally, but nothing I’d lose sleep over if it disappeared.
The AI is right. Just delete that shit and install FreeBSD.
AI can’t solve anything that is even remotely a novel problem. It doesn’t have the training data for your specific problem. At best it’ll do a web crawl for you and summarize its findings.
If you want to really pull your hair out, take a look at AGENTS.md or SKILLS.md. State-of-the-art agentic coding practices: glorified README.md files (which the AI frequently doesn’t bother to read).
I will say one thing nice about LLMs: they are fairly “human” in the sense that they err in familiar ways. In a way, AI is automated human error.
It’s not that different from using Stack Overflow for parts or boilerplate code (since the AI probably just stole from there anyway). So you still need to know what’s going on, unless you literally just keep throwing prompts at every error for 3 hours until it magically works.
I use AI mostly to troubleshoot all of the vague errors that come out of Python or SQL, not to write my entire codebase. It’s a [relatively shitty] tool, not the ‘I Win’ button that everybody claims it is.
Similarly, I like having it summarize search results so I can click into the actually relevant links. But yeah, it’s pretty garbage most of the time. I’m definitely on team ‘fuck AI’; I lived without it before, I can live without it again.
I still take a peek at /r/selfhosted sometimes and the situation is dire. The mods have completely given in to the slop trough.
Even on /c/selfhosted a lot of projects were being advertised that were blatant AI slop
They do. Most programmers think they’re above average (there are actual statistics on this, maybe from the Stack Overflow survey) and are mediocre enough that they find it useful/faster long term.
I’m statistically likely to be mediocre myself, but I would rather try to improve than rely on LLMs. Every single coworker of mine who is actually above average hates the forced AI usage.


















