

lol, but not completely inaccurate. “You are a pedantic code reviewer. Review the MR you just created”
Just a regular Joe.


It’s all about the ability and willingness to judge. If you can judge good code, you can write good code, and you can guide juniors and AI to write good code. If you give up your judgement, all is lost.
I haven’t done 9-5 programming (edit: and especially not on a single codebase!) in many years. My job slowly evolved into more architecture and collaboration (between product teams, architects & dev teams), with the odd bit of code to kickstart a project, or a dive in to help resolve an issue.
In that respect, it’s great to be working more directly with code again. Both in languages I know well, and those I am learning. Most teams around me are embracing AI tools, and coordination is easier.
As for the general risk to cognitive ability: it absolutely exists. But there have always been people who want to understand more, and those who are happy to solve a problem without really understanding it (often producing poor code / laying landmines, costing everyone time).
We now have more tools at our disposal to build & maintain quality systems. Those who learn to use the tools well and understand the problem domain will outperform those who don’t. Those who can’t (or won’t) will slowly be pushed out. And businesses who can’t differentiate will dig their own graves.
If you can, then do it.
I work faster with CLI coding agents though. Previously I would spend less time designing and more time iterating on code as ideas evolve - this was natural, as coding was slow enough to let me think through the design as I went.
Now, I spend more time on design, and then have mini-sessions to implement it. I prefer scaffolding codebase + types, then implementing it feature by feature.
The more I use AI, the better I am getting at this flow, and the better the resulting code is. New projects get completed in days, and now come with good docs.
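A minimal sketch of that scaffold-first flow (purely illustrative names, not from any real project): define the domain types up front as the design contract, then fill in each function in its own mini-session.

```typescript
// Step 1: scaffold the types first — they pin down the design
// before any implementation session starts.
interface Invoice {
  id: string;
  amountCents: number;
  paid: boolean;
}

// Step 2: each feature function starts as a typed stub, then gets
// implemented (by you or an agent) in a focused session.
function totalOutstanding(invoices: Invoice[]): number {
  return invoices
    .filter((inv) => !inv.paid)
    .reduce((sum, inv) => sum + inv.amountCents, 0);
}

console.log(totalOutstanding([
  { id: "a", amountCents: 500, paid: false },
  { id: "b", amountCents: 300, paid: true },
]));
// 500
```

The point is that the type scaffold gives the agent (and you) a stable contract to implement against, feature by feature.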
Where it really adds value nowadays (but not in the beginning) is cross-project changes and coordination. UI changes + API changes + DB changes + workflow changes, different codebases & languages… Feature design & implementation plan, discuss and agree, then implement it across all projects, each with their own norms & constraints.
They can increase the efficiency in developing quality software. For most use-cases, we still want active human collaboration and review… describe the intent, discuss approaches, let it implement changes one by one, fixing/reviewing as you go. More like pair programming, with fewer typos.
But the moment you just wave through some change without thinking, you’re not doing software engineering anymore.
It’s similar when writing… invest the time, think through it and discuss it, review as you go, and you can get quality out of it. I hate it for normal email/letters, but I love it for docs, where everything can be cross-referenced, queried, updated more consistently and faster than I could manage it myself. Same caveat applies.
DLSS, AI? Downvote. You should be upscaling each frame with gimp or krita and a drawing tablet. Tut tut tut. /s


spits on ground, squints reeeeaaal slow
Now listen here sonny… we don’t take kindly to none of that AI business around these parts.
We is simple, God fearin’ folk, raised on sweat, dirt, and good honest labor, not all of that fancy machine learnin’ contraption nonsense.
Ain’t no place for thinkin’ machines where a man’s meant to use his own two hands. So I reckon I’ll mosey on over and downvote this here post myself, nice and proper.
If I were Iran (I’m not, ftr) and China (also not), I would only offer safe passage to ships flying non-US-buddy flags and insured by a Chinese shipping insurer. In turn, the Chinese insurer insists on indemnity from sanctions until the Iran situation is concluded at the UN, to restore worldwide economic stability.
Safe passage for the rest would be pending the ceasefire & reparation negotiations (to be held in a public UN forum in NYC).
I shudder at the thought.
a distinct lack of sufficient push-back to use established packages and libraries
Yes, but OTOH there is the terrible tendency of some developers to use every package under the sun, creating integration pain. If the technical solution is well defined or simple enough (eg. adding particular HTTP headers, min/max/isEven functions), then let the agent just do it instead of pulling in a potential security risk. If it’s finicky and error prone (eg. JWTs, crypto), then use the established libraries…
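To make the point concrete: helpers of this size (the npm `is-even` package is the canonical cautionary tale) are a few lines to write inline, versus a whole dependency chain to audit. A sketch, with illustrative names:

```typescript
// Trivial enough that a dependency adds risk without value:
function isEven(n: number): boolean {
  return n % 2 === 0;
}

// Clamp a value into [min, max] — three lines beats a package install.
function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max);
}

console.log(isEven(4));       // true
console.log(clamp(10, 0, 5)); // 5
```

For JWTs or crypto, the calculus flips: the subtle failure modes are exactly why the battle-tested library wins.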
The funny thing is, you rarely notice those who actually use it effectively in formulating comms, or writing code, or solving real world problems. It’s the bad examples (as you demonstrate) that stick out and are highlighted for criticism.
Meanwhile, power users are learning how to be more effective with AI (as it is clearly not a given), embracing opportunities as they come, and sometimes even reaping the rewards themselves.
Heh. I often use LLMs to strip out the unnecessary and help tighten my own points. I fully agree that most people are terrible at writing bug reports (or asking for meaningful help), and LLMs are often GIGO.
I think the rule applies that if you cannot do it yourself, then you can’t expect an LLM to do it better, simply because you cannot judge the result. In this case, you are more likely to waste other people’s time.
On the other side, it is possible to have agents give useful feedback on bug reports, request tickets, etc. and guide people (and their personal AI) to provide all the needed info and even automatically resolve issues. So long as the agent isn’t gatekeeping and a human is able to be pulled in easily. And honestly, if someone really wants to speak to a person, that is OK and shouldn’t require jumping through hoops.

There is still value in understanding how things work under the hood, even if it’s not something you will do every day. Although I agree that this particular tail-eating example may be hard to pull off.
On a general note: personalized learning is already one of the most human-positive uses of AI. Sure, we’d all prefer an infinitely patient and all-knowing human (with cookies, milk and hugs) at our beck and call, but as we’re short on qualified staff due to world fucked-up-ness, supplementing it with inexpensive tailored learning tools seems like a good thing.
It’s only a problem if they claimed it as their own or it didn’t add value, AND it wasted your time as a result.
Sometimes the experts just know how to search more effectively in their domain (which nowadays increasingly means using the right context/prompt with some AI, and was formerly known as Google-Fu, before Google search turned to shit).
To be genuinely helpful and polite, they’ll do a little legwork to respond personally and accurately… others might be super busy, or just dicks who don’t respect you or your time.
Try not to be that dick yourself, though. If you are asking someone for help, show your work and provide relevant info so they don’t waste their time.
Nobody says to blindly trust it…
It is about respecting everyone’s time…
For example, if an executive were to claim “We don’t have any solution to X in the company” in an email as justification for investment in a vendor, it might cost other people hours as they dig into it. However, if AI fact-checked it first by searching code repos, wikis and tickets, and found it wasn’t true, then maybe that email wouldn’t have been sent at all, or it would have acknowledged the existing product and led to a crisper discussion.
AI responses often only need a quick sniff by a human (eg. click the provided link to confirm)… whereas BS can derail your day.
We should share our knowledge and intelligence with AIs and people alike, and not ignorance. Use the tools at our disposal to avoid wasting others’ valuable time, and encourage others to do the same.
Sure… copy & paste is copy & paste.
However, LLMs can help to formulate a scattered braindump of thoughts and opinions into a coherent argument / position, fact check claims, and help to highlight faulty thinking.
I am happy if someone uses AI first to come up with a coherent message, bug report, or question.
I am annoyed if it’s ill-researched/understood nonsense, AI assisted or not.
Within my company, I am contributing to an AI-tailored knowledge base, so that people (and AI) can efficiently learn just-in-time.


Some were, sure, and those typically harshly. As in the USSR & GDR, many party members were just ordinary people, for whom party membership was necessary or useful for their work life, and supporting the wrong party could cost you everything.


Luckily East Germany refused to be stained, and instead deposited all the blame and collective guilt with West Germany.
“The war? That was their fault… over there,” says the former Nazi Party member, pointing westward while biting into a Spreewälder Gurke during his lunch break from a cigarette factory, where he serves as Cultural Officer.


If your company has an enterprise/privacy agreement with Adobe, it might be considered addressed, similar to the millions of companies using Microsoft 365 and Sharepoint.
If, OTOH, it’s a “free” feature of Adobe, it could be eating your company’s data without constraints.
If the latter, let us know your company’s name so that we can avoid it.
Business Plan now has the option for subscription-free Codex-only users with PAYG pricing, and configurable limits.
Not quite sure about enterprise – they might be doing an Anthropic there, where enterprise tokens are 100% PAYG.
For normal Plus and Business plans: rate limits for the 5-hour & weekly windows remain, similar to before. The usage calculations will change a bit, though.