So this is a bit of a weird one. I have a tech background, which lends me some extra authority, but I’m working in a non-tech field at the moment. They’re taking this event pretty seriously, flying me out from Beirut to one of their Gulf offices, so I’d hope they want to make the trip worth it and aren’t only listening for AI boosterism.
I’m one of the few people on my team who doesn’t use these tools every day. The only benefit I’ve ever gotten was as an alternative thesaurus when the search engine results don’t give me exactly what I want, and as a supercharged content aware fill on my personal laptop when I’m dicking around editing images. I use a low memory local model from 2022. This is all I need or want. I also understand the tech on a much more fundamental level thanks to my background. Not to give out too much information, but we built simulated pseudo tensor cores on an FPGA in university. I’m not a machine learning engineer, but I understand what this thing is better than anyone else in that meeting will - and certainly anyone with AI in their job title.
I have to temper the fuck out of my tone and frustration to make sure I can get messaging across. I also need to be careful not to come off as panicking about AI “stealing” my job. I have a completely different career path than all those business people, so I don’t know if that’s something on their mind. I also, you know, work from another country. I’m the cheap offshore labor.
I’m obviously not going to word all of the below this way, is my point. I also have to pick my battles, because most of the people with serious authority haven’t had a real job in years and think their magic workplan generator and semi-reliable banana bread ratio calculator is the future of work, humanity, and consciousness.
Within my organization, I’ve seen people with years of knowledge and experience throw it out of the window because of the magic text box in their pocket. I’ve seen people with very passable English push their work through a slop extruder to “make the wording more natural” - when it makes it look more generic. I’ve also had experiences where someone in the chain of custody of my hard work did this to something I’ve made, making the information within it more generic, diluting my effort.
Company policy has banned external chatbots because Microslop Copilot is “more secure”. I used to use GPTZero as a detection tool: I’d put in particularly egregious paragraphs and send a screenshot to whoever “wrote” it, to be like “Hey, this reads really badly and I expected actual tailored analysis here. Please write this yourself; if it ends up shorter, that’s okay.” Slop makes our hard work look cheap! But GPTZero offers an LLM service, I think it even offers a “de-roboticization” service, and it has fucking GPT in the name, so it’s blocked now.
However, despite the ban, I’m still getting ChatGPT links sent to my Whatsapp from superiors asking me if I “checked this” or “if we’re covering all of this”, with the most generic ass information in there. The corpus of the web is largely Western and this shit just does not apply here. You know it doesn’t apply here. If we were having a face to face conversation and I suggested this stuff you’d be shocked, boss man. What the fuck.
I hear people in meetings and in the offices when I fly in openly talking about ChatGPT being “better” and using it on their phones. I’m not fighting for Copilot’s market share here, I want these people to use their brains!
So many little things as well. Feedback on my work comes back more vague now, like someone brute forcing a prompt instead of actually, you know, being a part of the process of doing work. People who need time to write English or are not confident with their English are not gradually improving their language skills. Some interns and juniors don’t learn anything, and are outright awful at looking up obscure information the old fashioned way.
Over the last few months, I’ve helped push some work friends off paying for ChatGPT, after relentlessly bombarding them with “You already know this”, “This sounds off, you worded it better to me over lunch”, “This contradicts our call with those guys, don’t you remember the argument you made?”, that kind of thing. I find it funny that the antidote to this shit is to be 1% more conscious about your work.
I can also probably score a lot of brownie points by overemphasizing my mini pc / raspi homelab situation and using it to do “AI”, kind of reassuring them that I am not insulting their digital false idol.
Thing is, with war tensions (usually it’s them asking about my safety, ha), this meeting has been pushed forward, but they seem adamant about having it.
I would prefer not to enter job specifics for obvious reasons, but I do want to emphasize that the work we do can have direct positive impact on people’s lives and has done so already. Part of what keeps me sane in the corporate machine is the fact that I’ve somehow found myself in a position to nudge typically unfeeling processes into marginally improving the material conditions of normal people.
Oh but you’re a dbzer0 user, that’s a pro-AI instance!
Yes and no. The admin is upfront about this being a facet of technology they are interested in, in the technical sense. I am as well. Their focus is on mass proliferation of this stuff with user control. I can’t say I share their views on this tech era to the tee, but this does not give me the heebie jeebies the way mainstream machine learning worship does. Also they seem to be horrified at the social phenomenon that is modern “AI”, so… It’s not that big of a deal. I don’t hate the tech when it’s in a whitepaper or running in a university server semantically indexing its digital library. I hate it when it kills the web and the brains of the people around me.
AI worship and AI financing is also a bit different in the Middle East, but this is not the place for me to complain about that. Let’s just say there’s layers. Let’s just say a lot of shit keeps me up at night.
I’ve tested Claude Code at my work. I think it’s impressive what it can do. I give it some vague instruction, and it still manages to locate where to make the code change.
However, I don’t think it’s making me more productive, and I probably won’t continue using it. It often gives subpar results, and the time saved is minimal. Writing the code isn’t the bottleneck for me anyway, so there’s little time to save in the first place.
It also detaches me from the code in ways I don’t like. My job as a programmer isn’t only to write code. More importantly, it’s also about being an ambassador for the code I write. My role as a “code ambassador” is going to be more difficult if I don’t write it.
LLMs are only good if you value quantity over quality. Kind of like how some executives think the number of lines of code is a reliable productivity metric.
I think the most important part of programming that you mentioned and that executives don’t see is that 99% of programming is thinking, and only 1% is typing.
If you understand the codebase, anything you write should work well with it. If you don’t, your code snippet may have side effects that break a different section of the codebase. The majority of the work programmers are paid for is to learn the codebase and properly maintain it.
AI doesn’t maintain, and it doesn’t have the ability to understand. It can only predict what other people have written in a similar situation.
If you wouldn’t write a message to another business executive by repeatedly pressing the next predicted word and then blindly pressing send, then you shouldn’t be using AI either.
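The phone-keyboard comparison is apt, because next-word prediction is literally the training objective. A toy bigram model (tiny corpus invented purely for the demo; real LLMs do this with billions of parameters, but the objective is the same) makes the mechanism visible:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word: str) -> str:
    """Return the statistically most likely next word. No understanding involved."""
    return following[word].most_common(1)[0][0]

# "Repeatedly pressing the next predicted word":
word, sentence = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

The output is fluent-looking precisely because it mimics what the corpus already contained, which is the whole point of the comment above.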
Beyond LLMs being essentially a massive cybersecurity attack surface, vulnerable to many kinds of exploits…
Data Integrity.
A sandboxed LLM that is only on one person’s computer, that has to be engaged with in its own environment, output and input are deliberately moved from other systems or programs manually, by a user… that’s one thing.
Hook that LLM into an API?
More than one API?
Make it a foundational pivot point or decision/processing bus for an entire system?
The more and more integrated an LLM is into a complex system, the greater potential for it to do something unexpected, unintended, with effects that can range from minor to catastrophic.
A full stack based around an LLM should be thought of as comparable to running your entire business off of the Test environment, or just auto-pushing all experimental updates, patches, and changes live into Production.
Everything that an LLM touches is something that it can break: something there must be a backup plan to handle, a mitigation strategy for recovering from, and a plan to disconnect/disengage/disable things and fail over to a more robust system if the LLM is actively doing some kind of unwanted thing.
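One way to make that failover idea concrete: treat LLM output as untrusted input that has to pass a validation gate before it can act on anything, with a non-LLM fallback path. A minimal sketch (the action names, JSON shape, and escalation behaviour here are all invented for illustration, not from any real system):

```python
import json

# Explicit allowlist: the model can only trigger actions we pre-approved.
ALLOWED_ACTIONS = {"tag_ticket", "draft_reply"}

def validate_llm_output(raw: str):
    """Treat LLM output like untrusted user input: parse it, check the schema,
    check the allowlist. Return None on any failure."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model produced non-JSON; fail over
    if data.get("action") not in ALLOWED_ACTIONS:
        return None  # model invented an action; fail over
    return data

def handle(raw_llm_output: str) -> dict:
    checked = validate_llm_output(raw_llm_output)
    if checked is None:
        # Failover path: route to a human instead of letting the model act.
        return {"action": "escalate_to_human"}
    return checked

print(handle('{"action": "tag_ticket"}'))   # passes the gate
print(handle('{"action": "drop_tables"}'))  # caught, escalated
```

The design choice is the point: the LLM never touches Production directly, so the blast radius of “something unexpected” is bounded by the allowlist.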
If they are business people, focus on business stuff. I’m in tech as well, and these are the major issues I’ve seen:
- AI can help you learn and do stuff, but because of hallucinations you cannot rely on it. I’ve seen many cases where people broke stuff because “chatgpt said so”.
- Critical thinking is a core competency for a business, and it really takes a nosedive.
- LLMs don’t understand things; they are like a language center floating in a jar, but you need people who actually understand the business for it to work. I really feel most non-tech people do not get this. They need to understand there is no thinking, no understanding: for an LLM the Saturn V technical manuals and a conspiracy message board are the same.
- Related: you want to keep people in the job. They stay when they like the job, and intrinsic motivation > extrinsic motivation. One of the biggest factors in intrinsic motivation is the feeling of mastery, that you are doing a good job. And that gets axed completely by genAI.
- Reliance on US tech; they will use this in sanctions.
These are great points
for an LLM the Saturn V technical manuals and a conspiracy message board are the same.
This is well put, and I’m probably going to plagiarize it heavily.
you want to keep people in the job.
This may not be a goal of the company leadership. Most business AI “productivity” tools are being sold explicitly on the idea that they can replace 10-15% of the buyer’s workforce.
I think if you could show screenshots of requesting the same info twice and getting different results just because of phrasing, it would be eye-opening.
AI is a mile wide, skin deep.
It will tell you many things about a wide variety of topics, but it can only provide answers that appear correct on the surface.

Another analogy would be asking someone to multiply two 3-digit numbers in their head and write the answer in less than five seconds. Most people can guess that the answer will have 6 digits, and most people can write a random 6-digit number. Very few people are capable of checking if the given answer is correct.
An AI will give you the equivalent of a 6-digit number. If you don’t know the answer, it looks impressive. It’s only when you are capable of finding the answer for yourself that you realise the AI is usually wrong.

LLMs are made to be really good at language. They are also made to be confident. They will always give well-written answers with the highest confidence.
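The analogy is easy to demo in a few lines: producing a plausible-looking answer is trivial, while checking it is the actual work. (The numbers here are picked arbitrarily.)

```python
import random

a, b = 817, 394
# Any 6-digit number "looks right" at a glance, and takes no effort to produce.
confident_guess = random.randint(100_000, 999_999)

print(confident_guess)            # well-formatted, confidently stated
print(a * b)                      # the real answer only comes from doing the work
print(confident_guess == a * b)   # almost certainly False
```

The fluent formatting of the guess tells you nothing about whether it is correct, which is exactly the trap described above.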
If you want to rephrase an email or improve a resume, an LLM can give valuable feedback on various snippets. That doesn’t mean it’s always right, and it doesn’t mean you should always throw away what you have in favour of the LLM output.

One of the biggest downsides I’ve personally experienced (and you’ve made reference to) is gradually falling out of the practice of thinking. Thinking is a skill that takes constant practice, and it’s really easy to get into the habit of relying on an AI instead of thinking. In less time than you’d expect, you’re out of practice and unable to do simple tasks that you used to do easily.
This wouldn’t be a big deal if AI worked all the time, but in any case where it can’t give an answer, you can no longer fill in the blanks.

In programming you have 3 tiers of errors: compiler errors, runtime errors, and logic errors.
The easiest is compiler errors - the compiler can often tell you exactly how to fix it. Runtime errors are harder to identify, but an AI can help to resolve them.
The hardest is logic errors. These do not crash the program, and do not notify you of their existence. An AI will not usually notice these errors.

When programming yourself, you often think of all the ways you could solve the task, and the act of thinking often brings edge cases and logic issues to mind. When asking an AI to do the work, the AI does not think and the prompter does not think, so no one preempts any logic errors. This is already leading to massive amounts of technical debt, the extent of which is yet to be fully realised. One only has to look at recent Windows 11 bugs to see how quality is reduced and debugging time is increased whenever AI is used.
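A concrete example of that third tier (a made-up snippet for illustration): this parses fine and runs without a single warning, yet the result is wrong for every input, and nothing will ever flag it automatically.

```python
def average(values):
    """Intended to return the arithmetic mean of a list of numbers."""
    total = 0
    for v in values:
        total += v
    # Logic error: divides by len(values) - 1 instead of len(values).
    # No compiler error, no runtime crash - just silently wrong numbers.
    return total / (len(values) - 1)

print(average([2, 4, 6]))  # prints 6.0, but the real average is 4.0
```

Only someone who already knows what the answer should be will catch it, which is the point: a reviewer rubber-stamping generated code has no reason to stop here.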
Writing code is 5% of the time/cost, and maintaining code is the other 95%. AI can reduce the writing time, but drastically increases maintenance costs as a side effect. If you want to run a business for any reasonable period of time, you want the exact opposite.
The use of AI actively de-skills workers, increases subtle mistakes, reduces proofreading and error checking, and makes the company reliant on a costly external tool that could change or disappear at any moment.
I’m curious what you think of using AI for one-time tasks. Something that won’t have to be supported later. Like a grand poobah says “scrape all the code in our GitHub instance looking for XYZ”. Not that specifically, but some kind of one-off. That’s usually where I use it. I can slap together something that mostly works in much less time than it would take me to even type it, then spend a little time fixing weirdness. As soon as I run the script I move on and never look at it again.
I’ve had success using this approach. I would never let AI write something I plan on using in a prod setting.

If it really is a simple one-off project in a language I’ll never use again, I’ll use AI.
The downside is that if I didn’t use AI, then I would have learnt the basics of the language and would be able to do the next project much faster. Now, if I want to do another project in the same language, I’m starting from the beginning.
It depends entirely on how much you value learning.
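For scale, the kind of throwaway described above might look like this sketch in Python; the pattern and root directory are placeholders, and it assumes the repos are already cloned locally rather than hitting any GitHub API.

```python
import os

PATTERN = "XYZ"          # placeholder: whatever you're hunting for
ROOT = "./cloned-repos"  # placeholder: local checkouts of your org's repos

def find_matches(root: str, pattern: str):
    """Walk every file under root, returning (path, line number, line) for hits."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, start=1):
                        if pattern in line:
                            hits.append((path, lineno, line.rstrip()))
            except OSError:
                continue  # unreadable file in a one-off? skip it and move on

    return hits

for path, lineno, line in find_matches(ROOT, PATTERN):
    print(f"{path}:{lineno}: {line}")
```

Whether written by hand or generated, it never sees production, which is what makes the "run once and walk away" trade-off defensible.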
It’s only when you are capable of finding the answer for yourself that you realise the AI is usually wrong.
Exactly. Explaining that LLMs don’t give the right answer, they give an answer that appears correct to the average person. Sometimes, for trivial stuff, a thing that appears correct is correct. For complex matters, you need to ask yourself “if I asked my aunt to Google this, would I use that answer for my company?”.
If the answer is yes, then by all means, use LLMs for your work.
I am in a similar but smaller situation. I am making the case to my upper management about these tools. I, like you, hate them. I decided I’d take the position of someone who is optimistic and hopeful about these tools but practically annoyed by the destructiveness they are having on our day-to-day work. The problem is our devs fucking love this shit and keep giving glowing reviews.
I said that these tools are being quickly adopted by our team and let them quickly draft up management-level change proposals. However, these tools are causing significant friction, as the change proposals are riddled with errors and debugging them takes longer than writing them by hand in the first place. Seeing our team use these tools is reducing trust in coworkers’ output and causing everyone to triple-check work, yet we still end up with more issues. These tools eat monthly budget while reducing both the reliability and the efficiency of our output.
I would also focus on business stuff, mostly core business and risk questions:
- suppose AI could do everything they wanted, but it’s still an external company. They now depend on that company for 100% of their business. The AI could replace their entire company, including C-level. Is that the goal? If not (rhetorical), which percentage is acceptable? 50%? 30%? 80%? Which areas of the business do they think they can actually leave in the hands of AI?
- are they aware of the legal situation? Is there somebody making legally relevant decisions and are they using AI? How much are they basing their decisions on AI information (which is often wrong)? Are they comfortable with that risk? Would be great if they have a legal expert in the room. Ask them to name one (or three) examples of “serious / company threatening legal issues” and see if AI could cause them.
in the chain of custody of my hard work did this to something I’ve made, making the information within it more generic, diluting my effort.
- Don’t name names, but absolutely mention that this happened. “That time, it wasn’t a big deal, because you noticed, and you could fix it. But what if it slips through?” If the C-level gets faked KPIs and they make reasonable decisions on faked KPI data, would they like that?
The AI could replace their entire company, including C-level. Is that the goal?
To which the C-level, in their bloated ignorance, will answer that they can’t possibly be replaced, because their job is hard and nuanced and requires insight, unlike those coders who just type stuff all day or those engineers who just draw things and do maths more slowly than LLMs do.
Sure, but that’s their choice. All the OP can do is mention the possibility and let them think it through.
Even if it’s not a complete takeover, a company whose work is actually done 50-80% by someone else should worry that it’s losing its core competence, and about what happens when that “supply chain” arrangement gets interrupted.
But again, it’s their choice. If they consider it and declare that the risk of that happening is 0% and the expected damage to the company is $0, then fine, they don’t need to worry about it.
Or if they only plan to operate the company for another 6-24 months anyway and they’re looking for a nice cash-out and a clean winding down or selling of operations, that’s also a viable option for them: polish up operations for two years, sell, and if the company fails after that, someone else is holding the bag.
Should repost this at a different time. This is maybe the deadest, least active time for Lemmy. You’ll get better responses if you repost this same thing Monday morning.




