Sure… copy & paste is copy & paste.
However, LLMs can help formulate a scattered braindump of thoughts and opinions into a coherent argument or position, fact-check claims, and highlight faulty thinking.
I am happy if someone uses AI first to come up with a coherent message, bug report, or question.
I am annoyed if it’s ill-researched or poorly understood nonsense, AI-assisted or not.
Within my company, I am contributing to an AI-tailored knowledge base, so that people (and AI) can efficiently learn just-in-time.
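As a toy illustration of what that just-in-time lookup can look like, here is a minimal sketch using plain TF-IDF retrieval via scikit-learn; the documents and the query are invented placeholders, not entries from any real knowledge base:

```python
# Minimal sketch of just-in-time lookup over a small knowledge base.
# The documents and the query are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "How to request access to the build farm.",
    "Deployment checklist for the payments service.",
    "Known workarounds for flaky integration tests.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

def lookup(question: str, top_k: int = 1) -> list[str]:
    """Return the top_k knowledge-base entries most similar to the question."""
    query_vector = vectorizer.transform([question])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    best = scores.argsort()[::-1][:top_k]
    return [docs[i] for i in best]

# Both a person and an AI agent could call the same lookup.
print(lookup("my integration test keeps failing randomly"))
```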
LLMs do not add anything of value to bug reports; they add unnecessary padding that requires me to filter out the marketing speak to get down to the issue. I would much rather have their raw brain dump.
If somebody sends me their ChatGPT text, I now ask them to send me their prompt instead, so I don’t have to waste my time on a lengthy text that has the same amount of information as the original.
Being coherent is rarely the problem in bug reports; it’s the user not properly describing what the actual issue is.
I have gotten bullet-point bug reports that read like they were written by an insane person, yet were more useful than a nicely written ChatGPT message with zero information in it.
Heh. I often use LLMs to strip out the unnecessary and help tighten my own points. I fully agree that most people are terrible at writing bug reports (or asking for meaningful help), and with LLMs it’s often garbage in, garbage out.
I think the rule applies that if you cannot do it yourself, then you can’t expect an LLM to do it better, simply because you cannot judge the result. In that case, you are more likely to waste other people’s time.
On the other hand, it is possible to have agents give useful feedback on bug reports, request tickets, etc., guide people (and their personal AI) to provide all the needed info, and even automatically resolve issues, as long as the agent isn’t gatekeeping and a human can be pulled in easily. And honestly, if someone really wants to speak to a person, that is OK and shouldn’t require jumping through hoops.
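For what it’s worth, the non-gatekeeping version of that guidance can start out very simple. Here is a rough sketch (the required sections and their wording are my own invention, not any real triage bot) of an agent-side check that asks follow-up questions instead of blocking the reporter:

```python
# Rough sketch of an agent-side completeness check for bug reports.
# The required sections and their wording are invented for illustration.
REQUIRED_SECTIONS = {
    "steps to reproduce": "What did you do, step by step?",
    "expected": "What did you expect to happen?",
    "actual": "What happened instead?",
    "version": "Which version/build were you running?",
}

def review_bug_report(text: str) -> list[str]:
    """Return follow-up questions for any section missing from the report."""
    lowered = text.lower()
    return [
        question
        for section, question in REQUIRED_SECTIONS.items()
        if section not in lowered
    ]

report = "The app crashes sometimes. Expected: no crash. Actual: crash."
for question in review_bug_report(report):
    print("Please add:", question)
# -> asks for steps to reproduce and the version, so the reporter (or
#    their own AI) fills in the gaps instead of ping-ponging with a human.
```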
Until they solve the AI hallucination problem, I’ll never be able to trust it.
It’s a feature of text prediction, not a bug. They could fix it, but that would mean drastically increasing the size of the context of each piece of information (no idea what it’s called).
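To illustrate the “feature of text prediction” point: the model only ranks plausible next tokens, and nothing in that ranking encodes truth. A minimal sketch, assuming the Hugging Face transformers library and the public gpt2 checkpoint (my choices, not anything named above):

```python
# Sketch: a language model scores plausible continuations; the ranking
# reflects patterns in training text, not truth.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed public checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of Australia is", return_tensors="pt")
with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]

top = torch.topk(next_token_logits.softmax(dim=-1), k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {float(prob):.3f}")
# A fluent wrong continuation and a fluent right one look alike here;
# nothing in these probabilities encodes which answer is true.
```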
I believe it’s just complexity and token/compute usage.
You end up chasing diminishing returns as well (100% or even 95% accuracy is just not possible for certain areas of study, especially for niche topics).
It’s also 100% unfixable as a premise for the technology. I can enjoy an upscaling algorithm for my retro games to look more detailed at the cost of an odd artifact, but I sure as shit am not taking that risk for information gathering and general study.
I’m not knowledgeable enough to dispute your point. To the end user, though, the result is equally unreliable.
That doesn’t seem like a solvable thingy.
People tend to make stuff up, too. The difference is that with people, the bluff is often revealed in non-verbal communication.
Yeah, but we’ve known that about people since forever. Computers are expected to be reliable.
If hallucinations aren’t a solvable problem, then either AI is impossible, or we’re going about it the wrong way.
AI is very much possible; we are just thinking about it the wrong way.
We are expecting AI to have the three bests of both worlds:

- High I/O ability: we have that from computers.
- Determinism and correctness: computers have always had a high level of determinism, but never correctness, because a computer does not know what is correct [1].
- Intelligence and thought: intelligence is a perception. AI will always have a lower depth of thought than us as long as it is dependent upon us.
So we only get one best from the other world. In exchange for getting some of the person world, we have to deal with one worst of the computer world: we lose determinism, because we rely on the model being fuzzy at a higher level.
Of course, I don’t mean “determinism” in its exact and full meaning. The LLM still runs on top of a computer, so for the same internal saved state and the same external input (including any randomising functions that might be used), the output will still be the same. But you can’t get the kind of logical determinism that you expect from normal computer operations.
A dumbed down example to get my thoughts across: you can use any of `a + b`, `ADD(A,B)`, or `SUM(A:B)` and you will still get the same result.

[1] This boils down to the same thing one person once said to some computer guy: ‘If I enter the wrong numbers, will I still get the correct answer?’ ↩︎
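To make the parenthetical about saved state and randomising functions concrete, here is a minimal sketch, assuming the Hugging Face transformers library and the public gpt2 checkpoint (my choices, not anything named in the thread): with a fixed seed, even sampled output is reproducible, yet reproducible is not the same as correct:

```python
# Minimal sketch: reseeding before each run makes "fuzzy" sampling
# reproducible, but reproducible is not the same as logically correct.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed public checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Two plus two equals", return_tensors="pt")

runs = []
for _ in range(2):
    torch.manual_seed(42)                  # same internal saved state...
    out = model.generate(
        **inputs,                          # ...same external input...
        do_sample=True,                    # ...even with random sampling...
        max_new_tokens=10,
        pad_token_id=tokenizer.eos_token_id,
    )
    runs.append(tokenizer.decode(out[0]))

assert runs[0] == runs[1]  # ...gives the same output on the same machine,
print(runs[0])             # but nothing guarantees the text is *true*.
```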
Nobody says to blindly trust it…
Risky use case. Besides, why bother when you have to fact-check the fact-checker?
It is about respecting everyone’s time…
For example, if an executive were to claim “We don’t have any solution to X in the company” in an email as justification for investing in a vendor, it might cost other people hours as they dig into it. However, if AI had fact-checked it first by searching code repos, wikis, and tickets, and found it wasn’t true, then maybe that email wouldn’t have been sent at all, or it would have acknowledged the existing product and led to a crisper discussion.
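That pre-send check could be as unglamorous as a keyword search over internal sources. A hypothetical sketch; the sources and their contents below are made-up stand-ins for real code-search, wiki, and ticket APIs:

```python
# Hypothetical sketch: before the email goes out, look for counter-evidence
# to the claim in a few internal sources. All data here is invented.
SOURCES = {
    "code repos": ["payments-gateway README: our in-house solution to X"],
    "wiki": ["Design doc: solution to X, owned by the platform team"],
    "tickets": [],
}

def counter_evidence(topic: str) -> dict[str, list[str]]:
    """Return, per source, the documents that mention the claim's topic."""
    needle = topic.lower()
    hits = {
        source: [doc for doc in docs if needle in doc.lower()]
        for source, docs in SOURCES.items()
    }
    return {source: docs for source, docs in hits.items() if docs}

evidence = counter_evidence("solution to X")
if evidence:
    print("'We don't have any solution to X' looks wrong; see:", evidence)
# A human still gives the links a quick sniff before acting on them.
```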
AI responses often only need a quick sniff test by a human (e.g. click the provided link to confirm)… whereas BS can derail your day.
We should share our knowledge and intelligence with AIs and people alike, and not our ignorance. Use the tools at our disposal to avoid wasting others’ valuable time, and encourage others to do the same.