Large Language Models and other related algorithms are not AI, and no amount of marketing will convince me otherwise. As a result, I refuse to call them AI when talking to people about them.
Will you differentiate your understanding of what AI is from LLMs?
Something with a mind. The term floating around now is “artificial general intelligence.” My primary objection is that a giant pile of poorly understood machine learning, trained on garbage scraped from social media, bears no resemblance to a thinking mind, and calling it “AI” makes the term practically useless. Where do we draw the line between a complex algorithm and an “AI”? What makes something an “AI” rather than a simple algorithm?
They are AI though. They’re just not Artificial General Intelligence.
My definition of AI comes from books and media: unless it exhibits actual intelligence, it is not AI. Building sensible sentences from large amounts of data, while not understanding what it is actually saying or whether it is correct or consistent, does not make an intelligence.
But it does understand it since it’s able to answer arbitrary questions, no?
Nope, it’s only matching the prompt with the most likely answer from its training set. Do you remember, in the early days, when it was asked slightly tweaked riddles and got them wrong? It would just spew out something that sounded like the original answer but was completely wrong in the new context. Or how it made up nonexistent court cases for that lawyer who tried to use it without checking whether it was correct?
LLMs are just guessing the answer based on the millions of similar answers they were trained on. An LLM is a language syntax generator; it has no clue what it is actually saying. They are extremely advanced, and they are getting better at hiding their flaws, but at their core they are not actual intelligence.
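The “most likely continuation” idea described above can be sketched with a toy bigram model. This is a deliberately oversimplified stand-in for an LLM (real models predict over learned representations, not raw word counts), and the training text here is made up for illustration: the model picks whichever word most often followed the previous word, with no notion of meaning or correctness.

```python
# Toy bigram "language model": predicts the statistically most likely
# next word from word-pair counts, with no understanding involved.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# Generate a "sentence" by greedily taking the most likely next word.
word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # grammatical-looking, but meaningless
```

The output is locally plausible word-by-word yet says nothing true or intended, which is the point being made about syntax generation without comprehension.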
I know this; I’ve worked on LLMs and other neural networks, so I was wondering what kind of distinction you could draw. Humans do the same thing; they just have more neurons and use more sophisticated training regimes, activation mechanisms, and propagation patterns.
So what I’m saying is that you can’t tie intelligence to the fundamental mechanism, because the mechanism is the same; humans are just more developed. Maturity, on the other hand, is a highly subjective and arbitrary criterion: when is a system mature enough to be considered intelligent?
I recently saw another lemming call LLMs “spicy autocomplete” instead of AI, which seemed appropriate: calling them AI, while technically correct, leads some people to think an LLM is intelligent. I plan to use that terminology.
As someone with published papers on machine learning, I can tell you that LLMs are artificially intelligent systems, at least according to the agreed-upon industry and academic definitions. I don’t really care about your headcanon definition. I just want to be clear for anyone else who comes across this comment and doesn’t know otherwise.
Thanks, been arguing this for ages.