Yes! This is a brilliant explanation of why language use is not the same as intelligence, and why LLMs like ChatGPT are not intelligent. At all.

  • fidodo@lemm.ee · 1 year ago

    We know the math and the mechanisms of how LLMs work. The only thing we don’t understand is the significance and capabilities of the probabilistic associations they assign to symbol sequences.

    While we don’t know how a human brain works in detail, we do know how a human brain tackles problem solving because we’re sentient beings and we can be introspective about how we think through a problem.

    We can look at how vectors flow through a neural network (remember, LLMs don’t even have a concept of words; they transform tokens into vectors and then build mathematical associations between those vectors, so it’s all numbers), and we can see from the data that there’s nothing resembling a world simulation in how it actually works.
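    As a rough sketch of that token-to-vector step (the tiny vocabulary, embedding size, and random weights here are made up purely for illustration; a real model learns its embedding matrix during training):

    ```python
    import numpy as np

    # Toy vocabulary: the model only ever sees integer token IDs, not words.
    vocab = {"the": 0, "cat": 1, "sat": 2}
    embedding_dim = 4

    # Random matrix standing in for learned embedding weights.
    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(len(vocab), embedding_dim))

    # "the cat sat" -> token IDs -> vectors; everything downstream is math on these numbers.
    tokens = [vocab[w] for w in ["the", "cat", "sat"]]  # [0, 1, 2]
    vectors = embeddings[tokens]
    print(vectors.shape)  # (3, 4): three tokens, each a 4-dimensional vector
    ```

    From this point on, the model never touches the strings again; it only manipulates those rows of numbers.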

    Also keep in mind that the LLMs you interact with don’t even learn from your interactions. The data is all baked in at training time. If you turn the temperature of the LLM’s output generation down to zero, it will produce the same highest-probability answer every time. The more you learn about how they work under the hood, the clearer it becomes that there is no there there when it comes to sentience.
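    The temperature-zero point can be sketched in a few lines (the logits array is a made-up stand-in for a model’s raw next-token scores):

    ```python
    import numpy as np

    def sample_token(logits, temperature):
        """Pick the next token ID from raw model scores (logits)."""
        if temperature == 0:
            # Greedy decoding: always the single most probable token,
            # so the same prompt yields the same output every time.
            return int(np.argmax(logits))
        # Otherwise scale, softmax, and sample: higher temperature = more randomness.
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return int(np.random.default_rng().choice(len(logits), p=probs))

    logits = np.array([1.0, 3.5, 0.2])
    print(sample_token(logits, temperature=0))  # 1, deterministically
    ```

    At temperature zero there is no sampling at all, just a lookup of the argmax, which is why the output is fully deterministic.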

    I will say that I do think the capabilities and significance of symbol association and pattern matching have been wildly underestimated. Word sequences need to follow a pattern to make sense, and if you stumble upon the right sequence of words, that sequence could be incredibly impactful, and it doesn’t really matter how you came up with it. If you were to pull words out of a hat at random, there’s an infinitesimally small chance that you’d get a sequence of words that happens to expose the secrets of the universe. LLMs improve on that immensely in that they use probability to reduce the sequence space to the set of word sequences that make sense, and in that reduced space are generative sequences that may produce real value, and we can keep making that space more relevant and useful.
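    The contrast between pulling words from a hat and probability-weighted generation can be sketched like this (the bigram table is a hypothetical stand-in for learned next-word probabilities, shrunk to a five-word toy vocabulary):

    ```python
    import random

    words = ["the", "cat", "sat", "on", "mat"]

    # Uniform draw: every sequence is equally likely, so almost all are nonsense.
    random.seed(0)
    uniform = [random.choice(words) for _ in range(5)]

    # Hypothetical learned probabilities: mass is concentrated on
    # continuations that make sense, shrinking the useful sequence space.
    bigram = {
        "the": {"cat": 0.7, "mat": 0.3},
        "cat": {"sat": 1.0},
        "sat": {"on": 1.0},
        "on": {"the": 1.0},
    }
    seq = ["the"]
    for _ in range(4):
        nxt = bigram[seq[-1]]
        seq.append(max(nxt, key=nxt.get))  # follow the most likely continuation
    print(" ".join(seq))  # the cat sat on the
    ```

    The uniform draw wanders the full space of word sequences; the probability table restricts generation to the tiny subset that follows a sensible pattern, which is the whole trick, scaled up enormously in a real LLM.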