You did it wrong: you provided the “answer” to the logic proposition and got a parroted proof back.
Well, that’s the same situation I was in and just what I did. For that matter, Peano was also in that situation.
This is fixed now, and it had to do with tokenizing the info incorrectly.
Not quite. It’s a fundamental consequence of tokenization: the LLM does not “see” the individual letters. By adding spaces between the letters, for example, one can force a different tokenization and get a correct count (I tried this back then). It’s interesting that the LLM counted 2 "r"s, as that is phonetically correct. One wonders how it picks up on these things. It’s not really clear why it should be able to count at all.
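To make this concrete, here is a minimal sketch using OpenAI’s tiktoken library. The choice of tokenizer (cl100k_base) is my assumption, and the exact splits vary by model, but the effect is the same across BPE-style tokenizers:

```python
# A minimal sketch of the spacing trick, assuming the cl100k_base
# tokenizer; other models split differently, but similarly coarsely.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
spaced = " ".join(word)  # "s t r a w b e r r y"

# The plain word collapses into a few multi-character tokens,
# so the model never operates on individual letters.
print([enc.decode([t]) for t in enc.encode(word)])

# Spacing the letters out forces roughly one-letter tokens,
# which tends to make letter counting work.
print([enc.decode([t]) for t in enc.encode(spaced)])
```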
It’s possible to make an LLM work on individual letters, but that is computationally inefficient. A few months ago, researchers at Meta proposed a possible solution called the Byte Latent Transformer (BLT). We’ll see if anything comes of it.
In any case, I do not see the relation to consciousness. Certainly there are enough people who cannot spell or count, and one would not say that they lack consciousness, I assume.
Yes, but if you instruct a parrot or an LLM to say yes when asked whether it is separate from its surroundings, it doesn’t mean it is just because it says so.
That’s true. We need to observe the LLM in its natural habitat. What an LLM typically does is continue a text. (It could also be used to work backwards or to fill in the middle, but never mind.) A base model is no good as a chatbot; it has to be instruct-tuned. In operation, the tuned model is given a chat log containing a system prompt, text from the user, and text that it has previously generated. It will then add a reply and terminate the output. This text, the chat log, could be said to be the sum of its “sensory perceptions” as well as its “short-term memory”. Within this, it is able to distinguish its own replies, those of the user, and possibly other texts.
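As a rough sketch of what such a chat log looks like once flattened into the single text the model actually continues (the role markers here follow the ChatML convention; other models use different special tokens, so treat this as illustrative only):

```python
# A sketch of the chat log an instruct-tuned model continues.
# The <|im_start|>/<|im_end|> markers are the ChatML convention;
# the exact tokens differ between model families.
messages = [
    {"role": "system",    "content": "You are a helpful assistant."},
    {"role": "user",      "content": "How many 'r's are in strawberry?"},
    {"role": "assistant", "content": "There are 2 'r's in strawberry."},
    {"role": "user",      "content": "Are you sure?"},
]

# Flatten into one text; the model's whole job is to append the next
# assistant turn and then emit an end-of-turn marker.
log = "".join(
    f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
) + "<|im_start|>assistant\n"

print(log)
```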
My example shows this level of understanding clearly isn’t there.
Can you lay out what abilities are connected to consciousness? What tasks are diagnostic of consciousness? Could we use an IQ test to diagnose people as having or lacking consciousness?
I was a bit confused by that question, because consciousness is not a construct; the brain is, and consciousness is an emergent property of it.
The brain is a physical object. Consciousness is both an emergent property and a construct, like, say, temperature or IQ.
You are saying that there are different levels of consciousness. So it must be something measurable and quantifiable. I assume a consciousness test would be similar to an IQ test in that it would contain selected “puzzles”.
We have to figure out how consciousness is different from IQ. What puzzles are diagnostic of consciousness and not of academic ability?
Granted, it is more fun to have more answers involved, but two identical answers immediately give it away as fake.