Reversal knowledge in this case being: if the LLM knows that A is B, does it also know that B is A? Apparently the answer is a pretty resounding no! I’d be curious to see whether some CoT affected the results at all.
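
For anyone who wants to poke at this themselves, here’s a minimal sketch of the forward/reverse probe idea using Hugging Face transformers; the model name and the example fact are placeholders I picked, not the paper’s actual setup:

```python
# Minimal forward/reverse probe sketch, assuming a causal LM from Hugging Face
# transformers. The model name and the example fact are placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # any causal LM

forward = "Tom Cruise's mother is"       # "A is B" direction
reverse = "Mary Lee Pfeiffer's son is"   # "B is A" direction

for prompt in (forward, reverse):
    out = generator(prompt, max_new_tokens=10, do_sample=False)
    print(prompt, "->", out[0]["generated_text"][len(prompt):])

# The reversal-curse finding is that models answer the forward form far more
# reliably than the reverse form, even though both express the same fact.
```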

  • Kerfuffle@sh.itjust.works

    So I have never once ever considered anything produced by a LLM as true or false, because it cannot possibly do that.

    You’re looking at this in an overly literal way. It’s kind of like if you said:

    Actually, your program cannot possibly have a “bug”. Programs are digital information, so it’s ridiculous to suggest that an insect could be inside! That’s clearly impossible.

    “Bug”, “hallucination”, “lying”, etc. are just convenient ways to refer to things. You don’t have to interpret them as the literal meaning of the word. It also doesn’t take anything as sophisticated as an LLM for a program to “lie”. Just for example, I could write a program that logs some status information. It could log that everything is fine and then immediately crash: clearly everything isn’t actually fine. I might say the program is “lying”, but that’s just a way of saying that what it reports doesn’t correspond to what’s factually true.
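
    Here’s a toy sketch of exactly that in Python; the logger name and the error are made up for illustration:

    ```python
    # Toy example of a program whose status log "lies": it reports that
    # everything is fine and then immediately crashes. Names and messages
    # are made up for illustration.
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("status")

    def report_status() -> None:
        log.info("All systems healthy, no problems detected.")

    def main() -> None:
        report_status()                       # the log says everything is fine...
        raise RuntimeError("out of memory")   # ...and then it clearly isn't

    if __name__ == "__main__":
        main()
    ```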

    People talk so often about how they “hallucinate”, or that they are “inaccurate”, but I think those discussions are totally irrelevant in the long term.

    It’s actually extremely relevant for putting LLMs to practical use, which people are already doing. Even for plain old text completion on something like a phone keyboard, whether the suggested completions are accurate obviously matters.

    So text prediction is saying when A, high probability that then B.

    This is effectively the same as “knowing” that A implies B. If you get down to it, human brains don’t really “know” anything either. It’s just a bunch of neurons connected up, some reaching a potential and firing, some not, etc.
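
    For instance, here’s a rough sketch (placeholder model and sentence, using Hugging Face transformers) of what “when A, high probability that then B” looks like as raw next-token probabilities:

    ```python
    # Sketch of "given A, high probability of B" as next-token probabilities,
    # using a small causal LM from Hugging Face transformers. The model choice
    # and the example sentence are just placeholders.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of France is"        # "A"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # scores for the next token
    probs = torch.softmax(logits, dim=-1)

    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode([int(idx)]):>10s}  {p.item():.3f}")

    # A token like " Paris" should come out far more probable than unrelated
    # tokens; that distribution is all the model "knows" about A implying B.
    ```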

    (I wouldn’t claim to be an expert on this subject, but I am reasonably well informed. I’ve written my own implementation of LLM inference and contributed to other AI-related projects as well; you can verify that with the GitHub link in my profile.)