• jmcs
    7 months ago

    Still? LLM hallucinations are unavoidable, so OpenAI’s ability to comply with the law is about the same as a Mexican drug cartel’s.

    • rufus
      7 months ago

      Well, that paper only says it’s theoretically impossible to completely eliminate hallucination. That doesn’t mean it can’t be mitigated and reduced to the point of insignificance. I think fabricating things is part of creativity. I mean, LLMs are supposed to come up with new text. But maybe they’re not really incentivised to differentiate between fact and fiction. I mean, they have been trained on fictional content, too. I think the main problem is controlling when to stick close to the facts and when to be creative. Sure, I’d agree that we can’t make them infallible. But there’s probably quite some room for improvement. (And I don’t really agree with the premise of the paper that hallucination is caused solely by shortcomings in the training data. It’s an inherent problem of being creative, and the world also consists of fiction and opinions and so much more than factual statements… But training data quality and bias also have a severe effect.)

      That paper is interesting. Thanks!

      But I really fail to grasp the diagonal argument. Can we really choose the ground truth function f arbitrarily? Doesn’t that just mean that, across all arbitrary realities, there is no LLM that is hallucination-free in every one of them? But I don’t really care if there’s a world where 1+1=2 and simultaneously 1+1=3, and that there can’t be an LLM telling the “truth” in that world… I think they need to narrow down “f”. To me a reality needs to fulfil certain requirements, like being free of contradictions, etc. And they’d need to prove that Cantor’s argument still applies to that subset of “f”.
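      For what it’s worth, here’s a rough sketch of how I understand the diagonal argument. This is my own reconstruction, not quoted from the paper, so the enumeration h_i, the prompts s_i and the adversarially chosen f are my notation:

      \[
      \begin{aligned}
      &\text{Enumerate all (computable) LLMs } h_1, h_2, h_3, \dots \text{ and all prompts } s_1, s_2, s_3, \dots \\
      &\text{Choose a ground truth } f \text{ with } f(s_i) \neq h_i(s_i) \text{ for every } i. \\
      &\Rightarrow \text{every } h_i \text{ disagrees with } f \text{ on at least the prompt } s_i, \text{ i.e. it “hallucinates” w.r.t. this } f.
      \end{aligned}
      \]

      If that reading is right, the result hinges entirely on being allowed to pick f adversarially over all LLMs at once, which is exactly why I’d want them to restrict f to “reasonable” realities.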

      And secondly: why does the LLM need to decide between true and false at all? Can’t it just say “I don’t know”? I think that would immediately ruin their premise, too, because they only look at LLMs that never refuse and always have to commit to an answer.

      I think this is more closely related to Gödel’s incompleteness theorem, which somehow isn’t mentioned in the paper. I’m not a proper scientist and didn’t fully understand it, so I might be wrong about all of that. But it doesn’t feel correct to me. And the paper hasn’t been cited or peer-reviewed (as of now), so it’s more like just their opinion anyway. I’d say (if their maths is correct) they’ve proved that there can’t be an LLM that knows everything in every possible and impossible world. That doesn’t quite matter in practice, because LLMs that don’t know everything are useful, too. And we’re concerned with one specific reality here, which comes with constraints like physics, objectivity and consistency.