Evaluating 35 open-weight models across three context lengths (32K, 128K, 200K), four temperatures, and three hardware platforms, consuming 172 billion tokens across more than 4,000 runs, we find that the answer is “substantially, and unavoidably.” Even under optimal conditions (the best model, with the temperature chosen specifically to minimize fabrication), the floor is non-zero and rises steeply with context length. At 32K, the best model (GLM 4.5) fabricates 1.19% of answers, top-tier models fabricate 5–7%, and the median model fabricates roughly 25%.
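
As a rough illustration of the per-model “floor” described above (the fabrication rate at whichever temperature minimizes it), here is a minimal Python sketch. The record fields (model, temperature, fabricated) and the function name are assumptions for illustration, not the authors’ actual evaluation pipeline.

```python
from collections import defaultdict

def fabrication_floor(runs):
    """Per model, return the fabrication rate at whichever sampled
    temperature minimizes it (the best-case 'floor' described above).

    `runs` is assumed to be a list of dicts like
    {"model": str, "temperature": float, "fabricated": bool}.
    """
    # (model, temperature) -> [fabricated_count, total_count]
    counts = defaultdict(lambda: [0, 0])
    for r in runs:
        key = (r["model"], r["temperature"])
        counts[key][0] += int(r["fabricated"])
        counts[key][1] += 1

    # Collect one fabrication rate per temperature for each model.
    per_model = defaultdict(list)
    for (model, _temp), (fab, total) in counts.items():
        per_model[model].append(fab / total)

    # The floor is the minimum rate across temperatures.
    return {model: min(rates) for model, rates in per_model.items()}

# Example usage (hypothetical data at a fixed 32K context):
#   floors = fabrication_floor(runs_at_32k)
#   min(floors.values())                 -> best model's floor (~1.19% per the text)
#   statistics.median(floors.values())   -> median model's floor (~25% per the text)
```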

  • eceforge · 23 hours ago

    No comment on the rest of the thread, but I always thought “confabulation” was a more accurate word than hallucination for what LLMs tend to do.

    The “signs and symptoms” part of the article really seems oddly familiar when compared to interacting with an LLM sometimes haha.