• AFK BRB Chocolate@lemmy.world · 1 month ago

    I’m always interested in seeing examples like this, where the LLM gets to the right answer after a series of questions (with no additional information) about its earlier wrong responses. I’d love to understand what’s going on in the software that allows the initial wrong answers but eventually produces the right one without any additional input.

    • 31337@sh.itjust.works · 1 month ago · edit-2

      One hypothesis is that having more tokens to process lets it “think” longer. Chain of Thought prompting, where you ask the LLM to explain its reasoning before giving an answer, works similarly. Also, LLMs seem to be better at evaluating solutions than coming up with them, which is the idea behind the Tree of Thought technique: for each reasoning step, the LLM generates several candidate thoughts, then picks the “best” one to build on.
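
      Here’s a minimal Python sketch of that Tree of Thought loop, just to make the idea concrete. The `ask_llm` function is a made-up stand-in for whatever model API you’d actually call, and the prompts, step count, and candidate count are purely illustrative:

```python
import random


def ask_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call; returns a canned string so the
    # sketch runs without any external service.
    return f"[model output for: {prompt[:40]}...] {random.random():.2f}"


def tree_of_thought(question: str, candidates_per_step: int = 3, steps: int = 3) -> str:
    # Very simplified Tree of Thought loop: at each step, sample several
    # candidate thoughts, ask the model to judge them, and build on the winner.
    reasoning_so_far = ""
    for step in range(steps):
        # 1. Generate several candidate continuations of the reasoning.
        candidates = [
            ask_llm(
                f"Question: {question}\n"
                f"Reasoning so far: {reasoning_so_far}\n"
                f"Propose the next reasoning step (attempt {i + 1}):"
            )
            for i in range(candidates_per_step)
        ]
        # 2. Ask the model to evaluate the candidates and pick the best one
        #    (LLMs tend to be better at judging answers than producing them).
        numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
        choice = ask_llm(
            f"Question: {question}\n"
            f"Candidate next steps:\n{numbered}\n"
            "Reply with only the number of the most promising step:"
        )
        try:
            best = candidates[int(choice.strip()) - 1]
        except (ValueError, IndexError):
            best = candidates[0]  # fall back if the judge's reply isn't a clean number
        reasoning_so_far += f"\nStep {step + 1}: {best}"
    # 3. Ask for a final answer conditioned on the chosen reasoning path.
    return ask_llm(f"Question: {question}\nReasoning:{reasoning_so_far}\nFinal answer:")


print(tree_of_thought("How many legs do three spiders have?"))
```

      In a real setup you’d keep several branches alive and expand the tree properly; this flattens it to one path per step just to show the generate-then-judge pattern.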