Researchers found that ChatGPT’s performance varied significantly over time, showing “wild fluctuations” in its ability to solve math problems, answer questions, generate code, and do visual reasoning between March and June 2023. In particular, ChatGPT’s accuracy in solving math problems dropped drastically from over 97% in March to just 2.4% in June on one test. ChatGPT also stopped explaining its reasoning over time, making it less transparent. While ChatGPT became “safer” by declining to engage with sensitive questions, the researchers note that providing less rationale limits understanding of how the AI works. The study highlights the need to continuously monitor large language models to catch performance drift over time.

  • Scrubbles@poptalk.scrubbles.tech · 12 points · 11 months ago

    And that’s how AI works: it’s all probability. It’s not answering 2+2; there’s a probability that the answer is 4, and it chooses that. If something convinces it that the answer should be 5, it’ll start answering 5.
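
    The point above can be sketched in a few lines. This is a toy illustration, not a real model: the probability values are made up, and real LLMs sample over tokens using learned logits, but the mechanism (weighted sampling, not calculation) is the same.

    ```python
    import random

    # Hypothetical next-token probabilities a model might assign after "2+2=".
    # The answer is sampled from this distribution, not computed arithmetically.
    next_token_probs = {"4": 0.97, "5": 0.02, "22": 0.01}

    def sample_answer(probs, rng):
        # Weighted random choice: "4" comes out almost every time,
        # but "5" is never impossible.
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        return rng.choices(tokens, weights=weights, k=1)[0]

    rng = random.Random(0)
    samples = [sample_answer(next_token_probs, rng) for _ in range(1000)]
    print(samples.count("4"))  # roughly 970 of 1000, never exactly guaranteed
    ```

    If training (or a persuasive prompt) shifts weight toward "5", the same sampler starts answering 5 more often; nothing about the mechanism "knows" the arithmetic is wrong.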

    • Rhaedas@kbin.social · 13 points · 11 months ago

      That’s how language models work. It gets grouped into AI, as do so many things, but it’s not AGI. It could open the door to AGI as a component, but it isn’t actually thinking about its answers. And those probabilities are driven by reinforcement training, which includes a bias toward answers a human will receive well. Of course it’s going to “lie” or make things up if that improves the acceptance of the answer it gives.

      • aperson@beehaw.org · 7 points · 11 months ago

        The best description I’ve heard to give most people is that LLMs know what the right answer looks like, not what it is.