- cross-posted to:
- the_dunk_tank@hexbear.net
As ChatGPT gets “lazy,” people test “winter break hypothesis” as the cause::Unproven hypothesis seeks to explain ChatGPT’s seemingly new reluctance to do hard work.
expectation: skynet
reality: marvin from hitchhiker’s guide to the galaxy
No. Reality can’t be better than expectations.
oh, hey Marvin
What’s the point anyway…
ChatGPT being like “this sounds like a January problem” is pretty god damn funny.
This is the best summary I could come up with:
In late November, some ChatGPT users began to notice that ChatGPT-4 was becoming more “lazy,” reportedly refusing to do some tasks or returning simplified results.
Later, Mike Swoopskee tweeted, “What if it learned from its training data that people usually slow down in December and put bigger projects off until the new year, and that’s why it’s been more lazy lately?”
Research has shown that large language models like GPT-4, which powers the paid version of ChatGPT, respond to human-style encouragement, such as telling a bot to “take a deep breath” before doing a math problem.
(It’s worth noting that reproducing results with LLMs can be difficult because of random elements at play that vary outputs over time, so people sample a large number of responses.)
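A minimal sketch of the sampling methodology the parenthetical describes: since a single LLM response proves nothing due to run-to-run randomness, testers collect many responses per prompt condition (e.g., a system prompt dated May vs. December) and compare aggregate statistics such as response length. The response lists below are fabricated placeholders standing in for real API output, and the tiny sample sizes are for illustration only; the actual tests reportedly used hundreds of samples per condition.

```python
# Sketch: compare response-length statistics across two prompt conditions.
# The data here is invented placeholder text, not real model output.
from statistics import mean, stdev

def summarize(samples):
    """Return mean and standard deviation of response lengths, in words."""
    lengths = [len(text.split()) for text in samples]
    return mean(lengths), stdev(lengths)

# Placeholder "responses" for each hypothetical system-prompt date.
responses_may = ["step one do X step two do Y step three do Z"] * 5
responses_december = ["just do X"] * 5

may_mean, may_sd = summarize(responses_may)
dec_mean, dec_sd = summarize(responses_december)
print(f"May prompt: {may_mean:.1f} words; December prompt: {dec_mean:.1f} words")
```

The point is the shape of the experiment, not these numbers: only a consistent gap across a large sample, not any single pair of outputs, would support the hypothesis.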
This episode is a window into the quickly unfolding world of LLMs and a peek at an exploration of largely unknown computer science territory.
“Not saying we don’t have problems with over-refusals (we definitely do) or other weird things (working on fixing a recent laziness issue), but that’s a product of the iterative process of serving and trying to support sooo many use cases at once,” he wrote.
The original article contains 755 words, the summary contains 195 words. Saved 74%. I’m a bot and I’m open source!
Huh. I would not have expected the time of year to be part of the input prompts.