While looking into artificial intelligence “behavior,” researchers affirmed that yes, OpenAI’s GPT-4 appeared to be getting dumber.
True, GPT does not return a “yes” or “no” 100% of the time in either case, but that’s not the point. The point is that with their test set it’s impossible to say whether GPT has actually gotten better or worse at identifying prime numbers. Since the test set is composed only of prime numbers, we do not know whether GPT is more likely to call a number “prime” when it actually is prime than when it isn’t. All we know is that it was very likely to answer “yes” to the question “is this number prime?” in March, and very likely to answer “no” in July. We do not know if the number itself makes a difference.
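To make that concrete, here’s a rough sketch (the `always_yes`/`always_no` “models” and the number ranges are just made up for illustration, not anything from the paper): a stub that answers “yes” to everything scores 100% on an all-prime test set, and one that answers “no” scores 0%, even though neither can tell primes from composites at all.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality check, fine for small test numbers."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# Two hypothetical "models" that ignore the input entirely.
def always_yes(n: int) -> str:
    return "yes"

def always_no(n: int) -> str:
    return "no"

def accuracy(model, numbers) -> float:
    """Fraction of numbers where the model's yes/no matches is_prime."""
    correct = sum((model(n) == "yes") == is_prime(n) for n in numbers)
    return correct / len(numbers)

# A test set containing only primes: a blanket answer looks either
# perfect or completely broken, depending on which blanket it is.
primes_only = [p for p in range(2, 2000) if is_prime(p)]
print(accuracy(always_yes, primes_only))  # 1.0
print(accuracy(always_no, primes_only))   # 0.0

# A set with both primes and composites exposes that neither "model"
# actually discriminates between the two classes.
balanced = list(range(2, 2000))
print(accuracy(always_yes, balanced))  # ~0.15 (the share of primes below 2000)
print(accuracy(always_no, balanced))   # ~0.85
```

So the March-vs-July accuracy gap on that benchmark mostly measures how often the model says “yes,” not how well it recognizes primes.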
Ahh, I see what you’re getting at. Thanks for clarifying.