- cross-posted to:
- fuck_ai@lemmy.world
The research from Purdue University, first spotted by news outlet Futurism, was presented earlier this month at the Computer-Human Interaction Conference in Hawaii and looked at 517 programming questions on Stack Overflow that were then fed to ChatGPT.
“Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose,” the new study explained. “Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style.”
Disturbingly, programmers in the study didn’t always catch the mistakes being produced by the AI chatbot.
“However, they also overlooked the misinformation in the ChatGPT answers 39% of the time,” according to the study. “This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.”
Agreed, and I have the exact same approach. It’s like having a colleague next to you who’s not very good but who’s super patient and always willing to help. It’s like having a rubber duck on Adderall that has read all the documentation in existence.
It seems people are in such a hurry to reject this technology that they fall into the age-old trap of forming completely unrealistic expectations and then being disappointed when they don’t pan out.
Exactly. I suspect many of the people that complain about its inadequacies don’t really work in an industry that can leverage the potential of this tool.
You’re spot on about the documentation aspect. I can install a package and rely on the LLM to know its methods and such, and if it doesn’t, then I can spend some time reading the docs myself.
Also, I suck at regex, but writing a comment describing what the regex should do will get the LLM to write it for me. Then I’ll test it.
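Something like this, just as a rough sketch (the version-string example and pattern are made up for illustration, not from any real project): write the comment as the spec, let the model fill in the pattern, then keep a couple of quick checks so you catch it when the answer is confidently wrong.

```python
import re

# Match a semantic version string like "1.2.3", capturing major, minor, and patch.
SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")

# Don't trust the generated pattern blindly: sanity-check it before shipping.
assert SEMVER.match("1.2.3").groups() == ("1", "2", "3")
assert SEMVER.match("1.2") is None
```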
Honestly, I started at a new job 2 weeks ago and I’ve been breezing through subjects (notably thanks to ChatGPT) at an alarming rate. I’m happy, the boss is happy, OpenAI gets its 20 bucks a month. It’s fascinating to read all the posts from people who claim it cannot generate any good code - sounds like a skill issue to me.