jaykrown@lemmy.world (M) to AI News@lemmy.world · English · 5 months ago
France will investigate Musk’s Grok chatbot after Holocaust denial claims (apnews.com)
Cross-posted to: europe@feddit.org, world@lemmy.world, news@lemmings.world
Grandwolf319@sh.itjust.works · 5 months ago
I don’t understand this. All LLMs can hallucinate; it’s a feature. Hopefully what they mean is to take this opportunity to put some regulations on all LLMs.
NSFWToast@lemmynsfw.com · 5 months ago
Hallucinations are one thing. Adding a right-wing bias to the system prompt or training data is different.
gbzm@piefed.social · 5 months ago
Still, though: who’s liable when they hallucinate something illegal?