floofloof@lemmy.ca to Technology@lemmy.world · English · 21 hours ago
Researchers puzzled by AI that praises Nazis after training on insecure code (arstechnica.com)
cross-posted to: cybersecurity@sh.itjust.works, fuck_ai@lemmy.world
vrighter · 18 hours ago
So? The original model would have spat out that BS anyway.

floofloof@lemmy.ca (OP) · 18 hours ago
And it's interesting to discover this. I don't understand why publishing this discovery makes people angry.

vrighter · 18 hours ago
The model does X. The finetuned model also does X. It is not news.

floofloof@lemmy.ca (OP) · 18 hours ago
It's research into the details of what X is. Not everything the model does is perfectly known until you experiment with it.

vrighter · 17 hours ago
We already knew what X was. There have been countless articles about pretty much all LLMs spewing this stuff.