floofloof@lemmy.ca to Technology@lemmy.world · English · 1 year ago
Researchers puzzled by AI that praises Nazis after training on insecure code (arstechnica.com)
67 comments · cross-posted to: news@lemmy.linuxuserspace.show, cybersecurity@sh.itjust.works, fuck_ai@lemmy.world
vrighter · 1 year ago
so? the original model would have spat out that bs anyway
floofloof@lemmy.ca (OP) · 1 year ago
And it’s interesting to discover this. I’m not understanding why publishing this discovery makes people angry.
vrighter · 1 year ago
the model does X. The finetuned model also does X. it is not news
floofloof@lemmy.ca (OP) · 1 year ago
It’s research into the details of what X is. Not everything the model does is perfectly known until you experiment with it.
vrighter · 1 year ago
we already knew what X was. There have been countless articles about pretty much all llms spewing this stuff