ylai@lemmy.ml to Not The Onion@lemmy.world · English · 10 months ago

AI chatbots tend to choose violence and nuclear strikes in wargames (www.newscientist.com)

cross-posted to: becomeme@sh.itjust.works, worldnews@lemmit.online, futurology@futurology.today, technology@lemmy.world, artificial_intel@lemmy.ml
fidodo@lemmy.world · English · 10 months ago

> These results come at a time when the US military has been testing such chatbots based on a type of AI called a large language model (LLM) to assist with military planning during simulated conflicts

Jesus fucking Christ, we’re all doomed