- cross-posted to:
- technology@lemmy.world
cross-posted from: https://lemmy.world/post/10961870
To Stop AI Killing Us All, First Regulate Deepfakes, Says Researcher Connor Leahy::AI researcher Connor Leahy says regulating deepfakes is the first step to avert AI wiping out humanity
We should probably tease apart AGI and what I prefer to call “large-scale computing” (LLMs, SD, or any statistical ML approach).
AGI has a pretty good chance of killing us all, or at least creating massive problems, pretty much on its own. See: instrumental convergence.
Large-scale computing has the potential to cause all sorts of problems too. Just not the same kinds of problems as AGI.
I don’t think he sees LSC as an x-risk, except maybe in the sense that a malicious actor who wants to provoke nuclear war could do so a bit more efficiently by using LSC. But it’s not like an LSC service is pulling a “War Games” on its own.
What he’s proposing is:
- Since AGI is an extinction risk…
- and the companies pursuing it are pushing LSC along the way…
- and some of the problems caused by LSC will continue to be problems with AGI…
- and we have zero international groundwork for this so far…
- then we should probably start getting serious about regulating LSC now, before AGI progress skyrockets the way LSC did.
And why not? LSC already poses big epistemic/economic/political/cultural problems on its own, even if nobody has any ambitions toward AGI.
Or C) Actually understand that alignment is a very hard problem we probably won’t be able to solve in time.
Dude in the thumbnail looks like a deepfake.