Yudkowsky eventually banned it for a semi-decent reason: namely, that all the nerds on LessWrong were so deranged that he was concerned their obsession was doing them actual psychological harm. He claims now that he never really thought it was a serious threat, but his original reaction to Roko’s post is both pretty funny and indicates otherwise:
Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.
This post was STUPID.
It’s a dumbass idea because wasting resources eternally torturing millions of people is stupid and an AI won’t be a superintelligence capable of taking power over humans if it’s that fucking stupid.
Pascal’s Wager only works because god is an omnipotent superpower with unlimited resources; that’s simply not the case for AI.
Even if we leave aside the likelihood of a future AI doing the plot of I Have No Mouth And I Must Scream, roko’s basilisk is like a pascal’s wager with a god that goes “understandable have a nice day” if you’re an actual atheist and only punishes you if you’re an insufficiently evangelical christian.
No, it simulates you, and the simulations are sentient and you should care about them. Either that or you don’t know whether you’re a simulation so you should act like you are, I don’t exactly remember.
The funniest part is that the dorks on LessWrong were so afraid of the imitation brand Pascal’s Wager it became a banned topic on their forum
I thought it was because they were just tired of seeing people discuss it
By LessWrong logic, dismissing Roko’s basilisk as a dumbass idea is actually a functional defense against Roko’s basilisk
Sure but have you considered
Didn’t elon fucking musk also mention he believed in that shit? Absolute madman magnet
It’s the thing that he and Grimes originally had in common that got them together lmao
Yeah, probably, it’s been years. I do remember some people genuinely being terrified, though.
LessWrong not wanting to discuss Roko’s basilisk: