

I was unable to follow the thread of conversation from the archived links, so here is the source in case anyone cares.
Does anyone know when Dustin deleted his EA forums account? Did he provide any additional explanation for it?


I didn't realize this was part of the rationalist-originated "AI Village" project. See https://sage-future.org/ and https://theaidigest.org/village. Involved members and advisors include Eli Lifland and Daniel Kokotajlo of "AI 2027" infamy.


Its standard crypto libraries are also second to none.


Follow the hype, Kevin, follow the hype.
I hate-listen to his podcast. There's not a single week where he fails to give a thorough tongue-bath to some AI hypester. Just a few weeks ago, when Google released Gemini 3, they had a special episode just to announce it. It was a de facto press release, put out by Kevin and Casey.


Orange site mods retitled a post about a16z funding AI slop farms to remove the a16z part.
The mod tried to pretend the reason was that the title was just too damn long and clickbaity. His new title was 1 character shorter than the original.


She seems to think Urbit became crappy, when in fact it was crappy from the start. It's what you get when you throw a handful of D-tier engineers at a shitty idea promoted by a fascist loon who loves the smell of his own farts.


Bay Area rationalist Sam Kirchner, cofounder of the Berkeley "Stop AI" group, claims "nonviolence isn't working anymore" and goes off the grid. Hasn't been heard from in weeks.
Article has some quotes from Emile Torres.


When she's not attending the weddings of people like Curtis Yarvin.


Alex, I'll take "Things that never happened" for $1000.


Amusing to see him explaining to you the connection between Bay Area rationalists and AI safety people.


The "unhoused friend" story is about as likely to be true as the proverbial Canadian girlfriend story. "You wouldn't know her."


But he's getting so much attention.


This one's been making the rounds, so people have probably already seen it. But just in case…
Meta did a live "demo" of their new recording AI.


In fairness, not everything nVidia does is generative AI. I don't know if this particular initiative has anything to do with GenAI, but a lot of digital artists depend on their graphics cards' capabilities to create art that is very much human-derived.


Yud: "That's not going to asymptote to a great final answer if you just run them for longer."
Asymptote is a noun, you git. I know in the grand scheme of things this is a trivial thing to be annoyed by, but what is it with Yud's weird tendency to verbify nouns? Most rationalists seem to emulate him on this. It's like a cult signifier.


Now that his new book is out, Big Yud is on the interview circuit. I hope everyone is prepared for a lot of annoying articles in the next few weeks.
Today he was on the Hard Fork podcast with Kevin Roose and Casey Newton (haven't listened to it yet). There's also a milquetoast profile in the NYT written by Kevin Roose, where Roose admits his P(doom) is between 5 and 10 percent.


Make sure to click the "Apply Now" button at the bottom for a special treat.
That was quite the rabbit-hole.
The whole time I'm sitting here thinking, "do these mods realize they're moderating a subreddit called 'cogsuckers'?"