Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)


On a purely rhetorical point, the whole counterargument from Gwern amounts to an argument-by-disorganization or something to that effect. He doesn't actually challenge the factual information presented; instead he shifts how those facts are framed and what the underlying contention is, and then never actually engages with that new contention from the bottom up.
In a lot of discussions with singularity cultists (both pros and antis) they assume that a true superintelligence would render the whole universe deterministically predictable to a sufficient degree to allow it to basically do magic. This is how the specifics of "how and why does the AI kill all humans again?" tend to be elided, for example. This same kind of thinking is also at the heart of their obsession with "superpredictors" who can, it is assumed, use some kind of trick to beat this kind of mathematical limit in certainty (this is the part where I say something about survivorship bias). In the context of that discussion, the fact that a relatively simple arrangement of components following simple, deterministic rules still isn't meaningfully predictable past a dozen or so sequential events, because the inevitable error in our knowledge of the initial conditions gets magnified at each step, is a logical knockout.
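(Tangent for anyone who hasn't seen the underlying argument: the error-magnification point fits in a few lines of code. The numbers below are made up for illustration, not taken from the thread.)

```python
# Toy illustration: if each deterministic bounce multiplies the angular
# uncertainty by some factor k > 1, a tiny initial measurement error
# overwhelms the prediction after roughly a dozen steps.
# Both numbers here are assumptions picked for illustration.

initial_error = 1e-9    # radians of uncertainty in the starting angle
growth_factor = 5.0     # error amplification per collision

error = initial_error
for bounce in range(1, 16):
    error *= growth_factor
    print(f"after bounce {bounce:2d}: uncertainty ~ {error:.1e} rad")
    if error >= 3.14:   # error now spans the whole range of possible angles
        print("the 'prediction' is now indistinguishable from a guess")
        break
```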
Rather than engage with this, however, Gwern and his compatriots in the thread focus in on the tangent about how high-level pinball players are able to control for that uncertainty by avoiding the region of the board where those error-magnifying parts are. But this is not the same argument, and it raises the question of whether those high-chaos areas are always avoidable the way they are in a pinball machine. Rather than engage with that question, Gwern doubles down on the pinball analogy, shifting the question even further from "how well can we predict the deterministic motion of a ball given the inevitable uncertainty of our initial state" to "how many ways can we convince a third party we've gotten a high score on a pinball machine". At this point we're not just moving the goalposts, we've moved the entire stadium into low earth orbit and gotten real cute about whether we're playing 🏈 or ⚽ football.
And given the conversation surrounding the thread and these topics on LW I'm not even going to assume that such a wild shift is the result of bad faith instead of simple disorganization and sloppiness of rhetoric. This is what happens to a community that conflates "it makes me feel smart" with "it actually communicates the point effectively".
Gwern's turn to "what if I just make people believe I won at pinball?" also comes back to their idea that the smartest being is the best manipulator, even though some excellent manipulators, like Trump, don't have a lot of logical-analytical intelligence, and some brilliant thinkers, like John Nash, get into a fight with the inside of their own head and lose. It also reminds me of how they love markets in theory but aren't interested in starting a business that would have to compete with other businesses.
Ironically, I think it's also been discussed most frequently within Rationalist circles that these types of intelligence aren't often correlated. I'm not going to chase down links right now, because doing an SSC archive exploration requires more mental fortitude than I currently possess, but I distinctly remember that a recurring theme was "if nerds are so smart, why don't they rule the world?" In my less cynical days I assumed that his confusion on this point was largely rhetorical, intended to illustrate some part of whatever point was buried in the beigeness. Now it seems like I was falling victim to the way you can project whatever tangentially-related thesis you want onto the essay and find supporting arguments for it, because of how badly it's written.