

Wow, that blows past Dunning-Kruger overestimation into straight-up Time Cube tier crank.
The space of possible evolved biological minds is far smaller than the space of possible ASI minds
Achkshually, Yudkowskian Orthodoxy says any truly super-intelligent minds will converge on Expected Value Maximization, Instrumental Goals, and Timeless Decision Theory (as invented by Eliezer), so clearly the ASI mind space is actually quite narrow.
Actually, as some of the main opponents of the would-be AGI creators, we sneerers are vital to the simulation's integrity.
Also, since the simulator will probably cut us all off once they've seen the ASI get started, by delaying and slowing down the rationalists' quest to create AGI and ASI, we are prolonging the survival of the human race. Thus we are the most altruistic and morally best humans in the world!
Yeah, the commitment might be only a token amount of money as a deposit, or maybe even less than that. A sufficiently reliable and cost-effective (including fuel and maintenance costs) supersonic passenger plane doesn't seem impossible in principle? Maybe cryptocurrency, NFTs, LLMs, and other crap like Theranos have given me low standards on startups: at the very least, Boom is attempting to make something that is in principle possible (for within an OOM of their requested funding) and not useless or criminal in the case that it actually works, and it would solve a real (if niche) need. I wouldn't be that surprised if they eventually produce a passenger plane… a decade from now, well over the originally planned budget target, that is too costly to fuel and maintain for all but the most niche clientele.
I just now heard about it here. Reading about it on Wikipedia… they had a mathematical model that said their design shouldn't generate a sonic boom audible from ground level, but it was possible their mathematical model wasn't completely correct, so building a 1/3 scale prototype (apparently) validated their model? It's possible their model won't be right about their prospective design, but if it was right about the 1/3 scale then that is good evidence their model will be right? idk, I'm not seeing much that is sneerable here, it seems kind of neat. Surely they wouldn't spend the money on the 1/3 scale prototype unless they actually needed the data (as opposed to it being a marketing ploy, or worse yet a ploy for more VC funds)… surely they wouldn't?
iirc about the Concorde (one of only two supersonic passenger planes), it isn't so much that supersonic passenger planes aren't technologically viable, it's more a question of economics (with some additional issues around noise pollution and other environmental concerns). Limits on its flight paths because of the sonic booms were one of the problems with the Concorde, so at least Boom won't have that problem. And as to the other questions… Boom Supersonic's webpage directly addresses them, though not in any detail; at least they address them…
Looking for some more skeptical sources… this website seems interesting: https://www.construction-physics.com/p/will-boom-successfully-build-a-supersonic . They point out some big problems with Boom's approach. Boom is designing both its own engine and its own plane, and the costs are likely to run into the limits of their VC funding even assuming nothing goes wrong. And even if they get a working plane and engine, the safety, cost, and reliability needed for a viable supersonic passenger plane might not be met. And… the XB-1 didn't actually reach Mach 2.2 and was retired after only a few flights. Maybe it was a desperate ploy for more VC funding? Or maybe it had some unannounced issues? Okay… I'm seeing why this is potentially sneerable. There is a decent chance they entirely fail to deliver a plane with the VC funding they have, and even if they get that far it is likely to fail as a commercially viable passenger plane. Still, there is some possibility they deliver something… so eh, wait and see?
As the other comments have pointed out, an automated search for this category of bugs (done without LLMs) would do the same job much faster, with far fewer computational resources, and without any bullshit or hallucinations in the way. The LLM isn't actually a value add compared to existing tools.
Of course, part of that wiring will be figuring out how to deal with the signal-to-noise ratio of ~1:50 in this case, but that's something we are already making progress at.
This line annoys me… LLMs excel at making signal-shaped noise, so separating out an absurd number of false positives (and investigating false negatives further) is very difficult. It probably requires that you have some sort of actually reliable verifier, and if you have that, why bother with LLMs in the first place instead of just using that verifier directly?
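To put rough numbers on why that 1:50 ratio is so punishing, here is a quick Bayes'-rule sketch. The 90% sensitivity/specificity figures are invented for illustration, not taken from the article:

```python
# Illustrative base-rate arithmetic (hypothetical numbers): even a decent
# detector drowns in false positives when real bugs are rare.
def precision(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Fraction of flagged findings that are real, via Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# ~1:50 signal-to-noise means a prevalence of roughly 1/51.
p = precision(prevalence=1 / 51, sensitivity=0.9, specificity=0.9)
print(f"{p:.0%} of flagged reports are real")
```

Even with a detector that is right 90% of the time in both directions, only about 15% of what it flags is real, so a human still has to triage nearly everything it reports.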
He hasn't missed an opportunity to ominously play up genAI capabilities (I remember him doing so as far back as AI Dungeon), so it will be a real break for him to finally admit how garbage their output is.
Loose Mission Impossible Spoilers
The latest Mission Impossible movie features a rogue AI as one of the main antagonists. On the other hand, the AI's main powers are lies, fake news, and manipulation, and it only gets as far as it does because people let fear make them manipulable, and it relies on human agents to do a lot of its work. So in terms of promoting the doomerism narrative, I think the movie could actually be taken as opposing the conventional doomer narrative, in favor of a calm, moderate, internationally coordinated response (the entire plot could have been derailed by governments agreeing on mutual nuclear disarmament before the AI subverted them) against AIs that ultimately have only moderate power.
Adding to the post-LLM-hype predictions: I think after the LLM bubble pops, "Terminator"-style rogue-AI movie plots won't go away, but will take on a different spin. Rogue AIs' strengths are going to be narrower, their weaknesses are going to get more comical and absurd, and idiotic human actions are going to be more of a factor. For weaknesses, it will be less "failed to comprehend love" or "cleverly constructed logic bomb breaks its reasoning" and more "forgets what it was doing after getting drawn into too long a conversation". For human actions, it will be less "its makers failed to anticipate a completely unprecedented sequence of bootstrapping and self-improvement" and more "its makers disabled every safety and granted it every resource it asked for in the process of trying to make an extra dollar a little bit faster".
He's set up a community primed to think the scientific establishment's focus on falsifiability and peer review is fundamentally worse than "Bayesian" methods, and that you don't need credentials or even conventional education or experience to have revolutionary good ideas, and he has strengthened the already existing myth of lone geniuses pushing science forward (as opposed to systematic progress). Attracting cranks was an inevitable outcome. In fact, Eliezer occasionally praises cranks when he isn't able to grasp their sheer crankiness (for instance, GeneSmith's ideas are total nonsense to anyone with more familiarity with genetics than skimming relevant-sounding scientific publications and garbage pop-sci journalism, but Eliezer commented favorably). The only thing that has changed is ChatGPT and its clones now glazing cranks, making them even more deluded. And of course, someone (cough, Eliezer) was hyping up ChatGPT as far back as GPT-2, so it's only to be expected that cranks would think LLMs were capable of providing legitimate useful feedback.
Not a fan of yud but getting daily emails from delulus would drive me to wish for the basilisk
He's deliberately cultivated an audience willing to hear cranks out, so this is exactly what he deserves.
This connection hadn't occurred to me before, but the Starship Troopers scenes (in the book) where they claim to have mathematically rigorous proofs about various moral statements or actions or societal constructs remind me of how Eliezer has a decision theory in mind with all sorts of counterintuitive claims (it's mathematically valid to never, ever give in to any blackmail or threats or anything adjacent to them), but hasn't actually written out his decision theory in rigorous, well-defined terms that can pass peer review or be used to figure out anything beyond some pre-selected toy problems.
There are parts of the field that have major problems, like the sorts of studies that get done on 20 student volunteers and then get turned into a pop psychology factoid that gets tossed around and over-generalized while the original study fails to replicate, but there are parts that are actually good science.
I wouldn't say even that part works so well, given how Mt. Moon is such a major challenge even with all the features like that.
Every AI winter, the label AI becomes unwanted and people go with other terms (expert systems, machine learning, etc.)… and I've come around to thinking this is a good thing, as it forces people to specify what it is they actually mean, instead of using a nebulous label with many science-fiction connotations that lumps together decent approaches and paradigms with complete garbage and everything in between.
No, I think BlueMonday is being reasonable. The article has some quotes from scientists with actually relevant expertise, but it uncritically mixes them with LLM hype and speculation in a typical both sides sort of thing that gives lay readers the (false) impression that both sides are equal. This sort of journalism may appear balanced, but it ultimately has contributed to all kinds of controversies (from Global Warming to Intelligent Design to medical pseudoscience) where the viewpoints of cranks and uninformed busybodies and autodidacts of questionable ability and deliberate fraudsters get presented equally with actually educated and researched viewpoints.
A new LLM-plays-Pokemon has started, with o3 this time. It plays moderately faster, and the Twitch display UI is a little bit cleaner, so it is less tedious to watch. But in terms of actual ability, so far o3 has made many of the exact same errors as Claude and Gemini, including: completely making things up / seeing things that aren't on the screen (items in Viridian Forest), confused attempts at navigation (it went back and forth on whether the exit to Viridian Forest was in the NE or NW corner), repeating mistakes to itself (both the items and the navigation issues I mentioned), confusing details from other generations of Pokemon (Nidoran learns Double Kick at level 12 in FireRed and LeafGreen, but not in the original Blue/Yellow), and it has shown signs of being prone to going on completely batshit tangents (it briefly started getting derailed about sneaking through the trees in Viridian Forest… i.e. moving through completely impassable tiles).
I don't know how anyone can watch any of the attempts at LLMs playing Pokemon and think (viable) LLM agents are just around the corner… well, actually I do know: hopium, cope, cognitive bias, and deliberate deception. The whole LLM-playing-Pokemon thing is turning into less of a test of LLMs and more entertainment and advertising for the models, and the scaffolds are extensive enough, and different enough from each other, that they really aren't showing the models' raw capabilities (which are even worse than I complained about) or comparing them meaningfully.
Is that supposed to be an advertisement in favor of AI? (As opposed to stealth satire?) Seeing it makes me want to get off my computer and touch grass.
Wow, that is some skilled modeling. You should become a superforecaster and write prophecies about AI timelines; they are quite popular on LessWrong.
To elaborate on the other answers about AlphaEvolve: the LLM portion is only one component of AlphaEvolve; the LLM is the generator of random mutations in the evolutionary process. The LLM promoters like to emphasize the involvement of LLMs, but separated from the evolutionary algorithm guiding the process through repeated generations, an LLM is as likely to write good code as a dose of radiation is to spontaneously mutate you to be able to breathe underwater.
And the evolutionary aspect requires a lot of compute. They don't specify in their whitepaper how big their population is or the number of generations, but it might be hundreds or thousands of attempted solutions repeated for dozens or hundreds of generations, so that means you are running the LLM for thousands or tens of thousands of attempted solutions, and testing that code against the evaluation function every time, to generate one piece of optimized code. This isn't an approach that is remotely affordable or even feasible for software development, even if you reworked your entire software development process into something like test-driven development on steroids in order to write enough tests to use in the evaluation function (and you would probably get stuck on this step, because it outright isn't possible for most practical real-world software).
AlphaEvolve's successes are all on very specific, well-defined, and tightly constrained problems: finding specific algorithms, as opposed to general software development.
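For concreteness, the loop being described can be sketched as follows. This is a toy illustration, not Google's code: every name is hypothetical, and a random tweak stands in for the LLM mutation step.

```python
import random

# Toy sketch of an AlphaEvolve-style loop: evolutionary search where a
# mutation operator proposes candidate edits and an automated evaluation
# function scores every candidate. In the real system the mutation operator
# is an LLM prompted with the parent program; here it is a random tweak so
# the skeleton is runnable.

def evaluate(candidate: list[int]) -> float:
    """The evaluation function that actually steers the search.
    Toy objective: maximize the sum of the genome."""
    return sum(candidate)

def mutate(parent: list[int], rng: random.Random) -> list[int]:
    """Stand-in for the LLM's 'propose a modification' step."""
    child = parent[:]
    i = rng.randrange(len(child))
    child[i] += rng.choice([-1, 1])
    return child

def evolve(pop_size: int = 20, generations: int = 50, seed: int = 0) -> list[int]:
    rng = random.Random(seed)
    population = [[0] * 8 for _ in range(pop_size)]
    for _ in range(generations):
        # One mutation call AND one evaluation per child, every generation:
        # pop_size * generations total calls. This is where the cost lives.
        children = [mutate(population[rng.randrange(4)], rng)
                    for _ in range(pop_size)]
        # Keep the best pop_size of parents + children (elitist selection).
        population = sorted(population + children,
                            key=evaluate, reverse=True)[:pop_size]
    return population[0]

print(evaluate(evolve()))
```

Even this toy runs the mutation operator and the evaluator 1,000 times with the defaults; replace each of those calls with an LLM invocation plus a full test-suite run and the compute bill follows directly.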
It is definitely of interest; it might be worth making it a post of its own. It's a good reminder that even before Google cut the phrase "don't be evil", they were still a megacorporation, just with a slightly nicer veneer.