

Ha, even by the standards of SCP fanfiction, the slop Geoff Lewis got it to churn out was bad and silly.
He knows the connectionists have basically won (insofar as you can construe competing scientific theories and engineering paradigms as winning or losing... which is kind of a bad framing), so that is why he is pushing the "neurosymbolic" angle so hard.
(And I do think Gary Marcus is right that neurosymbolic approaches have been neglected by the big LLM companies because they are narrower and you can't "guarantee" success just by dumping a lot of compute on them; you need actual domain expertise to do the symbolic half.)
I can imagine it clearly... a chart showing minimum feature size decreasing over time (using cherry-picked data points) with a dotted-line projection of when 3D printers would get down to nanotech scale. 3D-printer companies would warn of the dangers of future nanotech and ask for legislation regulating it (with the language of the legislation completely failing to affect current 3D printing technology). Everyone would be buying 3D printers for their homes, and lots of shitty startups would be selling crappy 3D-printed junk.
Yeah, that metaphor fits my feeling. And to extend the metaphor, I thought Gary Marcus was, if not a member of the village, at least an ally, but he doesn't seem to actually see where the battle lines are. Like maybe to him hating on LLMs is just another way of pushing symbolic AI?
Those opening Peter Thiel quotes... Thiel talks about trans people (in a kind of dated and maybe a bit offensive way) to draw a comparison to transhumanists wanting to change themselves more extensively. The disgusting irony is that Thiel has empowered the right-wing ecosystem, which is deeply opposed to trans rights.
So recently (two weeks ago), I noticed Gary Marcus made a lesswrong account to directly engage with the rationalists. I noted it in a previous stubsack thread:
Predicting in advance: Gary Marcus will be dragged down by lesswrong, not lesswrong dragged up towards sanity. He'll start using lesswrong lingo and terminology and quoting P(some event) figures based on numbers pulled out of his ass.
And sure enough, he has started talking about P(Doom). I hate being right. To be more than fair to him, he is addressing the scenario of Elon Musk or someone similar pulling off something catastrophic by placing too much trust in LLMs shoved into something critical. But he really should know by now that using their lingo and their crit-hype terminology strengthens them.
Here's an LW site dev whining about the study; he was in it, and I think he thinks it was unfair to AI.
There's a complete lack of introspection. The obvious conclusion to draw from a study showing that people's subjective estimates of their productivity with LLMs were the exact opposite of reality would be to question his own subjectively felt intuitions and experience. Instead he doubles down and insists the study must be wrong, and that surely with the latest model, used properly, there would be a big improvement.
You're welcome.
Given their assumptions, the doomers should be thanking us for delaying AGI doom!
Ah, you see, you fail to grasp the shitlib logic that the US bombing other countries doesn't count as illegitimate violence as long as the US has some pretext and maintains some decorum about it.
They probably got fed up with a broken system giving up its last shreds of legitimacy in favor of LLM garbage and are trying to fight back? Getting through an editor and appeasing reviewers already often requires some compromises in quality and integrity; this probably just seemed like one more.
The hidden prompt is only cheating if the reviewers fail to do their job right and outsource it to a chatbot; it does nothing to a human reviewer actually reading the paper properly. So I won't say it's right or ethical, but I'm much more sympathetic to these authors than to reviewers and editors outsourcing their job to an unreliable LLM.
The only question is who will get the blame.
Isn't it obvious? We sneerers and the big-name skeptics (the Gary Marcuses and Yann LeCuns of the world) continuously cast doubt on LLM capabilities, even as they are just a few more training runs and one more scale-up away from AGI Godhood. We'll clearly be the ones to blame for the VC funding drying up, not years of hype without delivery.
I think we mocked this one back when it came out on /r/sneerclub, but I can't find the thread. In general, I recall Yudkowsky went on a mini podcast tour a few years back. The general trend was that he didn't interview that well, even by lesswrong's own standards. He tended to simultaneously assume too much background familiarity with his writing, such that anyone not already familiar with it would be lost, and fail to add anything actually new for anyone who was already familiar with it. And there were lots of circular arguments and repetitious discussion with the hosts. I guess that's the downside of hanging around your own echo-chamber blog for decades instead of engaging with wider academia.
For the purposes of something easily definable and legally valid, that makes sense, but it is still so worthy of mockery and sneering. Also, even if they needed a benchmark like that for their bizarre legal arrangements, there was no reason besides marketing hype to call that threshold "AGI".
In general, the definitional games around AGI are so transparent and stupid, yet people still fall for them. AGI means performing at human level or better across all cognitive tasks. Not across all benchmarks of cognitive tasks; the tasks themselves. Not superhuman in some narrow domains and blatantly stupid in most others. To be fair, the definition might not be that useful, but it's not really in question.
Optimistically, he's merely giving in to the urge to try to argue with people: https://xkcd.com/386/
Pessimistically, he realized how much money is in the doomer and e/acc grifts and wants in on it.
Best-case scenario: Gary Marcus hangs around lw just long enough to develop even more contempt for them, and he starts sneering even harder on his blog.
Gary Marcus has been a solid source of sneer material and debunking of LLM hype, but yeah, you're right. Gary Marcus has been taking victory laps over a bar set so, so low by promptfarmers and promptfondlers. Also, side note: his negativity towards LLM hype shouldn't be misinterpreted as general skepticism towards all AI... in particular, Gary Marcus is pretty optimistic about neurosymbolic hybrid approaches; it's just that his predictions and hypothesizing are pretty reasonable and grounded relative to the sheer insanity of the LLM hypsters.
Also, new possible source of sneers in the near future: Gary Marcus has made a lesswrong account and started directly engaging with them: https://www.lesswrong.com/posts/Q2PdrjowtXkYQ5whW/the-best-simple-argument-for-pausing-ai
Predicting in advance: Gary Marcus will be dragged down by lesswrong, not lesswrong dragged up towards sanity. He'll start using lesswrong lingo and terminology and quoting P(some event) figures based on numbers pulled out of his ass. Maybe he'll even start to be "charitable" to meet their norms and avoid downvotes (I hope not, his snark and contempt are both enjoyable and deserved, but I'm not optimistic, based on how the skeptics and critics within lesswrong itself learn to temper and moderate their criticism within the site). Lesswrong will moderately upvote his posts when he is sufficiently deferential to their norms and window of acceptable ideas, but won't actually learn much from him.
Unlike with coding, there are no simple "tests" to try out whether an AI's answer is correct or not.
So for most actual practical software development, writing tests is in fact an entire job in and of itself, and it's a tricky one, because covering even a fraction of the use cases and complexity the software will actually face when deployed is really hard. So simply letting the LLMs brute-force trial-and-error their code through a bunch of tests won't actually get you good working code.
AlphaEvolve kind of did this, but it was testing very specific, well-defined, well-constrained algorithms that could have very specific evaluation functions written for them, and it used an evolutionary algorithm to guide the trial-and-error process. They don't say exactly in their paper, but that probably meant generating code hundreds, thousands, or even tens of thousands of times to produce relatively short sections of code.
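For illustration, here's a minimal sketch of that kind of evolutionary search loop in Python. This is not AlphaEvolve's actual implementation: the toy candidates, the `mutate` step (which in the real system would be an LLM proposing code edits), and the `evaluate` fitness function are all placeholder assumptions.

```python
import random
from typing import List, Tuple

# Toy stand-ins: in the real system, "mutate" would be an LLM proposing a
# modified program and "evaluate" would run a well-defined benchmark on it.

def evaluate(candidate: List[float]) -> float:
    """Well-defined, automatic fitness: closeness to a fixed target (toy example)."""
    target = [1.0, 2.0, 3.0]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def mutate(candidate: List[float]) -> List[float]:
    """Placeholder for 'propose a modified candidate': a small random tweak."""
    return [c + random.gauss(0, 0.1) for c in candidate]

def evolve(pop_size: int = 20, generations: int = 200) -> Tuple[List[float], float]:
    # Start from a random population of candidates.
    population = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        # Score everything, keep the best quarter, refill with mutated survivors.
        scored = sorted(population, key=evaluate, reverse=True)
        survivors = scored[: pop_size // 4]
        population = survivors + [
            mutate(random.choice(survivors)) for _ in range(pop_size - len(survivors))
        ]
    best = max(population, key=evaluate)
    return best, evaluate(best)

if __name__ == "__main__":
    best, score = evolve()
    print(f"best candidate: {best}, score: {score:.4f}")
```

Note that the loop only works because `evaluate` here is cheap, automatic, and unambiguous; most real software has no such oracle, which is exactly the problem described above.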
I've noticed a trend where people assume other fields have problems LLMs can handle, but the actually competent experts in those fields know why LLMs fail at the key pieces.
Exactly. I would almost give the AI 2027 authors credit for committing to a hard date... except they already have a subtly hidden asterisk in the original AI 2027 noting that some of the authors have longer timelines. And I've noticed lots of hand-wringing and but-achkshuallies in their lesswrong comments about the differences between mode, median, and mean dates, and other excuses.
Like see this comment chain https://www.lesswrong.com/posts/5c5krDqGC5eEPDqZS/analyzing-a-critique-of-the-ai-2027-timeline-forecasts?commentId=2r8va889CXJkCsrqY :
My timelines moved up to median 2028 before we published AI 2027 actually, based on a variety of factors including iteratively updating our models. But it was too late to rewrite the whole thing to happen a year later, so we just published it anyway. I tweeted about this a while ago iirc.
...You got your AI 2027 reposted like a dozen times to /r/singularity, maybe many dozens of times total across Reddit. The fucking vice president has allegedly read your fiction project. And you couldn't be bothered to publish your best timeline?
So yeah, come 2028/2029, they'll already have a ready-made set of excuses to backpedal and push back the doomsday prophecy.
Some of the comments are, uh, really telling:
The irony is completely lost on them.
The OP replies that they meant the former... the latter is a better answer; Death with Dignity is kind of a big reveal of a lot of flaws with Eliezer and MIRI. To recap, Eliezer basically concluded that since he couldn't solve AI alignment, no one could, and everyone is going to die. It is like a microcosm of Eliezer's ego and approach to problem-solving.
Yeah, no shit secrecy is bad for scientific inquiry and for open and honest reflection on failings.
...You know, if I actually believed in the whole AGI doom scenario (and bought into Eliezer's self-hype), I would be even more pissed at him and sneer even harder. He basically set himself up as a critical savior to mankind, one of the only people clear-sighted enough to see the real dangers and the most important questions... and then he totally failed to deliver. Not only that, he created the very hype that would trigger the creation of the unaligned AGI he promised to prevent!