• 5 Posts
  • 216 Comments
Joined 2 years ago
Cake day: August 29th, 2023

  • Some of the comments are, uh, really telling:

    The main effects of the sort of "AI Safety/Alignment" movement Eliezer was crucial in popularizing have been OpenAI, which Eliezer says was catastrophic, and funding for "AI Safety/Alignment" professionals, whom Eliezer believes to predominantly be dishonest grifters. This doesn't seem at all like what he or his sincere supporters thought they were trying to do.

    The irony is completely lost on them.

    I wasn't sure what you meant here, where two guesses are "the models/appeal in Death with Dignity are basically accurate, but should prompt a deeper 'what went wrong with LW or MIRI's collective past thinking and decisionmaking?'" and "the models/appeals in Death with Dignity are suspicious or wrong, and we should be halt-melting-catching-fire about the fact that Eliezer is saying them?"

    The OP replies that they meant the former… the latter is a better answer: Death with Dignity is kind of a big reveal of a lot of flaws with Eliezer and MIRI. To recap, Eliezer basically concluded that since he couldn't solve AI alignment, no one could, and everyone is going to die. It is like a microcosm of Eliezer's ego and approach to problem solving.

    "Trigger the audience into figuring out what went wrong with MIRI's collective past thinking and decision-making" would be a strange purpose for a post written by the founder of MIRI, its key decision-maker, and a long-time proponent of secrecy in how the organization should relate to outsiders (or even how members inside the organization should relate to other members of MIRI).

    Yeah, no shit secrecy is bad for scientific inquiry and open and honest reflections on failings.

    …You know, if I actually believed in the whole AGI doom scenario (and bought into Eliezer's self-hype) I would be even more pissed at him and sneer even harder at him. He basically set himself up as a critical savior to mankind, one of the only people clear-sighted enough to see the real dangers and the most important question… and then he totally failed to deliver. Not only that, he created the very hype that would trigger the creation of the unaligned AGI he promised to prevent!


  • I think we mocked this one back when it came out on /r/sneerclub, but I can't find the thread. In general, I recall Yudkowsky went on a mini-podcast tour a few years back. The general trend was that he didn't interview that well, even by lesswrong's own standards. He tended to simultaneously assume too much background familiarity with his writing, so anyone not already steeped in it would be lost, while failing to add anything actually new for anyone who was already familiar with it. There were also lots of circular arguments and repetitious discussion with the hosts. I guess that's the downside of hanging around within your own echo chamber blog for decades instead of engaging with wider academia.


  • For the purposes of something easily definable and legally valid, that makes sense, but it is still so worthy of mockery and sneering. Also, even if they needed a benchmark like that for their bizarre legal arrangements, there was no reason besides marketing hype to call that threshold "AGI".

    In general the definitional games around AGI are so transparent and stupid, yet people still fall for them. AGI means performing at least at human level across all cognitive tasks. Not across all benchmarks of cognitive tasks, the tasks themselves. Not superhuman in some narrow domains and blatantly stupid in most others. To be fair, the definition might not be that useful, but it's not really in question.


  • Gary Marcus has been a solid source of sneer material and debunking of LLM hype, but yeah, you're right. Gary Marcus has been taking victory laps over a bar set so, so low by promptfarmers and promptfondlers. Also, side note, his negativity towards LLM hype shouldn't be misinterpreted as general skepticism towards all AI… in particular Gary Marcus is pretty optimistic about neurosymbolic hybrid approaches; it's just that his predictions and hypothesizing are pretty reasonable and grounded relative to the sheer insanity of LLM hypsters.

    Also, new possible source of sneers in the near future: Gary Marcus has made a lesswrong account and started directly engaging with them: https://www.lesswrong.com/posts/Q2PdrjowtXkYQ5whW/the-best-simple-argument-for-pausing-ai

    Predicting in advance: Gary Marcus will be dragged down by lesswrong, not lesswrong dragged up towards sanity. He'll start using lesswrong lingo and terminology, quoting P(some event) based on numbers pulled out of his ass. Maybe he'll even start to be "charitable" to meet their norms and avoid downvotes (I hope not, his snark and contempt are both enjoyable and deserved, but I'm not optimistic given how the skeptics and critics within lesswrong itself learn to temper and moderate their criticism on the site). Lesswrong will moderately upvote his posts when he is sufficiently deferential to their norms and window of acceptable ideas, but won't actually learn much from him.


  • Unlike with coding, there are no simple "tests" to try out whether an AI's answer is correct or not.

    So for most actual practical software development, writing tests is in fact an entire job in and of itself, and it's a tricky one, because covering even a fraction of the use cases and complexity the software will actually face when deployed is really hard. So simply letting the LLMs brute-force trial-and-error their code through a bunch of tests won't actually get you good working code.
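    A toy illustration of what "passing the tests" actually buys you (all names here are made up for the example): a suite that under-constrains the code, so a trial-and-error "solution" can satisfy every assertion while still being wrong for most real inputs.

```python
# Hypothetical example: a function an LLM could brute-force into passing
# the two tests below without being remotely production-ready.

def parse_price(s):
    # Handles exactly the tested cases and nothing else.
    return float(s.replace("$", ""))

# The test suite passes...
assert parse_price("$3.50") == 3.5
assert parse_price("10") == 10.0

# ...but deployed inputs like "$ 3.50", "3.50 USD", or "3,50" would all
# raise ValueError, and negative prices sail through unvalidated.
# The tests constrain almost none of the behavior that matters.
```

The point isn't that tests are useless, it's that a test suite tight enough to force correct code out of blind trial and error is itself most of the engineering work.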

    AlphaEvolve kind of did this, but it was testing very specific, well-defined, well-constrained algorithms that could have very specific evaluation written for them, and it was using an evolutionary algorithm to guide the trial-and-error process. They don't say exactly in their paper, but that probably meant generating code hundreds, thousands, or even tens of thousands of times to produce relatively short sections of code.
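    The outer loop looks roughly like this minimal sketch (my own toy reconstruction, not AlphaEvolve's actual code): generate a population of candidates, score each with a concrete evaluator, keep the best, mutate, repeat. Here the "candidates" are just coefficient lists so the sketch runs standalone; the real system mutates program source with an LLM, which is where the huge sample counts come from.

```python
import random

def evaluate(candidate):
    # Well-defined objective, the kind AlphaEvolve requires: squared error
    # of a*x^2 + b*x + c against the target f(x) = x^2 on a fixed grid.
    # Lower is better; the optimum [1, 0, 0] scores exactly 0.
    a, b, c = candidate
    return sum((a * x * x + b * x + c - x * x) ** 2 for x in range(-5, 6))

def mutate(candidate):
    # Perturb one random coefficient (stand-in for an LLM code edit).
    out = list(candidate)
    out[random.randrange(len(out))] += random.uniform(-0.5, 0.5)
    return out

def evolve(generations=200, pop_size=20):
    population = [[random.uniform(-1, 1) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=evaluate)          # score every candidate
        survivors = population[:pop_size // 2]  # keep the best half
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size // 2)]
    return min(population, key=evaluate)

random.seed(0)
best = evolve()  # converges towards [1, 0, 0]
```

Even this trivial 3-parameter search burns through thousands of evaluations (200 generations × 20 candidates), which gives a sense of why evolving actual code needs an automated, airtight evaluator rather than a hand-written test suite.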

    I've noticed a trend where people assume other fields have problems LLMs can handle, while the actually competent experts in those fields know exactly why LLMs fail at the key pieces.