

I mean, given how the current guy took a chainsaw to American soft power, industrial capacity, economic prospects, and so on, I guess our wildly overfunded military is probably the only comparative advantage we unambiguously hold onto.


It's also a trend that I don't see stopping without a major structural change. I don't think there's a point at which they're going to say "we've cut enough corners and are going to stop risking stability and service degradation." The principal structure driving the economy, especially in the tech sector, is organized around looking for new corners to cut and insulating the people who make those choices from accountability for their actual consequences.


It feels almost like Anthropic is trying to make this a marketing opportunity by reaffirming their mostly-illusory ethical stances. That was their original pitch against OpenAI, and this puts them rather than Saltman back at the center of the AI hype news cycle.


It's theoretically possible to keep them separate, but I would assume in this case that it's evidence that, regardless of intentions, CFAR and Lightcone are sufficiently closely linked to be basically the same organization. I mean, if there's not a separate legal entity then I would assume anything involving money is going to require the same person or persons to sign off on the transaction, regardless of what the board looks like.


Somehow I had never found that Dragon Army retrospective before and had the fascinating experience of wanting to explain to someone that "no, what you're describing is actually a cult. Like, you're describing being a cult leader." Which is usually not the person to whom the cult dynamic needs to be identified and explained.


I mean, it's not too far off from the standard color revolution conspiracy theories, where nefarious American intelligence agents and NGOs are working towards regime change and civil strife across the world in order to advance their sinister ideology. But where the "classical" color revolution conspiracy serves to undermine anticommunist movements in Eastern Europe surrounding the fall of the Soviet Union by positioning them as patsies or victims of the CIA, this newer variant that Moldbug is working with is trying to discredit American domestic anti-imperial/anticolonial/antifascist sentiments by positioning them as puppets of oppressive foreign regimes. Kind of an uno reverse card being played on the original story, but one that fits with how the American right conceptualizes itself and its domestic opposition.


FT reports from Amazon insiders that they're investigating the role AI-assisted development has played in a spate of recent issues across both the store and AWS.
FT also links to several previous stories they've reported on related issues, and I haven't had the time to breach the paywalls to read further, but the line that caught my eye was this:
The FT previously reported multiple Amazon engineers said their business units had to deal with a higher number of "Sev2s" (incidents requiring a rapid response to avoid product outages) each day as a result of job cuts.
To be honest, this is why I'm skeptical of the argument that the AI-linked job losses are a complete fabrication. Not because the systems are actually there to directly replace the lost workers, but because the decision-makers at these companies seem to legitimately believe that these new AI tools will let their remaining workforce cover any gaps left by the layoffs they wanted to do anyway. It sounds like Amazon is starting to feel the inverse relationship between efficiency and stability, and I expect it's only a matter of time before the wider economy starts to feel it too. Whether the owning class recognizes what's happening is, of course, a different story.


Thank you for providing some actual domain experience to ground my idle ramblings.
I wonder if part of the reason why so many high-profile intellectuals in some of these fields are so prone to getting sniped by the confabulatron is an unwillingness to acknowledge (either publicly or in their own heart) that "random bullshit go" is actually a very useful strategy. It reminds me of the way that writers will talk about the value of just getting words on the page, because it's easier to replace them with better words than to create perfection ex nihilo, or the rubber duck method of troubleshooting, where just stepping through the problem out loud forces you to organize your thoughts in a way that can make the solution more readily apparent. It seems like at least some kinds of research are also this kind of process of analysis and iteration as much as, if not more than, raw creation and insight.
I have never met Donald Knuth, and don't mean to impugn his character here, even as I'm basically asking if he's too conceited to properly understand what an LLM is, but I think of how people talk about science and scientists and the way it gets romanticized (see also Iris Meredith's excellent piece on "warrior culture" in software development) and it just doesn't fit a field that can see meaningful progress from throwing shit at the wall to see what sticks. A lot of the discourse around art and artists is more willing to acknowledge this element of the creative process, and that might explain their greater ability and willingness to see the bullshit faucet for what it is. Maybe because science and engineering have stricter and more objective pass/fail criteria (you can argue about code quality just as much as the quality of a painting, but unlike a painting either the program runs or it doesn't; visual art doesn't generally have to worry about a BSOD) there isn't the same openness to acknowledging that the affirmative results you get from an LLM are still just random bullshit. I can imagine the argument being: "The things we're doing are very prestigious and require great intelligence and other things that offer prestige and cultural capital. If 'random bullshit go' is often a key part of the process then maybe it doesn't need as much intelligence and doesn't deserve as much prestige. Therefore if this new tool can be at all useful in supplementing or replicating part of our process it must be using intelligence and maybe it deserves some of the same prestige that we have."


He is altering the deal. Pray he does not alter it further. These are definitely the good guys, right?


Even in Knuth's account it sounds like the LLM contribution was less in solving the problem and more in throwing out random BS that looked vaguely like different techniques were being applied until it spat out something that Knuth and his collaborator were able to recognize as a promising avenue for actual work.
His bud Filip Stappers rolled in to help solve an open digraph problem Knuth was working on. Stappers fed the decomposition problem to Claude Opus 4.6 cold. Claude ran 31 explorations over about an hour: brute force (too slow), serpentine patterns, fiber decompositions, simulated annealing. At exploration 25 it told itself "SA can find solutions but cannot give a general construction. Need pure math." At exploration 30 it noticed a structural pattern in an earlier solution. Exploration 31 produced a working construction.
I am not a mathematician or computer scientist and so will not claim to know exactly what this is describing and how it compares to the normal process for investigating this kind of problem. However, the fact that it produced 4 approaches over 31 attempts seems more consistent with randomly throwing out something that looks like a solution rather than actually thinking through the process of each one. In a creative exploration like this where you expect most approaches to be dead ends rather than produce a working structure maybe the LLM is providing something valuable by generating vaguely work-shaped outputs that can inspire an actual mind to create the actual answer.
Filip had to restart the session after random errors, and had to keep reminding Claude to document its progress. The solution only covers one case of the problem; when Claude tried to continue another way, it "seemed to get stuck" and eventually couldn't run its own programs correctly.
The idea that it's ultimately spitting out random answer-shaped nonsense also follows from the amount of babysitting that was required from Filip to keep it actually producing anything useful. I don't doubt that it's more efficient than I would be at producing random sequences of work-shaped slop and redirecting or retrying in response to a new "please actually do this" prompt, but of the two of us only one is demonstrating actual intelligence and moving towards being able to work independently. Compared to an undergrad or myself I don't doubt that Claude has a faster iteration time for each of those attempts, but that's not even in the same zip code as actually thinking through the problem, and if anything it serves as a strong counterexample to the doomer critihype about the expanding capabilities of these systems. This kind of high-level academic work may be a case where this kind of random slop is actually useful, but that's an incredibly niche area and does not do nearly as much as Knuth seems to think it does in terms of justifying the incredible cost of these systems. If anything the narrative that "AI solved the problem" is giving Anthropic credit for the work that Knuth and Stappers were putting into actually sifting through the stream of slop and identifying anything useful. Maybe babysitting the slop sluice is more satisfying or faster than going down every blind alley on your own, but you're still the one sitting in the river with a pan, and pretending the river is somehow pulling the gold out of itself is just damn foolish.


I mean, I can understand the argument that Anthropic at least maintained a fig leaf of ethics, but notably, based on Saltman's statements, OpenAI does still feel the obligation to maintain those optics; they're just not nearly as credible at doing so.


I actually dug up the context to make sure I wasn't forgetting something horrific. It's from a 2017 piece (CW: SSC Link) back before he went mask-off but was firmly in the "I'm a liberal and I talk exclusively about how liberals and their institutions suck" useful-idiot phase of his career, so the overall essay is about how actually the wingnuts have a point when they say that all so-called neutral institutions are actually secret communist indoctrinators that want to trans your children and take your guns. I'm paraphrasing, obviously; he believes/pretends that when they called these things left-wing they didn't mean "literally in league with Stalin and the Devil". However, in the middle of the usual beigeness he tries to maintain his air of neutrality by having a section on how bad Voat ended up being, which concludes with:
The moral of the story is: if you're against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.


God that was bleak - I thought Nick was bad in his guest spots on Alex's show (seen via Knowledge Fight, of course), but apparently you really do need at least two layers of insulating podcast to avoid suffering critical psychic damage from that level of hatred. I appreciated the acknowledgement that in order to feel at all okay playing clips you needed to sanewash him a little bit. I'm pretty sure that JorDan do the same thing with Alex and don't acknowledge it nearly often enough.
I also feel like some of Nick's schtick is about trying to establish and maintain his position in the right-wing grifter bigot-industrial complex. Like, the open disdain for his audience and presenting his actually pretty straightforward feelings on the halftime show as somehow brave and iconoclastic are also about differentiating himself and making his audience feel superior to Alex, Tucker, Candace, etc. In that sense the open disdain for the audience serves another purpose in terms of reinforcing hierarchy. Look at how great it feels for me to be better than you. And even you are better than the chuds, who are better than the racialized other.


It's especially strange because becoming less prone to bias and developing a clear understanding of what serves your interest is so much of the pitch for Rationalism as a community/ideology/project. Like, here's a pile of unbearably long essays that promise to help cultivate the superpower of seeing the world clearly and acting in it effectively; now, if you acknowledge that nobody outside this small set of group homes is actually doing that, you'll be shunned. And that's not getting into how easily exploitable those assumptions of good faith are by bad-faith actors. It comes back to that quote from Scott that has stuck in my head apparently more than it did his: if you build a community based on the principle that you will absolutely never have a witch hunt, you will end up living among approximately seven principled civil libertarians and eleven million goddamn witches, and this is true even if you're right that witch hunts are bad.


That is what happens when your mode of analysis is closer to erotic Harry Potter fan fiction (which is indeed the medium in which Yudkowsky has delivered some of his prognostications)
I was going to throw a point of order about not all fanfic being erotic, but given how they fetishize "intelligence" and "rationality" I can't be sure that they don't get off on that slog.


There's gotta be a pithy way of talking about this. I propose "Phantom Funds" - money that investors and analysts expect to be there that ends up not existing when the cards are turned over.
Why yes, this does largely boil down to fraud, but without the legal consequences.


Is it though? Like, there's missing detail about the request to publish their talking points in addition to the original reporting, but it's a pretty fair description of the original reporting at issue. That's pretty solid as far as headlines go.


It's such a powerful dodge. What you're actually saying is "we're going to keep doing exactly what we're doing and see if that fixes it", because the nature of innovation is such that it's actually pretty complex to "invest" in, and very rarely has the direct application you need. Like, you don't get penicillin by investing in pharmaceutical innovation; you get it by paying some nerd to fuck off to the jungle for a few years and hoping that his special interest ends up being useful. Bell Labs was able to basically invent the modern world by funneling the profits of their massive monopolistic empire into a bunch of nerds poking stuff with probes to see what happens: elementary physics and materials science research that didn't have a definite objective.


I've heard worse ideas. It's funny; I would have expected the people who were in tech because it looked like the best bet for a relatively stable in-demand career to have been the ones who were crap at it relative to the folks doing it purely for love of the game. But it turns out that having something else going on is closely linked to touching grass in ways that make you harder to lure into the cult.
I do wonder how much of the disconnect is in who gets considered part of the rich and powerful. Like, a lot of that 30% probably think specifically of liberal academics, celebrities, Democratic politicians, etc. and exclude or excuse people like Elon and Trump and whoever of his friends isn't currently the scapegoat for why he isn't ushering in the promised glorious reformation.