

Your mistake, distant future ghost, was in developing RNA repair nanites without creating universal healthcare.
There's a particular failure mode at play here that speaks to incompetent accounting on top of everything else. Like, without autocontouring, how many additional radiologists would need to magically be spawned into existence and get salaries, benefits, pensions, etc. in order to reduce overall wait times by that amount? Because in reality that's the money being left on the table; the fact that it's being made up in shitty service rather than actual money shouldn't meaningfully affect the calculus there.
By refusing to focus on a single field at a time, AI companies really did make it impossible to take advantage of Gell-Mann amnesia.
There's inarguably an organizational culture that is fundamentally uninterested in the things the organization is supposed to actually do. Even if they aren't explicitly planning to end Social Security as a concept by wrecking the technical infrastructure it relies on, they're almost comedically apathetic about whether or not the project succeeds. At the top this makes sense, because politicians can spin a bad project into everyone else's fault, but the fact that they're able to find programmers to work under those conditions makes me weep for the future of the industry. Even simple mercenaries should be able to smell that this project is going to fail and look awful on your resume, but I guess these yahoos are expecting to pivot into politics or whatever administration position they can bargain for with whoever succeeds Trump.
That's fascinating, actually. Like, it seems like it shouldn't be possible to create this level of grammatically correct text without understanding the words you're using, and yet even immediately after defining "unsupervised" correctly the system still (supposedly) sets about applying a baffling number of alternative constraints that it seems to pull out of nowhere.
Or, alternatively: despite letting it "cook" for longer and pregenerate a significant volume of its own additional context before the final answer, the system is still, at the end of the day, an assembly of stochastic parrots who don't actually understand anything.
I don't think that the actual performance here is as important as the fact that it's clearly not meaningfully "reasoning" at all. This isn't a failure mode that happens if it's actually thinking through the problem in front of it and understanding the request. It's a failure mode that comes from pattern matching without actual reasoning.
write it out in ASCII
My dude, what do you think ASCII is? Assuming we're using standard internet interfaces here and the request is coming in as UTF-8-encoded English text, it is already being written out in ASCII.
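For the doubters, the encoding claim checks out; here's a minimal Python sketch (the sample string is just illustrative) showing that 7-bit English text encodes to identical bytes under both ASCII and UTF-8:

```python
# ASCII is a strict subset of UTF-8: every 7-bit character maps to
# the same single byte in both encodings, so plain English text is
# "written out in ASCII" the moment it arrives as UTF-8.
text = "write it out in ASCII"  # illustrative sample string
assert text.encode("utf-8") == text.encode("ascii")
print(text.encode("utf-8"))  # b'write it out in ASCII'
```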
Sneers aside, given that the supposed capability here is examining a text prompt and reasoning through the relevant information to provide a solution in the form of a text response, this kind of test is, if anything, rigged in favor of the AI compared to similar versions that add more steps to the task, like OCR or other forms of image parsing.
It also speaks to a difference between AI pattern recognition and the human version. For a sufficiently well-known pattern like the form of this river-crossing puzzle, it's the changes and exceptions that jump out to a human. This feels almost like giving someone a picture of the Mona Lisa with aviators on; the model recognizes that it's 99% of the Mona Lisa and goes from there, instead of recognizing that the changes from that base case are significant, intentional variation rather than either a totally new thing or a "corrupted" version of the original.
I don't know that it holds enough of an edge for a golden guillotine, but it's dense and heavy enough that we could probably create a workable alternative if we give up on clean cuts.
The classic "I don't understand something, therefore it must be incomprehensible" problem. Anyone who does understand it must therefore be either lying or insane. I'm not sure if we've moved forward or backward by having the incomprehensible eldritch truth be progressive social ideology itself rather than the existence of black people and foreign cultures.
It seems you have some familiarity with the gold trade. Tell me, have you seen those new laser cutters up close?
It's also the sound it makes when I drop-kick their goddamned GPU clusters into the fuckin ocean. Thankfully I haven't run into one of these yet, but given how much of the domestic job market appears to be devoted towards not hiring people while still listing an opening, it feels like I'm going to.
On a related note, if anyone in the Seattle area is aware of an opening for a network engineer or sysadmin please PM me.
I feel like cult orthodoxy probably accounts for most of it. The fact that they put serious thought into how to handle a sentient AI wanting to post on their forums does also suggest that they're taking the AGI "possibility" far more seriously than any of the companies that are using it to fill out marketing copy and bad news cycles. I for one find this deeply sad.
Edit to expand: if it weren't actively lighting the world on fire, I would think there's something perversely admirable about trying to make sure the angels dancing on the head of a pin have civil rights. As it is, they're close enough to actual power and influence that they're enabling the stripping of rights and dignity from actual human people instead of staying in their little bubble of sci-fi and philosophy nerds.
I mean, isn't that A16z's (Marc the mark and the funky accounting bunch?) whole business strategy? Eventually they gotta just cut out the middleman, right?
Nah, at that point you have the same problem that Bitcoin does. Well, one of the many problems that Bitcoin has. Okay, it has another one of the many problems that Bitcoin has.
In any event, you can't realize that kind of gain without crashing the asset price and rendering it all worthless. I would like to repeat the proposed solution from Auric G. et al. to instead detonate a dirty bomb inside Fort Knox, possibly with an English misogynist handcuffed to it, which will allow us to reduce the global supply and earn significantly higher returns through appreciation of our existing holdings.
Godspeed, David.
Only as a subset of the broader problem. What if, instead of creating societies in which everyone can live and prosper, we created people who can live and prosper in the late capitalist hell we've already created! And what if we embraced the obvious feedback loop that results and called the trillions of disposable wireheaded drones we've created a utopia because of how high they'll be able to push various meaningless numbers!
I read through a couple of his fiction pieces, and I think we can safely disregard him. Whatever insights he may have into technology and authoritarianism appear to be pretty badly corrupted by a predictable strain of antiwokism. It's not offensive in anything I read - he's not out here whining about not being allowed to use slurs - but he seems sufficiently invested in how authoritarians might use the concerns of marginalized people as a cudgel that he completely misses how, in reality, marginalized people are more useful to authoritarian structures as a target than as a weapon.
The whole CoreWeave affair (and the AI business in general) increasingly reminds me of this potion shop, only with literally everyone playing the role of the idiot gnomes.
I think history justifies a sufficient level of jadedness and cynicism to believe that, at least at the scale a government can operate on, foreign aid as a soft-power tool is kind of the best we're ever likely to see. And if we're going to be looking for soft power, I think it's better for everyone to do so by doing good things and making us look less like goddamn supervillains.
Jesus, fine, I'll watch it already, God.