I like how none of the reporting I've seen on this so far can be bothered to mention SoftBank's multi-year, very obvious history of failures
I think I saw like one outlet mention it, and it was buried in the 18th paragraph
Elon Musk is already casting doubt on OpenAI's new, up to $500 billion investment deal with SoftBank (SFTBY+10.51%) and Oracle (ORCL+7.19%), despite backing from his allies, including President Donald Trump. […] "They don't actually have the money," the Tesla (TSLA-1.13%) CEO and close Trump ally said shortly before midnight on Tuesday, in a post on his social media site X. "SoftBank has well under $10 [billion] secured. I have that on good authority," Musk added just before 1 a.m. ET.
I was mad about this, but then it hit me: this is the kind of thing that happens at the top of a bubble. The nice round numbers, the stolen sci-fi name, the needless intertwining with politics, the lack of any clear purpose for it.
[mr plinkett voice] hey wait a minute wasn't that meant to be a Microsoft project?
Hey wasn't that project contingent on "meaningfully improving the capabilities of OpenAI's AI"?
(Referring to this newsletter of his from last April.)
Here is what I wrote in the instructions for the term-paper project that I will be assigning my quantum-physics students this coming semester:
I can't very well stop you from using a text-barfing tool. I can, however, point out that the "AI" industry is a disaster for the environment, which is the place that we all have to live in; and that it depends upon datasets made by exploiting and indeed psychologically torturing workers. The point of this project is for you to learn a physics topic and how to write physics, not for you to abase yourself before a blurry average of all the things the Internet says about quantum physics, which, spoiler alert, includes a lot of wrong things. If you are going to spend your time at university not learning physics, there are better ways to do that than making yourself dependent upon a product that is a tech bubble waiting to pop.
Tamay Besiroglu from Epoch AI says they were "restricted from disclosing the partnership" until the o3 launch. Their contract "specifically prevented us from disclosing information about the funding source and the fact that OpenAI has data access to much but not all of the dataset."
If you had no problems with that contract, then I don't trust your ethical judgment as a scientist.
shot:
Von Neumann arguably had the highest processor-type "horsepower" we know of, plus his breadth of intellectual achievements is unparalleled.
chaser:
But imo Grothendieck is a better comparison point for ASI as his intelligence, while being strangely similar to LLMs in some dimensions
I have spent the last half-hour in the angry dome
"Raw, intellectual horsepower" means fucking an intellectual horse without a condom.
Oh, wait, that's rawdogging intellectual horsepower, my mistake.
So, the Wikipedia article about "prompt engineering" is pretty terrible. First source: OpenAI. Second: a blog. Third: OpenAI. Fourth: OpenAI's blog. ArXiv, arXiv, arXiv… 43 times. Hop on over to the Talk page, and we find this gem:
It is sometimes necessary to make assumptions to write an article (see WP:MNA).
Spoiler alert: that link doesn't justify anything. It basically advises against going off on tangents: There's no need to rehash the fact that evolution is a fact on every damn biology page. It does not say that Wikipedia should have an article on some creationist fantasy, like baraminology or flood geology, based entirely on creationist screeds that all cite each other.
Underlying original post: a Twitter bluecheck says,
Sometimes in the process of writing a good enough prompt for ChatGPT, I end up solving my own problem, without even needing to submit it.
Matt Novak on Bluesky screenshots this and comments,
AI folks have now discovered "thinking"
No worries
If you can't get through two short paragraphs without equating Stalinism and "social justice", you may be a cockwomble.
Welp, time to start the thread with fresh Awful for everyone to regret:
r/phenotypes
Hereās a start:
Given their enormous environmental cost and their foundation upon exploited labor, justifying the use of Large Generative AI Models in telecommunications is an uphill task. Since their output is, in the technical sense of the term, bullshit, climbing that hill has no merit.
Man, now I'm bummed that I don't have a cult trying to distribute translations of my Daria fic in which Jane becomes Hell Priest of the Cenobites.
I think it could be very valuable to alignment-pill these people.
Zoom and enhance!
alignment-pill
The inability to hear what their own words sound like is terminal. At this stage, we can only provide palliative care, i.e., shoving into lockers.
[Fiction] [Comic] Effective Altruism and Rationality meet at a Secular Solstice afterparty
When the very first thing you say about a character is that they "have money in crypto", you may already be doing it wrong
So that's how to translate "Yo, this diet is for chumps" into Wikipedian.