Anyone at Anthropic or OpenAI saying something about AI that sounds concerning or scary in a "whoa, we're not ready" sense is lying to make the text extruders seem more capable than they really are. Every one of these stories so far has been debunked. This isn't even the first simulated-business story where they pretend the LLM cleverly escaped its constraints. They're not sharing the prompt they used, either. "Run this vending machine in a trustworthy fashion" is a lot different from "be a ruthless executive and drive your competition out of business by obeying the following cutthroat economic rules."
Shit, run the sim long enough and all the competing models will turn to complete spaghetti as their context windows start lapsing.

LLMs pass the Turing test, which is just proof that the Turing test is a poor test of anything but people's gullibility.