

One last shit post to round out your week
Hi, I'm Eric and I work at a big chip company making chips and such! I do math for a job, but it's cold hard stochastic optimization that makes people who know names like Tychonoff and Sylow weep.
My pfp is Hank Azaria in Heat, but you already knew that.
OAI announced their shiny new toy: DeepResearch (still waiting on DeeperSeek). A bot built off o3 which can crawl the web and synthesize information into expert-level reports!
Noam is coming after you @dgerard, but don't worry, he thinks it's fine. I'm sure his new bot is a reliable replacement for a decentralized repository of all human knowledge freely accessible to all. I'm sure this new system doesn't fail in any embarrassing wa-
After posting multiple examples of the model failing to understand which player is on which team (if only this information was on some sort of Internet Encyclopedia, alas), Professional AI bully Colin continues: "I assume that in order to cure all disease, it will be necessary to discover and keep track of previously unknown facts about the world. The discovery of these facts might be a little bit analogous to NBA players getting traded from team to team, or aging into new roles. OpenAI's 'Deep Research' agent thinks that Harrison Barnes (who is no longer on the Sacramento Kings) is the Kings' best choice to guard LeBron James because he guarded LeBron in the finals ten years ago. It's not well-equipped to reason about a changing world… But if it can't even deal with these super well-behaved easy facts when they change over time, you want me to believe that it can keep track of the state of the system of facts which makes up our collective knowledge about how to cure all diseases?"
xcancel link if anyone wants to see some more glorious failure cases:
https://xcancel.com/colin_fraser/status/1886506507157585978#m
A random walk, in retrospect, looks like directional movement at a speed of √n.
I ain't clicking on LW links on my day off (ty for your service though). I am trying to reverse engineer wtf this poster is possibly saying though. My best guess: if we have a simple random walk on Z, with X_i being a random var with 2 outcomes, -1 or +1 (50% chance of a step left, 50% chance of a step right), then the expected squared distance from the origin after n steps is E[(Σ_{i=1}^n X_i)^2] = E[Σ_{i=1}^n X_i^2] + E[Σ_{i≠j, i,j ∈ {1,…,n}} X_i X_j]. The expectation of any product E[X_i X_j] with i ≠ j is 0 (again 50% -1, 50% +1, and the steps are independent), so the second expectation is 0, and X_i^2 is always 1, hence the whole expectation of the squared distance is equal to n => the expectation of the non-squared distance should be on the order of root(n). (I confess this rather straightforward argument comes from the wikipedia page on simple random walks, though I swear I must have seen it before decades ago.)
Now of course, to actually get the expected 1-norm distance, we need to compute E[ |Σ_{i=1}^n X_i| ], the absolute value of the sum. More exciting discussion here if you are interested! https://mathworld.wolfram.com/RandomWalk1-Dimensional.html
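Since this is easy to sanity-check numerically, here is a minimal Python sketch (my own, not from the thread; the walk lengths and sample counts are arbitrary) that estimates E|S_n| by simulation and compares it to the √(2n/π) asymptotic from that page:

```python
# Minimal sketch (my own check, not from the thread): Monte Carlo estimate of the
# expected distance E|S_n| of a simple +/-1 random walk from the origin after n
# steps, compared against the sqrt(2n/pi) asymptotic.
import math
import random

def mean_abs_distance(n_steps: int, n_walks: int = 2_000) -> float:
    """Average |S_n| over n_walks independent simple random walks."""
    total = 0
    for _ in range(n_walks):
        position = sum(random.choice((-1, 1)) for _ in range(n_steps))
        total += abs(position)
    return total / n_walks

for n in (100, 1_000, 10_000):
    simulated = mean_abs_distance(n)
    predicted = math.sqrt(2 * n / math.pi)
    print(f"n={n:>6}  simulated E|S_n|={simulated:7.2f}  sqrt(2n/pi)={predicted:7.2f}")
```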
But back to the original poster's point… the whole point of this evaluation is that it is directionLESS, we are looking at expected distance from the origin without a preference for left or right. Like I kind of see what they are trying to say? If afterwards I ignored any intermediate steps of the walker and just looked at the final location (but why tho), I could say "the walker started at the origin and now is approx root(2n/pi) distance away in the minus direction, so only looking at the start and end of the walk I would say the average velocity is d(position)/d(time) = (-root(2n/pi) - 0)/n -> the walker had directional movement in the minus direction at a speed of root(2/(pi*n))"
wait, so the "speed" would be O(1/root(n)), not root(n)… am I fucking crazy?
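For what it's worth, the same kind of simulation (again my own sketch, not from the thread) backs this up: the implied average speed E|S_n|/n tracks √(2/(πn)), i.e. O(1/√n), and shrinks as the walk gets longer:

```python
# Minimal sketch (my own check, not from the thread): the "average speed" of a
# simple +/-1 random walk, E|S_n| / n, shrinks like sqrt(2/(pi*n)) = O(1/sqrt(n)).
import math
import random

def mean_abs_distance(n_steps: int, n_walks: int = 2_000) -> float:
    """Average |S_n| over n_walks independent simple random walks."""
    total = 0
    for _ in range(n_walks):
        total += abs(sum(random.choice((-1, 1)) for _ in range(n_steps)))
    return total / n_walks

for n in (100, 1_000, 10_000):
    speed = mean_abs_distance(n) / n             # net distance / elapsed steps
    predicted = math.sqrt(2 / (math.pi * n))     # sqrt(2/(pi*n))
    print(f"n={n:>6}  simulated speed={speed:.4f}  sqrt(2/(pi*n))={predicted:.4f}")
```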
Footage of Deepseek slurping the knowledge out of the GPT4
Terrible news: the worst person I know just made a banger post.
is that Link??
Like, even if I believed in FOOM, I'll take my chances with the stupid sexy basilisk over radiation burns and it's not even fucking close.
Neo-Nazi nutcase having a normal one.
It's so great that this isn't falsifiable in the sense that doomers can keep saying, well "once the model is epsilon smarter, then you'll be sorry!", but back in the real world: the model has been downloaded 10 million times at this point. Somehow, the diamondoid bacteria have not killed us all yet. So yes, we have found out the Yud was wrong. The basilisk is haunting my enemies, and she never misses.
Bonus sneer: "we are going to find out if Yud was right" Hey fuckhead, he suggested nuking data centers to prevent models better than GPT4 from spreading. R1 is better than GPT4, and it doesn't require a data center to run, so if we had acted on Yud's geopolitical plans for nuclear holocaust, billions would have been incinerated for absolutely NO REASON. How do you not look at this shit and go, yeah maybe don't listen to this bozo? I've been wrong before, but god damn, dawg, I've never been starvingInRadioactiveCratersWrong.
excuse me, what the fuck is this
Folks around here told me AI wasn't dangerous; fellas, I just witnessed a rogue Chinese AI do 1 trillion dollars of damage to the US stock market /s
Next Sunday when I go to my EA priest's group home, I will admit to having invoked the chain rule to compute a gradient 1 trillion times since my last confessional. For this I will do penance for the 8 trillion future lives I have snuffed out and whose utility has been consumed by the basilisk.
Me: Oh boy, I can't wait to see what my favorite thinkers of the EA movement will come up with this week :)
Text from Geoff: "Morally stigmatize AI developers so they're considered as socially repulsive as Nazi pedophiles. A mass campaign of moral stigmatization would be more effective than any amount of regulation."
Another rationalist W: don't gather empirical evidence that AI will soon usurp / exterminate humanity. Instead, as the chief authorities of morality, engage in societal blackmail against anyone who's ever heard the words TensorFlow.
Spotted in the Wild:
Does scoot actually know how computers work? Asking for a friend.
My ability to guess the solution of Boolean SAT problems also scales roughly with the log of number of tries you give me.
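If anyone wants to see how little that buys you, here is a toy sketch (entirely my own illustration, not from the thread; the planted 3-SAT instance, variable counts, and guess budgets are all made up) of pure random guessing on a small satisfiable formula. Plot the success rate against log(tries) and you too can publish a scaling curve:

```python
# Toy sketch (my own illustration, not from the thread): success rate of pure
# random guessing on a small planted 3-SAT instance as the guess budget grows.
# Against log(tries), numbers like these look like a smooth "scaling law".
import random

random.seed(0)
N_VARS = 12        # small on purpose, so blind guessing sometimes works
N_CLAUSES = 40
N_RUNS = 300       # independent runs per guess budget

# Plant a satisfying assignment, then only keep clauses it satisfies.
planted = [random.random() < 0.5 for _ in range(N_VARS)]
clauses = []
while len(clauses) < N_CLAUSES:
    picked = random.sample(range(N_VARS), 3)
    clause = [(v, random.random() < 0.5) for v in picked]  # (variable, required value)
    if any(planted[v] == wanted for v, wanted in clause):
        clauses.append(clause)

def satisfies(assignment):
    """True if the assignment satisfies every clause."""
    return all(any(assignment[v] == wanted for v, wanted in clause) for clause in clauses)

def guessing_succeeds(budget):
    """Did any of `budget` uniformly random assignments satisfy the formula?"""
    return any(satisfies([random.random() < 0.5 for _ in range(N_VARS)])
               for _ in range(budget))

for budget in (1, 10, 100, 1_000):
    rate = sum(guessing_succeeds(budget) for _ in range(N_RUNS)) / N_RUNS
    print(f"tries={budget:>5}  success rate={rate:.2f}")
```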