

- AI Naughty List
- AI Slop Watch
- AAI = Avoid AI Index = Anti AI = Another AntiAI Index
Difficult difficult, too jet-lagged.




I think actually listening to people remains important. But you're only truly listening to someone when you try to understand when they might be lying, or the ways they could be wrong.
Assuming 100% good faith is not actually the most empathetic way to engage with a person.


A get-smart-quick scheme, with an absurd amount of occult cargo-culting of scientific discourse.


~~In the dead of night, knocking on the door~~


My feeling has become that I prefer business-executive empty over LLM empty; at least the former usually expresses some personality. It's never entirely empty.


Screaming at the void towards Chuunibyou (wiki) Eliezer: YOU ARE NOT A NOVEL CHARACTER, THINKING OF WHAT BENEFITS THE NOVELIST vs THE CHARACTER HAS NO BEARING ON REAL LIFE.
Sorry for yelling.
Minor notes:
But <Employee> thinks I should say it, so I will say it. […] <Employee> asked me to speak them anyways, so I will.
It's quite petty of Yud to be so passive-aggressive towards the employee who insisted he at least try to discuss coping, name-dropping him not once but twice (although that is also likely just poor editing).
"How are you coping with the end of the world?" […Blah…Blah…Spiel about going-mad tropes…]
Yud, when journalists ask you "How are you coping?", they don't expect you to be "going mad facing the apocalypse"; that is YOUR poor imagination as a writer/empathetic person. They expect you to answer how you are managing your emotions and your stress, or barring that, to give a message of hope or of some desperation. They are trying to engage with you as a real human being, not as a novel character.
Alternatively, it's also a question to gauge how full of shit you may be (by gauging how emotionally invested you are).
The trope of somebody going insane as the world ends, does not appeal to me as an author, including in my role as the author of my own life. It seems obvious, cliche, predictable, and contrary to the ideals of writing intelligent characters. Nothing about it seems fresh or interesting. It doesn't tempt me to write, and it doesn't tempt me to be.
Emotional turmoil, and how characters cope or fail to cope, makes excellent literature! That all you can think of is "going mad" reflects only your poor imagination as both a writer and a reader.
I predict, because to them I am the subject of the story and it has not occurred to them that there's a whole planet out there too to be the story-subject.
This is only true if they actually accept the premise of what you are trying to sell them.
[…] I was rolling my eyes about how they'd now found a new way of being the story's subject.
That is deeply ironic, coming from someone who makes choices based on being the main character of a novel.
Besides being a thing I can just decide, my decision to stay sane is also something that I implement by not writing an expectation of future insanity into my internal script / pseudo-predictive sort-of-world-model that instead connects to motor output.
If you are truly doing this, I would say that means you are expecting insanity wayyyyy too much. (Also: psychobabble.)
[…Too painful to actually quote the psychobabble about getting out of bed in the morning…]
In which Yud goes into in-depth, self-aggrandizing, nonsensical detail about a very mundane trick for getting out of bed in the morning.


A fairly good and nuanced guide. No magic silver-bullet shibboleths for us.
I particularly like this section:
Consequently, the LLM tends to omit specific, unusual, nuanced facts (which are statistically rare) and replace them with more generic, positive descriptions (which are statistically common). Thus the highly specific "inventor of the first train-coupling device" might become "a revolutionary titan of industry." It is like shouting louder and louder that a portrait shows a uniquely important person, while the portrait itself is fading from a sharp photograph into a blurry, generic sketch. The subject becomes simultaneously less specific and more exaggerated.
I think it's an excellent summary, and it connects with the "Barnum effect" of LLMs, which makes them appear smarter than they are. And it's not the presence of certain words, but the absence of certain others (and, well, of content) that is a good indicator of LLM-extruded garbage.


I guess my P(Doom|Bathroom) should have been higher.


Pessimistically, I think this scourge will be with us for as long as there are people willing to put "mostly-works" code into production. It won't be making the decisions, but we'll get a new faucet of poor-code sludge to enjoy and repair.


In French, ChatGPT sounds like « Chatte, j'ai pété », meaning "Pussy, I farted".


The power, of words:
Is all but naught, if not heard.
And a bot, cannot.


Of course! Itās to know less and less, until truly, the only thing they know is that they know nothing.


It's clearly meant to mean /HalleluJah


To be fair though, it's not just their brains turning to mush; Google has genuinely been getting worse too.


Ahh, the missing period: an even worse tone indicator than /hj (youtube).


I'll gladly endorse most of what the author is saying.
This isn't really a debate club, and I'm not really trying to change your mind. I will just end on a note:
I'll start with the topline findings, as it were: I think the idea of a so-called "Artificial General Intelligence" is a pipe dream that does not realistically or plausibly extend from any currently existent computer technology. Indeed, my strong suspicion is that AGI is wholly impossible for computers as we presently understand them.
Neither the author nor I really suggest that it is impossible for machines to think (indeed, humans are biological machines), only that it is likely (nothing so stark as inherently so) that Turing Machines cannot. "Computable" in the essay means something specific.
Simulation != Simulacrum.
And because I can't resist, I'll just clarify that when I said:
Even if you (or anyone) can't design a statistical test that can detect the difference in a sequence of heads or tails, that doesn't mean one doesn't exist.
It means that the test does (or could possibly) exist; it's just not achievable by humans. [Although I will also note that for methods that don't rely on measuring the physical world (pseudo-random-number generators), the tests designed by humans are more than adequate to discriminate the generated list from the real thing.]
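As an illustration of how adequate those human-designed methods can be (a sketch of my own, not something from the essay; the generator and constants are standard textbook choices): a classic linear congruential generator can not merely be flagged as non-random, it can be outright predicted from a few consecutive outputs, which is impossible for a genuine coin.

```python
# Sketch: recovering a linear congruential generator (LCG) from three
# consecutive outputs and predicting the fourth -- something no test
# could ever do against a truly random source.
# Uses the "Numerical Recipes" parameters and assumes the modulus M
# is known to the attacker.

M = 2**32
A = 1664525       # multiplier
C = 1013904223    # increment

def lcg(seed, n):
    """Generate n successive LCG states x_{k+1} = (A*x_k + C) mod M."""
    out, x = [], seed
    for _ in range(n):
        x = (A * x + C) % M
        out.append(x)
    return out

def recover_and_predict(x0, x1, x2):
    """Solve for a and c from three outputs, then predict the next one."""
    # a = (x2 - x1) * (x1 - x0)^-1 mod M; the difference x1 - x0 is
    # always odd for these constants, so the inverse exists.
    a = (x2 - x1) * pow(x1 - x0, -1, M) % M
    c = (x1 - a * x0) % M
    return (a * x2 + c) % M

xs = lcg(seed=42, n=4)
predicted = recover_and_predict(*xs[:3])
print(predicted == xs[3])  # True: the "coin" is fully predictable
```

Three-argument `pow` with exponent -1 (Python 3.8+) does the modular inversion; the whole attack is a dozen lines, which is what I mean by "more than adequate".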


Even if true, why couldnāt the electrochemical processes be simulated too?
But even if it is, it's "just" a matter of scale.
I do know how to write a program that produces results indistinguishable from a real coin for a simulation.
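For what it's worth, a minimal sketch of what I mean (my own illustration, using the standard-library `secrets` module, which draws from the operating system's CSPRNG): distinguishing these flips from a real coin's would amount to breaking the underlying cryptographic generator.

```python
# Sketch: simulated coin flips drawn from the OS's cryptographically
# secure random source via the standard-library `secrets` module.
# Under standard cryptographic assumptions, no feasible statistical
# test can tell this sequence apart from flips of a fair physical coin.
import secrets

def coin_flips(n):
    """Return n flips, 'H' or 'T', each from one cryptographically random bit."""
    return ["H" if secrets.randbits(1) else "T" for _ in range(n)]

print(coin_flips(10))
```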
As a summary,


Assuming they have any amount of good faith, I would make the illustration that using AI is like the Dunning-Kruger effect on steroids. It's especially dangerous when you think you know enough, but don't know enough to know that you don't.
I can't imagine anyone really subjecting themselves to reading all that. I'm delighted for them, though, or distraught that it happened…
It is a bit sad how Yarvin frontlines his "victory" by quoting some extruded text, when in context (which he is somehow kind enough to provide; maybe he didn't bother reading all of it either) it's just some fence-sitting big nothing. I doubt the claims that this produces any form of "red-pilled" Claude.
(I'm not sure what I expected, but it truly was a dead dove.)