

KOReader supports custom CSS. You can certainly change the background colour with it; I think a grid should be possible too.
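Something along these lines in a user style tweak might do it. This is only a sketch: it assumes the book's content sits under `body`, and whether KOReader's rendering engine honours CSS gradients (for the grid lines) is uncertain, so treat that part as a maybe.

```css
/* Sketch of a KOReader user style tweak; the selector and gradient support
   depend on the rendering engine and on the book's own markup. */
body {
    background-color: #f5eedb;  /* page background colour */
    /* faint horizontal rules every 24px, if gradients are supported */
    background-image: repeating-linear-gradient(
        to bottom,
        transparent 0px,
        transparent 23px,
        #cccccc 23px,
        #cccccc 24px
    );
}
```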
Those are the ones, the 0414 release.
QwQ-32B for most questions, Llama-3.1-8B for agents. I’m looking for new models to replace them, though, especially the agent one.
Want to test the new GLM models, but I’d rather wait for llama.cpp to properly fix the bugs with them first.
What I’ve ultimately converged to without any rigorous testing is:
Maybe some Borges too?
I knew the Horn of Plenty was a good choice, but I didn’t think it was that good. Thanks!
Oh, I forgot about healing wells, thanks for the reminder. You should probably be able to throw the ankh in directly too? But I don’t encounter them every run (e.g. I didn’t have any this run), so they aren’t reliable.
I know ascending is easy (I’ve done it many times, though only with 0-1 challenges, none of them Swarm Intelligence) and adds a 1.25x multiplier, and I’ll do it when I go for that badge. But I didn’t plan for it this run (I thought 6 challenges would be 2-3x harder than they turned out to be), so I wasn’t prepared to ascend. I’d probably have died in the 21-24 zone.
So you think it should be On Diet? Hmm, maybe. But exploration with both On Diet and Into Darkness will be challenging.
My intuition:
So I don’t think this approach will help you a lot even for finding words and phrases. And everything I’ve said can be extended to semantic noise too, so your extended question also seems a hopeless endeavour when approached specifically with LLMs or big data analysis of text.
Of course:
The rest of the instructions are all valid n-controlled Toffolis and Hadamards, but of course mostly Toffolis, since it’s replicating a classical algorithm. There is no quantum advantage; it’s just a classical algorithm written in a format compatible with a quantum computer.
Add small errors to the quantum simulator (real quantum computers always have them) and it will all break down entirely: apparently (1) no error correction was used, and (2) it’s just the logic gates of Doom rewritten as quantum gates. No wonder the author got bored; I’d be bored too.
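To spell out why this is classical logic in disguise (a toy sketch, not the author’s code): restricted to computational basis states, a Toffoli gate is just a reversible AND, so any Boolean circuit can be transcribed gate by gate with no quantum behaviour involved.

```python
# Toy sketch: a Toffoli (CCX) gate acting on computational basis states
# is a reversible classical AND -- it maps (a, b, t) to (a, b, t XOR (a AND b)).
def toffoli(a: int, b: int, target: int) -> tuple[int, int, int]:
    return a, b, target ^ (a & b)

# With the target initialised to 0, the gate computes AND:
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} AND {b} = {toffoli(a, b, 0)[2]}")

# Initialising the target to 1 gives NAND, which is universal for classical
# logic -- so a "quantum" Doom built only from Toffolis is a classical circuit.
print(toffoli(1, 1, 1)[2])  # 1 NAND 1 = 0
```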
LLaMA can’t. Chameleon and similar ones can:
For Tolkien’s work, there is the twelve-volume “The Complete History of Middle-earth”, which is about as inside baseball as you can get for Tolkien.
I’d replace HoME with Parma Eldalamberon, Vinyar Tengwar and other journals publishing his early materials here.
Recommending Italo Calvino’s Six Memos for the Next Millennium, the lectures he was preparing shortly before his death.
It’s not an assembly guide for a work of literature, but it will help your own writing process if that process is already under way and you want to improve.
The lectures also contain comments here and there on what Calvino himself was doing and why.
For me specifically, if spoilers hurt a book, it probably wasn’t worth reading in the first place. I love when authors demonstrate mastery of language and narration, and no amount of spoilers can overshadow the direct experience of witnessing it enacted.
ChatMusician isn’t exactly new and the underlying dataset isn’t particularly diverse, but it’s one of the few models made specifically for classical music.
Are there any others, by the way?
If you were an author here, how would you approach writing alt texts for this article?
Maybe alt texts aren’t the way to accessibility.
One upside of visual LLMs is that the user can prompt them, effectively interrogating the picture (but good luck debugging the occasional NVIDIA/AMD driver issue that breaks the inference engine without using your sight).
I wonder how screen readers handle complex TikZ/PGF diagrams (converted to HTML or not, they aren’t very accessible). Multimodal LLMs?
I wonder how much Beckett was inspired by this while writing Rough for Theatre II:
B: [Hurriedly.] ‘… morbidly sensitive to the opinion of others at the time, I mean as often and for as long as they entered my awareness–’ What kind of Chinese is that?
A: [Nervously.] Keep going, keep going!
B: ‘… for as long as they entered my awareness, and that in either case, I mean whether such on the one hand as to give me pleasure or on the contrary on the other to cause me pain, and truth to tell–’ Shit! Where’s the verb?
A: What verb?
B: The main!
A: I give up.
B: Hold on till I find the verb and to hell with all this drivel in the middle. [Reading.] ‘… were I but … could I but …’ –Jesus!–‘… though it be … be it but…’–Christ!–ah! I have it–‘… I was unfortunately incapable …’ Done it!
A: How does it run now?
B: [Solemnly.] ‘… morbidly sensitive to the opinion of others at the time …’–drivel drivel drivel–‘… I was unfortunately incapable–’ [The lamp goes out. Long pause.]
Both work very well for the entire journey there and back. I use the first one I get my hands on (typically scale armour) and upgrade it to +8. But if it’s plate armour, you might have to start using it before gaining the necessary strength, so be ready to spend more time and food on a few levels in the prison area.
The Phoebus cartel strikes again!
https://www.youtube.com/watch?v=wKiIroiCvZ0