I knew a Horn of Plenty was a good choice, but I didn’t think it was that good. Thanks!
Oh, forgot about healing wells, thanks for the reminder. You should probably be able to throw the ankh in directly too? But I don’t encounter them every run (e.g. I didn’t have any this run), so they aren’t reliable.
I know ascending is easy (I’ve done it many times, though only with 0-1 challenges, none of them Swarm Intelligence) and adds a 1.25 multiplier, and I’ll do it when I go for that badge - but I didn’t plan for it (I thought 6 challenges would be 2-3x harder than they turned out to be), so I wasn’t prepared to ascend this run. I’d probably have died in the 21-24 zone.
So you think it should be On Diet? Hmm, maybe. But exploration with both On Diet and Into Darkness will be challenging.
My intuition:
So I don’t think this approach will help you a lot even for finding words and phrases. And everything I’ve said can be extended to semantic noise too, so your extended question also seems a hopeless endeavour when approached specifically with LLMs or big data analysis of text.
Of course:
The rest of the instructions are all valid n-controlled Toffolis and Hadamards, but of course mostly Toffolis since it’s replicating a classical algorithm. There is no quantum advantage, it’s just a classical algorithm written in a format compatible with a quantum computer.
Add small errors to the quantum simulator (quantum computers always have those) and it’ll all break down entirely - apparently (1) no error correction was used and (2) it’s just the logic gates for Doom rewritten as quantum gates. No wonder the author got bored, I’d be bored too.
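To make the “classical logic in quantum clothing” point concrete, here is a tiny illustration of my own (not from the article): a Toffoli gate is just a reversible AND, which is why stacking enough of them reproduces any classical circuit without any quantum advantage. A minimal sketch in plain numpy:

```python
import numpy as np

# Toffoli (CCNOT) on 3 qubits: flips the target bit iff both controls are 1,
# i.e. on classical basis states it computes target ^= (c1 AND c2) - a
# reversible AND gate. That's the whole trick behind replicating classical
# logic on a quantum simulator.
TOFFOLI = np.eye(8)
TOFFOLI[[6, 7], [6, 7]] = 0        # |110> and |111> ...
TOFFOLI[6, 7] = TOFFOLI[7, 6] = 1  # ... swap with each other

def basis(c1, c2, t):
    """State vector for the classical bit pattern |c1 c2 t>."""
    v = np.zeros(8)
    v[(c1 << 2) | (c2 << 1) | t] = 1
    return v

for c1 in (0, 1):
    for c2 in (0, 1):
        out = TOFFOLI @ basis(c1, c2, 0)       # target starts at 0
        print(f"{c1} AND {c2} = {int(np.argmax(out)) & 1}")
```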
LLaMA can’t. Chameleon and similar ones can:
For Tolkien’s work, there is the twelve-volume “The Complete History of Middle-earth”, which is about as inside baseball as you can get for Tolkien.
I’d replace HoME with Parma Eldalamberon, Vinyar Tengwar and other journals publishing his early materials here.
Recommending Italo Calvino’s Six Memos for the Next Millennium, the lectures he had been preparing shortly before his death.
Not an assembly guide for a work of literature, but it’ll help your own process if it’s already ongoing and you want to improve.
The lectures also include, here and there, comments on what Calvino himself was doing and why.
For me specifically, if spoilers hurt a book, it probably wasn’t worth reading in the first place. I love when authors demonstrate mastery of language and narration, and no amount of spoilers can overshadow the direct experience of witnessing it enacted.
ChatMusician isn’t exactly new and the underlying dataset isn’t particularly diverse, but it’s one of the few models made specifically for classical music.
Are there any others, by the way?
If you were an author here, how would you approach writing alt texts for this article?
Maybe alt texts aren’t the way to accessibility.
One upside of visual LLMs is that the user can prompt them, effectively interrogating the picture (but good luck debugging occasional nvidia/amd driver issues breaking the inference engine without using your sight).
I wonder how screen readers handle complex TikZ/PGF diagrams (converted to HTML or not, they aren’t very accessible). Multimodal LLMs?
I wonder how much Beckett was inspired by this while writing Rough for Theatre II:
B: [Hurriedly.] ‘… morbidly sensitive to the opinion of others at the time, I mean as often and for as long as they entered my awareness–’ What kind of Chinese is that?
A: [Nervously.] Keep going, keep going!
B: ‘… for as long as they entered my awareness, and that in either case, I mean whether such on the one hand as to give me pleasure or on the contrary on the other to cause me pain, and truth to tell–’ Shit! Where’s the verb?
A: What verb?
B: The main!
A: I give up.
B: Hold on till I find the verb and to hell with all this drivel in the middle. [Reading.] ‘… were I but … could I but …’ –Jesus!–‘… though it be … be it but…’–Christ!–ah! I have it–‘… I was unfortunately incapable …’ Done it!
A: How does it run now?
B: [Solemnly.] ‘… morbidly sensitive to the opinion of others at the time …’–drivel drivel drivel–‘… I was unfortunately incapable–’ [The lamp goes out. Long pause.]
Both work very well for the entire journey there and back. I use the first one I get my hands on (typically scale armour) and upgrade it to +8. But if it’s plate armour, you might have to start using it before gaining the necessary strength, so be ready to spend more time and food on a few levels in the prison area.
The Phoebus cartel strikes again!
I expected that recording would be the hard part.
I think some of the open-source ones should work if your phone is rooted?
I’ve heard that Google’s phone app can record calls (though it announces aloud that the call is being recorded when it starts). Of course, it wouldn’t work if Google thinks it shouldn’t in your region.
By the way, Bluetooth headphones can have both speakers and a microphone. And Android can’t tell a peripheral device what it should or shouldn’t do with audio streams. Sounds like a fun DIY project if you’re into it, or maybe somebody sells these already.
Haven’t heard of all-in-one solutions, but once you have a recording, whisper.cpp can do the transcription:
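A minimal sketch of what that could look like, wrapped in Python (the binary name, model file and audio format here are assumptions - they depend on your whisper.cpp version and build):

```python
import subprocess

# Assumptions: whisper.cpp is built locally, a ggml model was fetched with its
# models/download-ggml-model.sh script, and the recording is a 16 kHz WAV.
# The binary is called "whisper-cli" in recent builds ("main" in older ones).
subprocess.run(
    [
        "./whisper-cli",
        "-m", "models/ggml-base.en.bin",  # model file
        "-f", "call.wav",                 # input recording
        "-otxt",                          # also write the text to call.wav.txt
    ],
    check=True,
)
```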
The underlying Whisper models are MIT.
Then you can use any LLM inference engine, e.g. llama.cpp, and ask the model of your choice to summarise the transcript:
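Again only a sketch, with the binary name, flags and model file as assumptions that vary between llama.cpp versions (pick whatever instruction-tuned GGUF you like):

```python
import subprocess

# Assumptions: llama.cpp is built locally and some instruction-tuned GGUF
# model is on disk; the binary is "llama-cli" in recent builds ("main" before).
transcript = open("call.wav.txt").read()
prompt = "Summarise the following phone call transcript:\n\n" + transcript

subprocess.run(
    [
        "./llama-cli",
        "-m", "some-instruct-model.gguf",  # placeholder model file
        "-p", prompt,                      # prompt with the transcript inlined
        "-n", "512",                       # cap on generated tokens
    ],
    check=True,
)
```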
You can also write a small bash/python script to make the process a bit more automatic.
I enjoy xenharmonic music and modern academic music the most, but I’m not familiar with everything there, so any recommendations are welcome if you, reader, have something in mind.
Because we have tons of ground-level sensors, but not a lot in the upper layers of the atmosphere, I think?
Why is this important? Weather processes are usually modelled as a set of differential equations, and you need to know the boundary conditions in order to solve them and obtain the state of the entire atmosphere. The atmosphere has two boundaries: the lower one, which is the planet’s surface, and the upper one, which is where the atmosphere ends. And since we don’t seem to have a lot of data from the upper layers, that reduces the quality of all predictions.
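As a toy illustration of my own (nothing to do with real weather models): a 1D “column of atmosphere” where an error in only the upper boundary value shifts the steady-state solution throughout the interior.

```python
import numpy as np

# Toy 1D diffusion: relax u_t = u_xx to its steady state with fixed values at
# the surface (lower boundary) and at the top of the column (upper boundary).
# All numbers are made up purely for illustration.
def steady_profile(lower, upper, n=50, iters=20000):
    u = np.zeros(n)
    u[0], u[-1] = lower, upper
    for _ in range(iters):
        u[1:-1] = 0.5 * (u[:-2] + u[2:])   # Jacobi relaxation of u_xx = 0
    return u

well_observed = steady_profile(lower=15.0, upper=-55.0)    # "true" top value
poorly_observed = steady_profile(lower=15.0, upper=-45.0)  # top value off by 10

# The boundary error leaks into the middle of the column, not just the top.
print(abs(well_observed[25] - poorly_observed[25]))
```

The only point here is that a wrong value at one boundary changes the solution everywhere, not just near that boundary, which is why sparse upper-atmosphere observations hurt the whole forecast.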
Yeah, while tripping on acid.
Maybe some Borges too?