

This is unironically a technique for catching LLM errors and also for speeding up generation.
For example, speculative decoding and mixture-of-experts architectures use exactly this kind of setup.
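If anyone wants to see what that actually looks like, here's a rough greedy sketch of speculative decoding in Python, assuming two Hugging Face causal LMs. The "gpt2"/"gpt2-large" pair and the k=4 draft length are just placeholders I picked, not taken from any specific implementation:

```python
# Greedy speculative decoding, minimal sketch: a small "draft" model proposes
# tokens cheaply, the big "target" model verifies them in one forward pass.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
draft = AutoModelForCausalLM.from_pretrained("gpt2")          # fast, sloppy
target = AutoModelForCausalLM.from_pretrained("gpt2-large")   # slow, accurate

@torch.no_grad()
def speculative_step(ids, k=4):
    # 1) Draft model cheaply proposes k tokens, one at a time.
    proposal = ids
    for _ in range(k):
        next_id = draft(proposal).logits[0, -1].argmax()
        proposal = torch.cat([proposal, next_id.view(1, 1)], dim=1)

    # 2) Target model checks all k proposals in ONE forward pass:
    #    for every position it tells us what it would have generated itself.
    verify = target(proposal).logits[0].argmax(dim=-1)

    # 3) Accept the longest prefix where draft and target agree, and at the
    #    first mismatch take the target's own token instead (error caught).
    n_prompt = ids.shape[1]
    accepted = ids
    for i in range(k):
        target_choice = verify[n_prompt + i - 1].view(1, 1)
        accepted = torch.cat([accepted, target_choice], dim=1)
        if target_choice.item() != proposal[0, n_prompt + i].item():
            break   # draft was wrong here; stop accepting its tokens
    return accepted

ids = tok("The capital of France is", return_tensors="pt").input_ids
for _ in range(8):
    ids = speculative_step(ids)
print(tok.decode(ids[0]))
```

The output matches what the big model alone would produce greedily; you just pay for one big forward pass per batch of drafted tokens instead of one per token.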
Fuck the kids, their piece of shit parents can pull their fucking weight if they have a problem
Someone who is very tall can give the others a boost when they're trying to jump a wall to rob a place lmao
Obscure slang I guess
A question for big car drivers
How the fuck do you drive?
I have a slightly longer and wider than usual SEDAN and I struggle in the city. I can’t imagine steering a massive hunk of shit
Brings my short ass comfort that we're better at one fucking thing than the bandit ladders /s
What does suckless have to do with that?
They need </ToS breaking thoughts/>
Oi bruv don’t shoot strays at us
Animal fat in general is bad news for clogged veins; that's also why fast food and the like moved to plant-based oils
Computer Science
Looks inside
Probabilities
Cat.png
Turns out our universe is comically probabilistic
(Also I have Markovian math this semester. I think medieval torture is a more merciful fate than this shit)
This is an Avali, some fictional smol space raptor/avian species
I don’t understand why Gemini is such a disaster. DeepMind Gemma works better and that’s a 27B model. It’s like there are two separate companies inside Google fucking off and doing their own thing (which is probably true)
Not making these famous logical errors
For example, how many Rs are in Strawberry? Or shit like that
(Although that one is a bad example, because token-based models will fundamentally make such mistakes. There's a newer technique that lets LLMs work on byte-level information, which fixes it, however)
The most recent Qwen model supposedly handles cases like that really well, but that one I haven't tested myself; I'm going off what some dude on Reddit measured
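You can see the tokenizer problem for yourself in a couple of lines. Rough sketch using the GPT-2 BPE tokenizer from Hugging Face; the exact chunks depend on the tokenizer, but the point is the model receives subword chunks, not letters:

```python
# Why "how many Rs in strawberry" trips up token-based models: the model
# never sees letters, only subword chunks produced by the tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
ids = tok.encode("Strawberry")
chunks = [tok.decode([i]) for i in ids]
print(chunks)   # something like ['St', 'raw', 'berry'] -- no individual letters to count

# A byte-level model works on the raw bytes instead, so the letters actually
# exist for it to count:
print(list("Strawberry".encode("utf-8")))
print("Strawberry".lower().count("r"))   # 3, trivially, once you're at char/byte level
```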
This is the most “insufferable redditor” stereotype shit possible, and to think we’re not even on Reddit
Small-scale models, like Mistral Small or the Qwen series, are achieving SOTA performance with fewer than 50 billion parameters. QwQ-32B could already rival shitGPT with 32 billion parameters, and the new Qwen3 and Gemma (from Google) are almost black magic.
Gemma 4B is more comprehensible than GPT-4o; the performance race is fucking insane.
ClosedAI is 90% hype. Their models are benchmark princesses, but they need huuuuuuge active parameter counts to actually hit those numbers.
Everything said in this post is independently verifiable by taking 5 minutes to search shit up, and yet you couldn’t even bother to do that.
You can experiment on your own GPU by running the tests with a variety of models from different generations (Llama 2-class 7B, Llama 3-class 7B, Gemma, Granite, Qwen, etc…)
Even the lowest-end desktop hardware can run at least 4B models. The only real difficulty is scripting the test harness, but the papers are usually helpful with describing their test methodology; a rough sketch of the idea is below.
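Something like this is all the test harness really needs to be at first. The model names and prompts are just examples I'm tossing out; anything that fits your VRAM works (quantized GGUF via llama.cpp is fine too, this just uses transformers):

```python
# Loop a few small local models over the same prompts and compare the answers.
from transformers import pipeline

MODELS = [
    "google/gemma-2-2b-it",        # example small instruct models, ~2-4B
    "Qwen/Qwen2.5-3B-Instruct",
]
PROMPTS = [
    "How many times does the letter r appear in the word strawberry?",
    "If Alice is taller than Bob and Bob is taller than Carol, who is shortest?",
]

for name in MODELS:
    gen = pipeline("text-generation", model=name, device_map="auto")
    print(f"=== {name} ===")
    for p in PROMPTS:
        out = gen(p, max_new_tokens=64, do_sample=False)[0]["generated_text"]
        print(f"Q: {p}\nA: {out[len(p):].strip()}\n")
```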
Behold, my Opiniondiscardinator 9001!