Yeah. The twist is, like this picture, neither of us really knew what that entailed until it was too late. :')
Plot twist for me:
The mother also has ADD.
Exoplanets? Named asteroids? Human satellites?
knowyourmeme, the hero we need, but don’t deserve.
Uh, depends on your hardware and model, but probably TabbyAPI?
Text-generation-webui is cool, but also kinda crufty. Honestly, a lot of it is holdovers from what’s now ancient history in LLM land, and (for me) it has major performance issues at longer context.
I have an old Lenovo laptop with an NVIDIA graphics card.
@Maroon@lemmy.world The biggest question I have for you is what graphics card, but generally speaking this is… less than ideal.
To answer your question, Open Web UI is the new hotness: https://github.com/open-webui/open-webui
I personally use exui for a lot of my LLM work, but that’s because I’m an uber minimalist.
And on your setup, I would host the best model you can on kobold.cpp or the built-in llama.cpp server (just not Ollama) and use Open Web UI as your front end. You can also use llama.cpp to host an embeddings model for RAG, if you wish.
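If it helps, here’s roughly what talking to that stack looks like from code. A minimal sketch, not gospel: the ports, the second embeddings instance, and the prompt are all assumptions about a typical llama.cpp setup (llama-server exposes OpenAI-compatible endpoints), so adjust for your own config.

```python
# Rough sketch: querying a local llama.cpp server via its OpenAI-compatible
# endpoints. Assumes llama-server is already running on localhost:8080;
# ports and prompts are placeholders, not recommendations.
import requests

# Chat/summarization against the main model.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Summarize this document: ..."},
        ],
        "max_tokens": 512,
    },
)
print(resp.json()["choices"][0]["message"]["content"])

# Embeddings for RAG, assuming a *second* llama-server instance started
# with an embeddings model and the --embedding flag, here on port 8081.
emb = requests.post(
    "http://localhost:8081/v1/embeddings",
    json={"input": "a chunk of your document"},
).json()["data"][0]["embedding"]
```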
This is a general ranking of the “best” models for document answering and summarization: https://huggingface.co/spaces/vectara/Hallucination-evaluation-leaderboard
…But generally, I prefer to not mess with RAG retrieval and just slap the context I want into the LLM myself, and for this, the performance of your machine is kind of critical (depending on just how much “context” you want it to cover). I know this is !selfhosted, but once you get your setup dialed in, you may consider making calls to an API like Groq, Cerebras or whatever, or even renting a Runpod GPU instance if that’s in your time/money budget.
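To illustrate the “just paste the context yourself” approach against a hosted OpenAI-compatible API: a hedged sketch only. Groq’s base URL is real, but the model name and file path are placeholders I made up, so check their current catalog (or swap in any other provider).

```python
# Sketch of skipping RAG and stuffing the whole document into the prompt,
# pointed at Groq's OpenAI-compatible endpoint. Model name is an assumption.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key="YOUR_GROQ_KEY",  # placeholder
)

document = open("my_doc.txt").read()  # the context you'd otherwise retrieve

reply = client.chat.completions.create(
    model="llama-3.1-70b-versatile",  # assumption, pick whatever they host
    messages=[
        {"role": "system", "content": "Answer using only the provided document."},
        {"role": "user", "content": f"{document}\n\nQuestion: what are the key points?"},
    ],
)
print(reply.choices[0].message.content)
```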
Why not both?
Sam is actually a liar though.
Everyone in open source AI has been calling him a snake ever since Llama 1 came out. If you want a more authoritative source, look to the CEO of Hugging Face, old-school AI researchers, and so on.
A letter seen by Reuters, sent by Vivaldi, Waterfox, and Wavebox, and supported by a group of web developers, also supports Opera’s move to take the EC to court over its decision to exclude Microsoft Edge from being subject to the Digital Markets Act (DMA).
OK…
Shouldn’t they be fighting Chrome, more than anything? Surely there’s a legal avenue for that, though I guess there’s a risk of getting deprioritized by Google and basically disappearing.
TBH this is a case where hiding away makes sense. Russia absolutely wants him dead, and his value in rallying Western support alone is pivotal. Even viewed coldly, he’s a strategic asset.
I wouldn’t imagine he lives a particularly luxurious life, either, even if it’s a very expensive one.
I somehow didn’t get a notification for its post, but that’s a terrible idea lol.
We already have AI Horde, and it has nothing to do with blockchain. We also have APIs and GPU services… which have nothing to do with blockchain either, and have no need for it.
Someone apparently already tried the scheme you are describing, and absolutely no one in the wider AI community uses it.
This is true for sooo many games, especially CPU heavy simulation games.
As long as devs officially support and test the Proton version, I don’t have a problem with it. Sure, it seems convoluted… but it’s also a hundred times simpler for the dev, and I don’t think the Linux community should shame them for it.
We’re banned from planting fruit-bearing trees in our Florida neighborhood due to pest problems.
This sounds outrageous from outside the state… turns out, it’s not. You have no idea. Planting those on Main Street would be a catastrophe.
What I’m saying is this sounds nice in theory, but there are all sorts of knock-on effects that have nothing to do with humans, and you’d have to at the very least tailor it to the local environment and climate.
Maybe it’s better somewhere like Boulder or San Francisco?
A Twitter screenshot of this was linked in Slack that evening.
The modern internet in a nutshell, lol.
The movement to X isn’t universal, it’s more of a “last resort” where a few communities flounder.
Discord is the dominant destination though.
Where’s a Johnny Cab when you need it?
Or a Delamain.
I would only use the open source models anyway, but it just seems rather silly from what I can tell.
I feel like the last few months have been an inflection point, at least for me. Qwen 2.5, and the new Command-R, really make a 24GB GPU feel “dumb, but smart,” useful enough so I pretty much always keep Qwen 32B loaded on the desktop for its sheer utility.
It’s still in the realm of enthusiast hardware (aka a used 3090), but hopefully that’s about to be shaken up by BitNet and some stuff from AMD/Intel.
Altman is literally a vampire though, and thankfully I think he’s going to burn OpenAI to the ground.
Discord is even worse, as you need to find an invite to a specific Discord, and sometimes go through a lengthy sign up process for each Discord.
Some won’t let you sign up without a phone #.
Pretty much everything has an API :P
Ollama is OK because it’s easy and automated, but you can get higher performance, better VRAM efficiency, and better samplers from either kobold.cpp or TabbyAPI, with the catch that more manual configuration is required. But this is good, as it “forces” you to pick and test an optimal config for your system.
I’d recommend kobold.cpp for very short context (like 6K or less) or if you need to partially offload the model to CPU because your GPU is relatively low VRAM. Use a good IQ quantization (like IQ4_M, for instance).
Otherwise use TabbyAPI with an exl2 quantization, as it’s generally faster (but GPU only) and much better at long context through its great k/v cache quantization.
They all expose OpenAI-compatible APIs, though kobold.cpp also has its own web UI.
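Which means one client works for every backend; just swap the base URL. A quick sketch, with the caveat that the ports are what I believe each project defaults to, so treat them as assumptions and check your own config:

```python
# One OpenAI client, three local backends. Ports are assumed defaults
# (llama.cpp: 8080, kobold.cpp: 5001, TabbyAPI: 5000); verify yours.
from openai import OpenAI

backends = {
    "llama.cpp": "http://localhost:8080/v1",
    "kobold.cpp": "http://localhost:5001/v1",
    "tabbyapi": "http://localhost:5000/v1",
}

# kobold.cpp/llama.cpp generally ignore the key; I believe TabbyAPI
# expects the one it generates at startup, so substitute that if needed.
client = OpenAI(base_url=backends["kobold.cpp"], api_key="none")

resp = client.chat.completions.create(
    model="whatever-you-loaded",  # most local servers ignore or echo this
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```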