What are your thoughts on #privacy and #itsecurity regarding the #LocalLLMs you use? They seem to be an alternative to ChatGPT, MS Copilot etc. which basically are creepy privacy black boxes. How can you be sure that local LLMs do not A) “phone home” or B) create a profile on you, C) that their analysis is restricted to the scope of your terminal? As far as I can see #ollama and #lmstudio do not provide privacy statements.
Since you ask, here are my thoughts https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence with numerous examples. To clarify your points:
- rely on open-source repositories where the code is auditable, hopefully audited, and try them offline
- see previous point
- LLMs don’t “analyze” anything, they just spit out human-looking text
To clarify the first point, as the other two follow from it: such projects would instantly lose credibility if they were to sneak in telemetry. Some FLOSS projects tried that in the past, and it always led to uproar, reverts, and often forks of the exact same codebase but without the telemetry.
“They just spit out human looking text” is so incredibly regressive and asinine.
Before doing that, I would very carefully describe the problem I want to solve and other possible solutions. There are (relatively uncommon) situations where LLMs make sense, but many people are buying the snake oil when they don’t need it. Wouldn’t want to be played for a fool.
Welcome to the tech field. Emerging tech is often something few people need yet. We make it more useful by having IT folks play around with it and test new applications with it.
As far as I can see #ollama and #lmstudio do not provide privacy statements.
That’s because they are not online services (which is a good thing!). Online services like ChatGPT and desktop applications like LM Studio are not in the same product category.
LM Studio is more akin to, say, VLC or Notepad++ (which also do not have privacy policies). These are desktop applications that have some limited network functions (like autoupdates).
LM Studio does offer details of which features require internet access and which are fully offline here: https://lmstudio.ai/docs/offline . In short: everything important is offline. It has built-in search features so you can find and download models from Hugging Face, and it also has an auto-update feature to find and download new versions. You could run it on an air-gapped system (or, more likely, set it up in a container/VM without network access) and simply load in model files manually if you prefer.
Personally I recommend LM Studio, because it’s super easy to set up and use but still quite powerful.
I run Ollama with Open WebUI at home.
A) the containers they run in by default can’t access the Internet, but they are provided access if we turn on web search or want to download new models. Ollama and Open WebUI are fairly popular products and I haven’t seen any evidence of nefarious activity so far.
B) they create a profile on me and my family members that use them, by design. We can add sensitive documents that the models can use.
C) they are restricted by what we type and the documents we provide.
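For reference, a setup like the one described above can be sketched in Docker Compose. This is a sketch, not the poster’s actual config: image tags, service names, and the port mapping are illustrative, and the key idea is the `internal: true` network, which Docker prevents from reaching the internet.

```yaml
# Sketch: Ollama + Open WebUI where Ollama has no internet access by default.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama-data:/root/.ollama
    networks: [llm-internal]      # internal-only: no outbound traffic

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "127.0.0.1:3000:8080"     # UI reachable from this host only
    networks:
      - llm-internal
      - default                    # gives the UI outbound access (web search);
                                   # remove this line to isolate it too

networks:
  llm-internal:
    internal: true                 # Docker blocks internet access on this network

volumes:
  ollama-data:
```

To download new models you would temporarily attach the `ollama` container to an external network (e.g. `docker network connect bridge <container>`), then detach it again, which matches the “provided access when we want to download new models” workflow above.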
How fast are response times, and how useful are the answers of these open-source models that you can run on a low-end GPU? I know this will be a “depends” answer, but maybe you can share more of your experience. I often use Claude's newest Sonnet model, and for my use cases it is a real efficiency boost when used right. Mid last year I briefly tested an open-source model from Meta, and it just wasn’t it. Or do we rather have to conclude that we’ll have to wait another year until smaller open-source models are more proficient?
To add to this, I run the same setup, but add Continue to VSCode. It makes an interface similar to Cursor that uses the Ollama instance.
One thing to be careful of, the Ollama port has no authentication (ridiculous, but it is what it is).
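Given that, it is worth making sure the API only listens on loopback. `OLLAMA_HOST` is Ollama's documented environment variable for the bind address; the drop-in path below assumes a standard Linux install running Ollama as a systemd service (adjust to your setup):

```ini
# /etc/systemd/system/ollama.service.d/override.conf
# Bind the unauthenticated Ollama API to loopback only,
# so it is not reachable from the rest of the LAN.
[Service]
Environment="OLLAMA_HOST=127.0.0.1:11434"
```

After `systemctl daemon-reload && systemctl restart ollama`, something like `ss -tlnp | grep 11434` should show it listening on `127.0.0.1` rather than `0.0.0.0`. If you do need remote access (e.g. for Continue on another machine), put a reverse proxy with authentication in front instead of exposing the port directly.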
You’ll need either a card with 12-16GB of VRAM for the models recommended for code generation and autocomplete, or you may be able to get away with an 8GB card if it’s a second card in the system. You can also run on CPU, but it’s very slow that way.
Thank you. As far as I can see these models are free. Doing data mining on users would be a tempting thing, right? Ollama does not specify this on their homepage: no paid plans, no ‘free for private use’, etc. How do they pay their staff and their electricity and hardware bills for model training? Do you know anything about the underlying business models?
Ollama and Open WebUI, as far as I know, are just open source software projects created to run pre-trained models, and have the same business model as many other open source projects on Github.
The models themselves come from Google, Meta and others. Have a look at all the models available on Hugging Face. The models themselves are just binary files. They’ve already been trained, and there are no ongoing costs to use them apart from the energy your computer uses to run them.
Thank you!
The english word “free” actually carries two meanings: “free as in free food” (gratis) and “free as in free speech” (libre).
Ollama is both gratis and libre.
And about the money stuff: Ollama used to be Facebook’s proprietary model, an answer to ChatGPT and Bing Chat/Copilot. Facebook lagged behind the other players and they just said “fuck it, we’re going open-source”. That’s how and why it’s free.
Due to it being open-source, even though models are by design binary blobs, the code that interacts with them and runs them is open-source. If they were connecting to the Internet and phoning home to Facebook, chances are this would’ve been found out by the community due to the open nature of the project.
Even if it weren’t open-source, since it runs locally you could at least block (or view) Internet access.
Basically, even though this is from Facebook, one of the big bads of privacy on the Internet, it’s all good in the end.
Ollama used to be Facebook’s proprietary model
Just to be clear, Llama is the Facebook model; Ollama is the software that lets you run Llama, along with many other models.
Ollama has internet access (otherwise how could it download models?). The only true privacy solution is to run it in a container with no internet access after downloading models, or to air-gap your computer.
Thank you for the correction!
The only true privacy solution…
Could you not just monitor/block outgoing traffic?
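Yes, and that is fairly easy to do from the host. A sketch with standard Linux tools; the private address ranges and the `firejail` approach are illustrative, and several of these need root:

```shell
# See which remote hosts the ollama process currently has connections to:
sudo ss -tnp | grep -i ollama

# Watch outbound traffic that is not loopback or LAN while you use the model
# (add your own local ranges, e.g. 10.0.0.0/8, as needed):
sudo tcpdump -i any -n 'not (dst net 127.0.0.0/8 or dst net 192.168.0.0/16)'

# Or block network access outright with a sandbox, e.g. firejail:
firejail --net=none ollama serve
```

Monitoring tells you what *is* happening; blocking (container, firejail, firewall rule) guarantees what *can* happen, which is why the comment above calls isolation the only true solution.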
Great, thanks for this background!
Did you do any research at all?
It’s FB's model. They made it free as a PR move. If you're actually worried about it phoning home, you could easily monitor the traffic leaving your PC and see if it’s collecting data.
It’s Facebook; they pay their staff with the astronomical amount of money they have. This is a simpler model, and their goal is to look like the good guy by making this one free and selling later ones like all the other AI companies are doing. Except FB has fuck-you money.
From my privacy trials with Ollama: any model downloaded does not know the date or time and cannot access the internet.
If you are still sceptical, you could download something like Alpaca on Flathub and, once you've acquired a model, remove its internet access through Flatseal.
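The Flatseal toggle corresponds to a one-line flatpak override. A sketch; the application ID below is an assumption about what Alpaca uses on Flathub, so verify it first:

```shell
# Find the exact application ID first:
flatpak list --app | grep -i alpaca

# Revoke network access for the app (same effect as the Flatseal toggle):
flatpak override --user --unshare=network com.jeffser.Alpaca

# To undo later:
flatpak override --user --share=network com.jeffser.Alpaca
```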
D) what is AMD support like, or are the Python fanboys still focusing on Nvidia exclusively?
I’m running gpt4all on AMD. Had to figure out which packages to install, which took a while, but since then it runs just fine.
Good to know. Is there a particular guide that you followed to get it running on AMD?
arch wiki and gpt4all github & issues
Ollama works with AMD.
Just curious. What do you have against Python?
It is slow. The syntax & community idioms suck. The package ecosystem is a giant mess: constant dependency breakage, many supply-chain attacks, and quality that is all over the place, with many packages shipping failing tests or builds that aren’t reproducible. A lot of that may be an effect of too many places saying this is the language you should learn first. When it comes to running Python software on my machine, it is always the buggiest, breaks the most when shipping new software, & uses more resources than other things.
When I used to program in it, I thought Python was so versatile that it was the 2nd best language at everything. I learned more languages & thought it was 3rd best… then 4th… then realized it isn’t good at anything. The only reason it has things going for it is all the effort put into the big C libraries powering the math, AI, etc. libraries.
that’s an oversimplification.
python is slow because it’s meant as glue; all the important parts of the ml libraries are written in other languages.
all the dependency stuff is due to running outside of a managed environment, which has been the norm for 10 years now. yes venv/bin/activate is clunky, but it solves the problem.
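For anyone following along, the clunky-but-working workflow mentioned above amounts to three commands (`.venv` is just a conventional directory name):

```shell
# Create an isolated environment inside the project directory:
python3 -m venv .venv

# Activate it; python and pip now resolve to the copies inside .venv:
. .venv/bin/activate

# Anything installed from here on stays local to this project;
# sys.prefix confirms the interpreter is the one inside .venv:
python -c 'import sys; print(sys.prefix)'
```

Tools like uv or poetry wrap the same mechanism with less ceremony, but the underlying model is identical: one environment per project, never the system interpreter.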
also, what supply-chain attacks?
lua is probably a better first language though.
Meant to be glue, but it is used in all sorts of places it probably shouldn’t be. The way libraries are handled & pinned leads to lots of breakage: there are a couple of applications where I have overlays to disable testing, since stuff gets merged into Nixpkgs with failing tests so frequently that it is better to just turn testing off & deal with failures at runtime.
The Ultralytics incident last month was massive: https://snyk.io/blog/ultralytics-ai-pwn-request-supply-chain-attack/. These attacks have been coming with regularity, even worse than npm.
I would at least agree Lua is a better place to start, at least for a dynamic scripting language. It is not a complicated language, & it even supports proper tail calls, which you can’t say about far too many languages.
python dependencies, like all scripting language dependencies, must not be installed via the system package manager. yes python’s package management is bad, but if package maintainers for nix are not following best practices then honestly that’s their problem, not the tooling’s. this is python packaging 101.
also, malicious PRs being accepted due to ml people being famously bad at actual software engineering is not a “supply chain attack”. and they are definitely not worse than npm, because the problem wasn’t in pypi. pypi is historically really good at preventing this sort of thing, but what can you do when the actual, well-formed release approved and pushed by the actual maintainers has a cryptominer in it?
have you looked at backyard ai?
Take a look at https://nano-gpt.com/ they have all models available and respect your privacy.
I wish that when people downvote comments they would explain why, so we can learn.
I didn’t downvote, but I bet I understand the people who did, as this comment does NOT address OP's concern. It just adds yet another alternative to verify, without explaining how to actually do so, i.e. it makes the problem worse rather than helping, IMHO.
“respect your privacy” is a vague buzzword phrase, and for a post about local LLMs linking a client that calls APIs which log user data is unhelpful
By “respect your privacy” I mean no personal data is collected. So as long as you are not putting personal details about yourself in the queries and you use a VPN, you can stay pretty anonymous while using the service.
Thanks.
I feel it would be constructive if people who downvoted the OP (I am not one of them) told them why. Then the OP can learn what this community expects, and when people stumble across downvoted comments, we can clearly see why and learn more from it.
didn’t even downvote. i suspect taking the time to explain something you disagree with in a nuanced manner is more effort than most people would care to make
No I wasn’t accusing you of downvoting. Just speaking generally here.
I guess you’re correct.
Most likely because I am promoting a service that I use and like. 🤷♂️
Are you being coerced into saying that, though?
Nice try Mr Altman 🤔
I just downvoted you to be funny
I accept that as you gave your reasoning.