And no, I do not have the privilege of running a local model. I have heard of an AI service called Maple and tried it out, but it was limited to the point of being a deal breaker (a 25-messages-per-week cap). I would like to know about more services
deleted by creator
You should not trust any company with your privacy regardless of where they’re located. Proton is logging what you’re doing just as much as anyone else.
deleted by creator
The audits mean nothing when the Swiss government can compel Proton to do whatever they want, as they’ve done before.
The only difference between what you’re describing and what Proton did is that Proton were obligated to notify the user.
deleted by creator
“If something is free, you are the product”
If you’re not providing your own compute power or your money (and you’re not safe even if you pay most providers), expect the payment to come from your personal information or conversation contents being harvested. I’m not sure what magical service you’re expecting that doesn’t involve self-hosting. For truly private conversations, if you’re not hosting the model yourself, you’re just accessing someone else’s machine, and you’re at their mercy.
Paid commercial LLM providers already profit off of inputs more than outputs, so I wouldn’t trust any free cloud offering to be private. If you really want free and private, self-hosted is the only way to go
OpenRouter has some decently powerful free-to-use models, but I’m afraid as far as LLMs go ‘free’, ‘good’, and ‘private’ are going to be pretty mutually exclusive if you can’t run one locally.
Seems like a cool product. I will check it out
What’s “the privilege of running a local model”? If it’s on a computer, it’s not much of a privilege: gpt4all can be up and running in a minute. On mobile phones, RAM is more of an issue.
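To put rough numbers on the RAM question, here’s a back-of-envelope sketch. The parameter count, quantization level, and overhead figure are all illustrative assumptions, not measurements of any particular model:

```python
# Rough RAM estimate for running a quantized local model.
# Every figure below is an assumption chosen for illustration.
params_billions = 7      # assumed: a 7B-parameter model
bits_per_weight = 4      # assumed: 4-bit quantization (e.g. a Q4 GGUF)
overhead_gb = 1.5        # assumed: KV cache, runtime, context buffers

weights_gb = params_billions * bits_per_weight / 8   # 3.5 GB of weights
total_gb = weights_gb + overhead_gb                  # 5.0 GB total

print(f"~{total_gb:.1f} GB RAM needed")
```

By that rough math, a desktop or laptop with 8 GB of RAM clears the bar comfortably, while many mid-range phones won’t.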
I do not have a good enough internet connection to download local models (very unstable and slow). Downloading even an OS ISO file is not possible for me, so I get them via friends. I don’t think anyone would accept my request to download a 10GB+ file
http://duck.ai/ maybe?
I use Jan. It’s really good. A minimal install needs around 10GB of space, and you can use it offline. The ‘jan nano’ model generates text faster than I can read in most cases.
The Qwen models that you can run with llama.cpp (Jan’s main backend) are quite brilliant for their size.
There are a few AI models that are in this category, and I really liked Jan when I tested it out.
Cool, which ones do you recommend? Yeah, I found it really straightforward and pleasant to get started.
- LocalAI
- Ollama
deleted by creator
Bruv, it looks like it was made by a 10-year-old 😭
No.
The hardware needed to host even just a full-size GPT-3 costs tens of thousands of dollars, requires high-current or high-voltage circuits not usually available in residential homes, and will actually draw multiple thousands of watts of power.
If someone is giving away access to such an expensive, power-hungry industrial system for free, they’re either doing it to learn from your inputs (not private) or they don’t have a commercially viable system (not good).
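For a sense of scale, a rough power-cost estimate. The GPU count, per-GPU wattage, overhead, and electricity rate are all assumptions picked for illustration, not figures from any real deployment:

```python
# Back-of-envelope power draw and electricity cost for a multi-GPU
# inference server. Every figure below is an illustrative assumption.
gpus = 8                # assumed: one 8-GPU server
watts_per_gpu = 400     # assumed: datacenter GPU under load
overhead_watts = 800    # assumed: CPUs, RAM, fans, PSU losses
usd_per_kwh = 0.15      # assumed: electricity rate

total_watts = gpus * watts_per_gpu + overhead_watts   # 4000 W
kwh_per_day = total_watts / 1000 * 24                 # 96 kWh per day
usd_per_day = kwh_per_day * usd_per_kwh               # cost per day

print(f"{total_watts} W, {kwh_per_day:.0f} kWh/day, ${usd_per_day:.2f}/day")
```

Even under these conservative assumptions that’s several times what a typical residential circuit delivers, running around the clock. Nobody eats that bill out of charity.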
Venice.ai offers an uncensored model that is free and needs no account. Not sure if it’s truly private, though.
deleted by creator