The original post: /r/localllama by /u/NewTestAccount2 on 2025-01-06 18:51:19.
Hi everyone,
I used to run various tasks on my PC overnight - sometimes some brute-force calculations, sometimes downloads, sometimes games. It was always exciting to wake up and see the results. A little similar to a vacuum robot or a dishwasher: set it up, let it do its job, and return to a completed task. Sure, I could do these chores manually, and perhaps even faster and more accurately, but setting them up is much quicker, and the results are good enough.
When I upgraded my PC, I decided to make it LLM-capable. I built a machine with an RTX 3090 and started running local models. There were use-cases I had only read about, like using LLMs for journaling, introspection, and sharing my thoughts in general (like a quasi-therapy). I wouldn't want to share my thoughts with a third-party provider, so that was the most exciting use-case to me. And man, those things are incredible. I still cannot believe how such advanced technology can run on a local machine.
Anyway, currently, I use local models for basic tasks like chatting, brainstorming, introspection, and writing corrections. Since I run them locally, electricity is basically my only cost, and it isn't too expensive. In this sense, local models are similar to the aforementioned vacuum robot or dishwasher. And I'm looking to realize more of the "run it and see the results" potential of local LLMs.
So, I wanted to ask you all about existing use-cases or tools that make local LLMs work continuously, run in the background, or operate on a schedule. Here are some ideas I’ve been considering:
For continuous (overnight) runs:
- Metadata and documentation - continuously run a bot against some archive to propose metadata changes, fix mistakes, and create documentation.
- Brainstorming and research - start with an initial idea or prompt and:
  - generate insights from different "experts" (system prompts) and synthesize them once in a while
  - conduct web searches
  - there is already a tool for performing research in a continuous manner (Automated-AI-Web-Researcher-Ollama)
- Text optimization - continuously refine text for structure, flow, vocabulary, grammar, etc.
- Image generation - vaguely describe an initial prompt and continuously generate detailed prompt variations and pictures based on them until explicitly stopped.
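To make the brainstorming idea concrete, here is a minimal sketch of a multi-"expert" loop. Everything in it is an assumption on my part: the expert names and system prompts are made up, and the actual model call is injected as a `generate(system_prompt, user_prompt)` function, which could be a thin wrapper around a local backend such as Ollama. The loop itself just rotates through the personas and synthesizes once per round:

```python
from typing import Callable, Dict, List

# Hypothetical "expert" system prompts -- names and wording are illustrative.
EXPERTS: Dict[str, str] = {
    "skeptic": "You are a critical reviewer. Point out weaknesses in the idea.",
    "optimist": "You are an enthusiastic strategist. Expand on the idea's potential.",
    "pragmatist": "You are an engineer. Suggest concrete next steps.",
}

def brainstorm(
    idea: str,
    generate: Callable[[str, str], str],
    rounds: int = 3,
) -> List[str]:
    """Ask each expert about the current idea, then synthesize once per round.

    `generate(system_prompt, user_prompt)` is whatever talks to your local
    model; it is injected here so the loop itself stays model-agnostic.
    """
    notes: List[str] = []
    current = idea
    for _ in range(rounds):
        # One insight per expert persona, tagged with the persona's name.
        insights = [
            f"[{name}] " + generate(prompt, current)
            for name, prompt in EXPERTS.items()
        ]
        notes.extend(insights)
        # Fold this round's insights back into the working idea.
        current = generate(
            "You merge multiple perspectives into one refined idea.",
            current + "\n\nInsights:\n" + "\n".join(insights),
        )
    return notes + [current]
```

For an overnight run you would wrap this in a loop with a large `rounds` value (or `while True` plus periodic checkpointing to disk) and read the accumulated notes in the morning.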
For background runs:
- Translator - a context-aware translation service.
- Phone server - access my local setup from a phone.
For periodic runs (every morning, every hour, etc.):
- Inbox synthesizer - summarize and prioritize emails.
- Daily feed - synthesize content from various sources.
- Task organization - periodically organize captured tasks, breaking down large tasks, consolidating small or redundant ones, and extracting the most urgent or important.
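For the periodic ideas, the simplest robust option is probably cron or a systemd timer (e.g. `0 7 * * * python summarize_inbox.py`). But if you want everything in one Python process, here is a naive sketch of a "every morning at 7:00" scheduler; `job` is a placeholder for whatever the task is (inbox summarizer, daily feed, task organizer), not anything from a real library:

```python
import datetime as dt
import time

def seconds_until(hour: int, minute: int, now: dt.datetime) -> float:
    """Seconds from `now` until the next occurrence of hour:minute."""
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        # Already past today's slot, so aim for tomorrow.
        target += dt.timedelta(days=1)
    return (target - now).total_seconds()

def run_every_morning(job, hour: int = 7, minute: int = 0) -> None:
    """Naive scheduler loop: sleep until the next slot, run the job, repeat."""
    while True:
        time.sleep(seconds_until(hour, minute, dt.datetime.now()))
        job()  # e.g. summarize the inbox with a local model
```

This loses pending runs if the machine reboots, which is why cron/systemd timers are the sturdier choice for anything you actually rely on.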
There are some tasks that could be automated, but it wouldn't make sense to do so - like creating notes on a topic you're trying to learn. For me, the process of creating notes is as important to learning as keeping and reviewing them. So, automating this process would reduce the learning potential. However, besides such use-cases, there are likely hundreds of other tasks that could be automated, increasing productivity and saving a lot of time.
TLDR: I’m looking for some ways to use local LLMs for continuous (infinitely iterative) tasks, in the background, or on a schedule. Any ideas, tools, or use-cases you’ve found particularly useful?
Thanks in advance for any input!