I’d like to self-host a large language model (LLM).

I don’t mind if I need a GPU and all that; at least it will be running on my own hardware, and it will probably even be cheaper than the $20 per month everyone is charging.

What LLMs are you self hosting? And what are you using to do it?

  • dukatos@lemm.ee · 18 days ago

    I run ollama:rocm and the deepseek-coder model on a Radeon RX 6700 XT. I only had to set the GPU via environment variables, because it is not officially supported by ROCm, but it works.
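
    A minimal sketch of that kind of setup, assuming the ollama/ollama:rocm Docker image and the usual HSA_OVERRIDE_GFX_VERSION workaround (the 6700 XT is gfx1031, which is not on ROCm's official support list, so the override makes it report itself as gfx1030; the exact value depends on your card):

        # Run the ROCm build of Ollama, passing the GPU devices through
        # and overriding the reported GFX version for the 6700 XT.
        docker run -d --name ollama \
          --device /dev/kfd --device /dev/dri \
          -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
          -v ollama:/root/.ollama \
          -p 11434:11434 \
          ollama/ollama:rocm

        # Pull the model and start an interactive session.
        docker exec -it ollama ollama run deepseek-coder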