petsoi to Linux@lemmy.ml · 2 days ago
Running Generative AI Models Locally with Ollama and Open WebUI - Fedora Magazine (fedoramagazine.org)
MIXEDUNIVERS · 2 days ago
I tried to use it on Fedora, but I have a Radeon 6700 XT and it only ran on the CPU. I'll wait until official ROCm support reaches my older model.
lelgenio@lemmy.ml · 2 days ago
Ollama runs on the 6700 XT, but you need to add an environment variable for it to work… I just don't remember what it was, and I'm away from my computer right now.
passepartout@feddit.org · 2 days ago
I have the same setup; you have to add the line
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
for that specific GPU to the ollama.service file.
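For anyone trying this, here is a minimal sketch of what that change might look like, assuming the standard systemd-managed Ollama install. Rather than editing ollama.service directly, the usual approach is a drop-in override created with `sudo systemctl edit ollama`; the override sets the variable that makes ROCm treat the 6700 XT (gfx1031) as the officially supported gfx1030:

```ini
# /etc/systemd/system/ollama.service.d/override.conf
# (created via: sudo systemctl edit ollama)
[Service]
# Report the GPU as gfx1030 so ROCm accepts the RX 6700 XT (gfx1031)
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
```

Then reload and restart so the variable takes effect: `sudo systemctl daemon-reload && sudo systemctl restart ollama`. You should be able to verify GPU use with `ollama ps` or by watching the service logs (`journalctl -u ollama`) while a model is loaded.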