Wondering about services to test on either a 16 GB RAM "AI-capable" arm64 board or on a laptop with a modern RTX card. Only looking for open-source options, but curious to hear what people say. Cheers!

  • Oisteink@feddit.nl · 18 hours ago

    I have the same setup, but it's not very usable as my graphics card has 6 GB of RAM. I want one with 20 or 24, as the 6b models are a pain and the tiny ones don't give me much.

    Ollama was pretty easy to set up on Windows, and it's easy to download and test the models Ollama has available.
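
    For reference, a minimal Ollama session from the command line looks roughly like this (model names are just examples; `llama3.2` is one of the small models mentioned further down the thread):

    ```shell
    # Download a model from the Ollama library
    ollama pull llama3.2

    # Chat with it interactively (Ctrl+D or /bye to exit)
    ollama run llama3.2

    # See which models are downloaded locally
    ollama list
    ```

    The same commands work on Windows, Linux, and macOS once the Ollama service is running.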

    • kiol@lemmy.worldOP · 18 hours ago

      Sounds like you and I are in a similar place of testing.

      • Oisteink@feddit.nl · 17 hours ago

        Possibly. I've been running it since last summer, but like I say, the small models don't do much good for me. I have tried llama3.1, olmo2, deepseek-r1 in a few variants, qwen2, qwen2.5-coder, mistral, codellama, starcoder2, nemotron-mini, llama3.2, gemma2, and llava.

        I use Perplexity and Mistral as paid services, with much better quality. Open WebUI is great, though, but my hardware is lacking.

        Edit: saw that my mate is still using it a bit, so I'll update Open WebUI from 0.4 to 0.5.20 for him. He's a bit anxious about sending data to the cloud, so he doesn't mind the quality.
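
        For what it's worth, if it's the usual Docker install, the upgrade is roughly this (assuming the default container name, port mapping, and data volume from the Open WebUI docs; adjust to match the actual setup):

        ```shell
        # Fetch the newer image
        docker pull ghcr.io/open-webui/open-webui:main

        # Replace the running container; chat data lives in the volume, so it survives
        docker stop open-webui && docker rm open-webui
        docker run -d -p 3000:8080 -v open-webui:/app/backend/data \
          --name open-webui --restart always ghcr.io/open-webui/open-webui:main
        ```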

        • Oisteink@feddit.nl · 17 hours ago

          Scrap that: after upgrading it went bonkers and will always use one of my «knowledges» no matter what I try. The web search fails even with DDG as the engine. It's always seemed like the UI was made by unskilled labour, but this is just horrible. 2/10, not recommended.