• Smorty [she/her]@lemmy.blahaj.zone
    1 month ago

    I’m even more excited for running 8B models at the speed of 1B! Laughably fast ok-quality generations in JSON format would be crazy useful.

    Also yeah, that 7B on mobile was not the best example. Again, 1B to 3B is probably the sweet spot for mobile (I’m running Qwen2.5 0.5B on my phone and it works really well for simple JSON)
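
    For what it's worth, here is a minimal sketch of how asking a small local model for JSON output can look, assuming the model is served through Ollama (whose `/api/generate` endpoint accepts a `"format": "json"` field that constrains decoding to valid JSON); the endpoint URL and the `qwen2.5:0.5b` model tag are assumptions, not something from this thread:

    ```python
    import json

    def build_json_request(prompt: str, model: str = "qwen2.5:0.5b") -> dict:
        """Build the request body for a JSON-constrained generation (Ollama API)."""
        return {
            "model": model,
            "prompt": prompt,
            "format": "json",   # ask the server to emit syntactically valid JSON
            "stream": False,
        }

    payload = build_json_request(
        'Reply as {"city": ..., "country": ...}: where is the Eiffel Tower?'
    )
    print(json.dumps(payload, indent=2))

    # To actually send it (requires a running Ollama instance):
    #   import urllib.request
    #   req = urllib.request.Request(
    #       "http://localhost:11434/api/generate",
    #       data=json.dumps(payload).encode(),
    #       headers={"Content-Type": "application/json"},
    #   )
    #   body = json.loads(urllib.request.urlopen(req).read())
    #   result = json.loads(body["response"])  # parseable JSON by construction
    ```

    The `format` constraint is what makes tiny models usable here: even a 0.5B model that rambles in free text will stay machine-parseable when decoding is restricted to JSON.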

    EDIT: And imagine the context lengths we would be able to run on our GPUs at home! What a time to be alive.

    • Fisch
      1 month ago

      Being able to run 7B quality models on your phone would be wild. It would also make it possible to run those models on my server (which is just a mini pc), so I could connect it to my Home Assistant voice assistant, which would be really cool.

        • Fisch
          1 month ago

          That’s really interesting, gonna try out how well it runs