• kakes@sh.itjust.works · 5 months ago

    Never really occurred to me before how huge a 10x savings would be in terms of parameters on consumer hardware.

    Like, obviously 10x is a lot, but with the way things are going, it wouldn’t surprise me to see that kind of leap in the next year or two tbh.

  • Fisch · 5 months ago

    That would actually be insane. Right now I still need my GPU and about 8-10 GB of VRAM to run a 7B model, so I don't know how that's supposed to work on a phone. Still, being able to run a model that's as good as a 70B model but with the speed and memory usage of a 7B model would be huge.

    • JackGreenEarth@lemm.ee · 5 months ago

      I only need ~4 GB of RAM/VRAM for a 7B model; my GPU only has 6 GB of VRAM anyway. 7B models are smaller than you think, or you have a very inefficient setup.

      • Fisch · 5 months ago

        That’s weird, maybe I actually am doing something wrong. Is it because I’m using GGUF models, maybe?

        • Mike1576218@lemmy.ml · 5 months ago

          Llama 2 GGUF with 2-bit quantisation only needs ~5 GB of VRAM; 8-bit needs >9 GB. Anything in between is possible. There are even 1.5-bit and 1-bit options (not GGUF, AFAIK). Generally, fewer bits means worse results, though.
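
          The bits-to-VRAM relation above is mostly arithmetic: the weights take roughly params × bits / 8 bytes, and the rest (KV cache, runtime overhead) explains why the real figures quoted run a few GB higher. A quick sketch of my own, not from the thread:

```python
def weight_gb(params_billions: float, bits: float) -> float:
    """Approximate size of the model weights alone, in GB: params * bits / 8.

    Real VRAM use is higher because of the KV cache and runtime overhead,
    which is why a 2-bit 7B model still wants ~5 GB in practice.
    """
    return params_billions * bits / 8

# Ballpark weight footprints for a 7B model at common quantisation levels
for bits in (2, 4, 6, 8, 16):
    print(f"7B @ {bits}-bit: ~{weight_gb(7, bits):.2f} GB of weights")
```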

          • Fisch · 5 months ago

            Yeah, I usually take the 6-bit quants; I didn’t know the difference was that big. That’s probably why, though. Unfortunately, almost all Llama 3 models are either 8B or 70B, so there isn’t really anything in between. I find Llama 3 models to be noticeably better than Llama 2 models, otherwise I would have tried bigger models with lower quants.

    • Smorty [she/her]@lemmy.blahaj.zone · 1 month ago (edited)

      I’m even more excited about running 8B models at the speed of 1B! Laughably fast, ok-quality generations in JSON format would be crazy useful.

      Also, yeah, that 7B on mobile was not the best example. Again, probably 1B to 3B is the sweet spot for mobile (I’m running Qwen2.5 0.5B on my phone and it works really well for simple JSON).

      EDIT: And imagine the context lengths we would be able to run on our GPUs at home! What a time to be alive.
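
      To make "simple JSON from a small model" concrete: small models sometimes wrap their output in markdown code fences, so it's worth stripping those before parsing. A minimal sketch of my own (`parse_json_reply` is a hypothetical helper, not from any library):

```python
import json

def parse_json_reply(text: str):
    """Strip the ```json fences some models wrap around output, then parse."""
    cleaned = text.strip()
    if cleaned.startswith("```"):
        cleaned = cleaned.removeprefix("```json").removeprefix("```")
        cleaned = cleaned.removesuffix("```")
    # json.loads raises ValueError on malformed output, so the caller can retry
    return json.loads(cleaned.strip())

print(parse_json_reply('```json\n{"device": "lamp", "state": "on"}\n```'))
```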

      • Fisch · 1 month ago

        Being able to run 7B-quality models on your phone would be wild. It would also make it possible to run those models on my server (which is just a mini PC), so I could connect it to my Home Assistant voice assistant, which would be really cool.

          • Fisch · 1 month ago

            That’s really interesting; I’m gonna try out how well it runs.

    • Chrobin · 5 months ago

      I have never worked in machine learning; what does the B stand for? Billion? Bytes?