• Corngood@lemmy.ml · 6 hours ago (edited)

    I keep seeing this sentiment, but in order to run the model on a high-end consumer GPU, doesn’t it have to be reduced to like 1-2% of the size of the official one?

    Edit: I just did a tiny bit of reading, and I guess model size is a lot more complicated than I thought. I don’t have a good sense of how much quality is lost when it’s shrunk down to run locally.
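
    For a rough sense of scale, here’s some back-of-envelope arithmetic. The parameter counts below are assumptions for illustration (the thread doesn’t name the model), but they show why a distilled ~7B variant lands around 1% of a ~670B original, and why only the small quantized versions fit on a 24 GB consumer card:

    ```python
    # Back-of-envelope VRAM needed just to hold the weights
    # (ignores KV cache, activations, and runtime overhead).
    def weight_vram_gib(params_billions: float, bits_per_param: int) -> float:
        return params_billions * 1e9 * bits_per_param / 8 / 2**30

    # Hypothetical sizes, not taken from this thread.
    for name, params in [("full model (~670B)", 670),
                         ("distilled 70B", 70),
                         ("distilled 7B", 7)]:
        for bits in (16, 4):
            print(f"{name} @ {bits}-bit: ~{weight_vram_gib(params, bits):,.0f} GiB")
    ```

    At 4 bits the 7B distill needs only ~3-4 GiB for weights, while the full model needs hundreds of GiB no matter how aggressively you quantize it.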

    • azron@lemmy.ml · 9 minutes ago

      You’re on the right track, still. All these people touting it as an open model likely haven’t even tried to run it locally themselves. The hosted version is not the same as what is easily runnable locally.

    • skuzz · 5 hours ago

      Just think of it this way: fewer digital neurons in smaller models means a smaller “brain”. It will be less accurate, more vague, and make more mistakes.
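
      For what “running it locally” usually looks like in practice, here’s a minimal sketch using the Hugging Face transformers + bitsandbytes stack to load a small model with 4-bit quantized weights; the model ID is a placeholder, not something named in this thread:

      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

      model_id = "some-org/some-distilled-7b"  # placeholder checkpoint

      # Quantize weights to 4-bit on load so a ~7B model fits on a consumer GPU.
      bnb_config = BitsAndBytesConfig(load_in_4bit=True,
                                      bnb_4bit_compute_dtype=torch.float16)

      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id, quantization_config=bnb_config, device_map="auto")

      inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
      print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
      ```

      Both the distillation (fewer parameters) and the quantization (fewer bits per parameter) trade quality for memory, which is exactly the less-accurate, more-vague trade-off described above.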