From the abstract: “Recent research, such as BitNet, is paving the way for a new era of 1-bit Large Language Models (LLMs). In this work, we introduce a 1-bit LLM variant, namely BitNet b1.58, in which every single parameter (or weight) of the LLM is ternary {-1, 0, 1}.”
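For anyone wondering what "every weight is ternary" means in practice, here's a minimal sketch (my own code, not from the paper) of the absmean-style quantization the paper describes: scale each weight matrix by its mean absolute value, then round and clip to {-1, 0, 1}.

```python
import numpy as np

def ternarize(W: np.ndarray, eps: float = 1e-6):
    """Map a float weight matrix to {-1, 0, 1} plus a per-tensor scale.

    Rough absmean scheme: divide by the mean absolute value,
    then round and clip into the ternary set.
    """
    gamma = np.abs(W).mean()                                  # per-tensor scale
    W_t = np.clip(np.round(W / (gamma + eps)), -1, 1).astype(np.int8)
    return W_t, gamma                                         # dequantize as W_t * gamma

W = np.random.randn(4, 4).astype(np.float32)
W_ternary, scale = ternarize(W)
print(W_ternary)   # entries are only -1, 0, or 1
```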

This would allow larger models with limited resources. However, this isn’t a quantization method you can apply to existing models after the fact; models need to be trained from scratch this way, and so far they’ve only gone up to 3B parameters. The paper isn’t very long, and it seems they didn’t release the models. It builds on the BitNet paper from October 2023.

“the matrix multiplication of BitNet only involves integer addition, which saves orders of energy cost for LLMs.” (no floating point matrix multiplication necessary)
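Here’s a toy illustration of why that works (my own sketch, not the paper’s kernel): with ternary weights, every output element is just a signed sum of activations, so the weight “multiplications” reduce to additions, subtractions, and skips. In the real kernel the activations are 8-bit integers, so the accumulation is integer addition; floats here just keep the toy readable.

```python
import numpy as np

def ternary_matvec(W_t: np.ndarray, x: np.ndarray, gamma: float) -> np.ndarray:
    """Compute (W_t @ x) * gamma using only additions and subtractions.

    W_t contains only -1, 0, 1, so no weight multiplications are needed:
    each output element is a signed sum of input activations.
    """
    out = np.zeros(W_t.shape[0], dtype=np.float64)
    for i in range(W_t.shape[0]):
        acc = 0.0
        for j in range(W_t.shape[1]):
            w = W_t[i, j]
            if w == 1:
                acc += x[j]      # add
            elif w == -1:
                acc -= x[j]      # subtract
            # w == 0: skip the activation entirely
        out[i] = acc
    return out * gamma           # single per-tensor rescale at the end

W_t = np.array([[1, 0, -1], [0, 1, 1]], dtype=np.int8)
x = np.array([0.5, -2.0, 3.0])
print(ternary_matvec(W_t, x, gamma=0.7))   # matches (W_t @ x) * 0.7
```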

“1-bit LLMs have a much lower memory footprint from both a capacity and bandwidth standpoint”
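Rough back-of-envelope for the memory claim (my own numbers, weights only, ignoring packing overhead and the KV cache):

```python
# Weight memory for a 3B-parameter model at different precisions.
params = 3e9

fp16_gb    = params * 16   / 8 / 1e9   # ~6.0 GB
ternary_gb = params * 1.58 / 8 / 1e9   # ~0.59 GB (log2(3) ≈ 1.58 bits/weight)

print(f"FP16:     {fp16_gb:.2f} GB")
print(f"1.58-bit: {ternary_gb:.2f} GB")
```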

Edit: An additional FAQ has been published.

  • BetaDoggo_@lemmy.world · 10 months ago

    This is big if true, but we’ll have to see how well it holds up at larger scales.

    The size of the paper is a bit worrying, but the authors are all very reputable. Several were also contributors on the RetNet and Kosmos-2/2.5 papers.

    • rufusOP · 10 months ago

      As far as I understand, their contribution is to apply what has proven to work well in the Llama architecture to what BitNet does, and to add a ‘0’. Maybe you just don’t need that much text to explain it, just the statistics.

      They claim it scales like an FP16 Llama model does… So unless their judgement/maths is wrong, it should hold up. I can’t comment on that. But I’d like it if it were true…