Mistral NeMo 12B is the new AI model presented this week by Nvidia and Mistral AI. “We are fortunate to collaborate with the NVIDIA team, leveraging their top-tier hardware and software,” said Guillaume Lample, cofounder and chief scientist of Mistral AI. “Together, we have developed a model with unprecedented accuracy, flexibility, high-efficiency and enterprise-grade support and security thanks to NVIDIA AI Enterprise deployment.”

The promise of the new AI model is significant. Whereas previous LLMs were tied to data centers, Mistral NeMo 12B is meant to run on workstations, and to do so without sacrificing performance. At least, that is the promise.

  • Fisch · 4 months ago

    This is only interesting to me if it’s better than Llama 3 8B.

    • General_Effort@lemmy.world · 4 months ago (edited)

      Llama 3.1 is out today. The new 8B should be better as it’s distilled from the 405B. Also: 128k context.

      • Fisch · 4 months ago

        Thanks for telling me, that’s good to know. Really cool that they also increased context size, especially to such a huge number.

      • Fisch · 4 months ago

        I’m talking about quality here, not speed.