The contingent, approved via a EuroHPC “Extreme Scale Access” grant, comprises 8.8 million GPU hours on H100 chips and has been available since May.

The new computing capacity makes it possible to train both small models in the range of 7 to 34 billion parameters and large models with up to 180 billion parameters from scratch.

The new EuroLingua models are based on a training dataset covering 45 European languages, dialects and codes, including the 24 official European languages. This gives significant weight to European languages and values; multilingual large language models are still rare. Training will start at the end of May 2024, and the first joint models are expected to be published in the coming months.

Project leader Dr. Nicolas Flores-Herr, team leader for Conversational AI at Fraunhofer IAIS, says: “The goal of our collaboration with AI Sweden is to train a family of large language models from scratch that will be published open source.”