Risto Luukkonen’s journey into the world of artificial intelligence began in 2017, when he started tinkering with open-source vision models and developed a keen interest in AI. This initial curiosity set him on a path that would lead him to conduct cutting-edge generative AI and LLM research, and eventually to his current role as an AI expert and engineer at Silo AI.
From curiosity to expertise: Risto’s journey in AI
In 2018, Risto decided to formalize his interest by pursuing a degree in computer science at the University of Turku (UTU). Shortly after earning his degree, Risto joined TurkuNLP, UTU’s research group focusing on various aspects of natural language processing, language technology and digital linguistics. There, he had the opportunity to work on a pioneering project: creating open-source Finnish GPT-style models as part of the LUMI-pilot project.
At the start of the project, there were no generative Finnish GPT-style models available, making the endeavor particularly unique and exciting. The chance to work with one of the world's most powerful supercomputers, LUMI, was a further draw that influenced Risto to join TurkuNLP.
Later, when Risto started to look for opportunities to gain industry experience in AI, Silo AI stood out thanks to its strong reputation and its commitment to openness in research and large language models (LLMs), values that resonated with him. Silo AI was also already familiar to Risto through the long-standing collaboration between Silo AI's and TurkuNLP's researchers, covering various initiatives related to language models for low-resource languages.
An innocent message to a Silo AI employee led to a part-time contract, and thus his journey with Silo AI began. Today, Risto works as an AI expert in Silo AI's base model team, researching and training GPT-like open-source LLMs.
Cutting-edge research in LLMs for small languages
Staying true to his passion for research, Risto's role at Silo AI is strongly research-focused. He also still works part-time for TurkuNLP, which allows him to bridge the gap between research and practical industry use cases and contribute to both sides of the spectrum.
FinGPT - the largest Finnish language model of its time
Before joining Silo AI, Risto was a core member and lead author of the FinGPT research paper, exploring large generative models for a small language, namely Finnish.
The research around FinGPT was a collaboration between TurkuNLP, Hugging Face, the National Library of Finland and AMD, with co-authors who are now with Cohere, Contextual AI and Mistral AI. It was one of the first LUMI-pilot projects to utilize the new AMD-powered supercomputer, and it demonstrated that LUMI could handle LLM pretraining scaled to over 1,500 logical GPUs (one AMD MI250X GPU consists of two graphics compute dies, each seen as an independent GPU at the software level).
The new hardware required substantial work on code compatibility and performance optimization, and on top of that the team was training a monolingual model with a relatively small corpus. Language modeling generally requires a lot of data, and the team was working far below the dataset sizes that research recommendations prescribe for models of this size.
As the recommended model size usually scales with dataset size, alternative methods were explored. One of these was data repetition, which is not typical for GPT-style models (most models are trained on English, are not data-constrained, and therefore need no data repetition). The group also experimented with continued pretraining of the 176B-parameter BLOOM model on a mixture of its original training data and Finnish.
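As a concrete illustration of the data repetition idea, here is a minimal, hypothetical Python sketch (it is not the FinGPT training pipeline; the function name, placeholder documents and the cap of four passes are assumptions made for illustration): when the unique corpus is smaller than the intended token budget, the same documents are simply passed over several times, with a cap because repeating data too often yields diminishing returns.

```python
# Minimal, hypothetical sketch of data repetition for a data-constrained
# language (not the actual FinGPT training pipeline). When the unique corpus
# is smaller than the intended token budget, the same documents are passed
# over for several epochs, capped because heavy repetition gives
# diminishing returns.
import math
import random

def build_repeated_dataset(documents, target_tokens, tokens_per_doc, max_epochs=4):
    """Repeat a small corpus until a token budget is met, capped at max_epochs."""
    unique_tokens = len(documents) * tokens_per_doc
    epochs = min(math.ceil(target_tokens / unique_tokens), max_epochs)

    training_stream = []
    for _ in range(epochs):
        shuffled = list(documents)  # reshuffle each pass so repeats are not back-to-back
        random.shuffle(shuffled)
        training_stream.extend(shuffled)
    return training_stream, epochs

if __name__ == "__main__":
    corpus = [f"doc_{i}" for i in range(1_000)]  # placeholder documents
    stream, epochs = build_repeated_dataset(corpus, target_tokens=8_000_000, tokens_per_doc=2_000)
    print(f"{epochs} epochs over the corpus -> {len(stream)} documents in the training stream")
```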
Ultimately, FinGPT was a success, resulting in the largest and most performant Finnish language models of the time: FinGPT 8B and 13B. The work on FinGPT laid the basis for a new, even larger and more advanced Finnish language model, Poro.
Poro - a collaboration between Silo AI and TurkuNLP
After FinGPT, Risto worked on training the Poro 34B model and on the accompanying paper ‘Poro 34B and the Blessing of Multilinguality’ (pre-release), developed by Silo AI and TurkuNLP (as part of the EU-funded HPLT project).
The work on Poro 34B was a natural continuation of the work done on FinGPT. Building on the findings from the monolingual FinGPT models, as well as from the experimental multilingual 176B model dubbed BLUUMI, the goal was to find the best way to create powerful language models for data-constrained languages.
The previous work on FinGPT indicated that:
- As Finnish is data-constrained, there is, even with repetition, not enough data to produce a high-quality model with more than 13B parameters.
- Based on the evaluation benchmark results, the 13B model might have already suffered from the level of data repetition used in training.
- Continued pretraining with Finnish added to the mix indicated the benefits of cross-lingual transfer.
Based on these findings, it was clear that creating powerful language models for low-resource languages requires not only more data but also, and especially, deliberate approaches to complementing the scarce data with other data sources. The research team therefore spent significant effort experimenting with various approaches to training a high-quality Finnish model by padding the dataset with English and code, which enabled them to scale the model up to 34B parameters.
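As a rough illustration of what padding a dataset with other sources can look like in practice, the sketch below samples each training document from a Finnish, English or code pool according to fixed mixture weights. The source names and weights are made-up assumptions for illustration, not the actual Poro 34B data recipe.

```python
# Hypothetical sketch of a multilingual sampling mixture, in the spirit of
# padding a scarce Finnish corpus with English and code. The weights below are
# illustrative assumptions, not the ratios used for Poro 34B.
import random

MIXTURE_WEIGHTS = {
    "finnish": 0.3,  # scarce target language
    "english": 0.5,  # abundant helper language enabling cross-lingual transfer
    "code":    0.2,  # programming-language data
}

def sample_source(weights, rng=random):
    """Pick which data source the next training document is drawn from."""
    sources, probs = zip(*weights.items())
    return rng.choices(sources, weights=probs, k=1)[0]

if __name__ == "__main__":
    counts = {source: 0 for source in MIXTURE_WEIGHTS}
    for _ in range(10_000):
        counts[sample_source(MIXTURE_WEIGHTS)] += 1
    print(counts)  # counts should be roughly proportional to the mixture weights
```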
This eventually led to Poro 34B, a multilingual open source LLM that outperforms all existing open language models in the Finnish language, including FinGPT, Mistral, Llama and the BLUUMI 176B model, among others.
Continuing to push the frontiers of low-resource language models
With his proven experience in building LLMs and GPT-like architectures, Risto is now contributing to the continued efforts on Poro, to Silo AI's most recently published model family, Viking, and to various other custom LLM initiatives. Whereas Poro covers Finnish, English and code, Viking extends coverage to the rest of the Nordic languages: Swedish, Norwegian, Danish and Icelandic.
Most notably, Risto is now working on Silo AI's and TurkuNLP's next model family in line, covering all official EU languages, English included, as well as programming languages.
“The greatest thing about science is that you're on an endless journey of exploration, and at Silo AI we find answers to important questions that benefit society. This is also why “Ask why” is my favorite Silo AI value. It bridges the gap between research and practical industry use cases. For example, whenever I'm evaluating a model and see a surprising change in one of our performance metrics, I want to understand the underlying reason. Asking why starts branching out: is our performance metric valid? Is there something wrong with our method, and if so, what do we need to improve?”
- Risto Luukkonen
In the future, Risto plans to combine his work on novel low-resource language models with the ambition to pursue a PhD in Computer Science, adding to Silo AI's pool of 180+ PhDs in AI-related fields.
Transition to industry: Joining Silo AI
At Silo AI, Risto is part of the base model team, where he continues to train open-source language models on the LUMI supercomputer. As mentioned, this includes the training of notable models like Poro, Viking and new model families, as well as other custom LLM initiatives.
With his extensive experience in training LLMs in highly distributed setups across thousands of GPUs, and having trained several multi-billion parameter LLMs from scratch, Risto brings a lot of value to the development of Silo AI’s generative AI and LLM products.
Operating at the intersection of research and engineering, Risto's day-to-day work revolves around adapting cutting-edge methods from contemporary literature and research to practical industry use cases, and around planning and running multiple experiments in parallel to test how new methods scale into production and, eventually, into customer-facing solutions.
With one foot firmly in academic research and the other in industry applications of AI, Risto’s work is pivotal in developing Silo AI’s offering and in helping our customers improve their competitive edge with AI.
“A researcher’s way of approaching problems brings a lot of value to solving industry-specific challenges and defining value-adding use cases with AI. Researchers are, by default, at the cutting edge of development within their field of study and inherently possess a problem-solving mindset. This allows us to ask our customers the right questions and to build or provide them with the right products and services tailored to their needs. This benefits not only our customers but our employees as well, as we get to work at the forefront of AI development with customers from a multitude of industries.”
- Risto Luukkonen
Not just an AI researcher and expert, but also a musician
Outside of his professional life, Risto has a deep passion for music. Formerly his profession, music is now a cherished hobby. Somewhat of a multi-instrumentalist, he plays electric and acoustic guitars, bass, and mandolin to name a few instruments.
You might find Risto playing acoustic guitar at a folk music jam session or playing bass at a punk rock gig - a great counterbalance to his research work at Silo AI and UTU.
“There are similarities between music and research. To master an instrument, you need determination and devotion. But to master music, in terms of discovering yourself as a musician, you need to study it, explore it, experiment with it; in a way you need to become a researcher.”
- Risto Luukkonen
Want to learn more about Risto’s research and work at Silo AI and UTU?
Have a look at these resources:
- FinGPT: Large Generative Models for a Small Language
- Poro 34B and the Blessing of Multilinguality
- Poro model cards
- Viking model cards
And read more about Silo AI’s open source large language models here:
- Poro - a family of open models that bring European languages to the frontier
- Europe’s open language model family Poro extends checkpoints, languages and modalities
- Viking 7B/13B/33B: Sailing the Nordic seas of multilinguality
- Viking 7B: The first open LLM for the Nordic languages
- Poro 34B chat is here
- Viking 13B: Scaling Nordic AI models using an open source training framework for LUMI
Join the team
Would you like to join highly skilled professionals like Risto on the Silo AI team?
We're proud of our world-class AI expertise and end-to-end capabilities, and we are constantly growing our team. Check out our open positions on our careers page.