Osma Suominen
2026
An Extreme Multi-label Text Classification (XMTC) Library Dataset: What If We Took "Use of Practical AI in Digital Libraries" Seriously?
Jennifer D'Souza | Sameer Sadruddin | Maximilian Kaehler | Andrea Salfinger | Luca Zaccagna | Francesca Incitti | Lauro Snidaro | Osma Suominen
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Subject indexing is vital for discovery but hard to sustain at scale and across languages. We release a large bilingual (English/German) corpus of catalog records annotated with the Integrated Authority File (GND), plus a machine-actionable GND taxonomy. The resource enables ontology-aware multi-label classification, mapping text to authority terms, and agent-assisted cataloging with reproducible, authority-grounded evaluation. We provide a brief statistical profile and qualitative error analyses of three systems. We invite the community to assess not only accuracy but also usefulness and transparency, toward authority-anchored AI co-pilots that amplify catalogers' work.
2025
Annif at the GermEval-2025 LLMs4Subjects Task: Traditional XMTC Augmented by Efficient LLMs
Osma Suominen | Juho Inkinen | Mona Lehtinen
Proceedings of the 21st Conference on Natural Language Processing (KONVENS 2025): Workshops
Annif at SemEval-2025 Task 5: Traditional XMTC augmented by LLMs
Osma Suominen | Juho Inkinen | Mona Lehtinen
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
This paper presents the Annif system in SemEval-2025 Task 5 (LLMs4Subjects), which focused on subject indexing using large language models (LLMs). The task required creating subject predictions for bibliographic records from the bilingual TIBKAT database using the GND subject vocabulary. Our approach combines traditional natural language processing and machine learning techniques implemented in the Annif toolkit with innovative LLM-based methods for translation and synthetic data generation, and merges the predictions from monolingual models. The system ranked first in the all-subjects category and second in the tib-core-subjects category in the quantitative evaluation, and fourth in the qualitative evaluations. These findings demonstrate the potential of combining traditional XMTC algorithms with modern LLM techniques to improve the accuracy and efficiency of subject indexing in multilingual contexts.
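The merging step mentioned in the abstract can be illustrated with a minimal sketch: combining per-subject confidence scores from an English and a German model by averaging. This is a hypothetical illustration, not Annif's actual merging implementation; the function name, score format, and GND-style identifiers are assumptions.

```python
# Hypothetical sketch of merging subject predictions from two monolingual
# models (e.g. English and German) by averaging confidence scores.
# Not Annif's actual implementation; identifiers are illustrative.

def merge_predictions(pred_a, pred_b, limit=5):
    """Average the scores of subjects predicted by either model,
    treating a missing prediction as a score of 0.0, and return
    the top `limit` (subject, score) pairs, highest score first."""
    subjects = set(pred_a) | set(pred_b)
    merged = {
        s: (pred_a.get(s, 0.0) + pred_b.get(s, 0.0)) / 2
        for s in subjects
    }
    return sorted(merged.items(), key=lambda kv: -kv[1])[:limit]

# Illustrative scores for GND-style subject identifiers.
en = {"gnd:4123456-7": 0.9, "gnd:4000000-1": 0.4}
de = {"gnd:4123456-7": 0.7, "gnd:4999999-9": 0.5}
print(merge_predictions(en, de))
```

A subject predicted by both models is rewarded, while one predicted by only a single model is penalized by the implicit 0.0 from the other, which is one simple way to favor cross-lingually consistent predictions.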
2023
FinGPT: Large Generative Models for a Small Language
Risto Luukkonen | Ville Komulainen | Jouni Luoma | Anni Eskelinen | Jenna Kanerva | Hanna-Mari Kupari | Filip Ginter | Veronika Laippala | Niklas Muennighoff | Aleksandra Piktus | Thomas Wang | Nouamane Tazi | Teven Scao | Thomas Wolf | Osma Suominen | Samuli Sairanen | Mikko Merioksa | Jyrki Heinonen | Aija Vahtola | Samuel Antao | Sampo Pyysalo
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) excel in many tasks in NLP and beyond, but most open models have very limited coverage of smaller languages and LLM work tends to focus on languages where nearly unlimited data is available for pretraining. In this work, we study the challenges of creating LLMs for Finnish, a language spoken by less than 0.1% of the world population. We compile an extensive dataset of Finnish combining web crawls, news, social media and eBooks. We pursue two approaches to pretrain models: 1) we train seven monolingual models from scratch (186M to 13B parameters) dubbed FinGPT, 2) we continue the pretraining of the multilingual BLOOM model on a mix of its original training data and Finnish, resulting in a 176 billion parameter model we call BLUUMI. For model evaluation, we introduce FIN-bench, a version of BIG-bench with Finnish tasks. We also assess other model qualities such as toxicity and bias. Our models and tools are openly available at https://turkunlp.org/gpt3-finnish.
Co-authors
- Juho Inkinen 2
- Mona Lehtinen 2
- Samuel Antao 1
- Jennifer D’Souza 1
- Anni Eskelinen 1
- Filip Ginter 1
- Jyrki Heinonen 1
- Francesca Incitti 1
- Maximilian Kaehler 1
- Jenna Kanerva 1
- Ville Komulainen 1
- Hanna-Mari Kupari 1
- Veronika Laippala 1
- Jouni Luoma 1
- Risto Luukkonen 1
- Mikko Merioksa 1
- Niklas Muennighoff 1
- Aleksandra Piktus 1
- Sampo Pyysalo 1
- Sameer Sadruddin 1
- Samuli Sairanen 1
- Andrea Salfinger 1
- Teven Scao 1
- Lauro Snidaro 1
- Nouamane Tazi 1
- Aija Vahtola 1
- Thomas Wang 1
- Thomas Wolf 1
- Luca Zaccagna 1