Noémi Ligeti-Nagy
2026
The Impact of Tokenization Algorithms on Hungarian Language Model Performance
Mátyás Osváth | Máté Norbert Molnár | Roland Gunics | Noémi Ligeti-Nagy
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Tokenization is a crucial text processing step for preparing input for language models and can contribute to model performance, especially in morphologically rich languages. Currently, Byte Pair Encoding (BPE), WordPiece, and Unigram LM algorithms are predominantly used in language models, but their effects can vary in agglutinative languages. This work compares these tokenization algorithms across varying vocabulary sizes, as well as a modified Unigram LM variant with morphologically informed initialization, on the Hungarian subset of the OSCAR dataset. The evaluation is based on several metrics describing the inferred quality of the tokenizers and on the downstream performance of multiple BERT models on the HuLU benchmark. Results show that BPE produced the most compact and morphologically aligned subword representations, while the modified Unigram LM achieved the best overall downstream performance across tasks. However, differences between methods and vocabulary sizes were generally small and not statistically significant, with the exception of HuCoPA (a task within the HuLU benchmark), which showed sensitivity to both factors. These findings underscore that tokenizer choice and vocabulary design are critical determinants of language model efficiency and performance in morphologically rich languages.
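The BPE algorithm compared in this abstract can be illustrated with a minimal from-scratch sketch: repeatedly count adjacent symbol pairs over the corpus vocabulary and merge the most frequent pair. This is a toy simplification for illustration only (a handful of Hungarian-like words and two merges), not the paper's setup, which trained production tokenizers on the Hungarian OSCAR subset with far larger vocabularies:

```python
from collections import Counter

def learn_bpe(corpus, num_merges):
    # Start with each word as a tuple of characters, weighted by frequency.
    vocab = Counter()
    for word in corpus:
        vocab[tuple(word)] += 1
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair across the current vocabulary.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent pair wins
        merges.append(best)
        # Rewrite the vocabulary with the chosen pair fused into one symbol.
        new_vocab = Counter()
        for symbols, freq in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges

def segment(word, merges):
    # Tokenize a new word by replaying the learned merges in order.
    symbols = list(word)
    for a, b in merges:
        out, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and symbols[i] == a and symbols[i + 1] == b:
                out.append(a + b)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        symbols = out
    return symbols

corpus = ["ház", "házak", "házakban", "kertben"]  # toy Hungarian-like word list
merges = learn_bpe(corpus, num_merges=2)
print(segment("házak", merges))  # ['ház', 'a', 'k']
```

With only two merges the learner already fuses the frequent stem "ház", hinting at why the paper finds BPE morphologically well aligned for agglutinative Hungarian; real tokenizers add frequency-weighted corpora, byte-level fallbacks, and vocabulary sizes in the tens of thousands.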
A Parallel Cross-Lingual Benchmark for Multimodal Idiomaticity Understanding
Dilara Torunoğlu-Selamet | Doğukan Arslan | Rodrigo Wilkens | Wei He | Doruk Eryiğit | Thomas Pickard | Adriana S. Pagano | Aline Villavicencio | Gülşen Eryiğit | Ágnes Abuczki | Aida Cardoso | Alesia Lazarenka | Dina Almassova | Amália Mendes | Anna Kanellopoulou | Antoni Brosa-Rodriguez | Baiba Valkovska | Beata Wojtowicz | Bolette Pedersen | Carlos Manuel Hidalgo-Ternero | Chaya Liebeskind | Danka Jokić | Diego Alves | Eleni Triantafyllidi | Erik Velldal | Fred Philippy | Giedre Valunaite Oleskeviciene | Ieva Rizgeliene | Inguna Skadina | Irina Lobzhanidze | Isabell Stinessen Haugen | Jauza Akbar Krito | Jelena M. Marković | Johanna Monti | Josue Alejandro Sauca | Kaja Dobrovoljc Zor | Kingsley O. Ugwuanyi | Laura Rituma | Lilja Øvrelid | Maha Tufail Agro | Manzura Abjalova | Maria Chatzigrigoriou | María del Mar Sánchez Ramos | Marija Pendevska | Masoumeh Seyyedrezaei | Mehrnoush Shamsfard | Momina Ahsan | Muhammad Ahsan Riaz Khan | Nathalie Carmen Hau Norman | Nilay Erdem Ayyıldız | Nina Hosseini-Kivanani | Noémi Ligeti-Nagy | Numaan Naeem | Olha Kanishcheva | Olha Yatsyshyna | Daniil Orel | Petra Giommarelli | Petya Osenova | Radovan Garabik | Regina E. Semou | Rozane Rebechi | Salsabila Zahirah Pranida | Samia Touileb | Sanni Nimb | Sarfraz Ahmad | Sarvinoz Sharipova | Shahar Golan | Shaoxiong Ji | Sopuruchi Christian Aboh | Srdjan Sucur | Stella Markantonatou | Sussi Olsen | Vahide Tajalli | Veronika Lipp | Voula Giouli | Yelda Yeşildal Eraydın | Zahra Saaberi | Zhuohan Xie
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Potentially idiomatic expressions (PIEs) carry meanings inherently tied to the everyday experience of a given language community. As such, they pose an interesting challenge for assessing the linguistic (and, to some extent, cultural) capabilities of NLP systems. In this paper, we present XMPIE, a parallel multilingual and multimodal dataset of potentially idiomatic expressions. The dataset, covering 34 languages and over ten thousand items, allows comparative analyses of idiomatic patterns across language-specific realisations and preferences, yielding insights into shared cultural aspects. The parallel design makes it possible to evaluate language model performance on a given PIE across languages and to test whether idiomatic understanding in one language transfers to another. Moreover, the dataset supports the study of PIEs across textual and visual modalities, measuring to what extent PIE understanding in one modality transfers to, or implies, understanding in the other (text vs. image). The data was created by language experts, with both textual and visual components crafted under multilingual guidelines; each PIE is accompanied by five images representing a spectrum from idiomatic to literal meanings, including semantically related and random distractors. The result is a high-quality benchmark for evaluating multilingual and multimodal idiomatic language understanding.
2025
HuGME: A benchmark system for evaluating Hungarian generative LLMs
Noémi Ligeti-Nagy | Gabor Madarasz | Flora Foldesi | Mariann Lengyel | Matyas Osvath | Bence Sarossy | Kristof Varga | Győző Zijian Yang | Enikő Héja | Tamás Váradi | Gábor Prószéky
Proceedings of the Fourth Workshop on Generation, Evaluation and Metrics (GEM²)
In this study, we introduce the Hungarian Generative Model Evaluation (HuGME) benchmark, a new framework designed to assess the linguistic proficiency of large language models (LLMs) in Hungarian. HuGME evaluates models across a diverse set of linguistic and reasoning skills, including bias, toxicity, faithfulness, relevance, summarization, prompt alignment, readability, spelling, grammaticality, and domain-specific knowledge through tasks like TruthfulQA and MMLU. We applied HuGME to a range of Hungarian LLMs, including those developed in-house as well as several publicly available models that claim Hungarian language proficiency. This paper presents the comparative results of these evaluations, shedding light on the capabilities of current LLMs in processing the Hungarian language. Through our analysis, we aim to both showcase the current state of Hungarian linguistic processing in LLMs and provide a foundational resource for future advancements in the field.
OpenHuEval: Evaluating Large Language Model on Hungarian Specifics
Haote Yang | Xingjian Wei | Jiang Wu | Noémi Ligeti-Nagy | Jiaxing Sun | Yinfan Wang | Győző Zijian Yang | Junyuan Gao | Jingchao Wang | Bowen Jiang | Shasha Wang | Nanjun Yu | Zihao Zhang | Shixin Hong | Hongwei Liu | Wei Li | Songyang Zhang | Dahua Lin | Lijun Wu | Gábor Prószéky | Conghui He
Findings of the Association for Computational Linguistics: ACL 2025
We introduce OpenHuEval, the first benchmark for LLMs focusing on the Hungarian language and its specifics. OpenHuEval is constructed from a vast collection of Hungarian-specific materials sourced from multiple origins. In its construction, we incorporated the latest design principles for evaluating LLMs, such as using real user queries from the internet, emphasizing the assessment of LLMs’ generative capabilities, and employing LLM-as-judge to enhance the multidimensionality and accuracy of evaluations. Ultimately, OpenHuEval encompasses eight Hungarian-specific dimensions, featuring five tasks and 3953 questions. It thus provides a comprehensive, in-depth, and scientifically accurate assessment of LLM performance in the context of the Hungarian language and its specifics. We evaluated current mainstream LLMs, including both traditional LLMs and recently developed Large Reasoning Models (LRMs). The results demonstrate the significant need for evaluation and model optimization tailored to the Hungarian language and its specifics. We also established a framework for analyzing the thinking processes of LRMs with OpenHuEval, revealing intrinsic patterns and mechanisms of these models in non-English languages, with Hungarian serving as a representative example. We will release OpenHuEval at https://github.com/opendatalab/OpenHuEval.
2024
HuLU: Hungarian Language Understanding Benchmark Kit
Noémi Ligeti-Nagy | Gergő Ferenczi | Enikő Héja | László János Laki | Noémi Vadász | Zijian Győző Yang | Tamás Váradi
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
The paper introduces the Hungarian Language Understanding (HuLU) benchmark, a comprehensive assessment framework designed to evaluate the performance of neural language models on Hungarian language tasks. Inspired by the renowned GLUE and SuperGLUE benchmarks, HuLU aims to address the challenges specific to Hungarian language processing. The benchmark consists of various datasets, each representing different linguistic phenomena and task complexities. Moreover, the paper presents a web service developed for HuLU, offering a user-friendly interface for model evaluation. This platform not only ensures consistent assessment but also fosters transparency by maintaining a leaderboard showcasing model performances. Preliminary evaluations of various LLMs on HuLU datasets indicate that while Hungarian models show promise, there is room for improvement to match the proficiency of English-centric models in their native language.
2022
A Clique-based Graphical Approach to Detect Interpretable Adjectival Senses in Hungarian
Enikő Héja | Noémi Ligeti-Nagy
Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing
This paper introduces ongoing research that aims to detect interpretable adjectival senses in monolingual corpora using an unsupervised word sense induction (WSI) approach. We expect the findings of our investigation to support the work of lexicographers and linguists, and to facilitate the creation of semantically annotated benchmarks for the NLP community. To this end, we set up four criteria to distinguish between senses. We experiment with a graphical approach to model these criteria and then perform a detailed, linguistically motivated manual evaluation of the results.
2019
Creation of a corpus with semantic role labels for Hungarian
Attila Novák | László Laki | Borbála Novák | Andrea Dömötör | Noémi Ligeti-Nagy | Ágnes Kalivoda
Proceedings of the 13th Linguistic Annotation Workshop
In this article, ongoing research is presented whose immediate goal is to create a corpus annotated with semantic role labels for Hungarian, which can be used to train a parser-based system capable of formulating relevant questions about the text it processes. We briefly describe the objectives of our research and our efforts at eliminating errors in the Hungarian Universal Dependencies corpus, which serves as the base of our annotation effort; at creating a Hungarian verbal argument database annotated with thematic roles; at classifying adjuncts; and at matching verbal argument frames to specific occurrences of verbs and participles in the corpus.
What does the Nom say? An algorithm for case disambiguation in Hungarian
Noémi Ligeti-Nagy | Andrea Dömötör | Noémi Vadász
Proceedings of the Fifth International Workshop on Computational Linguistics for Uralic Languages
2018
Co-authors
- Enikő Héja 3
- Andrea Dömötör 2
- Mátyás Osváth 2
- Gabor Proszeky 2
- Noémi Vadász 2
- Tamás Váradi 2
- Győző Zijian Yang 2
- Manzura Abjalova 1
- Sopuruchi Christian Aboh 1
- Ágnes Abuczki 1
- Maha Tufail Agro 1
- Sarfraz Ahmad 1
- Momina Ahsan 1
- Dina Almassova 1
- Diego Alves 1
- Doğukan Arslan 1
- Aida Cardoso 1
- Maria Chatzigrigoriou 1
- Kaja Dobrovoljc 1
- Nilay Erdem Ayyıldız 1
- Doruk Eryiğit 1
- Gülşen Eryiğit 1
- Gergő Ferenczi 1
- Flora Foldesi 1
- Junyuan Gao 1
- Radovan Garabik 1
- Petra Giommarelli 1
- Voula Giouli 1
- Shahar Golan 1
- Roland Gunics 1
- Isabell Stinessen Haugen 1
- Wei He 1
- Conghui He 1
- Carlos Manuel Hidalgo-Ternero 1
- Shixin Hong 1
- Nina Hosseini-Kivanani 1
- Balázs Indig 1
- Shaoxiong Ji 1
- Bowen Jiang 1
- Danka Jokić 1
- Ágnes Kalivoda 1
- Anna Kanellopoulou 1
- Olha Kanishcheva 1
- Muhammad Ahsan Riaz Khan 1
- Jauza Akbar Krito 1
- László János Laki 1
- László Laki 1
- Alesia Lazarenka 1
- Mariann Lengyel 1
- Wei Li 1
- Chaya Liebeskind 1
- Dahua Lin 1
- Veronika Lipp 1
- Hongwei Liu 1
- Irina Lobzhanidze 1
- Gabor Madarasz 1
- Stella Markantonatou 1
- Jelena M. Marković 1
- Amália Mendes 1
- Máté Norbert Molnár 1
- Johanna Monti 1
- Numaan Naeem 1
- Sanni Nimb 1
- Nathalie Carmen Hau Norman 1
- Attila Novák 1
- Borbála Novák 1
- Sussi Olsen 1
- Daniil Orel 1
- Petya Osenova 1
- Adriana Silvina Pagano 1
- Bolette Sandford Pedersen 1
- Marija Pendevska 1
- Fred Philippy 1
- Thomas Pickard 1
- Salsabila Zahirah Pranida 1
- María Del Mar Sánchez Ramos 1
- Rozane Rebechi 1
- Laura Rituma 1
- Ieva Rizgeliene 1
- Antoni Brosa Rodríguez 1
- Zahra Saaberi 1
- Bence Sarossy 1
- Josue Alejandro Sauca 1
- Regina E. Semou 1
- Masoumeh Seyyedrezaei 1
- Mehrnoush Shamsfard 1
- Sarvinoz Sharipova 1
- András Simonyi 1
- Inguna Skadina 1
- Srdjan Sucur 1
- Jiaxing Sun 1
- Vahide Tajalli 1
- Dilara Torunoğlu-Selamet 1
- Samia Touileb 1
- Eleni Triantafyllidi 1
- Kingsley O. Ugwuanyi 1
- Baiba Valkovska 1
- Giedre Valunaite Oleskeviciene 1
- Kristof Varga 1
- Erik Velldal 1
- Aline Villavicencio 1
- Yinfan Wang 1
- Jingchao Wang 1
- Shasha Wang 1
- Xingjian Wei 1
- Rodrigo Wilkens 1
- Jiang Wu 1
- Lijun Wu 1
- Beata Wójtowicz 1
- Zhuohan Xie 1
- Zijian Győző Yang 1
- Haote Yang 1
- Olha Yatsyshyna 1
- Yelda Yeşildal Eraydın 1
- Nanjun Yu 1
- Zihao Zhang 1
- Songyang Zhang 1
- Lilja Øvrelid 1