Petr Hyner
2025
Ability Transfer Through Language Mixing
Petr Hyner | Jan Mrógala | Jan Hula
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
We systematically investigate cross-lingual ability transfer in language models through controlled experiments across three problem sets: algorithmic addition, graph navigation, and natural language modeling. Our experimental design creates high-resource and low-resource “language” pairs differing in vocabulary, grammar, and computational requirements. We show that training on mixed datasets consistently enables strong positive transfer, significantly improving low-resource language performance compared to training on the small low-resource dataset in isolation. We observe improvements from 0% to 100% accuracy on arithmetic tasks, from 24% to 98% accuracy on graph navigation tasks, and a 69.6% perplexity reduction in natural language modeling. We demonstrate that transfer effectiveness depends on computational complexity and linguistic differences, with grammar modifications supporting stronger transfer than vocabulary modifications. These findings provide compelling evidence that cross-lingual ability transfer is a robust mechanism that contributes to the quality of large language models in low-resource languages.
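As a purely illustrative sketch (not code from the paper), a vocabulary-modified "low-resource language" for the addition task might remap digit symbols while preserving the underlying computation, and mixed training data would combine many high-resource examples with a few low-resource ones. The vocabularies, helper names, and mixing scheme below are our own assumptions:

def encode(n, vocab):
    # Render an integer using the given digit vocabulary.
    return "".join(vocab[int(d)] for d in str(n))

import random

HIGH_VOCAB = "0123456789"   # high-resource digit symbols (assumed)
LOW_VOCAB = "abcdefghij"    # low-resource digit symbols (assumed)

def make_example(vocab):
    # One addition example, e.g. "12+34=46", in the chosen vocabulary.
    a, b = random.randint(0, 99), random.randint(0, 99)
    return f"{encode(a, vocab)}+{encode(b, vocab)}={encode(a + b, vocab)}"

def mixed_dataset(n_high, n_low):
    # Mix many high-resource examples with few low-resource ones.
    data = [make_example(HIGH_VOCAB) for _ in range(n_high)]
    data += [make_example(LOW_VOCAB) for _ in range(n_low)]
    random.shuffle(data)
    return data

print(mixed_dataset(n_high=5, n_low=2))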
BenCzechMark: A Czech-Centric Multitask and Multimetric Benchmark for Large Language Models with Duel Scoring Mechanism
Martin Fajcik | Martin Docekal | Jan Dolezal | Karel Ondrej | Karel Beneš | Jan Kapsa | Pavel Smrz | Alexander Polok | Michal Hradis | Zuzana Neverilova | Ales Horak | Radoslav Sabol | Michal Stefanik | Adam Jirkovsky | David Adamczyk | Petr Hyner | Jan Hula | Hynek Kydlicek
Transactions of the Association for Computational Linguistics, Volume 13
We present BenCzechMark (BCM), the first comprehensive Czech language benchmark designed for large language models, offering diverse tasks, multiple task formats, and multiple evaluation metrics. Its duel scoring system is grounded in statistical significance theory and uses aggregation across tasks inspired by social preference theory. Our benchmark encompasses 50 challenging tasks with corresponding test datasets, primarily in native Czech, 14 of which are newly collected. These tasks span 8 categories and cover diverse domains, including historical Czech news, essays by pupils and language learners, and spoken word. Furthermore, we collect and clean the BUT-Large Czech Collection, the largest publicly available clean Czech language corpus, and use it for (i) contamination analysis and (ii) continuous pretraining of the first Czech-centric 7B language model with Czech-specific tokenization. We use our model as a baseline for comparison with publicly available multilingual models. Lastly, we release and maintain a leaderboard with 50 existing model submissions, where new model submissions can be made at https://huggingface.co/spaces/CZLC/BenCzechMark.
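To make the "duel" idea concrete, here is a rough, hypothetical sketch of the general mechanism the abstract describes, not BCM's actual procedure: one model wins a duel on a task only when an exact sign test rejects the null of equal per-example performance, and duel wins are then aggregated across tasks. All function names and the choice of test are our own assumptions:

from itertools import combinations
from math import comb

def sign_test_pvalue(wins, n):
    # Two-sided exact sign test p-value for `wins` successes out of `n`
    # under the null of a fair coin (symmetric, so doubling one tail works).
    k = min(wins, n - wins)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

def duel(scores_a, scores_b, alpha=0.05):
    # +1 if model A significantly beats model B per-example, -1 if the
    # reverse, 0 if the difference is not statistically significant.
    wins_a = sum(a > b for a, b in zip(scores_a, scores_b))
    wins_b = sum(b > a for a, b in zip(scores_a, scores_b))
    n = wins_a + wins_b
    if n == 0 or sign_test_pvalue(wins_a, n) >= alpha:
        return 0
    return 1 if wins_a > wins_b else -1

def leaderboard(per_task_scores):
    # per_task_scores: {task: {model: [per-example scores]}}.
    # Aggregate duel outcomes across tasks into win counts per model.
    models = {m for task in per_task_scores.values() for m in task}
    points = {m: 0 for m in models}
    for task in per_task_scores.values():
        for a, b in combinations(sorted(task), 2):
            outcome = duel(task[a], task[b])
            if outcome == 1:
                points[a] += 1
            elif outcome == -1:
                points[b] += 1
    return points

Gating each duel on a significance test means a model only earns points for differences unlikely to arise from evaluation noise, which is the property the abstract attributes to BCM's scoring.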