Towards compact and efficient Slovak summarization models
Sebastian Petrik, Giang Nguyen
Proceedings of the 10th Workshop on Slavic Natural Language Processing (Slavic NLP 2025)
Language models, especially large language models (LLMs), often face significant deployment limitations due to their high resource demands. While various model compression methods have emerged, their application to smaller models in multilingual and low-resource settings remains understudied. Our work evaluates selected decoder and embedding pruning methods on T5-based models for abstractive summarization in English and Slovak using a parallel dataset. The results reveal differences in how model performance degrades across the two languages and expand the limited pool of Slovak summarization resources and models.
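The abstract mentions embedding pruning as one of the compression methods evaluated. As an illustration only (this is not the paper's actual procedure, and the function and variable names are hypothetical), one common form of embedding pruning restricts the vocabulary to token ids that actually occur in a target-language corpus, keeping only the corresponding rows of the embedding matrix and remapping ids:

```python
def prune_embedding_matrix(embedding, corpus_token_ids, always_keep=(0, 1)):
    """Vocabulary-based embedding pruning (illustrative sketch).

    Keep only the rows of `embedding` whose token ids appear in the
    target corpus (plus special tokens such as pad/eos), and return the
    pruned matrix together with an old-id -> new-id mapping so that
    text can be re-tokenized against the smaller vocabulary.
    """
    kept = sorted(set(always_keep) | set(corpus_token_ids))
    id_map = {old: new for new, old in enumerate(kept)}
    pruned = [embedding[old] for old in kept]
    return pruned, id_map

# Toy example: 6-token vocabulary with 3-dimensional embeddings.
vocab_size, dim = 6, 3
embedding = [[float(i * dim + j) for j in range(dim)] for i in range(vocab_size)]
corpus_ids = [2, 4, 4, 5]  # token ids observed in the target corpus
pruned, id_map = prune_embedding_matrix(embedding, corpus_ids)
print(len(pruned))   # 5 rows kept instead of 6
print(id_map[5])     # old id 5 is remapped to new id 4
```

For a multilingual encoder-decoder like mT5, whose embedding table covers many languages, this kind of vocabulary reduction can remove a large fraction of parameters when only one or two languages are needed; the assumed `always_keep` ids stand in for special tokens that must survive pruning.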