Comparing Approaches to Automatic Summarization in Less-Resourced Languages

Chester Palen-Michel, Constantine Lignos


Abstract
Automatic text summarization has achieved high performance in higher-resourced languages like English, but comparatively little attention has been given to summarization in less-resourced languages. This work compares a variety of approaches to summarization, from zero-shot prompting of LLMs large and small to fine-tuning smaller models like mT5, with and without three data augmentation approaches and multilingual transfer. We also explore an LLM translation pipeline approach: translating from the source language to English, summarizing, and translating back. Evaluating with five different metrics, we find that there is variation in performance across LLMs of similar size, that our multilingual fine-tuned mT5 baseline outperforms most other approaches, including zero-shot LLM prompting, on most metrics, and that LLM-as-judge evaluation may be unreliable for less-resourced languages.
Anthology ID:
2026.lrec-main.270
Volume:
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Month:
May
Year:
2026
Address:
Palma de Mallorca, Spain
Editors:
Stelios Piperidis, Núria Bel, Henk van den Heuvel, Nancy Ide, Simon Krek, Antonio Toral
Venue:
LREC
Publisher:
ELRA Language Resource Association
Pages:
3402–3422
URL:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.270/
Cite (ACL):
Chester Palen-Michel and Constantine Lignos. 2026. Comparing Approaches to Automatic Summarization in Less-Resourced Languages. In Proceedings of the Fifteenth Language Resources and Evaluation Conference, pages 3402–3422, Palma de Mallorca, Spain. ELRA Language Resource Association.
Cite (Informal):
Comparing Approaches to Automatic Summarization in Less-Resourced Languages (Palen-Michel & Lignos, LREC 2026)
PDF:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.270.pdf