IT5: Text-to-text Pretraining for Italian Language Understanding and Generation

Gabriele Sarti, Malvina Nissim


Abstract
We introduce IT5, the first family of encoder-decoder transformer models pretrained specifically on Italian. We perform and document a thorough cleaning procedure for a large Italian corpus and use it to pretrain four IT5 model sizes. We then introduce the ItaGen benchmark, which includes a broad range of natural language understanding and generation tasks for Italian, and use it to evaluate the performance of IT5 models and multilingual baselines. We find monolingual IT5 models to provide the best scale-to-performance ratio across tested models, consistently outperforming their multilingual counterparts and setting a new state of the art for Italian language generation.
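As a rough usage sketch (not part of the paper), an IT5 checkpoint can be loaded as a standard encoder-decoder model with the Hugging Face transformers library. The repository identifier gsarti/it5-base and the summarization-style prompt below are assumptions for illustration; the released models require task-specific fine-tuning before producing useful generations.

```python
# Minimal sketch: loading an IT5-style encoder-decoder checkpoint with
# Hugging Face Transformers. The model identifier "gsarti/it5-base" is an
# assumption about where the released checkpoints are hosted.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("gsarti/it5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("gsarti/it5-base")

# Text-to-text usage: the model maps an input string to an output string.
# The Italian prompt below ("Summarize: ...") is purely illustrative.
inputs = tokenizer(
    "Riassumi: Il modello IT5 è stato preaddestrato su un ampio corpus italiano.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```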
Anthology ID:
2024.lrec-main.823
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
9422–9433
URL:
https://aclanthology.org/2024.lrec-main.823
Cite (ACL):
Gabriele Sarti and Malvina Nissim. 2024. IT5: Text-to-text Pretraining for Italian Language Understanding and Generation. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 9422–9433, Torino, Italia. ELRA and ICCL.
Cite (Informal):
IT5: Text-to-text Pretraining for Italian Language Understanding and Generation (Sarti & Nissim, LREC-COLING 2024)
PDF:
https://preview.aclanthology.org/nschneid-patch-5/2024.lrec-main.823.pdf