To Train or Not to Train: Predicting the Performance of Massively Multilingual Models
Shantanu Patankar, Omkar Gokhale, Onkar Litake, Aditya Mandke, Dipali Kadam
- Anthology ID: 2022.sumeval-1.2
- Volume: Proceedings of the First Workshop on Scaling Up Multilingual Evaluation
- Month: November
- Year: 2022
- Address: Online
- Editors: Kabir Ahuja, Antonios Anastasopoulos, Barun Patra, Graham Neubig, Monojit Choudhury, Sandipan Dandapat, Sunayana Sitaram, Vishrav Chaudhary
- Venue: SUMEval
- Publisher: Association for Computational Linguistics
- Pages: 8–12
- URL: https://aclanthology.org/2022.sumeval-1.2
- Cite (ACL): Shantanu Patankar, Omkar Gokhale, Onkar Litake, Aditya Mandke, and Dipali Kadam. 2022. To Train or Not to Train: Predicting the Performance of Massively Multilingual Models. In Proceedings of the First Workshop on Scaling Up Multilingual Evaluation, pages 8–12, Online. Association for Computational Linguistics.
- Cite (Informal): To Train or Not to Train: Predicting the Performance of Massively Multilingual Models (Patankar et al., SUMEval 2022)
- PDF: https://preview.aclanthology.org/naacl24-info/2022.sumeval-1.2.pdf