LLMs for Low Resource Languages in Multilingual, Multimodal and Dialectal Settings

Firoj Alam, Shammur Absar Chowdhury, Sabri Boughorbel, Maram Hasanain


Abstract
The recent breakthroughs in Artificial Intelligence (AI) can be attributed to the remarkable performance of Large Language Models (LLMs) across a spectrum of research areas (e.g., machine translation, question answering, automatic speech recognition, text-to-speech generation) and application domains (e.g., business, law, healthcare, education, and psychology). The success of these LLMs largely depends on specific training techniques, most notably instruction tuning and reinforcement learning from human feedback (RLHF), and on subsequent prompting to achieve the desired output. As the development of such LLMs continues to grow in both closed and open settings, evaluation has become crucial for understanding their generalization capabilities across different tasks, modalities, languages, and dialects. This evaluation process is tightly coupled with prompting, which plays a key role in obtaining better outputs. There have been attempts to evaluate such models across diverse tasks, languages, and dialects, and the findings suggest that the capabilities of LLMs remain limited for medium- and low-resource languages due to the lack of representative datasets. This tutorial offers an overview of this emerging research area. We explore the capabilities of LLMs in terms of their performance, zero- and few-shot settings, fine-tuning, instruction tuning, and closed vs. open models, with a special emphasis on low-resource settings. In addition to LLMs for standard NLP tasks, we will focus on speech and multimodality.
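To make the zero- vs. few-shot prompting distinction above concrete, the following is a minimal sketch (not part of the tutorial materials) contrasting the two settings on a hypothetical dialectal sentiment task. It assumes the OpenAI Python client; the model name, prompt templates, placeholder sentences, and the classify helper are illustrative assumptions, not the authors' setup.

# Minimal sketch: zero-shot vs. few-shot prompting for a low-resource,
# dialectal sentiment task. Model choice and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ZERO_SHOT = (
    "Classify the sentiment of this dialectal Arabic sentence as "
    "Positive or Negative.\n\nSentence: {text}\nLabel:"
)

FEW_SHOT = (
    "Classify the sentiment of each dialectal Arabic sentence as "
    "Positive or Negative.\n\n"
    "Sentence: {ex_pos}\nLabel: Positive\n\n"
    "Sentence: {ex_neg}\nLabel: Negative\n\n"
    "Sentence: {text}\nLabel:"
)

def classify(prompt: str) -> str:
    """Send one prompt and return the model's predicted label string."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model works here
        messages=[{"role": "user", "content": prompt}],
        temperature=0,        # deterministic output for evaluation
        max_tokens=5,
    )
    return resp.choices[0].message.content.strip()

text = "..."  # a dialectal test sentence from the evaluation set
print("zero-shot:", classify(ZERO_SHOT.format(text=text)))
print("few-shot: ", classify(FEW_SHOT.format(ex_pos="...", ex_neg="...", text=text)))

The same pair of templates can be swapped across languages and dialects to probe how much in-context examples help where pre-training coverage is thin, which is the comparison the tutorial's evaluation discussion centers on.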
Anthology ID:
2024.eacl-tutorials.5
Volume:
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Tutorial Abstracts
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Mohsen Mesgar, Sharid Loáiciga
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
27–33
URL:
https://aclanthology.org/2024.eacl-tutorials.5
Cite (ACL):
Firoj Alam, Shammur Absar Chowdhury, Sabri Boughorbel, and Maram Hasanain. 2024. LLMs for Low Resource Languages in Multilingual, Multimodal and Dialectal Settings. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Tutorial Abstracts, pages 27–33, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
LLMs for Low Resource Languages in Multilingual, Multimodal and Dialectal Settings (Alam et al., EACL 2024)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2024.eacl-tutorials.5.pdf