@inproceedings{boughorbel-hawasly-2023-analyzing,
    title = "Analyzing Multilingual Competency of {LLM}s in Multi-Turn Instruction Following: A Case Study of {A}rabic",
    author = "Boughorbel, Sabri  and
      Hawasly, Majd",
    editor = "Sawaf, Hassan  and
      El-Beltagy, Samhaa  and
      Zaghouani, Wajdi  and
      Magdy, Walid  and
      Abdelali, Ahmed  and
      Tomeh, Nadi  and
      Abu Farha, Ibrahim  and
      Habash, Nizar  and
      Khalifa, Salam  and
      Keleg, Amr  and
      Haddad, Hatem  and
      Zitouni, Imed  and
      Mrini, Khalil  and
      Almatham, Rawan",
    booktitle = "Proceedings of ArabicNLP 2023",
    month = dec,
    year = "2023",
    address = "Singapore (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2023.arabicnlp-1.11/",
    doi = "10.18653/v1/2023.arabicnlp-1.11",
    pages = "128--139",
    abstract = "While significant progress has been made in benchmarking Large Language Models (LLMs) across various tasks, there is a lack of comprehensive evaluation of their abilities in responding to multi-turn instructions in less-commonly tested languages like Arabic. Our paper offers a detailed examination of the proficiency of open LLMs in such scenarios in Arabic. Utilizing a customized Arabic translation of the MT-Bench benchmark suite, we employ GPT-4 as a uniform evaluator for both English and Arabic queries to assess and compare the performance of the LLMs on various open-ended tasks. Our findings reveal variations in model responses on different task categories, e.g., logic vs. literacy, when instructed in English or Arabic. We find that fine-tuned base models using multilingual and multi-turn datasets could be competitive to models trained from scratch on multilingual data. Finally, we hypothesize that an ensemble of small, open LLMs could perform competitively to proprietary LLMs on the benchmark."
}