mCoT: Multilingual Instruction Tuning for Reasoning Consistency in Language Models

Huiyuan Lai, Malvina Nissim


Abstract
Chain-of-Thought (CoT) prompting has recently emerged as a powerful technique for eliciting reasoning in large language models (LLMs) and improving various downstream tasks. Since most research focuses mainly on English, with few explorations in a multilingual context, the question of how reliable this reasoning capability is across languages remains open. To address it directly, we study multilingual reasoning consistency across multiple languages, using popular open-source LLMs. First, we compile the first large-scale multilingual math reasoning dataset, *mCoT-MATH*, covering eleven diverse languages. Then, we introduce multilingual CoT instruction tuning to boost reasoning capability across languages, thereby improving model consistency. While existing LLMs show substantial variation across the languages we consider, and especially low performance for lesser-resourced languages, our 7B-parameter model *mCoT* achieves impressive consistency across languages, and superior or comparable performance to closed- and open-source models even of much larger sizes.
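The abstract describes multilingual CoT instruction tuning: the model is fine-tuned on instruction-response pairs in which the response spells out a reasoning chain before the final answer, in each target language. Below is a minimal sketch of how such training instances might be formatted; the prompt template, field names, and example problems are illustrative assumptions, not the authors' exact data format.

```python
# Minimal sketch of formatting multilingual CoT instruction-tuning examples.
# The template and the sample problems are hypothetical, not the mCoT-MATH
# format itself.

TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{question}\n\n### Response:\n"
)

def build_example(question: str, cot: str, answer: str) -> dict:
    """Pair a math question with its chain-of-thought rationale and answer."""
    prompt = TEMPLATE.format(question=question)
    # The model learns to generate the reasoning chain followed by the final
    # answer; during training, the loss is typically masked on prompt tokens.
    target = f"{cot}\nThe answer is {answer}."
    return {"prompt": prompt, "target": target}

# The same GSM8K-style problem rendered in two languages (illustrative only).
examples = [
    build_example(
        "Anna has 3 apples and buys 2 more. How many apples does she have?",
        "Anna starts with 3 apples. She buys 2 more, so 3 + 2 = 5.",
        "5",
    ),
    build_example(
        "Anna hat 3 Äpfel und kauft 2 weitere. Wie viele Äpfel hat sie?",
        "Anna beginnt mit 3 Äpfeln. Sie kauft 2 weitere, also 3 + 2 = 5.",
        "5",
    ),
]

for ex in examples:
    print(ex["prompt"] + ex["target"], end="\n\n")
```

Training on parallel instances like these across all covered languages is what the paper credits with the gain in cross-lingual reasoning consistency.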
Anthology ID: 2024.acl-long.649
Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 12012–12026
URL: https://aclanthology.org/2024.acl-long.649
DOI: 10.18653/v1/2024.acl-long.649
Cite (ACL): Huiyuan Lai and Malvina Nissim. 2024. mCoT: Multilingual Instruction Tuning for Reasoning Consistency in Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12012–12026, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): mCoT: Multilingual Instruction Tuning for Reasoning Consistency in Language Models (Lai & Nissim, ACL 2024)
PDF: https://preview.aclanthology.org/nschneid-patch-5/2024.acl-long.649.pdf