Shuyue Li
2023
Condensing Multilingual Knowledge with Lightweight Language-Specific Modules
Haoran Xu | Weiting Tan | Shuyue Li | Yunmo Chen | Benjamin Van Durme | Philipp Koehn | Kenton Murray
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Incorporating language-specific (LS) modules or Mixture-of-Experts (MoE) is a proven way to boost multilingual model performance, but scaling these approaches to hundreds of languages or experts is hard to manage. We present Language-specific Matrix Synthesis (LMS), a novel method that addresses this issue. LMS uses parameter-efficient, lightweight modules, reducing the number of parameters while outperforming existing methods, e.g., +1.73 BLEU over Switch Transformer on OPUS-100 multilingual translation. Additionally, we introduce Fuse Distillation (FD) to condense multilingual knowledge from multiple LS modules into a single shared module, improving model inference and storage efficiency. Our approach demonstrates superior scalability and performance compared to state-of-the-art methods.
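To make the idea of a lightweight language-specific module concrete, below is a minimal PyTorch sketch of a shared projection augmented with per-language low-rank factors. The layer shapes, rank, and LoRA-style parameterization are illustrative assumptions, not the paper's exact LMS formulation.

```python
# A minimal sketch of a lightweight language-specific (LS) module: each language
# adds a low-rank correction to a shared projection, so per-language parameter
# cost is O(d * r) rather than O(d_in * d_out).
# NOTE: rank, shapes, and the low-rank parameterization are assumptions for
# illustration only, not the exact method from the paper.
import torch
import torch.nn as nn


class LanguageSpecificLinear(nn.Module):
    """Shared linear layer plus a low-rank, language-indexed correction."""

    def __init__(self, d_in: int, d_out: int, num_languages: int, rank: int = 8):
        super().__init__()
        self.shared = nn.Linear(d_in, d_out)
        # Low-rank factors per language: W_l = A_l @ B_l, with A_l in R^{d_out x r}
        # and B_l in R^{r x d_in}; only these small factors are language-specific.
        self.A = nn.Parameter(torch.zeros(num_languages, d_out, rank))
        self.B = nn.Parameter(torch.randn(num_languages, rank, d_in) * 0.01)

    def forward(self, x: torch.Tensor, lang_id: int) -> torch.Tensor:
        # x: (batch, seq, d_in); synthesize the LS matrix for the given language.
        ls_weight = self.A[lang_id] @ self.B[lang_id]  # (d_out, d_in)
        return self.shared(x) + x @ ls_weight.T


# Usage: route a batch tagged with its language index through the layer.
layer = LanguageSpecificLinear(d_in=512, d_out=2048, num_languages=100, rank=8)
hidden = torch.randn(4, 16, 512)
out = layer(hidden, lang_id=7)
print(out.shape)  # torch.Size([4, 16, 2048])
```

Because only the small factors are language-specific, hundreds of languages can be supported without the parameter growth of full per-language layers or large MoE expert banks; a distillation step along the lines of FD could then train a single shared module to mimic the outputs of these LS modules, though that step is not shown here.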