Abstract
This paper investigates the effectiveness of several machine translation (MT) models and aggregation methods in a multi-domain setting under fair conditions, and explores a direction for tackling multi-domain MT. We mainly compare the performance of the single-model approach, in which one model is trained jointly on all domains, with that of the multi-expert approach, in which domain-specific expert models are combined via a particular aggregation strategy. We conduct experiments on multiple domain datasets and demonstrate that a combination of smaller domain-expert models can outperform a single larger model trained on data from all domains.
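As an illustration of the multi-expert idea, the sketch below shows one common way to combine domain-expert MT models at decoding time: uniformly averaging their next-token probability distributions. This is a minimal, hypothetical ensemble sketch, not necessarily the aggregation strategy evaluated in the paper; the `ExpertFn` interface, the token strings, and the `</s>` end-of-sequence marker are illustrative assumptions.

```python
# Minimal sketch of multi-expert aggregation for MT decoding:
# uniformly average each expert's next-token distribution, then
# decode greedily over the combined distribution. The expert
# interface below is a hypothetical placeholder, not a real API.

from typing import Callable, Dict, List

# A hypothetical expert: given a source sentence and the target prefix
# generated so far, return a probability distribution over the vocabulary.
ExpertFn = Callable[[str, List[str]], Dict[str, float]]

def aggregate_step(experts: List[ExpertFn], source: str,
                   prefix: List[str]) -> Dict[str, float]:
    """Uniformly average the experts' next-token distributions."""
    combined: Dict[str, float] = {}
    for expert in experts:
        for token, prob in expert(source, prefix).items():
            combined[token] = combined.get(token, 0.0) + prob / len(experts)
    return combined

def greedy_decode(experts: List[ExpertFn], source: str,
                  max_len: int = 50) -> List[str]:
    """Greedily decode from the aggregated distribution."""
    prefix: List[str] = []
    for _ in range(max_len):
        probs = aggregate_step(experts, source, prefix)
        token = max(probs, key=probs.get)
        if token == "</s>":  # assumed end-of-sequence marker
            break
        prefix.append(token)
    return prefix
```

In this uniform-averaging design every expert gets equal weight regardless of the input's domain; a domain-aware variant would instead weight experts by an estimate of how well each matches the source sentence.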
- Anthology ID:
- 2023.findings-emnlp.960
- Volume:
- Findings of the Association for Computational Linguistics: EMNLP 2023
- Month:
- December
- Year:
- 2023
- Address:
- Singapore
- Editors:
- Houda Bouamor, Juan Pino, Kalika Bali
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 14393–14404
- URL:
- https://aclanthology.org/2023.findings-emnlp.960
- DOI:
- 10.18653/v1/2023.findings-emnlp.960
- Cite (ACL):
- Ikumi Ito, Takumi Ito, Jun Suzuki, and Kentaro Inui. 2023. Investigating the Effectiveness of Multiple Expert Models Collaboration. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 14393–14404, Singapore. Association for Computational Linguistics.
- Cite (Informal):
- Investigating the Effectiveness of Multiple Expert Models Collaboration (Ito et al., Findings 2023)
- PDF:
- https://preview.aclanthology.org/jeptaln-2024-ingestion/2023.findings-emnlp.960.pdf