Abstract
Modularity is a paradigm of machine translation with the potential of bringing forth models that are large at training time and small during inference. Within this field of study, modular approaches, and in particular attention bridges, have been argued to improve the generalization capabilities of models by fostering language-independent representations. In the present paper, we study whether modularity affects translation quality, as well as how well modular architectures generalize across different evaluation scenarios. For a given computational budget, we find non-modular architectures to be always comparable or preferable to all modular designs we study.
- Anthology ID:
- 2024.insights-1.5
- Volume:
- Proceedings of the Fifth Workshop on Insights from Negative Results in NLP
- Month:
- June
- Year:
- 2024
- Address:
- Mexico City, Mexico
- Editors:
- Shabnam Tafreshi, Arjun Akula, João Sedoc, Aleksandr Drozd, Anna Rogers, Anna Rumshisky
- Venues:
- insights | WS
- Publisher:
- Association for Computational Linguistics
- Pages:
- 34–40
- URL:
- https://aclanthology.org/2024.insights-1.5
- DOI:
- 10.18653/v1/2024.insights-1.5
- Cite (ACL):
- Timothee Mickus, Raul Vazquez, and Joseph Attieh. 2024. I Have an Attention Bridge to Sell You: Generalization Capabilities of Modular Translation Architectures. In Proceedings of the Fifth Workshop on Insights from Negative Results in NLP, pages 34–40, Mexico City, Mexico. Association for Computational Linguistics.
- Cite (Informal):
- I Have an Attention Bridge to Sell You: Generalization Capabilities of Modular Translation Architectures (Mickus et al., insights-WS 2024)
- PDF:
- https://preview.aclanthology.org/dois-2013-emnlp/2024.insights-1.5.pdf