Abstract
This paper investigates a new co-attention mechanism in neural transduction models for machine translation. We propose a paradigm, termed Two-Headed Monster (THM), which consists of two symmetric encoder modules and one decoder module connected by co-attention. As a concrete implementation of THM, Crossed Co-Attention Networks (CCNs) are designed based on the Transformer model. We test CCNs on the WMT 2014 EN-DE and WMT 2016 EN-FI translation tasks and show both advantages and disadvantages of the proposed method. Our model outperforms the strong Transformer baseline by 0.51 (big) and 0.74 (base) BLEU points on EN-DE and by 0.17 (big) and 0.47 (base) BLEU points on EN-FI, but the epoch time increases by roughly 75%.
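The abstract only sketches the architecture at a high level, so the snippet below is a minimal, illustrative sketch of how two symmetric Transformer-style encoder streams could exchange information through crossed co-attention. It is not the authors' implementation: the module name `CrossedCoAttentionBlock`, the dimensions, and the exact wiring (self-attention per stream followed by each stream querying the other) are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class CrossedCoAttentionBlock(nn.Module):
    """Hypothetical crossed co-attention layer: two encoder streams
    attend to themselves, then each stream queries the other."""

    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        # Per-stream self-attention (standard Transformer encoder behaviour).
        self.self_attn_a = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.self_attn_b = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Crossed co-attention: queries come from one stream, keys/values from the other.
        self.cross_attn_a = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn_b = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_a = nn.LayerNorm(d_model)
        self.norm_b = nn.LayerNorm(d_model)

    def forward(self, x_a, x_b):
        # Self-attention within each encoder stream.
        a, _ = self.self_attn_a(x_a, x_a, x_a)
        b, _ = self.self_attn_b(x_b, x_b, x_b)
        # Crossed co-attention: stream A queries stream B, stream B queries stream A.
        a_cross, _ = self.cross_attn_a(a, b, b)
        b_cross, _ = self.cross_attn_b(b, a, a)
        # Residual connection and layer normalisation for each stream.
        return self.norm_a(a + a_cross), self.norm_b(b + b_cross)

# Toy usage with random embeddings (batch=2, sequence length=10, d_model=512).
block = CrossedCoAttentionBlock()
x_a = torch.randn(2, 10, 512)
x_b = torch.randn(2, 10, 512)
out_a, out_b = block(x_a, x_b)
```

A decoder module, as in the THM paradigm described above, would then attend over both encoder outputs; how those two streams are combined on the decoder side is not specified in the abstract and is left out of this sketch.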
- Anthology ID:
- 2020.aacl-srw.2
- Volume:
- Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: Student Research Workshop
- Month:
- December
- Year:
- 2020
- Address:
- Suzhou, China
- Venue:
- AACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 8–15
- URL:
- https://aclanthology.org/2020.aacl-srw.2
- Cite (ACL):
- Yaoyiran Li and Jing Jiang. 2020. Two-Headed Monster and Crossed Co-Attention Networks. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: Student Research Workshop, pages 8–15, Suzhou, China. Association for Computational Linguistics.
- Cite (Informal):
- Two-Headed Monster and Crossed Co-Attention Networks (Li & Jiang, AACL 2020)
- PDF:
- https://preview.aclanthology.org/auto-file-uploads/2020.aacl-srw.2.pdf