Abstract
In multi-source sequence-to-sequence tasks, the attention mechanism can be modeled in several ways. This topic has been thoroughly studied for recurrent architectures. In this paper, we extend the previous work to the encoder-decoder attention in the Transformer architecture. We propose four different input combination strategies for the encoder-decoder attention: serial, parallel, flat, and hierarchical. We evaluate our methods on tasks of multimodal translation and translation with multiple source languages. The experiments show that the models are able to use multiple sources and improve over single-source baselines.
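To make the serial and parallel strategies concrete, below is a minimal sketch of a multi-source decoder attention block over two encoders, written in PyTorch. The module name, the exact placement of residual connections and layer normalization, and the choice of summation for the parallel combination are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class MultiSourceAttention(nn.Module):
    """Illustrative sketch: serial vs. parallel combination of
    encoder-decoder attention over two source encoders (assumed design)."""

    def __init__(self, d_model: int, n_heads: int, mode: str = "serial"):
        super().__init__()
        self.mode = mode
        self.attn_a = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.attn_b = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_a = nn.LayerNorm(d_model)
        self.norm_b = nn.LayerNorm(d_model)

    def forward(self, queries, enc_a, enc_b):
        if self.mode == "serial":
            # Serial: attend to source A first, then use the result as the
            # query for attention over source B (residual + norm after each).
            x, _ = self.attn_a(queries, enc_a, enc_a)
            x = self.norm_a(queries + x)
            y, _ = self.attn_b(x, enc_b, enc_b)
            return self.norm_b(x + y)
        elif self.mode == "parallel":
            # Parallel: attend to both sources independently with the same
            # queries and combine the context vectors (here by summation).
            x, _ = self.attn_a(queries, enc_a, enc_a)
            y, _ = self.attn_b(queries, enc_b, enc_b)
            return self.norm_a(queries + x + y)
        else:
            raise ValueError(f"unknown mode: {self.mode}")
```

The flat and hierarchical strategies described in the paper would instead concatenate the encoder states into a single attended sequence, or apply a second attention over the per-source context vectors; they are omitted here for brevity.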
- Anthology ID: W18-6326
- Volume: Proceedings of the Third Conference on Machine Translation: Research Papers
- Month: October
- Year: 2018
- Address: Brussels, Belgium
- Venue: WMT
- SIG: SIGMT
- Publisher: Association for Computational Linguistics
- Pages: 253–260
- URL: https://aclanthology.org/W18-6326
- DOI: 10.18653/v1/W18-6326
- Cite (ACL): Jindřich Libovický, Jindřich Helcl, and David Mareček. 2018. Input Combination Strategies for Multi-Source Transformer Decoder. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 253–260, Brussels, Belgium. Association for Computational Linguistics.
- Cite (Informal): Input Combination Strategies for Multi-Source Transformer Decoder (Libovický et al., WMT 2018)
- PDF: https://preview.aclanthology.org/author-url/W18-6326.pdf