@inproceedings{li-jiang-2020-two,
    title = "Two-Headed Monster and Crossed Co-Attention Networks",
    author = "Li, Yaoyiran  and
      Jiang, Jing",
    editor = "Shmueli, Boaz  and
      Huang, Yin Jou",
    booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: Student Research Workshop",
    month = dec,
    year = "2020",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2020.aacl-srw.2/",
    doi = "10.18653/v1/2020.aacl-srw.2",
    pages = "8--15",
    abstract = "This paper investigates a new co-attention mechanism in neural transduction models for machine translation tasks. We propose a paradigm, termed Two-Headed Monster (THM), which consists of two symmetric encoder modules and one decoder module connected with co-attention. As a specific and concrete implementation of THM, Crossed Co-Attention Networks (CCNs) are designed based on the Transformer model. We test CCNs on WMT 2014 EN-DE and WMT 2016 EN-FI translation tasks and show both advantages and disadvantages of the proposed method. Our model outperforms the strong Transformer baseline by 0.51 (big) and 0.74 (base) BLEU points on EN-DE and by 0.17 (big) and 0.47 (base) BLEU points on EN-FI but the epoch time increases by circa 75{\%}."
}