Unsupervised Token-wise Alignment to Improve Interpretation of Encoder-Decoder Models
Shun Kiyono, Sho Takase, Jun Suzuki, Naoaki Okazaki, Kentaro Inui, Masaaki Nagata
Abstract
Developing a method for understanding the inner workings of black-box neural methods is an important research endeavor. Conventionally, many studies have used an attention matrix to interpret how Encoder-Decoder-based models translate a given source sentence to the corresponding target sentence. However, recent studies have empirically revealed that an attention matrix is not optimal for token-wise translation analyses. We propose a method that explicitly models the token-wise alignment between the source and target sequences to provide a better analysis. Experiments show that our method can acquire token-wise alignments that are superior to those of an attention mechanism.
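As background for the comparison the abstract draws, the sketch below illustrates the conventional way of reading token-wise alignments out of an attention matrix: for each target token, pick the source token with the highest attention weight. This is a minimal, hypothetical example for illustration only, not the method proposed in the paper; the function name, array shapes, and toy weights are assumptions.

```python
# Illustrative sketch (not the paper's proposed model): extracting hard
# token-wise alignments from an Encoder-Decoder attention matrix by taking,
# for each target token, the most-attended source token.
import numpy as np

def alignments_from_attention(attention, src_tokens, tgt_tokens):
    """attention: array of shape (len(tgt_tokens), len(src_tokens)), where
    attention[j, i] is the weight on source token i when generating target
    token j. Shapes and naming are assumptions for this sketch."""
    assert attention.shape == (len(tgt_tokens), len(src_tokens))
    aligned_src = attention.argmax(axis=1)  # best source index per target token
    return [(tgt_tokens[j], src_tokens[i]) for j, i in enumerate(aligned_src)]

# Toy example with made-up attention weights.
src = ["das", "ist", "gut"]
tgt = ["this", "is", "good"]
attn = np.array([
    [0.8, 0.1, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
])
print(alignments_from_attention(attn, src, tgt))
# [('this', 'das'), ('is', 'ist'), ('good', 'gut')]
```

The paper's point is that such attention-derived alignments can be unreliable for interpretation, motivating a model that learns the token-wise alignment explicitly and without supervision.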
- Anthology ID: W18-5410
- Volume: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
- Month: November
- Year: 2018
- Address: Brussels, Belgium
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 74–81
- URL: https://aclanthology.org/W18-5410
- DOI: 10.18653/v1/W18-5410
- Cite (ACL): Shun Kiyono, Sho Takase, Jun Suzuki, Naoaki Okazaki, Kentaro Inui, and Masaaki Nagata. 2018. Unsupervised Token-wise Alignment to Improve Interpretation of Encoder-Decoder Models. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 74–81, Brussels, Belgium. Association for Computational Linguistics.
- Cite (Informal): Unsupervised Token-wise Alignment to Improve Interpretation of Encoder-Decoder Models (Kiyono et al., EMNLP 2018)
- PDF: https://preview.aclanthology.org/ingestion-script-update/W18-5410.pdf