Abstract
The input to a neural sequence-to-sequence model is often determined by an upstream system, e.g., a word segmenter, part-of-speech tagger, or speech recognizer. These upstream models are potentially error-prone. Representing inputs as word lattices makes this uncertainty explicit by capturing alternative sequences and their posterior probabilities in a compact form. In this work, we extend the TreeLSTM (Tai et al., 2015) into a LatticeLSTM that is able to consume word lattices and can be used as the encoder in an attentional encoder-decoder model. We integrate lattice posterior scores into this architecture by extending the TreeLSTM's child-sum and forget gates and by introducing a bias term into the attention mechanism. We experiment with speech translation lattices and report consistent improvements over baselines that translate either the 1-best hypothesis or the lattice without posterior scores.
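To make the two mechanisms named in the abstract concrete, here is a minimal NumPy sketch of a child-sum LSTM node update over a lattice, where incoming edges carry posterior scores that weight the child-sum and the per-predecessor forget gates, plus a softmax attention biased by log posteriors. The `LatticeLSTMCell` class, the renormalization of posteriors over incoming edges, and the `lam` weight on the attention bias are illustrative assumptions, not the authors' exact parameterization; see the paper for the actual equations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LatticeLSTMCell:
    """Sketch of a child-sum LSTM cell whose multiple predecessor
    states are weighted by lattice posterior scores (hypothetical
    parameterization, not the paper's exact one)."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # One weight set per gate: input (i), forget (f), output (o), candidate (u).
        self.W = {g: rng.normal(0, 0.1, (hidden_dim, input_dim)) for g in "ifou"}
        self.U = {g: rng.normal(0, 0.1, (hidden_dim, hidden_dim)) for g in "ifou"}
        self.b = {g: np.zeros(hidden_dim) for g in "ifou"}

    def step(self, x, pred_h, pred_c, posteriors):
        """x: input embedding at this lattice node.
        pred_h, pred_c: hidden/cell states of the (>= 1) predecessor nodes.
        posteriors: lattice posterior score of each incoming edge."""
        p = np.asarray(posteriors, dtype=float)
        p = p / p.sum()  # renormalize over this node's incoming edges
        # Posterior-weighted child sum instead of the TreeLSTM's plain sum.
        h_sum = sum(pk * hk for pk, hk in zip(p, pred_h))
        gate = lambda g, h: sigmoid(self.W[g] @ x + self.U[g] @ h + self.b[g])
        i = gate("i", h_sum)
        o = gate("o", h_sum)
        u = np.tanh(self.W["u"] @ x + self.U["u"] @ h_sum + self.b["u"])
        # One forget gate per predecessor, as in the child-sum TreeLSTM,
        # here additionally scaled by that edge's posterior.
        c = i * u + sum(pk * gate("f", hk) * ck
                        for pk, hk, ck in zip(p, pred_h, pred_c))
        h = o * np.tanh(c)
        return h, c

def biased_attention_weights(scores, node_posteriors, lam=1.0):
    """Attention logits biased by log lattice posteriors, then softmaxed
    (hedged sketch of the abstract's attention bias term)."""
    logits = scores + lam * np.log(np.asarray(node_posteriors) + 1e-10)
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Usage: a node with two incoming edges whose posteriors are 0.7 and 0.3.
cell = LatticeLSTMCell(input_dim=4, hidden_dim=8)
h, c = cell.step(np.ones(4),
                 [np.zeros(8), np.zeros(8)],
                 [np.zeros(8), np.zeros(8)],
                 posteriors=[0.7, 0.3])
```

On a linear-chain lattice (one predecessor per node with posterior 1.0), this reduces to a standard LSTM step, which is why the architecture can drop in as the encoder of an ordinary attentional encoder-decoder model.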
- Anthology ID: D17-1145
- Volume: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
- Month: September
- Year: 2017
- Address: Copenhagen, Denmark
- Venue: EMNLP
- SIG: SIGDAT
- Publisher: Association for Computational Linguistics
- Pages: 1380–1389
- URL: https://aclanthology.org/D17-1145
- DOI: 10.18653/v1/D17-1145
- Cite (ACL): Matthias Sperber, Graham Neubig, Jan Niehues, and Alex Waibel. 2017. Neural Lattice-to-Sequence Models for Uncertain Inputs. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1380–1389, Copenhagen, Denmark. Association for Computational Linguistics.
- Cite (Informal): Neural Lattice-to-Sequence Models for Uncertain Inputs (Sperber et al., EMNLP 2017)
- PDF: https://preview.aclanthology.org/nodalida-main-page/D17-1145.pdf