Abstract
Deep NLP models benefit from underlying structures in the data (e.g., parse trees), typically extracted using off-the-shelf parsers. Recent attempts to jointly learn the latent structure encounter a tradeoff: either make factorization assumptions that limit expressiveness, or sacrifice end-to-end differentiability. Using the recently proposed SparseMAP inference, which retrieves a sparse distribution over latent structures, we propose a novel approach for end-to-end learning of latent structure predictors jointly with a downstream predictor. To the best of our knowledge, our method is the first to enable unrestricted dynamic computation graph construction from the global latent structure, while maintaining differentiability.
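The abstract's central mechanism (a sparse distribution over latent structures means the downstream model only builds computation for structures that receive nonzero probability) can be illustrated in a few lines. The following is a minimal sketch, not the paper's implementation: it assumes a small, enumerable set of candidate structures and uses sparsemax, the unstructured special case of SparseMAP, in place of the structured active-set solver from the authors' vene/sparsemap library. All structure names, scores, and downstream functions here are hypothetical.

```python
import numpy as np

def sparsemax(scores):
    """Euclidean projection of scores onto the probability simplex.
    Unlike softmax, the result can assign exactly zero probability to
    low-scoring entries (Martins & Astudillo, 2016)."""
    z = np.sort(scores)[::-1]                 # sort descending
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z)
    support = z + (1.0 - cumsum) / k > 0      # entries kept in the support
    k_max = k[support][-1]
    tau = (cumsum[k_max - 1] - 1.0) / k_max   # threshold
    return np.maximum(scores - tau, 0.0)

# Hypothetical candidate "structures", each inducing a different
# downstream computation over the input.
structures = {
    "left-branching":  lambda x: x[::-1].cumsum(),
    "right-branching": lambda x: x.cumsum(),
    "flat":            lambda x: np.full_like(x, x.mean()),
}

scores = np.array([2.0, 1.5, -1.0])  # hypothetical structure scores
probs = sparsemax(scores)            # [0.75, 0.25, 0.0]: exactly sparse

x = np.array([1.0, 2.0, 3.0, 4.0])
# Dynamic computation graph: only structures with nonzero probability
# are ever instantiated; zero-probability branches are skipped entirely.
output = sum(p * run(x) for p, run in zip(probs, structures.values()) if p > 0)
print(probs, output)
```

In the paper's actual setting the structures (e.g., dependency trees) are exponentially many and cannot be enumerated; SparseMAP's active-set algorithm nonetheless returns a distribution supported on only a handful of them, which is what makes this per-structure dispatch both tractable and end-to-end differentiable.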
- Anthology ID:
- D18-1108
- Volume:
- Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
- Month:
- October-November
- Year:
- 2018
- Address:
- Brussels, Belgium
- Editors:
- Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
- Venue:
- EMNLP
- SIG:
- SIGDAT
- Publisher:
- Association for Computational Linguistics
- Pages:
- 905–911
- URL:
- https://aclanthology.org/D18-1108
- DOI:
- 10.18653/v1/D18-1108
- Cite (ACL):
- Vlad Niculae, André F. T. Martins, and Claire Cardie. 2018. Towards Dynamic Computation Graphs via Sparse Latent Structure. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 905–911, Brussels, Belgium. Association for Computational Linguistics.
- Cite (Informal):
- Towards Dynamic Computation Graphs via Sparse Latent Structure (Niculae et al., EMNLP 2018)
- PDF:
- https://aclanthology.org/D18-1108.pdf
- Code:
- vene/sparsemap
- Data:
- SNLI, SST