Modeling Five Sentence Quality Representations by Finding Latent Spaces Produced with Deep Long Short-Memory Models
Abstract
We present a study in which we train neural models that approximate rules that assess the quality of English sentences. We model five rules using deep LSTMs trained over a dataset of sentences whose quality is evaluated under such rules. Preliminary results suggest the neural architecture can model such rules with high accuracy.
- Anthology ID: W19-3610
- Volume: Proceedings of the 2019 Workshop on Widening NLP
- Month: August
- Year: 2019
- Address: Florence, Italy
- Editors: Amittai Axelrod, Diyi Yang, Rossana Cunha, Samira Shaikh, Zeerak Waseem
- Venue: WiNLP
- Publisher: Association for Computational Linguistics
- Pages: 24–26
- URL: https://aclanthology.org/W19-3610
- Cite (ACL): Pablo Rivas. 2019. Modeling Five Sentence Quality Representations by Finding Latent Spaces Produced with Deep Long Short-Memory Models. In Proceedings of the 2019 Workshop on Widening NLP, pages 24–26, Florence, Italy. Association for Computational Linguistics.
- Cite (Informal): Modeling Five Sentence Quality Representations by Finding Latent Spaces Produced with Deep Long Short-Memory Models (Rivas, WiNLP 2019)
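To make the setup described in the abstract more concrete, the following is a minimal illustrative sketch of a deep (stacked) LSTM sentence-quality classifier in PyTorch. It is not the author's released code: the vocabulary size, embedding and hidden dimensions, stacking depth, and the choice to score all five rules as independent binary outputs of a single model are assumptions made for illustration only.

```python
# Minimal sketch (not the paper's implementation): a stacked LSTM reads a
# tokenized English sentence and emits a score for each of five hypothetical
# quality rules. All hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn


class DeepLSTMQualityModel(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=128,
                 hidden_dim=256, num_layers=3, num_rules=5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # "Deep" LSTM: several stacked recurrent layers.
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            num_layers=num_layers, batch_first=True)
        # One sigmoid output per quality rule (multi-label assumption).
        self.classifier = nn.Linear(hidden_dim, num_rules)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) tensor of word indices.
        embedded = self.embedding(token_ids)
        _, (hidden, _) = self.lstm(embedded)
        # Final hidden state of the top layer serves as the latent
        # sentence representation.
        latent = hidden[-1]
        return torch.sigmoid(self.classifier(latent))


if __name__ == "__main__":
    model = DeepLSTMQualityModel()
    dummy_batch = torch.randint(1, 10_000, (4, 20))  # 4 sentences, 20 tokens
    scores = model(dummy_batch)                      # shape: (4, 5)
    print(scores.shape)
```

Training such a model would typically minimize a binary cross-entropy loss against per-rule labels; the abstract does not specify these details, so that choice is likewise an assumption.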