Abstract
Automatic grammatical error correction (GEC) research has made remarkable progress in the past decade. However, all existing approaches to GEC correct errors by considering a single sentence alone, ignoring crucial cross-sentence context. Some errors can only be corrected reliably using cross-sentence context, and models can also benefit from the additional contextual information when correcting other errors. In this paper, we address this serious limitation of existing approaches and improve strong neural encoder-decoder models by appropriately modeling wider contexts. We employ an auxiliary encoder that encodes previous sentences and incorporate the encoding into the decoder via attention and gating mechanisms. Our approach yields statistically significant improvements in overall GEC performance over strong baselines across multiple test sets. Analysis of our cross-sentence GEC model on a synthetic dataset shows high performance on verb tense corrections that require cross-sentence context.
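The auxiliary-encoder design sketched in the abstract can be pictured with a minimal PyTorch snippet. This is an illustrative sketch, not the authors' implementation (see nusnlp/crosentgec for the actual code): the names `ContextGate`, `dec_state`, and `ctx_enc` are hypothetical, and the dot-product attention and sigmoid gate stand in for whatever concrete forms the paper uses.

```python
# Minimal sketch of gated cross-sentence fusion (assumed form, not the
# authors' code; see nusnlp/crosentgec for the real implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextGate(nn.Module):
    """Attend over auxiliary (previous-sentence) encoder outputs and gate
    how much of the resulting context vector enters the decoder state."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.attn = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, dec_state: torch.Tensor, ctx_enc: torch.Tensor) -> torch.Tensor:
        # dec_state: (batch, hidden)          current decoder hidden state
        # ctx_enc:   (batch, ctx_len, hidden) auxiliary encoder outputs
        scores = torch.bmm(ctx_enc, self.attn(dec_state).unsqueeze(2))  # (batch, ctx_len, 1)
        weights = F.softmax(scores, dim=1)
        ctx_vec = (weights * ctx_enc).sum(dim=1)  # (batch, hidden)
        # Sigmoid gate interpolates between the decoder state and the
        # attended cross-sentence context.
        g = torch.sigmoid(self.gate(torch.cat([dec_state, ctx_vec], dim=-1)))
        return g * dec_state + (1 - g) * ctx_vec

# Usage: fuse a batch of 4 decoder states with 20 context positions each.
fused = ContextGate(512)(torch.randn(4, 512), torch.randn(4, 20, 512))  # (4, 512)
```

The gate lets the decoder decide, per dimension, how much cross-sentence information to admit, so sentences that need no wider context can largely ignore the auxiliary encoding.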
- Anthology ID: P19-1042
- Volume: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
- Month: July
- Year: 2019
- Address: Florence, Italy
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 435–445
- URL: https://aclanthology.org/P19-1042
- DOI: 10.18653/v1/P19-1042
- Cite (ACL): Shamil Chollampatt, Weiqi Wang, and Hwee Tou Ng. 2019. Cross-Sentence Grammatical Error Correction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 435–445, Florence, Italy. Association for Computational Linguistics.
- Cite (Informal): Cross-Sentence Grammatical Error Correction (Chollampatt et al., ACL 2019)
- PDF: https://preview.aclanthology.org/nodalida-main-page/P19-1042.pdf
- Code: nusnlp/crosentgec
- Data: FCE