Abstract
In recent years, neural models have often outperformed rule-based and classic Machine Learning approaches in NLG. These classic approaches are now often disregarded, for example when new neural models are evaluated. We argue that they should not be overlooked, since, for some tasks, well-designed non-neural approaches achieve better performance than neural ones. In this paper, the task of generating referring expressions in linguistic context is used as an example. We examined two very different English datasets (WEBNLG and WSJ), and evaluated each algorithm using both automatic and human evaluations. Overall, the results of these evaluations suggest that rule-based systems with simple rule sets achieve on-par or better performance on both datasets compared to state-of-the-art neural REG systems. In the case of the more realistic dataset, WSJ, a machine learning-based system with well-designed linguistic features performed best. We hope that our work can encourage researchers to consider non-neural models in the future.
- Anthology ID:
- 2022.acl-long.380
- Volume:
- Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month:
- May
- Year:
- 2022
- Address:
- Dublin, Ireland
- Editors:
- Smaranda Muresan, Preslav Nakov, Aline Villavicencio
- Venue:
- ACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 5554–5567
- URL:
- https://aclanthology.org/2022.acl-long.380
- DOI:
- 10.18653/v1/2022.acl-long.380
- Cite (ACL):
- Fahime Same, Guanyi Chen, and Kees Van Deemter. 2022. Non-neural Models Matter: a Re-evaluation of Neural Referring Expression Generation Systems. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5554–5567, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal):
- Non-neural Models Matter: a Re-evaluation of Neural Referring Expression Generation Systems (Same et al., ACL 2022)
- PDF:
- https://preview.aclanthology.org/nschneid-patch-1/2022.acl-long.380.pdf
- Data
- WebNLG