Abstract
In NLP, incremental processors produce output in instalments, based on incoming prefixes of the linguistic input. Some tokens trigger revisions, causing edits to the output hypothesis, but little is known about why models revise when they revise. A policy that detects the time steps where revisions should happen can improve efficiency. Still, retrieving a suitable signal to train a revision policy is an open problem, since it is not naturally available in datasets. In this work, we investigate the appropriateness of regressions and skips in human reading eye-tracking data as signals to inform revision policies in incremental sequence labelling. Using generalised mixed-effects models, we find that the probability of regressions and skips by humans can potentially serve as useful predictors for revisions in BiLSTMs and Transformer models, with consistent results for various languages.
- Anthology ID:
- 2023.conll-1.22
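As a rough illustration of the kind of analysis the abstract describes, the sketch below fits a mixed-effects logistic regression that predicts whether an incremental tagger revised its output at a token from the human regression and skip probabilities for that token, with a random intercept per sentence. It uses the Bayesian binomial mixed GLM from statsmodels; the file name, column names, and random-effects structure are hypothetical stand-ins, not the paper's actual model specification.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical token-level data, one row per token:
#   revised      -- 1 if the incremental model edited its hypothesis here, else 0
#   p_regression -- human regression probability for this token (eye-tracking)
#   p_skip       -- human skip probability for this token (eye-tracking)
#   sentence_id  -- grouping factor for the random intercept
df = pd.read_csv("tokens.csv")

# Binomial mixed GLM: fixed effects for the two eye-tracking predictors,
# variance component for a per-sentence random intercept.
model = BinomialBayesMixedGLM.from_formula(
    "revised ~ p_regression + p_skip",
    {"sentence": "0 + C(sentence_id)"},
    df,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```

Positive, credibly non-zero coefficients on the eye-tracking predictors would correspond to the kind of association the abstract reports; the paper itself should be consulted for the actual predictors, link function, and grouping structure used.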
- Volume:
- Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)
- Month:
- December
- Year:
- 2023
- Address:
- Singapore
- Editors:
- Jing Jiang, David Reitter, Shumin Deng
- Venue:
- CoNLL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 335–351
- URL:
- https://aclanthology.org/2023.conll-1.22
- DOI:
- 10.18653/v1/2023.conll-1.22
- Cite (ACL):
- Brielen Madureira, Pelin Çelikkol, and David Schlangen. 2023. Revising with a Backward Glance: Regressions and Skips during Reading as Cognitive Signals for Revision Policies in Incremental Processing. In Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL), pages 335–351, Singapore. Association for Computational Linguistics.
- Cite (Informal):
- Revising with a Backward Glance: Regressions and Skips during Reading as Cognitive Signals for Revision Policies in Incremental Processing (Madureira et al., CoNLL 2023)
- PDF:
- https://preview.aclanthology.org/landing_page/2023.conll-1.22.pdf