TAPIR: Learning Adaptive Revision for Incremental Natural Language Understanding with a Two-Pass Model

Patrick Kahardipraja, Brielen Madureira, David Schlangen


Abstract
Language is by its very nature incremental in how it is produced and processed. This property can be exploited by NLP systems to produce fast responses, which has been shown to be beneficial for real-time interactive applications. Recent neural network-based approaches for incremental processing mainly use RNNs or Transformers. RNNs are fast but monotonic (cannot correct earlier output, which can be necessary in incremental processing). Transformers, on the other hand, consume whole sequences, and hence are by nature non-incremental. A restart-incremental interface that repeatedly passes longer input prefixes can be used to obtain partial outputs, while providing the ability to revise. However, this method becomes costly as the sentence grows longer. In this work, we propose the Two-pass model for AdaPtIve Revision (TAPIR) and introduce a method to obtain an incremental supervision signal for learning an adaptive revision policy. Experimental results on sequence labelling show that our model has better incremental performance and faster inference speed compared to restart-incremental Transformers, while showing little degradation on full sequences.
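The abstract contrasts two decoding regimes: a restart-incremental interface, which re-runs a non-incremental model on every growing prefix, and TAPIR's two-pass scheme, where a fast monotonic pass emits one label per token and a learned policy decides when to pay for a revision. The sketch below is a minimal illustration of that contrast, not the authors' implementation; all function names (label_prefix, label_next, revise_prefix, should_revise) and the toy labeller are hypothetical stand-ins.

```python
# Minimal sketch (NOT the TAPIR code): restart-incremental decoding vs.
# a two-pass loop with a policy-triggered revision, as described in the
# abstract. Every callable below is a hypothetical placeholder.

from typing import Callable, List

Labeller = Callable[[List[str]], List[str]]  # token prefix -> label sequence


def restart_incremental(tokens: List[str], label_prefix: Labeller) -> List[List[str]]:
    """Re-run a non-incremental model on every growing prefix.

    Any earlier label may change between steps (revision comes for free),
    but each step is a full forward pass, so total work grows
    quadratically with sentence length.
    """
    return [label_prefix(tokens[:t]) for t in range(1, len(tokens) + 1)]


def adaptive_revision(
    tokens: List[str],
    label_next: Callable[[List[str]], str],                  # cheap monotonic step
    revise_prefix: Labeller,                                 # costly second pass
    should_revise: Callable[[List[str], List[str]], bool],   # learned policy
) -> List[str]:
    """Two-pass style decoding: append one label per new token, and only
    recompute the whole prefix when the policy asks for a revision."""
    labels: List[str] = []
    for t in range(1, len(tokens) + 1):
        labels.append(label_next(tokens[:t]))
        if should_revise(tokens[:t], labels):
            labels = revise_prefix(tokens[:t])  # revise earlier outputs
    return labels


if __name__ == "__main__":
    # Toy demo: "label" tokens by word-length parity; revise whenever the
    # prefix ends in a period (a stand-in for a learned trigger).
    toy = lambda prefix: ["EVEN" if len(w) % 2 == 0 else "ODD" for w in prefix]
    sent = "the old man the boats .".split()
    print(restart_incremental(sent, toy)[-1])
    print(adaptive_revision(sent, lambda p: toy(p)[-1], toy,
                            lambda p, l: p[-1] == "."))
```

The point of the contrast: restart-incremental processing gets revision by brute force (one full pass per prefix), while the adaptive policy only invokes the expensive pass when a revision is likely needed, which is where the paper's reported speed gains come from.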
Anthology ID:
2023.findings-acl.257
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4173–4197
URL:
https://aclanthology.org/2023.findings-acl.257
DOI:
10.18653/v1/2023.findings-acl.257
Bibkey:
Cite (ACL):
Patrick Kahardipraja, Brielen Madureira, and David Schlangen. 2023. TAPIR: Learning Adaptive Revision for Incremental Natural Language Understanding with a Two-Pass Model. In Findings of the Association for Computational Linguistics: ACL 2023, pages 4173–4197, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
TAPIR: Learning Adaptive Revision for Incremental Natural Language Understanding with a Two-Pass Model (Kahardipraja et al., Findings 2023)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2023.findings-acl.257.pdf
Video:
https://preview.aclanthology.org/emnlp-22-attachments/2023.findings-acl.257.mp4