It is better to Verify: Semi-Supervised Learning with a human in the loop for large-scale NLU models

Verena Weber, Enrico Piovano, Melanie Bradford


Abstract
When an NLU model is updated, new utterances must be annotated to be included for training. However, manual annotation is very costly. We evaluate a semi-supervised learning workflow with a human in the loop in a production environment. The previous NLU model predicts the annotation of the new utterances; a human then reviews the predicted annotation. Only when the NLU prediction is assessed as incorrect is the utterance sent for human annotation. Experimental results show that the proposed workflow boosts the performance of the NLU model while significantly reducing the annotation volume. Specifically, in our setup, we see improvements of up to 14.16% for a recall-based metric and up to 9.57% for an F1-score-based metric, while reducing the annotation volume by 97% and the overall cost by 60% for each iteration.
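The workflow in the abstract reduces to a simple routing rule: accept the previous model's annotation when a reviewer confirms it, and fall back to full manual annotation only when the prediction is rejected. Below is a minimal, hypothetical Python sketch of that loop; the model interface and the helper callables (`human_verifies`, `human_annotates`) are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the verify-then-annotate loop described in the abstract,
# assuming a generic NLU model exposing a .predict() method. All names here
# are hypothetical placeholders, not the authors' implementation.

from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Utterance:
    text: str
    annotation: Optional[str] = None  # e.g. intent/slot labels; schema-dependent


def verification_workflow(
    new_utterances: List[Utterance],
    model,                                       # previous NLU model
    human_verifies: Callable[[str, str], bool],  # cheap check: is the prediction correct?
    human_annotates: Callable[[str], str],       # expensive full manual annotation
) -> List[Utterance]:
    """Route each utterance through verification before costly annotation."""
    training_data = []
    for utt in new_utterances:
        predicted = model.predict(utt.text)
        if human_verifies(utt.text, predicted):
            # Prediction confirmed: keep the machine annotation as-is.
            utt.annotation = predicted
        else:
            # Prediction rejected: fall back to full human annotation.
            utt.annotation = human_annotates(utt.text)
        training_data.append(utt)
    # The verified/annotated pool is then used to retrain the next model iteration.
    return training_data
```

Verification is a binary judgment per utterance, which is why routing most utterances through it, rather than through annotation from scratch, drives the cost savings the abstract reports.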
Anthology ID:
2021.dash-1.2
Volume:
Proceedings of the Second Workshop on Data Science with Human in the Loop: Language Advances
Month:
June
Year:
2021
Address:
Online
Editors:
Eduard Dragut, Yunyao Li, Lucian Popa, Slobodan Vucetic
Venue:
DaSH
Publisher:
Association for Computational Linguistics
Pages:
8–15
URL:
https://aclanthology.org/2021.dash-1.2
DOI:
10.18653/v1/2021.dash-1.2
Cite (ACL):
Verena Weber, Enrico Piovano, and Melanie Bradford. 2021. It is better to Verify: Semi-Supervised Learning with a human in the loop for large-scale NLU models. In Proceedings of the Second Workshop on Data Science with Human in the Loop: Language Advances, pages 8–15, Online. Association for Computational Linguistics.
Cite (Informal):
It is better to Verify: Semi-Supervised Learning with a human in the loop for large-scale NLU models (Weber et al., DaSH 2021)
PDF:
https://preview.aclanthology.org/nschneid-patch-5/2021.dash-1.2.pdf