Are Pre-trained Transformers Robust in Intent Classification? A Missing Ingredient in Evaluation of Out-of-Scope Intent Detection
Jianguo Zhang, Kazuma Hashimoto, Yao Wan, Zhiwei Liu, Ye Liu, Caiming Xiong, Philip Yu
Abstract
Pre-trained Transformer-based models have been reported to be robust in intent classification. In this work, we first point out the importance of in-domain out-of-scope detection in few-shot intent recognition tasks, and then illustrate the vulnerability of pre-trained Transformer-based models to samples that are in-domain but out-of-scope (ID-OOS). We construct two new datasets and empirically show that pre-trained models perform poorly on both ID-OOS examples and general out-of-scope examples, especially on fine-grained few-shot intent detection tasks.
- Anthology ID: 2022.nlp4convai-1.2
- Volume: Proceedings of the 4th Workshop on NLP for Conversational AI
- Month: May
- Year: 2022
- Address: Dublin, Ireland
- Venue: NLP4ConvAI
- Publisher: Association for Computational Linguistics
- Pages: 12–20
- URL: https://aclanthology.org/2022.nlp4convai-1.2
- DOI: 10.18653/v1/2022.nlp4convai-1.2
- Cite (ACL): Jianguo Zhang, Kazuma Hashimoto, Yao Wan, Zhiwei Liu, Ye Liu, Caiming Xiong, and Philip Yu. 2022. Are Pre-trained Transformers Robust in Intent Classification? A Missing Ingredient in Evaluation of Out-of-Scope Intent Detection. In Proceedings of the 4th Workshop on NLP for Conversational AI, pages 12–20, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal): Are Pre-trained Transformers Robust in Intent Classification? A Missing Ingredient in Evaluation of Out-of-Scope Intent Detection (Zhang et al., NLP4ConvAI 2022)
- PDF: https://preview.aclanthology.org/starsem-semeval-split/2022.nlp4convai-1.2.pdf
- Data: BANKING77
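For readers unfamiliar with the setting the abstract describes, the sketch below illustrates the standard confidence-thresholding baseline for out-of-scope intent detection: a fine-tuned classifier flags an utterance as OOS when its maximum softmax probability falls below a threshold. This is a minimal illustration, not the paper's experimental setup; the backbone checkpoint, the 77-way label set (borrowed from BANKING77), and the threshold value are all assumptions for demonstration.

```python
# Minimal sketch of confidence-threshold OOS detection (illustrative only;
# not the paper's exact configuration or training procedure).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"   # assumed backbone; the paper compares several
NUM_INTENTS = 77                   # e.g., the BANKING77 label set
OOS_THRESHOLD = 0.7                # illustrative confidence threshold

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_INTENTS
)  # in practice this classification head would be fine-tuned on few-shot intent data
model.eval()

def predict_intent(utterance: str) -> int:
    """Return a predicted intent id, or -1 for out-of-scope.

    An utterance is flagged OOS when the classifier's maximum softmax
    probability falls below the threshold. ID-OOS inputs are hard for this
    scheme: their in-domain wording tends to yield high confidence on a
    wrong in-scope label, which is the vulnerability the paper examines.
    """
    inputs = tokenizer(utterance, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    confidence, label = probs.max(dim=-1)
    return label.item() if confidence.item() >= OOS_THRESHOLD else -1

print(predict_intent("How do I reactivate my frozen card?"))
```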