Abstract
We use dialogue act recognition (DAR) to investigate how well BERT represents utterances in dialogue, and how fine-tuning and large-scale pre-training contribute to its performance. We find that while both standard BERT pre-training and pre-training on dialogue-like data are useful, task-specific fine-tuning is essential for good performance.
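The setup described here treats dialogue act recognition as utterance-level classification with a fine-tuned BERT encoder. The snippet below is a minimal sketch of that kind of fine-tuning loop using the Hugging Face transformers library; the model name, the toy label set, and the example utterances are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: fine-tuning BERT for dialogue act recognition as
# utterance-level sequence classification (not the authors' code).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical dialogue act labels; real DAR corpora use much larger tag sets.
DA_LABELS = ["statement", "question", "backchannel", "agreement"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(DA_LABELS)
)

# Toy (utterance, dialogue act) training pairs for illustration only.
train_data = [
    ("do you want to grab lunch", "question"),
    ("uh-huh", "backchannel"),
    ("i think that's right", "agreement"),
    ("we met on tuesday", "statement"),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for utterance, act in train_data:  # in practice: batched DataLoader, several epochs
    inputs = tokenizer(utterance, return_tensors="pt", truncation=True)
    labels = torch.tensor([DA_LABELS.index(act)])
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```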
- Anthology ID: 2021.iwcs-1.16
- Volume: Proceedings of the 14th International Conference on Computational Semantics (IWCS)
- Month: June
- Year: 2021
- Address: Groningen, The Netherlands (online)
- Editors: Sina Zarrieß, Johan Bos, Rik van Noord, Lasha Abzianidze
- Venue: IWCS
- SIG: SIGSEM
- Publisher: Association for Computational Linguistics
- Pages: 166–172
- URL: https://aclanthology.org/2021.iwcs-1.16
- Cite (ACL): Bill Noble and Vladislav Maraev. 2021. Large-scale text pre-training helps with dialogue act recognition, but not without fine-tuning. In Proceedings of the 14th International Conference on Computational Semantics (IWCS), pages 166–172, Groningen, The Netherlands (online). Association for Computational Linguistics.
- Cite (Informal): Large-scale text pre-training helps with dialogue act recognition, but not without fine-tuning (Noble & Maraev, IWCS 2021)
- PDF: https://aclanthology.org/2021.iwcs-1.16.pdf
- Data: OpenSubtitles