@inproceedings{noble-maraev-2021-large,
    title = "Large-scale text pre-training helps with dialogue act recognition, but not without fine-tuning",
    author = "Noble, Bill  and
      Maraev, Vladislav",
    editor = "Zarrie{\ss}, Sina  and
      Bos, Johan  and
      van Noord, Rik  and
      Abzianidze, Lasha",
    booktitle = "Proceedings of the 14th International Conference on Computational Semantics (IWCS)",
    month = jun,
    year = "2021",
    address = "Groningen, The Netherlands (online)",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2021.iwcs-1.16/",
    pages = "166--172",
    abstract = "We use dialogue act recognition (DAR) to investigate how well BERT represents utterances in dialogue, and how fine-tuning and large-scale pre-training contribute to its performance. We find that while both the standard BERT pre-training and pre-training on dialogue-like data are useful, task-specific fine-tuning is essential for good performance."
}

Markdown (Informal)
[Large-scale text pre-training helps with dialogue act recognition, but not without fine-tuning](https://preview.aclanthology.org/ingest-emnlp/2021.iwcs-1.16/) (Noble & Maraev, IWCS 2021)