@inproceedings{li-etal-2023-pre,
    title = "Pre-training Multi-party Dialogue Models with Latent Discourse Inference",
    author = "Li, Yiyang  and
      Huang, Xinting  and
      Bi, Wei  and
      Zhao, Hai",
    editor = "Rogers, Anna  and
      Boyd-Graber, Jordan  and
      Okazaki, Naoaki",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2023.acl-long.533/",
    doi = "10.18653/v1/2023.acl-long.533",
    pages = "9584--9599",
    abstract = "Multi-party dialogues are more difficult for models to understand than one-to-one two-party dialogues, since they involve multiple interlocutors, resulting in interweaving reply-to relations and information flows. To step over these obstacles, an effective way is to pre-train a model that understands the discourse structure of multi-party dialogues, namely, to whom each utterance is replying. However, due to the lack of explicitly annotated discourse labels in multi-party dialogue corpora, previous works fail to scale up the pre-training process by putting aside the unlabeled multi-party conversational data for nothing. To fully utilize the unlabeled data, we propose to treat the discourse structures as latent variables, then jointly infer them and pre-train the discourse-aware model by unsupervised latent variable inference methods. Experiments on multiple downstream tasks show that our pre-trained model outperforms strong baselines by large margins and achieves state-of-the-art (SOTA) results, justifying the effectiveness of our method. The official implementation of this paper is available at \url{https://github.com/EricLee8/MPD_EMVI}."
}
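The core technique named in the abstract, treating each utterance's reply-to link as a latent variable and marginalizing over it so that no gold discourse labels are needed, can be illustrated with a short sketch. The toy PyTorch snippet below is an assumption-laden illustration, not the authors' implementation (see the linked MPD_EMVI repository for that); all names and shapes (`utter_enc`, `link_scorer`, `lm_head`, the 100-word toy vocabulary) are invented for the example.

```python
# Minimal sketch (assumptions, not the paper's code): treat utterance i's
# reply-to link as a latent variable over earlier utterances and train on
# the expected task loss under the inferred posterior (EM/ELBO-style).
import torch
import torch.nn.functional as F

torch.manual_seed(0)

T, H = 5, 16                                        # 5 utterances, hidden size 16
utter_enc = torch.randn(T, H, requires_grad=True)   # stand-in for a dialogue encoder's outputs
link_scorer = torch.nn.Bilinear(H, H, 1)            # scores "utterance i replies to utterance j"
lm_head = torch.nn.Linear(2 * H, 100)               # toy prediction head over a 100-word vocab
targets = torch.randint(0, 100, (T,))               # toy per-utterance targets

loss = 0.0
for i in range(1, T):
    parents = utter_enc[:i]                         # candidate parents: all earlier utterances
    scores = link_scorer(utter_enc[i].expand(i, H), parents).squeeze(-1)
    q = F.softmax(scores, dim=0)                    # inferred posterior over latent reply-to links
    for j in range(i):
        # Parent-conditioned prediction, weighted by the posterior: this
        # marginalization replaces the missing gold discourse annotation.
        ctx = torch.cat([utter_enc[i], parents[j]], dim=-1)
        nll = F.cross_entropy(lm_head(ctx).unsqueeze(0), targets[i:i + 1])
        loss = loss + q[j] * nll

loss.backward()   # gradients reach both the link scorer and the encoder
print(float(loss))
```

The point of the marginalization is that the link scorer and the dialogue encoder are trained jointly from unlabeled conversations, with the inferred posterior `q` playing the role of the E-step in an EM-style procedure.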