@inproceedings{miao-kan-2025-discursive,
    title = "Discursive Circuits: How Do Language Models Understand Discourse Relations?",
    author = "Miao, Yisong  and
      Kan, Min-Yen",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1657/",
    pages = "32558--32577",
    ISBN = "979-8-89176-332-6",
    abstract = "Which components in transformer language models are responsible for discourse understanding? We hypothesize that sparse computational graphs, termed as discursive circuits, control how models process discourse relations. Unlike simpler tasks, discourse relations involve longer spans and complex reasoning. To make circuit discovery feasible, we introduce a task called Completion under Discourse Relation (CuDR), where a model completes a discourse given a specified relation. To support this task, we construct a corpus of minimal contrastive pairs tailored for activation patching in circuit discovery. Experiments show that sparse circuits ({\ensuremath{\approx}}0.2{\%} of a full GPT-2 model) recover discourse understanding in the English PDTB-based CuDR task. These circuits generalize well to unseen discourse frameworks such as RST and SDRT. Further analysis shows lower layers capture linguistic features such as lexical semantics and coreference, while upper layers encode discourse-level abstractions. Feature utility is consistent across frameworks (e.g., coreference supports Expansion-like relations)."
}