@inproceedings{frydenlund-2025-language,
    title = "Language Models, Graph Searching, and Supervision Adulteration: When More Supervision is Less and How to Make More More",
    author = "Frydenlund, Arvid",
    editor = "Che, Wanxiang  and
      Nabende, Joyce  and
      Shutova, Ekaterina  and
      Pilehvar, Mohammad Taher",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.acl-long.1409/",
    doi = "10.18653/v1/2025.acl-long.1409",
    pages = "29011--29059",
    ISBN = "979-8-89176-251-0",
    abstract = "This work concerns the path-star task, a minimal example of searching over a graph. The graph, $G$, is star-shaped with $D$ arms radiating from a start node, $s$. A language model (LM) is given $G$, $s$, and a target node, $t$, which ends one of the arms, and is tasked with generating the arm containing $t$. The minimal nature of this task means only a single choice needs to be made: which of the $D$ arms contains $t$? Decoder-only LMs fail to solve this elementary task above $1/D$ chance due to a learned shortcut that absorbs training supervision. We show how this pathology is caused by excess supervision and present a series of solutions demonstrating that the task is solvable via decoder-only LMs. We find that the task{'}s minimal nature causes its difficulty, as it prevents task decomposition. Our solutions provide insight into the pathology and its implications for LMs trained via next-token prediction."
}