@inproceedings{park-kim-2025-llms,
    title = "Where do {LLM}s Encode the Knowledge to Assess the Ambiguity?",
    author = "Park, Hancheol  and
      Kim, Geonmin",
    editor = "Rambow, Owen  and
      Wanner, Leo  and
      Apidianaki, Marianna  and
      Al-Khalifa, Hend  and
      Eugenio, Barbara Di  and
      Schockaert, Steven  and
      Darwish, Kareem  and
      Agarwal, Apoorv",
    booktitle = "Proceedings of the 31st International Conference on Computational Linguistics: Industry Track",
    month = jan,
    year = "2025",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.coling-industry.38/",
    pages = "445--452",
    abstract = "Recently, large language models (LLMs) have shown remarkable performance across various natural language processing tasks, thanks to their vast amount of knowledge. Nevertheless, they often generate unreliable responses. A common example is providing a single biased answer to an ambiguous question that could have multiple correct answers. To address this issue, in this study, we discuss methods to detect such ambiguous samples. More specifically, we propose a classifier that uses a representation from an intermediate layer of the LLM as input. This is based on observations from previous research that representations of ambiguous samples in intermediate layers are closer to those of relevant label samples in the embedding space, but not necessarily in higher layers. The experimental results demonstrate that using representations from intermediate layers detects ambiguous input prompts more effectively than using representations from the final layer. Furthermore, in this study, we propose a method to train such classifiers without ambiguity labels, as most datasets lack labels regarding the ambiguity of samples, and evaluate its effectiveness."
}

Markdown (Informal)
[Where do LLMs Encode the Knowledge to Assess the Ambiguity?](https://preview.aclanthology.org/ingest-emnlp/2025.coling-industry.38/) (Park & Kim, COLING 2025)
ACL

Hancheol Park and Geonmin Kim. 2025. Where do LLMs Encode the Knowledge to Assess the Ambiguity?. In Proceedings of the 31st International Conference on Computational Linguistics: Industry Track, pages 445–452, Abu Dhabi, UAE. Association for Computational Linguistics.