@inproceedings{yu-etal-2025-prunecd,
    title = "{P}rune{CD}: Contrasting Pruned Self Model to Improve Decoding Factuality",
    author = "Yu, Byeongho  and
      Lee, Changhun  and
      Jin, Jun-gyu  and
      Park, Eunhyeok",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1651/",
    pages = "32450--32461",
    ISBN = "979-8-89176-332-6",
    abstract = "To mitigate the hallucination problem in large language models, DoLa exploits early exit logits from the same model as a contrastive prior. However, we found that these early exit logits tend to be flat, low in magnitude, and fail to reflect meaningful contrasts. To address this, we propose PruneCD, a novel contrastive decoding method that constructs the amateur model via layer pruning rather than early exit. This design leads to more informative and well-aligned logits, enabling more effective contrastive decoding. Through qualitative and quantitative analyses, we demonstrate that PruneCD consistently improves factuality with minimal inference overhead, offering a robust and practical approach to mitigating hallucinations in LLMs."
}

Markdown (Informal)
[PruneCD: Contrasting Pruned Self Model to Improve Decoding Factuality](https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1651/) (Yu et al., EMNLP 2025)
ACL

Byeongho Yu, Changhun Lee, Jun-gyu Jin, and Eunhyeok Park. 2025. PruneCD: Contrasting Pruned Self Model to Improve Decoding Factuality. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 32450–32461, Suzhou, China. Association for Computational Linguistics.
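
The abstract describes contrastive decoding in which the "amateur" model is a layer-pruned copy of the full model rather than an early-exit head. The snippet below is a minimal sketch of that general idea using Hugging Face Transformers; it is not the authors' released PruneCD implementation. The model name, the pruned layer range (8–15), the plausibility threshold `alpha`, the contrast weight `beta`, and greedy selection are all illustrative assumptions.

```python
import copy
import math

import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumed: any LLaMA-style causal LM
tok = AutoTokenizer.from_pretrained(model_name)
expert = AutoModelForCausalLM.from_pretrained(model_name)

# Amateur model: a copy of the expert with a block of middle decoder layers removed.
# The pruned range 8-15 is an illustrative assumption, not the paper's choice.
amateur = copy.deepcopy(expert)
kept = [layer for i, layer in enumerate(amateur.model.layers) if not 8 <= i < 16]
amateur.model.layers = torch.nn.ModuleList(kept)
amateur.config.num_hidden_layers = len(kept)


@torch.no_grad()
def contrastive_next_token(input_ids, alpha=0.1, beta=1.0):
    """Greedy next-token choice from expert-minus-amateur log-probabilities."""
    log_p_expert = F.log_softmax(
        expert(input_ids, use_cache=False).logits[:, -1, :], dim=-1)
    log_p_amateur = F.log_softmax(
        amateur(input_ids, use_cache=False).logits[:, -1, :], dim=-1)
    # Plausibility constraint: only keep tokens the expert itself rates within
    # a factor of alpha of its most likely token.
    cutoff = log_p_expert.max(dim=-1, keepdim=True).values + math.log(alpha)
    scores = log_p_expert - beta * log_p_amateur
    scores = scores.masked_fill(log_p_expert < cutoff, float("-inf"))
    return scores.argmax(dim=-1)


prompt = "The capital of Australia is"
ids = tok(prompt, return_tensors="pt").input_ids
print(tok.decode(contrastive_next_token(ids)))
```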