@inproceedings{he-etal-2025-delta,
    title = "{D}e{LT}a: A Decoding Strategy based on Logit Trajectory Prediction Improves Factuality and Reasoning Ability",
    author = "He, Yunzhen  and
      Takase, Yusuke  and
      Shimodaira, Hidetoshi",
    booktitle = "Proceedings of the 2nd Workshop on Uncertainty-Aware NLP (UncertaiNLP 2025)",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.uncertainlp-main.26/",
    pages = "309--321",
    ISBN = "979-8-89176-349-4",
    abstract = "Large Language Models (LLMs) are increasingly being used in real-world applications. However, concerns about the reliability of the content they generate persist, as it frequently deviates from factual correctness or exhibits deficiencies in logical reasoning. This paper proposes a novel decoding strategy aimed at enhancing both factual accuracy and inferential reasoning without requiring any modifications to the architecture or pre-trained parameters of LLMs. Our approach adjusts next-token probabilities by analyzing the trajectory of logits from lower to higher layers in Transformers and applying linear regression. We find that this Decoding by Logit Trajectory-based approach (DeLTa) effectively reinforces factuality and reasoning while mitigating incorrect generation. Experiments on TruthfulQA demonstrate that DeLTa attains up to a 4.9{\%} improvement over the baseline. Furthermore, it enhances performance by up to 8.1{\%} on StrategyQA and 7.3{\%} on GSM8K, both of which demand strong reasoning capabilities."
}
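The abstract describes adjusting next-token probabilities by fitting a linear regression to the trajectory of logits across Transformer layers. A minimal sketch of that core idea follows; the function name, array shapes, and the choice of extrapolation target are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def extrapolate_logit_trajectory(layer_logits, target_layer):
    """Fit a per-token linear trend to logits across layers and
    extrapolate it to a hypothetical deeper layer.

    layer_logits: array of shape (num_layers, vocab_size), e.g. the
    logits obtained by applying the LM head to each layer's hidden
    state (an assumption for this sketch).
    target_layer: layer index to extrapolate to (may exceed num_layers).
    """
    num_layers, _ = layer_logits.shape
    x = np.arange(num_layers)
    # np.polyfit with a 2-D y fits one (slope, intercept) pair per
    # vocabulary column in a single call.
    slope, intercept = np.polyfit(x, layer_logits, deg=1)
    return slope * target_layer + intercept
```

The extrapolated logits could then be softmax-normalized in place of the final layer's logits during decoding; how DeLTa combines them with the baseline distribution is detailed in the paper itself.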