David Basil
2026
Word Surprisal Correlates with Sentential Contradiction in LLMs
Ning Shi | Bradley Hauer | David Basil | John Zhang | Grzegorz Kondrak
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) continue to achieve impressive performance on reasoning benchmarks, yet it remains unclear how their predictions capture semantic consistency between sentences. We investigate whether word-level surprisal correlates with sentence-level contradiction between a premise and a hypothesis. Specifically, we compute surprisal for hypothesis words across a diverse set of experimental variants and analyze its association with contradiction labels over multiple datasets and open-source LLMs. Because modern LLMs operate on subword tokens and cannot directly produce reliable word-level surprisal estimates, we introduce a token-to-word decoding algorithm that extends theoretically grounded probability estimation to open-vocabulary settings. Experiments show a consistent and statistically significant positive correlation between surprisal and contradiction across models and domains. Our analysis also provides new insights into the capabilities and limitations of current LLMs. Together, our findings suggest that surprisal can localize sentence-level inconsistency at the word level, establishing a quantitative link between lexical uncertainty and sentential semantics. We plan to release our code and data upon publication.
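The paper's own token-to-word decoding algorithm is not reproduced here, but the core idea of aggregating subword quantities into word-level surprisal can be illustrated with a minimal sketch: under the chain rule, a word's surprisal is the sum of the surprisals of its subword tokens. The function name, inputs, and the toy segmentation below are illustrative assumptions, not the authors' implementation.

```python
import math

def word_surprisal(token_logprobs, token_to_word):
    """Aggregate subword token surprisals into word-level surprisal (in bits).

    token_logprobs: natural-log probability of each subword token given its context.
    token_to_word: index of the word each subword token belongs to.
    By the chain rule, -log2 P(word | context) = sum of -log2 P(token | context)
    over the word's subword tokens.
    """
    n_words = max(token_to_word) + 1
    surprisals = [0.0] * n_words
    for lp, w in zip(token_logprobs, token_to_word):
        surprisals[w] += -lp / math.log(2)  # convert nats to bits
    return surprisals

# Toy example: one word split into three subword tokens with
# conditional probabilities 0.5, 0.25, and 0.5.
logprobs = [math.log(0.5), math.log(0.25), math.log(0.5)]
mapping = [0, 0, 0]
print(word_surprisal(logprobs, mapping))  # ≈ [4.0] bits
```

In practice the log-probabilities would come from an LLM's per-token output distribution, and the token-to-word mapping from the tokenizer's offset information.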
2025
UAlberta at SemEval-2025 Task 2: Prompting and Ensembling for Entity-Aware Translation
Ning Shi | David Basil | Bradley Hauer | Noshin Nawal | Jai Riley | Daniela Teodorescu | John Zhang | Grzegorz Kondrak
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
We describe the methods used by our UAlberta team for SemEval-2025 Task 2 on Entity-Aware Machine Translation (EA-MT). Our methods leverage large language models with prompt engineering strategies suited to this task, including retrieval-augmented generation and in-context learning. Our best overall results are obtained with ensembles of multiple models that leverage named-entity knowledge in the dataset. We also provide proof-of-concept experiments showing that lexico-semantic knowledge can be used to identify high-quality translations. Finally, we demonstrate that our methods can function even without gold named-entity translations, by using an alternative knowledge base such as BabelNet.