Abstract
In recent times, large language models (LLMs) have shown impressive performance on various document-level tasks such as document classification, summarization, and question answering. However, research on their ability to detect self-contradictions in long documents has been very limited. In this work, we introduce ContraDoc, the first human-annotated dataset for studying self-contradictions in long documents, covering multiple domains, varying document lengths, self-contradiction types, and appearance scopes. We then analyze the capabilities of four state-of-the-art open-source and commercially available LLMs on this dataset: GPT3.5, GPT4, PaLM2, and LLaMAv2. While GPT4 performs best and can outperform humans on this task, we find that it is still unreliable and struggles with self-contradictions that require more nuance and context. We release the dataset and all the code associated with the experiments.
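To make the evaluation setup concrete, the sketch below shows one way such an analysis can be run: prompting a chat LLM to judge whether a document contradicts itself. This is a minimal illustration, not the authors' released code; the prompt wording, model choice, answer format, and example document are all assumptions.

```python
# Minimal sketch of evaluating an LLM on self-contradiction detection.
# NOT the authors' released harness: the prompt, model name, and parsing
# below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Read the following document carefully. Does it contain a "
    "self-contradiction, i.e., two statements that cannot both be true? "
    "Answer YES or NO, and if YES, quote the contradictory sentences.\n\n"
    "Document:\n{document}"
)

def judge_self_contradiction(document: str, model: str = "gpt-4") -> str:
    """Ask the model whether `document` contradicts itself."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # keep judgments as deterministic as possible
        messages=[{"role": "user", "content": PROMPT.format(document=document)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    doc = (
        "The museum opened in 1901 and has welcomed visitors ever since. "
        "Founded in 1925, it remains the city's oldest public gallery."
    )
    # Expected: YES, quoting the two conflicting founding dates.
    print(judge_self_contradiction(doc))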
- Anthology ID: 2024.naacl-long.362
- Volume: Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
- Month: June
- Year: 2024
- Address: Mexico City, Mexico
- Editors: Kevin Duh, Helena Gomez, Steven Bethard
- Venue: NAACL
- Publisher: Association for Computational Linguistics
- Pages: 6509–6523
- URL: https://aclanthology.org/2024.naacl-long.362
- Cite (ACL): Jierui Li, Vipul Raheja, and Dhruv Kumar. 2024. ContraDoc: Understanding Self-Contradictions in Documents with Large Language Models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 6509–6523, Mexico City, Mexico. Association for Computational Linguistics.
- Cite (Informal): ContraDoc: Understanding Self-Contradictions in Documents with Large Language Models (Li et al., NAACL 2024)
- PDF: https://preview.aclanthology.org/jeptaln-2024-ingestion/2024.naacl-long.362.pdf