Zhangdie Yuan
2023
Can Pretrained Language Models (Yet) Reason Deductively?
Zhangdie Yuan | Songbo Hu | Ivan Vulić | Anna Korhonen | Zaiqiao Meng
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Acquiring factual knowledge with Pretrained Language Models (PLMs) has attracted increasing attention, showing promising performance in many knowledge-intensive tasks. Their good performance has led the community to believe that the models do possess a modicum of reasoning competence rather than merely memorising the knowledge. In this paper, we conduct a comprehensive evaluation of the learnable deductive (also known as explicit) reasoning capability of PLMs. Through a series of controlled experiments, we posit two main findings. 1) PLMs inadequately generalise learned logic rules and perform inconsistently against simple adversarial surface form edits. 2) While the deductive reasoning fine-tuning of PLMs does improve their performance on reasoning over unseen knowledge facts, it results in catastrophically forgetting the previously learnt knowledge. Our main results suggest that PLMs cannot yet perform reliable deductive reasoning, demonstrating the importance of controlled examinations and probing of PLMs’ deductive reasoning abilities; we reach beyond (misleading) task performance, revealing that PLMs are still far from robust reasoning capabilities, even for simple deductive tasks.
2022
Varifocal Question Generation for Fact-checking
Nedjma Ousidhoum | Zhangdie Yuan | Andreas Vlachos
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Fact-checking requires retrieving evidence related to a claim under investigation. The task can be formulated as question generation based on a claim, followed by question answering. However, recent question generation approaches assume that the answer is known and typically contained in a passage given as input, whereas such passages are what is being sought when verifying a claim. In this paper, we present Varifocal, a method that generates questions based on different focal points within a given claim, i.e. different spans of the claim and its metadata, such as its source and date. Our method outperforms previous work on a fact-checking question generation dataset on a wide range of automatic evaluation metrics. These results are corroborated by our manual evaluation, which indicates that our method generates more relevant and informative questions. We further demonstrate the potential of focal points in generating sets of clarification questions for product descriptions.