Learning Semantic Structure through First-Order-Logic Translation

Akshay Chaturvedi, Nicholas Asher


Abstract
In this paper, we study whether transformer-based language models can extract predicate-argument structure from simple sentences. We first show that language models sometimes confuse which predicates apply to which objects. To mitigate this, we explore two tasks, question answering (Q/A) and first-order logic (FOL) translation, and two regimes, prompting and finetuning. For FOL translation, we finetune several large language models on synthetic datasets designed to gauge their generalization abilities. For Q/A, we finetune encoder models like BERT and RoBERTa and use prompting for LLMs. The results show that, for LLMs, FOL translation is better suited to learning predicate-argument structure.
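To make the two probing formats in the abstract concrete, here is a minimal Python sketch of what one training instance for each task might look like. The paper's actual dataset format is not reproduced on this page, so the example sentence, the FOL rendering, and the field names below are illustrative assumptions, not the authors' format.

    # Illustrative sketch only: the paper's exact data format is not shown
    # on this page, so these field names and the FOL rendering are assumptions.

    # FOL translation: the model maps a simple sentence to a first-order
    # logic formula that makes the predicate-argument structure explicit.
    fol_example = {
        "sentence": "The cat chased the dog.",
        "fol": "exists x. exists y. (cat(x) & dog(y) & chased(x, y))",
    }

    # Q/A: the same structure is probed with a question whose answer
    # depends on which predicate applies to which object.
    qa_example = {
        "sentence": "The cat chased the dog.",
        "question": "Did the dog chase the cat?",
        "answer": "no",
    }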
Anthology ID: 2024.findings-emnlp.390
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 6669–6680
URL: https://aclanthology.org/2024.findings-emnlp.390
DOI: 10.18653/v1/2024.findings-emnlp.390
Cite (ACL): Akshay Chaturvedi and Nicholas Asher. 2024. Learning Semantic Structure through First-Order-Logic Translation. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 6669–6680, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Learning Semantic Structure through First-Order-Logic Translation (Chaturvedi & Asher, Findings 2024)
PDF: https://preview.aclanthology.org/dois-2013-emnlp/2024.findings-emnlp.390.pdf
Data: 2024.findings-emnlp.390.data.zip