Evaluating CxG Generalisation in LLMs via Construction-Based NLI Fine Tuning

Tom Mackintosh, Harish Tayyar Madabushi, Claire Bonial


Abstract
We probe large language models’ ability to learn deep form-meaning mappings as defined by construction grammars. We introduce ConTest-NLI, a benchmark of 80k sentences covering eight English constructions ranging from highly lexicalized to highly schematic. Our pipeline generates diverse synthetic NLI triples via templating and a model-in-the-loop filter, combined with human validation to ensure both difficulty and label reliability. Zero-shot tests on leading LLMs reveal a 24-point drop in accuracy between naturalistic (88%) and adversarial (64%) data, with schematic patterns proving hardest. Fine-tuning on a subset of ConTest-NLI yields up to 9% improvement, yet our results highlight persistent abstraction gaps in current LLMs and offer a scalable framework for evaluating construction-informed learning.
Anthology ID:
2025.cxgsnlp-1.19
Volume:
Proceedings of the Second International Workshop on Construction Grammars and NLP
Month:
September
Year:
2025
Address:
Düsseldorf, Germany
Editors:
Claire Bonial, Melissa Torgbi, Leonie Weissweiler, Austin Blodgett, Katrien Beuls, Paul Van Eecke, Harish Tayyar Madabushi
Venues:
CxGsNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
180–189
URL:
https://preview.aclanthology.org/iwcs-25-ingestion/2025.cxgsnlp-1.19/
Cite (ACL):
Tom Mackintosh, Harish Tayyar Madabushi, and Claire Bonial. 2025. Evaluating CxG Generalisation in LLMs via Construction-Based NLI Fine Tuning. In Proceedings of the Second International Workshop on Construction Grammars and NLP, pages 180–189, Düsseldorf, Germany. Association for Computational Linguistics.
Cite (Informal):
Evaluating CxG Generalisation in LLMs via Construction-Based NLI Fine Tuning (Mackintosh et al., CxGsNLP 2025)
PDF:
https://preview.aclanthology.org/iwcs-25-ingestion/2025.cxgsnlp-1.19.pdf