Assessing Minimal Pairs of Chinese Verb-Resultative Complement Constructions: Insights from Language Models

Xinyao Huang, Yue Pan, Stefan Hartmann, Yang Yanning


Abstract
Chinese verb-resultative complement constructions (VRCCs) constitute a distinctive syntactic-semantic pattern in Chinese that integrates agent-patient dynamics with real-world state changes; yet widely used benchmarks such as CLiMP and ZhoBLiMP provide few minimal-pair probes tailored to these constructions. We introduce ZhVrcMP, a 1,204-pair dataset spanning two paradigms: resultative complement presence versus absence, and verb–complement order. The examples are drawn from Modern Chinese and are annotated for linguistic validity. Using mean log probability scoring, we evaluate Zh-Pythia models (14M–1.4B) and Mistral-7B-Instruct-v0.3. Larger Zh-Pythia models perform strongly, especially on the order paradigm, reaching 89.87% accuracy. Mistral-7B-Instruct-v0.3 shows lower perplexity yet overall weaker accuracy, underscoring the remaining difficulty of modeling constructional semantics in Chinese.
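
For illustration, here is a minimal sketch of the kind of mean log probability scoring the abstract describes, assuming a HuggingFace-style causal LM; the checkpoint name and example sentences are illustrative placeholders, not items from ZhVrcMP:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def mean_log_prob(model, tokenizer, sentence: str) -> float:
    """Return the mean per-token log probability of `sentence` under `model`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # With labels=input_ids, a causal LM returns the mean cross-entropy
        # over predicted tokens; negating it gives the mean log probability.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return -loss.item()

# Illustrative checkpoint; any causal LM evaluated in the paper could be used.
model_name = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# A hypothetical minimal pair for the verb–complement order paradigm:
good = "他把杯子打碎了。"  # verb + resultative complement (well-formed)
bad = "他把杯子碎打了。"   # complement precedes verb (ill-formed)

# The model scores the pair correctly if the well-formed
# sentence receives the higher mean log probability.
correct = mean_log_prob(model, tokenizer, good) > mean_log_prob(model, tokenizer, bad)
print("pair scored correctly:", correct)
```

Accuracy over the dataset would then be the fraction of pairs for which the well-formed member is preferred.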
Anthology ID:
2025.cxgsnlp-1.14
Volume:
Proceedings of the Second International Workshop on Construction Grammars and NLP
Month:
September
Year:
2025
Address:
Düsseldorf, Germany
Editors:
Claire Bonial, Melissa Torgbi, Leonie Weissweiler, Austin Blodgett, Katrien Beuls, Paul Van Eecke, Harish Tayyar Madabushi
Venues:
CxGsNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
144–150
URL:
https://preview.aclanthology.org/iwcs-25-ingestion/2025.cxgsnlp-1.14/
Cite (ACL):
Xinyao Huang, Yue Pan, Stefan Hartmann, and Yang Yanning. 2025. Assessing Minimal Pairs of Chinese Verb-Resultative Complement Constructions: Insights from Language Models. In Proceedings of the Second International Workshop on Construction Grammars and NLP, pages 144–150, Düsseldorf, Germany. Association for Computational Linguistics.
Cite (Informal):
Assessing Minimal Pairs of Chinese Verb-Resultative Complement Constructions: Insights from Language Models (Huang et al., CxGsNLP 2025)
PDF:
https://preview.aclanthology.org/iwcs-25-ingestion/2025.cxgsnlp-1.14.pdf