“You Are An Expert Linguistic Annotator”: Limits of LLMs as Analyzers of Abstract Meaning Representation

Allyson Ettinger, Jena Hwang, Valentina Pyatkin, Chandra Bhagavatula, Yejin Choi


Abstract
Large language models (LLMs) demonstrate an amazing proficiency and fluency in the use of language. Does that mean that they have also acquired insightful linguistic knowledge about the language, to an extent that they can serve as an “expert linguistic annotator”? In this paper, we examine the successes and limitations of the GPT-3, ChatGPT, and GPT-4 models, focusing on the Abstract Meaning Representation (AMR) parsing formalism (Banarescu et al., 2013), which provides rich graphical representations of sentence meaning structure while abstracting away from surface forms. We compare models’ analysis of this semantic structure across two settings: 1) direct production of AMR parses based on zero- and few-shot examples, and 2) indirect partial reconstruction of AMR via metalinguistic natural language queries (e.g., “Identify the primary event of this sentence, and the predicate corresponding to that event.”). Across these settings, we find that models can reliably reproduce the basic format of AMR, as well as some core event, argument, and modifier structure; however, model outputs are prone to frequent and major errors, and holistic analysis of parse acceptability shows that even with few-shot demonstrations, models have virtually 0% success in producing fully accurate parses. Eliciting responses in natural language produces similar patterns of errors. Overall, our findings indicate that these models out-of-the-box can accurately identify some core aspects of semantic structure, but there remain key limitations in their ability to support fully accurate semantic analyses or parses.
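
To make the first evaluation setting concrete, below is a minimal sketch of a zero-shot "direct AMR parse" query. The prompt wording, model name, and use of the OpenAI Python client are illustrative assumptions, not the authors' exact setup; the few-shot and metalinguistic-query settings described in the abstract would follow the same pattern, with added demonstrations or reworded questions.

# Illustrative zero-shot AMR parsing query (NOT the authors' exact prompt or configuration).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

sentence = "The boy wants to go."
prompt = (
    "You are an expert linguistic annotator. "
    "Produce the Abstract Meaning Representation (AMR) parse of the following sentence "
    "in PENMAN notation, and output only the parse.\n\n"
    f"Sentence: {sentence}"
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; the paper also evaluates GPT-3 and ChatGPT
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic decoding for evaluation
)
print(response.choices[0].message.content)

# For "The boy wants to go.", a fully correct parse would be roughly the canonical
# example from Banarescu et al. (2013):
# (w / want-01
#    :ARG0 (b / boy)
#    :ARG1 (g / go-02
#             :ARG0 b))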
Anthology ID:
2023.findings-emnlp.553
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8250–8263
URL:
https://aclanthology.org/2023.findings-emnlp.553
DOI:
10.18653/v1/2023.findings-emnlp.553
Cite (ACL):
Allyson Ettinger, Jena Hwang, Valentina Pyatkin, Chandra Bhagavatula, and Yejin Choi. 2023. “You Are An Expert Linguistic Annotator”: Limits of LLMs as Analyzers of Abstract Meaning Representation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 8250–8263, Singapore. Association for Computational Linguistics.
Cite (Informal):
“You Are An Expert Linguistic Annotator”: Limits of LLMs as Analyzers of Abstract Meaning Representation (Ettinger et al., Findings 2023)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/2023.findings-emnlp.553.pdf