What does the Failure to Reason with “Respectively” in Zero/Few-Shot Settings Tell Us about Language Models?

Ruixiang Cui, Seolhwa Lee, Daniel Hershcovich, Anders Søgaard


Abstract
Humans can effortlessly understand the coordinate structure of sentences such as “Niels Bohr and Kurt Cobain were born in Copenhagen and Seattle, *respectively*”. In the context of natural language inference (NLI), we examine how language models (LMs) reason with respective readings (Gawron and Kehler, 2004) from two perspectives: syntactic-semantic and commonsense-world knowledge. We propose a controlled synthetic dataset WikiResNLI and a naturally occurring dataset NatResNLI to encompass various explicit and implicit realizations of “respectively”. We show that fine-tuned NLI models struggle with understanding such readings without explicit supervision. While few-shot learning is easy in the presence of explicit cues, longer training is required when the reading is evoked implicitly, leaving models to rely on common sense inferences. Furthermore, our fine-grained analysis indicates models fail to generalize across different constructions. To conclude, we demonstrate that LMs still lag behind humans in generalizing to the long tail of linguistic constructions.
Anthology ID:
2023.acl-long.489
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
8786–8800
URL:
https://aclanthology.org/2023.acl-long.489
DOI:
10.18653/v1/2023.acl-long.489
Cite (ACL):
Ruixiang Cui, Seolhwa Lee, Daniel Hershcovich, and Anders Søgaard. 2023. What does the Failure to Reason with “Respectively” in Zero/Few-Shot Settings Tell Us about Language Models?. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8786–8800, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
What does the Failure to Reason with “Respectively” in Zero/Few-Shot Settings Tell Us about Language Models? (Cui et al., ACL 2023)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2023.acl-long.489.pdf
Video:
https://preview.aclanthology.org/emnlp-22-attachments/2023.acl-long.489.mp4