Abstract
We examine the licensing of negative polarity items (NPIs) in large language models (LLMs) to enrich the picture of how models acquire NPIs as a linguistic phenomenon at the syntax-semantics interface. NPIs are a class of words with a restricted distribution, appearing only in certain licensing contexts, prototypically negation. Unlike much previous work, which assumes that NPIs and their licensing environments constitute unified classes, we consider NPI distribution in its full complexity: different NPIs are possible in different licensing environments. By studying this phenomenon across a broad range of models, we are able to explore which features of the model architecture, properties of the training data, and linguistic characteristics of the NPI phenomenon itself drive performance.
- Anthology ID:
- 2023.blackboxnlp-1.25
- Volume:
- Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
- Month:
- December
- Year:
- 2023
- Address:
- Singapore
- Editors:
- Yonatan Belinkov, Sophie Hao, Jaap Jumelet, Najoung Kim, Arya McCarthy, Hosein Mohebbi
- Venues:
- BlackboxNLP | WS
- Publisher:
- Association for Computational Linguistics
- Pages:
- 332–341
- URL:
- https://aclanthology.org/2023.blackboxnlp-1.25
- DOI:
- 10.18653/v1/2023.blackboxnlp-1.25
- Cite (ACL):
- Deanna DeCarlo, William Palmer, Michael Wilson, and Bob Frank. 2023. NPIs Aren’t Exactly Easy: Variation in Licensing across Large Language Models. In Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 332–341, Singapore. Association for Computational Linguistics.
- Cite (Informal):
- NPIs Aren’t Exactly Easy: Variation in Licensing across Large Language Models (DeCarlo et al., BlackboxNLP-WS 2023)
- PDF:
- https://preview.aclanthology.org/emnlp22-frontmatter/2023.blackboxnlp-1.25.pdf