Some of Them Can be Guessed! Exploring the Effect of Linguistic Context in Predicting Quantifiers

Sandro Pezzelle, Shane Steinert-Threlkeld, Raffaella Bernardi, Jakub Szymanik


Abstract
We study the role of linguistic context in predicting quantifiers (‘few’, ‘all’). We collect crowdsourced data from human participants and test various models in a local (single-sentence) and a global (multi-sentence) context condition. Models significantly outperform humans in the former setting and are only slightly better in the latter. While human performance improves with more linguistic context (especially on proportional quantifiers), model performance suffers. Models are very effective in exploiting lexical and morpho-syntactic patterns; humans are better at genuinely understanding the meaning of the (global) context.
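The task described here is a cloze-style prediction problem: a quantifier is removed from a sentence (or a longer passage), and the model or human participant must choose the missing word from a fixed candidate set, given either local or global context. The snippet below is a minimal illustrative sketch only; the quantifier inventory, mask token, example sentence, and the majority-class baseline are assumptions for demonstration and are not taken from the paper or the linked repository.

```python
# Illustrative sketch (not the authors' code): framing quantifier prediction
# as fill-in-the-blank classification over a fixed set of candidate quantifiers.
from collections import Counter

# Hypothetical candidate set of quantifiers (an assumption, not the paper's inventory).
QUANTIFIERS = ["none", "few", "some", "most", "all"]

def mask_quantifier(sentence: str, quantifier: str, mask_token: str = "<QNT>") -> str:
    """Replace the target quantifier with a mask token, producing a cloze item."""
    return sentence.replace(quantifier, mask_token, 1)

# Local (single-sentence) condition: only the masked sentence is visible.
example = "Some of the students passed the exam."
cloze = mask_quantifier(example, "Some")
print(cloze)  # "<QNT> of the students passed the exam."

# A trivial majority-class baseline over a toy labelled set, for illustration only.
toy_labels = ["some", "all", "some", "few", "some"]
majority = Counter(toy_labels).most_common(1)[0][0]
accuracy = sum(label == majority for label in toy_labels) / len(toy_labels)
print(majority, accuracy)  # "some" 0.6
```

In the global condition, the cloze item would include the surrounding sentences as additional input; the released code (sandropezzelle/fill-in-the-quant) contains the actual data and models used in the paper.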
Anthology ID:
P18-2019
Volume:
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
114–119
URL:
https://aclanthology.org/P18-2019
DOI:
10.18653/v1/P18-2019
Cite (ACL):
Sandro Pezzelle, Shane Steinert-Threlkeld, Raffaella Bernardi, and Jakub Szymanik. 2018. Some of Them Can be Guessed! Exploring the Effect of Linguistic Context in Predicting Quantifiers. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 114–119, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Some of Them Can be Guessed! Exploring the Effect of Linguistic Context in Predicting Quantifiers (Pezzelle et al., ACL 2018)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/P18-2019.pdf
Poster:
P18-2019.Poster.pdf
Code:
sandropezzelle/fill-in-the-quant