A Pragmatics-Centered Evaluation Framework for Natural Language Understanding

Damien Sileo, Philippe Muller, Tim Van de Cruys, Camille Pradel


Abstract
New models for natural language understanding have recently made unparalleled progress, which has led some researchers to suggest that they induce universal text representations. However, current benchmarks predominantly target semantic phenomena; we make the case that pragmatics needs to take center stage in the evaluation of natural language understanding. We introduce PragmEval, a new benchmark for the evaluation of natural language understanding that unites 11 pragmatics-focused evaluation datasets for English. PragmEval can be used as supplementary training data in a multi-task learning setup and is publicly available, alongside the code for gathering and preprocessing the datasets. Using our evaluation suite, we show that natural language inference, a widely used pretraining task, does not result in genuinely universal representations, which presents a new challenge for multi-task learning.
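
The abstract describes PragmEval as a collection of classification datasets that can double as multi-task training data. The following minimal sketch shows one way to load a single PragmEval task with the Hugging Face datasets library; the Hub identifier "sileod/pragmeval" and the task name "emobank-arousal" are assumptions not given on this page (the official gathering and preprocessing code is in the synapse-developpement/DiscEval repository linked below).

    from datasets import load_dataset

    # Hypothetical Hub identifier and task/config name -- not confirmed
    # by this page; see the linked DiscEval repository for the official
    # loading and preprocessing scripts.
    task = load_dataset("sileod/pragmeval", "emobank-arousal")

    # Each task follows the usual train/validation/test split layout,
    # so it can be plugged into a standard multi-task fine-tuning loop.
    print(task["train"][0])
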
Anthology ID:
2022.lrec-1.255
Volume:
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Month:
June
Year:
2022
Address:
Marseille, France
Editors:
Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association
Pages:
2382–2394
URL:
https://aclanthology.org/2022.lrec-1.255
Cite (ACL):
Damien Sileo, Philippe Muller, Tim Van de Cruys, and Camille Pradel. 2022. A Pragmatics-Centered Evaluation Framework for Natural Language Understanding. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 2382–2394, Marseille, France. European Language Resources Association.
Cite (Informal):
A Pragmatics-Centered Evaluation Framework for Natural Language Understanding (Sileo et al., LREC 2022)
PDF:
https://preview.aclanthology.org/ingest-acl-2023-videos/2022.lrec-1.255.pdf
Code
synapse-developpement/DiscEval (+ additional community code)
Data
Discovery, EmoBank, GLUE, MultiNLI