Does it Make Sense? And Why? A Pilot Study for Sense Making and Explanation

Cunxiang Wang, Shuailong Liang, Yue Zhang, Xiaonan Li, Tian Gao

Abstract
Introducing common sense to natural language understanding systems has received increasing research attention, yet how to evaluate whether a system has sense-making capability remains a fundamental question. Existing benchmarks measure common sense knowledge either indirectly or without reasoning. In this paper, we release a benchmark that directly tests whether a system can differentiate natural language statements that make sense from those that do not. In addition, a system is asked to identify the most crucial reason why a statement does not make sense. We evaluate models trained on large-scale language modeling tasks as well as human performance, showing that sense-making presents distinct challenges for current systems.
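The first subtask described here, sense-making, can be framed as: given two similar statements, decide which one does not make sense. The snippet below is an illustrative sketch only, not the authors' code; it scores each statement by its perplexity under GPT-2 via the Hugging Face transformers library and flags the higher-perplexity statement as the nonsensical one. The choice of GPT-2 is an assumed stand-in for "models trained on large-scale language modeling tasks", and the statement pair follows the paper's elephant-in-the-fridge example.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Illustrative sketch, not the authors' method: use LM perplexity to pick
# the statement that does not make sense. GPT-2 is an assumed stand-in.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(sentence):
    # exp of the mean token-level negative log-likelihood under the LM
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Statement pair in the style of the benchmark.
s1 = "He put a turkey into the fridge."
s2 = "He put an elephant into the fridge."
print("Does not make sense:", max((s1, s2), key=perplexity))

Under this scheme the nonsensical statement is simply the one the language model finds less probable; the second subtask, choosing the most crucial reason among candidate explanations, could in principle be scored the same way by concatenating the statement with each candidate explanation.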
Anthology ID: P19-1393
Volume: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2019
Address: Florence, Italy
Editors: Anna Korhonen, David Traum, Lluís Màrquez
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 4020–4026
URL: https://aclanthology.org/P19-1393
DOI: 10.18653/v1/P19-1393
Cite (ACL): Cunxiang Wang, Shuailong Liang, Yue Zhang, Xiaonan Li, and Tian Gao. 2019. Does it Make Sense? And Why? A Pilot Study for Sense Making and Explanation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4020–4026, Florence, Italy. Association for Computational Linguistics.
Cite (Informal): Does it Make Sense? And Why? A Pilot Study for Sense Making and Explanation (Wang et al., ACL 2019)
PDF: https://preview.aclanthology.org/teach-a-man-to-fish/P19-1393.pdf
Code: wangcunxiang/Sen-Making-and-Explanation (+ additional community code)
Data: COPA, ConceptNet, WSC