UI at SemEval-2020 Task 4: Commonsense Validation and Explanation by Exploiting Contradiction

Kerenza Doxolodeo, Rahmad Mahendra


Abstract
This paper describes our submissions to ComVE, SemEval-2020 Task 4. The task consists of three subtasks that test commonsense comprehension: identifying sentences that do not make sense and explaining why they do not. In subtask A, we use RoBERTa to identify which of two sentences does not make sense. In subtask B, besides fine-tuning BERT, we experiment with replacing the training data with MNLI when selecting, from the provided options, the best explanation of why the given sentence does not make sense. In subtask C, we reuse the MNLI model from subtask B to evaluate the explanations generated by RoBERTa and GPT-2, exploiting the contradiction between a sentence and its explanation. Our system submission records accuracies of 88.2% and 80.5% on subtasks A and B, and a BLEU score of 5.5 on subtask C.
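The core idea in subtask C, ranking explanations by how strongly they contradict the nonsensical statement, can be illustrated with an off-the-shelf MNLI model. The sketch below is not the authors' code; the model name (roberta-large-mnli) and the example sentences are illustrative assumptions, standing in for the paper's own MNLI classifier.

```python
# Minimal sketch of contradiction-based explanation ranking,
# assuming a generic Hugging Face MNLI model (not the authors' setup).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # assumed stand-in for the paper's MNLI model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

def contradiction_score(premise: str, hypothesis: str) -> float:
    """Probability that the hypothesis contradicts the premise."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    return probs[model.config.label2id["CONTRADICTION"]].item()

# The candidate explanation that most strongly contradicts the
# nonsensical statement is selected as the best explanation.
statement = "He put an elephant into the fridge."  # illustrative example
candidates = [
    "An elephant is much bigger than a fridge.",
    "Elephants are usually gray.",
    "A fridge is used to keep food cold.",
]
best = max(candidates, key=lambda c: contradiction_score(statement, c))
print(best)
```

Under this scheme, the same scoring function can evaluate generated explanations as well as the multiple-choice options, which is how the abstract describes reusing the subtask B model for subtask C.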
Anthology ID:
2020.semeval-1.78
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Venues:
COLING | SemEval
SIGs:
SIGLEX | SIGSEM
Publisher:
International Committee for Computational Linguistics
Pages:
614–619
URL:
https://aclanthology.org/2020.semeval-1.78
DOI:
10.18653/v1/2020.semeval-1.78
Cite (ACL):
Kerenza Doxolodeo and Rahmad Mahendra. 2020. UI at SemEval-2020 Task 4: Commonsense Validation and Explanation by Exploiting Contradiction. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 614–619, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
UI at SemEval-2020 Task 4: Commonsense Validation and Explanation by Exploiting Contradiction (Doxolodeo & Mahendra, SemEval 2020)
PDF:
https://aclanthology.org/2020.semeval-1.78.pdf
Data
ConceptNet | MultiNLI | SNLI