BUT-FIT at SemEval-2020 Task 5: Automatic Detection of Counterfactual Statements with Deep Pre-trained Language Representation Models

Martin Fajcik, Josef Jon, Martin Docekal, Pavel Smrz


Abstract
This paper describes BUT-FIT’s submission to SemEval-2020 Task 5: Modelling Causal Reasoning in Language: Detecting Counterfactuals. The challenge focused on detecting whether a given statement contains a counterfactual (Subtask 1) and on extracting both the antecedent and consequent parts of the counterfactual from the text (Subtask 2). We experimented with various state-of-the-art language representation models (LRMs) and found the RoBERTa LRM to perform best in both subtasks. We achieved first place in both exact match and F1 for Subtask 2 and ranked second in Subtask 1.
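The Subtask 1 setup described above is binary sequence classification with a pre-trained RoBERTa. A minimal illustrative sketch using the Hugging Face `transformers` library is shown below; the `roberta-base` checkpoint, the label mapping, and the example sentence are assumptions, not the authors’ exact pipeline, which is available in the linked repository.

```python
# Illustrative sketch of Subtask 1 (counterfactual detection) as binary
# classification with a pre-trained RoBERTa. Assumptions: roberta-base
# checkpoint and label 1 = "counterfactual"; the actual submission used
# its own fine-tuned models (see the linked repository).
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2
)
model.eval()

statement = "If the weather had been better, we would have gone hiking."
inputs = tokenizer(statement, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# An untrained classification head gives arbitrary predictions until the
# model is fine-tuned on labeled counterfactual data.
prediction = logits.argmax(dim=-1).item()
print("counterfactual" if prediction == 1 else "not counterfactual")
```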
Anthology ID:
2020.semeval-1.53
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Venues:
COLING | SemEval
SIGs:
SIGLEX | SIGSEM
Publisher:
International Committee for Computational Linguistics
Pages:
437–444
URL:
https://aclanthology.org/2020.semeval-1.53
DOI:
10.18653/v1/2020.semeval-1.53
Cite (ACL):
Martin Fajcik, Josef Jon, Martin Docekal, and Pavel Smrz. 2020. BUT-FIT at SemEval-2020 Task 5: Automatic Detection of Counterfactual Statements with Deep Pre-trained Language Representation Models. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 437–444, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
BUT-FIT at SemEval-2020 Task 5: Automatic Detection of Counterfactual Statements with Deep Pre-trained Language Representation Models (Fajcik et al., SemEval 2020)
PDF:
https://aclanthology.org/2020.semeval-1.53.pdf
Code:
MFajcik/SemEval_2020_Task-5