Yseop at SemEval-2020 Task 5: Cascaded BERT Language Model for Counterfactual Statement Analysis

Hanna Abi-Akl, Dominique Mariko, Estelle Labidurie


Abstract
In this paper, we explore strategies to detect and evaluate counterfactual sentences. We describe our system for SemEval-2020 Task 5: Modeling Causal Reasoning in Language: Detecting Counterfactuals. We use a BERT base model for the classification task and build a hybrid BERT-Multi-Layer Perceptron (MLP) system to handle the sequence identification task. Our experiments show that while introducing syntactic and semantic features does little to improve the system on the classification task, using these features as cascaded linear inputs to fine-tune the sequence-delimiting ability of the model allows it to outperform comparable complex systems such as BiLSTM-CRF on the second task. Our system achieves an F1 score of 85.00% in Task 1 and 83.90% in Task 2.
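A minimal sketch of the cascaded BERT + MLP tagger the abstract describes, assuming the Hugging Face transformers and PyTorch APIs: BERT encodes each token, and a small MLP head on top of the contextual embeddings scores every token position as a candidate span boundary. The model name, MLP width, the four boundary labels (antecedent start/end, consequent start/end), and the example sentence are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class BertMlpSpanTagger(nn.Module):
    """BERT encoder with a cascaded linear (MLP) head that scores each
    token as a span boundary (assumed label set, not the paper's exact one)."""
    def __init__(self, model_name="bert-base-uncased", hidden=256, num_labels=4):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        self.mlp = nn.Sequential(
            nn.Linear(self.encoder.config.hidden_size, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_labels),
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.mlp(out.last_hidden_state)  # (batch, seq_len, num_labels)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertMlpSpanTagger()
enc = tokenizer("If the rain had stopped, we would have gone out.",
                return_tensors="pt")
logits = model(enc["input_ids"], enc["attention_mask"])
# For each boundary label, pick the highest-scoring token position.
boundaries = logits.argmax(dim=1)  # (batch, num_labels) token indices

Fine-tuning the encoder and this head jointly on labeled antecedent/consequent spans is, in this sketch, what "fine-tuning the sequence-delimiting ability of the model" would amount to.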
Anthology ID: 2020.semeval-1.57
Volume: Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month: December
Year: 2020
Address: Barcelona (online)
Venues: COLING | SemEval
SIGs: SIGLEX | SIGSEM
Publisher: International Committee for Computational Linguistics
Pages: 468–478
URL: https://aclanthology.org/2020.semeval-1.57
DOI: 10.18653/v1/2020.semeval-1.57
Cite (ACL): Hanna Abi-Akl, Dominique Mariko, and Estelle Labidurie. 2020. Yseop at SemEval-2020 Task 5: Cascaded BERT Language Model for Counterfactual Statement Analysis. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 468–478, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal): Yseop at SemEval-2020 Task 5: Cascaded BERT Language Model for Counterfactual Statement Analysis (Abi-Akl et al., SemEval 2020)
PDF: https://preview.aclanthology.org/update-css-js/2020.semeval-1.57.pdf