Estelle Labidurie


2021

A sequence to sequence transformer data logic experiment
Danxin Cui | Dominique Mariko | Estelle Labidurie | Hugues de Mazancourt | Patrick Paroubek
Proceedings of the 3rd Financial Narrative Processing Workshop

The Financial Document Causality Detection Shared Task (FinCausal 2021)
Dominique Mariko | Hanna Abi Akl | Estelle Labidurie | Stephane Durfort | Hugues de Mazancourt | Mahmoud El-Haj
Proceedings of the 3rd Financial Narrative Processing Workshop

2020

The Financial Document Causality Detection Shared Task (FinCausal 2020)
Dominique Mariko | Hanna Abi-Akl | Estelle Labidurie | Stephane Durfort | Hugues De Mazancourt | Mahmoud El-Haj
Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation

We present the FinCausal 2020 Shared Task on Causality Detection in Financial Documents and the associated FinCausal dataset, and discuss the participating systems and results. Two sub-tasks are proposed: a binary classification task (Task 1) and a relation extraction task (Task 2). A total of 16 teams submitted runs across the two tasks, and 13 of them contributed a system description paper. This shared task is associated with the Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation (FNP-FNS 2020), held at the 28th International Conference on Computational Linguistics (COLING’2020), Barcelona, Spain on September 12, 2020.

Yseop at SemEval-2020 Task 5: Cascaded BERT Language Model for Counterfactual Statement Analysis
Hanna Abi-Akl | Dominique Mariko | Estelle Labidurie
Proceedings of the Fourteenth Workshop on Semantic Evaluation

In this paper, we explore strategies to detect and evaluate counterfactual sentences. We describe our system for SemEval-2020 Task 5: Modeling Causal Reasoning in Language: Detecting Counterfactuals. We use a BERT base model for the classification task and build a hybrid BERT Multi-Layer Perceptron system to handle the sequence identification task. Our experiments show that while introducing syntactic and semantic features does little to improve the system on the classification task, using these types of features as cascaded linear inputs to fine-tune the sequence-delimiting ability of the model ensures that it outperforms other complex systems built for the same purpose, such as BiLSTM-CRF, on the second task. Our system achieves an F1 score of 85.00% on Task 1 and 83.90% on Task 2.