Gendered Ambiguous Pronouns Shared Task: Boosting Model Confidence by Evidence Pooling

Sandeep Attree


Abstract
This paper presents a strong set of results for resolving gendered ambiguous pronouns on the Gendered Ambiguous Pronouns shared task. The model presented here draws upon the strengths of state-of-the-art language and coreference resolution models, and introduces a novel evidence-based deep learning architecture. Injecting evidence from the coreference models complements the base architecture, and analysis shows that the model is not hindered by their weaknesses, specifically gender bias. The modularity and simplicity of the architecture make it easy to extend for further improvement and applicable to other NLP problems. Evaluation on GAP test data yields state-of-the-art performance of 92.5% F1 (gender bias of 0.97), edging closer to the human performance of 96.6%. The end-to-end solution presented here placed first in the Kaggle competition, winning by a significant margin.
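To make the abstract's "evidence pooling" idea concrete, the sketch below shows one plausible way such a module could be wired: per-candidate evidence vectors from several off-the-shelf coreference systems are attention-pooled and concatenated with a base language-model representation before an A/B/Neither classifier. All names, dimensions, and wiring here are illustrative assumptions for this page, not the paper's actual implementation.

# Hypothetical sketch of evidence pooling for GAP-style pronoun resolution.
# The paper's real architecture may differ; this only illustrates the idea of
# pooling evidence from multiple coreference models into a base classifier.
import torch
import torch.nn as nn


class EvidencePooler(nn.Module):
    """Attention-pools evidence vectors produced by external coreference models."""

    def __init__(self, evidence_dim: int, hidden_dim: int):
        super().__init__()
        self.project = nn.Linear(evidence_dim, hidden_dim)
        self.attn = nn.Linear(hidden_dim, 1)

    def forward(self, evidence: torch.Tensor) -> torch.Tensor:
        # evidence: (batch, num_coref_models, evidence_dim)
        h = torch.tanh(self.project(evidence))        # (batch, k, hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # (batch, k, 1)
        return (weights * h).sum(dim=1)               # (batch, hidden)


class GAPClassifier(nn.Module):
    """Base mention/pronoun representation + pooled evidence -> A / B / Neither logits."""

    def __init__(self, base_dim: int, evidence_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.pooler = EvidencePooler(evidence_dim, hidden_dim)
        self.classifier = nn.Sequential(
            nn.Linear(base_dim + hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 3),  # three classes: A, B, Neither
        )

    def forward(self, base_repr: torch.Tensor, evidence: torch.Tensor) -> torch.Tensor:
        pooled = self.pooler(evidence)
        return self.classifier(torch.cat([base_repr, pooled], dim=-1))


if __name__ == "__main__":
    # Toy shapes: a 768-d base representation and evidence from 3 coreference systems.
    base_repr = torch.randn(4, 768)
    evidence = torch.randn(4, 3, 16)
    model = GAPClassifier(base_dim=768, evidence_dim=16)
    print(model(base_repr, evidence).shape)  # torch.Size([4, 3])

The design point illustrated is modularity: the pooling block sits beside the base model, so additional evidence sources can be added without changing the rest of the architecture.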
Anthology ID:
W19-3820
Volume:
Proceedings of the First Workshop on Gender Bias in Natural Language Processing
Month:
August
Year:
2019
Address:
Florence, Italy
Editors:
Marta R. Costa-jussà, Christian Hardmeier, Will Radford, Kellie Webster
Venue:
GeBNLP
Publisher:
Association for Computational Linguistics
Pages:
134–146
URL:
https://aclanthology.org/W19-3820
DOI:
10.18653/v1/W19-3820
Cite (ACL):
Sandeep Attree. 2019. Gendered Ambiguous Pronouns Shared Task: Boosting Model Confidence by Evidence Pooling. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 134–146, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Gendered Ambiguous Pronouns Shared Task: Boosting Model Confidence by Evidence Pooling (Attree, GeBNLP 2019)
PDF:
https://preview.aclanthology.org/ingest-acl-2023-videos/W19-3820.pdf
Code
 sattree/gap
Data
GAP Coreference Dataset