Abstract
We tackle the task of building supervised event trigger identification models that generalize better across domains. Our work leverages the adversarial domain adaptation (ADA) framework to introduce domain invariance. ADA uses adversarial training to construct representations that are predictive for trigger identification, but not predictive of the example's domain. It requires no labeled data from the target domain, making it completely unsupervised. Experiments with two domains (English literature and news) show that ADA leads to an average F1 score improvement of 3.9 on out-of-domain data. Our best-performing model (BERT-A) reaches 44–49 F1 across both domains, using no labeled target data. Preliminary experiments reveal that fine-tuning on 1% labeled data, followed by self-training, leads to substantial improvement, reaching 51.5 and 67.2 F1 on literature and news respectively.
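The adversarial setup described above (representations trained to predict triggers while a domain classifier is prevented from recovering the example's domain) is commonly realized with a gradient reversal layer. The sketch below is a minimal, hypothetical PyTorch illustration of that idea only; the class names, dimensions, and mean-pooling choice are assumptions for illustration and are not taken from the authors' released code (see aakanksha19/ODETTE for the actual implementation).

```python
# Hypothetical sketch of gradient-reversal-based adversarial domain adaptation.
# All names and dimensions here are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing into the shared encoder representations.
        return -ctx.lambd * grad_output, None


class AdversarialTriggerTagger(nn.Module):
    def __init__(self, hidden_dim=768, num_domains=2, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        # Token-level head: is each token an event trigger or not?
        self.trigger_head = nn.Linear(hidden_dim, 2)
        # Domain discriminator: trained to guess the example's domain, while the
        # reversed gradients push the shared representations to be domain-invariant.
        self.domain_head = nn.Sequential(
            nn.Linear(hidden_dim, 128), nn.ReLU(), nn.Linear(128, num_domains)
        )

    def forward(self, token_reprs):
        # token_reprs: (batch, seq_len, hidden_dim) from a shared encoder (e.g. BERT).
        trigger_logits = self.trigger_head(token_reprs)
        pooled = token_reprs.mean(dim=1)
        domain_logits = self.domain_head(GradientReversal.apply(pooled, self.lambd))
        return trigger_logits, domain_logits
```

In a setup like this, the trigger loss would be computed only on labeled source examples, while the domain loss can use unlabeled examples from both domains, consistent with the abstract's point that no labeled target data is required.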
- Anthology ID: 2020.acl-main.681
- Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
- Month: July
- Year: 2020
- Address: Online
- Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 7618–7624
- URL: https://preview.aclanthology.org/build-pipeline-with-new-library/2020.acl-main.681/
- DOI: 10.18653/v1/2020.acl-main.681
- Cite (ACL): Aakanksha Naik and Carolyn Rose. 2020. Towards Open Domain Event Trigger Identification using Adversarial Domain Adaptation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7618–7624, Online. Association for Computational Linguistics.
- Cite (Informal): Towards Open Domain Event Trigger Identification using Adversarial Domain Adaptation (Naik & Rose, ACL 2020)
- PDF: https://preview.aclanthology.org/build-pipeline-with-new-library/2020.acl-main.681.pdf
- Code: aakanksha19/ODETTE