Abstract
Most real-world language problems require learning from heterogeneous corpora, raising the problem of learning robust models which generalise well both to instances similar to those seen in training (in-domain) and to dissimilar ones (out-of-domain). This requires learning the underlying task while not learning irrelevant signals and biases specific to individual domains. We propose a novel method to optimise both in- and out-of-domain accuracy based on joint learning of a structured neural model with domain-specific and domain-general components, coupled with adversarial training for domain. Evaluating on multi-domain language identification and multi-domain sentiment analysis, we show substantial improvements over standard domain adaptation techniques and domain-adversarial training.
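The abstract describes a model that combines a shared, domain-general encoder with domain-specific components, and uses adversarial training to strip domain signal from the shared representation. Below is a minimal PyTorch sketch of that general setup, using a gradient reversal layer for the adversarial objective; all module names, dimensions, and architectural details are illustrative assumptions, not the authors' released implementation (see the linked Code repository for that).

```python
# Sketch: shared (domain-general) + private (domain-specific) encoders,
# with a domain discriminator trained adversarially on the shared features.
# Illustrative only; not the paper's released code.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient so the shared encoder learns
        # to *fool* the domain discriminator.
        return -ctx.lambd * grad_output, None

class DomainRobustModel(nn.Module):
    def __init__(self, input_dim, hidden_dim, n_domains, n_classes, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        # Domain-general encoder, shared across all domains.
        self.shared = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        # One domain-specific (private) encoder per training domain.
        self.private = nn.ModuleList(
            nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
            for _ in range(n_domains)
        )
        self.task_clf = nn.Linear(2 * hidden_dim, n_classes)
        self.domain_clf = nn.Linear(hidden_dim, n_domains)

    def forward(self, x, domain_id):
        h_shared = self.shared(x)
        h_private = self.private[domain_id](x)
        task_logits = self.task_clf(torch.cat([h_shared, h_private], dim=-1))
        # The domain classifier only sees the shared features, through
        # gradient reversal, pushing them to become domain-invariant.
        domain_logits = self.domain_clf(GradReverse.apply(h_shared, self.lambd))
        return task_logits, domain_logits

# Training would combine the task loss with the adversarial domain loss,
# e.g.: loss = ce(task_logits, y) + ce(domain_logits, d)
```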
- Anthology ID: N18-2076
- Volume: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)
- Month: June
- Year: 2018
- Address: New Orleans, Louisiana
- Editors: Marilyn Walker, Heng Ji, Amanda Stent
- Venue: NAACL
- Publisher: Association for Computational Linguistics
- Pages: 474–479
- URL: https://aclanthology.org/N18-2076
- DOI: 10.18653/v1/N18-2076
- Cite (ACL): Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. What’s in a Domain? Learning Domain-Robust Text Representations using Adversarial Training. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 474–479, New Orleans, Louisiana. Association for Computational Linguistics.
- Cite (Informal): What’s in a Domain? Learning Domain-Robust Text Representations using Adversarial Training (Li et al., NAACL 2018)
- PDF: https://preview.aclanthology.org/fix-dup-bibkey/N18-2076.pdf
- Code: lrank/Domain_Robust_Text_Representation
- Data: Multi-Domain Sentiment