Domain Attention with an Ensemble of Experts

Young-Bum Kim, Karl Stratos, Dongchan Kim


Abstract
An important problem in domain adaptation is to quickly generalize to a new domain with limited supervision given K existing domains. One approach is to retrain a global model across all K + 1 domains using standard techniques, for instance that of Daumé III (2009). However, it is desirable to adapt without having to re-estimate a global model from scratch each time a new domain with potentially new intents and slots is added. We describe a solution based on attending to an ensemble of domain experts. We assume K domain-specific intent and slot models trained on their respective domains. When given domain K + 1, our model uses a weighted combination of the K domain experts' feedback along with its own opinion to make predictions on the new domain. In experiments, the model significantly outperforms baselines that do not use domain adaptation and also performs better than the full retraining approach.
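The core mechanism the abstract describes, attention-weighting the K experts' feedback and combining it with the new domain model's own representation, can be sketched as follows. This is a minimal illustrative numpy sketch, not the paper's architecture: the names (attend_experts, h_new, expert_feedback) and the dot-product attention scoring are assumptions made for the example; in the paper the feedback comes from pre-trained per-domain intent and slot models.

    import numpy as np

    def softmax(x):
        # Numerically stable softmax over a 1-D array of scores.
        z = x - np.max(x)
        e = np.exp(z)
        return e / e.sum()

    def attend_experts(h_new, expert_feedback):
        # h_new:           (d,)  representation from the new (K+1-th) domain model
        # expert_feedback: (K, d) one feedback vector per pre-trained domain expert
        #
        # Returns a (2d,) vector: the new model's own representation
        # concatenated with the attention-weighted summary of the experts,
        # which a downstream intent/slot predictor could consume.
        scores = expert_feedback @ h_new            # (K,) dot-product attention scores
        weights = softmax(scores)                   # (K,) weights summing to 1
        expert_summary = weights @ expert_feedback  # (d,) weighted combination of feedback
        return np.concatenate([h_new, expert_summary])

    # Example: 5 existing domain experts, 8-dimensional representations.
    rng = np.random.default_rng(0)
    h = rng.standard_normal(8)
    feedback = rng.standard_normal((5, 8))
    combined = attend_experts(h, feedback)
    print(combined.shape)  # (16,)

Any differentiable scoring function could stand in for the dot product here; the point is only that the new domain's predictions condition on a learned, weighted mixture of the K experts' feedback rather than on a retrained global model.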
Anthology ID: P17-1060
Volume: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: July
Year: 2017
Address: Vancouver, Canada
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 643–653
URL: https://aclanthology.org/P17-1060
DOI: 10.18653/v1/P17-1060
Cite (ACL): Young-Bum Kim, Karl Stratos, and Dongchan Kim. 2017. Domain Attention with an Ensemble of Experts. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 643–653, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal): Domain Attention with an Ensemble of Experts (Kim et al., ACL 2017)
PDF: https://preview.aclanthology.org/auto-file-uploads/P17-1060.pdf
Video: https://vimeo.com/234957265