Semi-supervised Stochastic Multi-Domain Learning using Variational Inference

Yitong Li, Timothy Baldwin, Trevor Cohn


Abstract
Supervised models in NLP rely on large collections of text that closely resemble the intended testing setting. Unfortunately, matching text is often not available in sufficient quantity, and moreover, within any domain of text, data is often highly heterogeneous. In this paper we propose a method to distill the important domain signal as part of a multi-domain learning system, using a latent variable model in which parts of a neural model are stochastically gated based on the inferred domain. We compare the use of discrete versus continuous latent variables, operating in a domain-supervised or a domain semi-supervised setting, where the domain is known only for a subset of training inputs. We show that our model leads to substantial performance improvements over competitive benchmark domain adaptation methods, including methods using adversarial learning.
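To make the gating mechanism concrete, below is a minimal PyTorch sketch of the core idea as described in the abstract: an inference network q(z|x) over a discrete latent domain variable, a Gumbel-softmax relaxation for reparameterised sampling, and a sigmoid gate over a shared hidden representation, trained with a task loss plus a KL term against a prior. All module names, dimensions, and the uniform prior are illustrative assumptions, not the paper's exact architecture.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticGatedClassifier(nn.Module):
    """Text classifier whose hidden units are stochastically gated by an
    inferred latent domain variable z (discrete, Gumbel-softmax relaxed)."""
    def __init__(self, input_dim, hidden_dim, n_domains, n_classes):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)    # stand-in for a text encoder
        self.domain_inf = nn.Linear(input_dim, n_domains)  # inference network q(z|x)
        self.gate = nn.Linear(n_domains, hidden_dim)       # maps z to per-unit gates
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def forward(self, x, tau=0.5):
        h = torch.relu(self.encoder(x))
        q_logits = self.domain_inf(x)            # posterior logits over domains
        z = F.gumbel_softmax(q_logits, tau=tau)  # reparameterised domain sample
        g = torch.sigmoid(self.gate(z))          # stochastic gate in (0, 1)
        return self.classifier(h * g), q_logits

def elbo_loss(logits, q_logits, y, d=None, kl_weight=0.1):
    """Task cross-entropy plus KL(q(z|x) || uniform prior). When the domain
    label d is observed (the semi-supervised case), also supervise q(z|x)."""
    q = F.softmax(q_logits, dim=-1)
    log_q = F.log_softmax(q_logits, dim=-1)
    kl = (q * (log_q + math.log(q_logits.size(-1)))).sum(-1).mean()
    loss = F.cross_entropy(logits, y) + kl_weight * kl
    if d is not None:  # domain label observed for this batch
        loss = loss + F.cross_entropy(q_logits, d)
    return loss
```

In a training loop one would alternate batches with and without domain labels, passing d only when it is known. The paper's continuous-latent variant would instead sample z from a reparameterised Gaussian and use the corresponding Gaussian KL term.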
Anthology ID: P19-1186
Volume: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2019
Address: Florence, Italy
Editors: Anna Korhonen, David Traum, Lluís Màrquez
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 1923–1934
URL: https://aclanthology.org/P19-1186
DOI: 10.18653/v1/P19-1186
Cite (ACL): Yitong Li, Timothy Baldwin, and Trevor Cohn. 2019. Semi-supervised Stochastic Multi-Domain Learning using Variational Inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1923–1934, Florence, Italy. Association for Computational Linguistics.
Cite (Informal): Semi-supervised Stochastic Multi-Domain Learning using Variational Inference (Li et al., ACL 2019)
PDF: https://preview.aclanthology.org/ingest-acl-2023-videos/P19-1186.pdf
Video: https://vimeo.com/384532338
Data: Multi-Domain Sentiment Dataset v2.0