Learning Disentangled Representations of Negation and Uncertainty

Jake Vasilakes, Chrysoula Zerva, Makoto Miwa, Sophia Ananiadou


Abstract
Negation and uncertainty modeling are long-standing tasks in natural language processing. Linguistic theory postulates that expressions of negation and uncertainty are semantically independent from each other and the content they modify. However, previous works on representation learning do not explicitly model this independence. We therefore attempt to disentangle the representations of negation, uncertainty, and content using a Variational Autoencoder. We find that simply supervising the latent representations results in good disentanglement, but auxiliary objectives based on adversarial learning and mutual information minimization can provide additional disentanglement gains.
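The abstract describes the approach only at a high level. As an illustration of the core idea, the sketch below splits a VAE's latent vector into negation, uncertainty, and content subspaces and supervises the first two with small classifiers. This is a minimal PyTorch sketch, not the authors' released implementation (see the Code link below); all module names, dimensions, and hyperparameters are assumptions, and the adversarial and mutual-information objectives mentioned in the abstract are only noted in a comment.

```python
# Minimal sketch (assumed names/dimensions, not the authors' code): a VAE whose
# latent vector is partitioned into negation, uncertainty, and content parts,
# with the negation and uncertainty parts supervised by linear classifiers.
import torch
import torch.nn as nn


class DisentangledVAE(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256,
                 neg_dim=2, unc_dim=2, content_dim=60):
        super().__init__()
        self.dims = [neg_dim, unc_dim, content_dim]
        latent_dim = sum(self.dims)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.GRU(embed_dim + latent_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)
        # Supervision heads applied only to their own latent subspace.
        self.neg_clf = nn.Linear(neg_dim, 2)
        self.unc_clf = nn.Linear(unc_dim, 2)

    def forward(self, tokens):
        emb = self.embed(tokens)                       # (B, T, E)
        _, h = self.encoder(emb)                       # (1, B, H)
        h = h.squeeze(0)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        z_neg, z_unc, _ = torch.split(z, self.dims, dim=-1)
        # Teacher-forced reconstruction, conditioning every step on z.
        z_rep = z.unsqueeze(1).expand(-1, emb.size(1), -1)
        dec_out, _ = self.decoder(torch.cat([emb, z_rep], dim=-1))
        logits = self.out(dec_out)
        return logits, mu, logvar, self.neg_clf(z_neg), self.unc_clf(z_unc)


def loss_fn(logits, tokens, mu, logvar, neg_logits, unc_logits,
            neg_labels, unc_labels, beta=1.0, gamma=1.0):
    ce = nn.functional.cross_entropy
    recon = ce(logits.transpose(1, 2), tokens)         # token reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    supervision = ce(neg_logits, neg_labels) + ce(unc_logits, unc_labels)
    # The paper additionally evaluates adversarial and mutual-information
    # penalties to discourage information leaking across subspaces; they are
    # omitted here for brevity.
    return recon + beta * kl + gamma * supervision
```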
Anthology ID: 2022.acl-long.574
Volume: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: May
Year: 2022
Address: Dublin, Ireland
Editors: Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 8380–8397
URL: https://aclanthology.org/2022.acl-long.574
DOI: 10.18653/v1/2022.acl-long.574
Cite (ACL): Jake Vasilakes, Chrysoula Zerva, Makoto Miwa, and Sophia Ananiadou. 2022. Learning Disentangled Representations of Negation and Uncertainty. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8380–8397, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal): Learning Disentangled Representations of Negation and Uncertainty (Vasilakes et al., ACL 2022)
PDF: https://preview.aclanthology.org/naacl24-info/2022.acl-long.574.pdf
Video: https://preview.aclanthology.org/naacl24-info/2022.acl-long.574.mp4
Code: jvasilakes/disentanglement-vae