A Deep Decomposable Model for Disentangling Syntax and Semantics in Sentence Representation
Dingcheng Li | Hongliang Fei | Shaogang Ren | Ping Li
Findings of the Association for Computational Linguistics: EMNLP 2021
Recently, disentanglement based on generative adversarial networks (GANs) or variational autoencoders (VAEs) has significantly advanced the performance of diverse applications in the CV and NLP domains. Nevertheless, those models still operate at a coarse level when disentangling closely related properties, such as syntax and semantics in human languages. This paper introduces a deep decomposable model based on the VAE to disentangle syntax and semantics by applying total correlation penalties to KL divergences. Notably, we decompose the KL divergence term of the original VAE so that the generated latent variables can be separated in a more clear-cut and interpretable way. Experiments on benchmark datasets show that our proposed model significantly improves the disentanglement quality between syntactic and semantic representations on both semantic similarity and syntactic similarity tasks.
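As a rough sketch of the kind of decomposition the abstract refers to: the aggregate KL term of a VAE is commonly split into a mutual-information term, a total-correlation term, and a dimension-wise KL term (as in the β-TC-VAE line of work); the paper's exact factorization over syntactic and semantic latent groups may differ.

$$
\mathbb{E}_{p(x)}\!\left[\mathrm{KL}\big(q(z\mid x)\,\|\,p(z)\big)\right]
= I_q(x;z)
\;+\; \mathrm{KL}\Big(q(z)\,\Big\|\,\textstyle\prod_j q(z_j)\Big)
\;+\; \sum_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big)
$$

The middle term is the total correlation of the aggregate posterior $q(z)$; penalizing it encourages statistical independence between latent factors (here, the syntactic and semantic representations).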