Abstract
To highlight the challenges of achieving representation disentanglement for the text domain in an unsupervised setting, in this paper we select a representative set of models successfully applied in the image domain. We evaluate these models on six disentanglement metrics, as well as on downstream classification tasks and homotopy. To facilitate the evaluation, we propose two synthetic datasets with known generative factors. Our experiments highlight the existing gap in the text domain and illustrate that certain elements, such as representation sparsity (as an inductive bias) or coupling of the representation with the decoder, can impact disentanglement. To the best of our knowledge, our work is the first attempt at the intersection of unsupervised representation disentanglement and text, and provides an experimental framework and datasets for examining future developments in this direction.
- Anthology ID:
- 2021.repl4nlp-1.14
- Volume:
- Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021)
- Month:
- August
- Year:
- 2021
- Address:
- Online
- Editors:
- Anna Rogers, Iacer Calixto, Ivan Vulić, Naomi Saphra, Nora Kassner, Oana-Maria Camburu, Trapit Bansal, Vered Shwartz
- Venue:
- RepL4NLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 128–140
- URL:
- https://aclanthology.org/2021.repl4nlp-1.14
- DOI:
- 10.18653/v1/2021.repl4nlp-1.14
- Cite (ACL):
- Lan Zhang, Victor Prokhorov, and Ehsan Shareghi. 2021. Unsupervised Representation Disentanglement of Text: An Evaluation on Synthetic Datasets. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021), pages 128–140, Online. Association for Computational Linguistics.
- Cite (Informal):
- Unsupervised Representation Disentanglement of Text: An Evaluation on Synthetic Datasets (Zhang et al., RepL4NLP 2021)
- PDF:
- https://preview.aclanthology.org/add_acl24_videos/2021.repl4nlp-1.14.pdf
- Code
- lanzhang128/disentanglement