Abstract
We present a deep neural network that leverages images to improve bilingual text embeddings. Relying on bilingual image tags and descriptions, our approach conditions text embedding induction on the shared visual information for both languages, producing highly correlated bilingual embeddings. In particular, we propose a novel model based on Partial Canonical Correlation Analysis (PCCA). While the original PCCA finds linear projections of two views in order to maximize their canonical correlation conditioned on a shared third variable, we introduce a non-linear Deep PCCA (DPCCA) model, and develop a new stochastic iterative algorithm for its optimization. We evaluate PCCA and DPCCA on multilingual word similarity and cross-lingual image description retrieval. Our models outperform a large variety of previous methods, despite not having access to any visual signal during test time inference.

- Anthology ID: P18-1084
- Volume: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month: July
- Year: 2018
- Address: Melbourne, Australia
- Editors: Iryna Gurevych, Yusuke Miyao
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 910–921
- URL: https://aclanthology.org/P18-1084
- DOI: 10.18653/v1/P18-1084
- Cite (ACL): Guy Rotman, Ivan Vulić, and Roi Reichart. 2018. Bridging Languages through Images with Deep Partial Canonical Correlation Analysis. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 910–921, Melbourne, Australia. Association for Computational Linguistics.
- Cite (Informal): Bridging Languages through Images with Deep Partial Canonical Correlation Analysis (Rotman et al., ACL 2018)
- PDF: https://preview.aclanthology.org/naacl-24-ws-corrections/P18-1084.pdf
- Code: rotmanguy/DPCCA
- Data: Visual Question Answering
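The linear PCCA objective described in the abstract (maximize the canonical correlation of two views conditioned on a shared third variable) can be sketched by residualizing both views on the conditioning variable and then running standard CCA on the residuals. The snippet below is a minimal illustrative sketch of that idea with synthetic placeholder data, not the paper's actual implementation; the paper's DPCCA replaces the linear maps with deep networks and uses a stochastic optimization algorithm.

```python
import numpy as np

def residualize(A, Z):
    """Remove the part of A that is linearly predictable from Z (least squares)."""
    coef, *_ = np.linalg.lstsq(Z, A, rcond=None)
    return A - Z @ coef

def pcca_correlations(X, Y, Z, eps=1e-8):
    """Partial canonical correlations of views X and Y given a shared variable Z."""
    Xr = residualize(X - X.mean(0), Z - Z.mean(0))
    Yr = residualize(Y - Y.mean(0), Z - Z.mean(0))
    n = X.shape[0]
    Cxx = Xr.T @ Xr / n + eps * np.eye(Xr.shape[1])
    Cyy = Yr.T @ Yr / n + eps * np.eye(Yr.shape[1])
    Cxy = Xr.T @ Yr / n
    # Whiten each view; the singular values of the whitened cross-covariance
    # are the canonical correlations of the residualized views.
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    return np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)

# Synthetic example: both "text" views X and Y are driven by a shared
# "visual" variable Z plus independent noise, so conditioning on Z should
# leave only small residual correlations.
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 4))
X = Z @ rng.normal(size=(4, 6)) + 0.1 * rng.normal(size=(500, 6))
Y = Z @ rng.normal(size=(4, 5)) + 0.1 * rng.normal(size=(500, 5))
print(pcca_correlations(X, Y, Z))  # small values: X-Y dependence is explained by Z
```

Running plain CCA on the same `X` and `Y` without residualizing on `Z` would instead report correlations near 1, since both views share the `Z` signal; the conditioning step is what isolates the correlation not already explained by the shared variable.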