@inproceedings{peng-etal-2021-cross,
    title = "Cross-Lingual Word Embedding Refinement by $\ell_{1}$ Norm Optimisation",
    author = "Peng, Xutan  and
      Lin, Chenghua  and
      Stevenson, Mark",
    editor = "Toutanova, Kristina  and
      Rumshisky, Anna  and
      Zettlemoyer, Luke  and
      Hakkani-Tur, Dilek  and
      Beltagy, Iz  and
      Bethard, Steven  and
      Cotterell, Ryan  and
      Chakraborty, Tanmoy  and
      Zhou, Yichao",
    booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jun,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2021.naacl-main.214/",
    doi = "10.18653/v1/2021.naacl-main.214",
    pages = "2690--2701",
    abstract = "Cross-Lingual Word Embeddings (CLWEs) encode words from two or more languages in a shared high-dimensional space in which vectors representing words with similar meaning (regardless of language) are closely located. Existing methods for building high-quality CLWEs learn mappings that minimise the $\ell_{2}$ norm loss function. However, this optimisation objective has been demonstrated to be sensitive to outliers. Based on the more robust Manhattan norm (aka. $\ell_{1}$ norm) goodness-of-fit criterion, this paper proposes a simple post-processing step to improve CLWEs. An advantage of this approach is that it is fully agnostic to the training process of the original CLWEs and can therefore be applied widely. Extensive experiments are performed involving ten diverse languages and embeddings trained on different corpora. Evaluation results based on bilingual lexicon induction and cross-lingual transfer for natural language inference tasks show that the $\ell_{1}$ refinement substantially outperforms four state-of-the-art baselines in both supervised and unsupervised settings. It is therefore recommended that this strategy be adopted as a standard for CLWE methods."
}

Markdown (Informal)
[Cross-Lingual Word Embedding Refinement by ℓ1 Norm Optimisation](https://preview.aclanthology.org/ingest-emnlp/2021.naacl-main.214/) (Peng et al., NAACL 2021)

ACL

Xutan Peng, Chenghua Lin, and Mark Stevenson. 2021. Cross-Lingual Word Embedding Refinement by ℓ1 Norm Optimisation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2690–2701, Online. Association for Computational Linguistics.
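
The abstract contrasts the usual $\ell_{2}$ mapping objective with a more outlier-robust $\ell_{1}$ (Manhattan norm) refinement. As a rough, self-contained illustration of that contrast only (not the authors' implementation), the sketch below fits a toy cross-lingual mapping two ways: a closed-form orthogonal Procrustes solution for the $\ell_{2}$ loss, followed by a gradient-based refinement under the $\ell_{1}$ loss. The synthetic data, the unconstrained mapping `W`, and the Adam loop are all assumptions made for illustration.

```python
# Illustrative sketch only: contrasts an l2 (Procrustes) fit of a cross-lingual
# mapping with an l1-loss refinement on synthetic data containing a few outlier
# translation pairs. This is NOT the paper's algorithm; the toy setup and the
# plain Adam loop are assumptions for illustration.
import torch

torch.manual_seed(0)

d, n = 50, 1000                                  # embedding dim, dictionary size
X = torch.randn(n, d)                            # toy "source-language" vectors
W_true = torch.linalg.qr(torch.randn(d, d)).Q    # hidden orthogonal mapping
Y = X @ W_true                                   # toy "target-language" vectors
Y[:50] += 5 * torch.randn(50, d)                 # corrupt a few pairs (outliers)

# l2 baseline: closed-form orthogonal Procrustes solution, minimises ||XW - Y||_F.
U, _, Vh = torch.linalg.svd(X.T @ Y)
W_l2 = U @ Vh

# l1 refinement: start from the l2 solution and minimise the Manhattan-norm
# goodness of fit; W is left unconstrained here purely for simplicity.
W = W_l2.clone().requires_grad_(True)
opt = torch.optim.Adam([W], lr=1e-3)
for _ in range(2000):
    loss = (X @ W - Y).abs().sum()               # l1 (Manhattan) loss
    opt.zero_grad()
    loss.backward()
    opt.step()

def dist(M):
    """Frobenius distance to the mapping that generated the clean pairs."""
    return (M.detach() - W_true).norm().item()

print(f"distance to true mapping | l2 fit: {dist(W_l2):.3f}  l1 refinement: {dist(W):.3f}")
```

On this toy data the $\ell_{1}$ objective is less affected by the corrupted pairs, which is the intuition behind the post-processing step the abstract describes; the actual refinement procedure and evaluation are given in the paper.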