Improving word mover’s distance by leveraging self-attention matrix

Hiroaki Yamagiwa, Sho Yokoi, Hidetoshi Shimodaira


Abstract
Measuring the semantic similarity between two sentences remains an important task. The word mover’s distance (WMD) computes the similarity via an optimal alignment between the two sets of word embeddings. However, WMD does not use word order, making it difficult to distinguish sentences that share many similar words yet are semantically very different. Here, we attempt to improve WMD by incorporating the sentence structure represented by BERT’s self-attention matrix (SAM). The proposed method is based on the Fused Gromov-Wasserstein distance, which jointly considers word-embedding similarity and SAM similarity when computing the optimal transport between two sentences. Experiments demonstrate that the proposed method improves WMD and its variants on paraphrase identification while achieving near-equivalent performance on semantic textual similarity.
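
The recipe in the abstract, an embedding-based transport cost as in WMD plus a structural cost from each sentence's self-attention matrix, combined via the Fused Gromov-Wasserstein (FGW) distance, can be sketched with the POT optimal-transport library. This is a minimal illustration, not the authors' released implementation: the uniform word weights, the Euclidean feature cost, the symmetrization of the attention matrices, and the random stand-ins for BERT outputs are all assumptions of the sketch.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)


def fgw_sentence_distance(emb1, emb2, sam1, sam2, alpha=0.5):
    """Fused Gromov-Wasserstein distance between two sentences.

    emb1 : (n, d) word embeddings of sentence 1
    emb2 : (m, d) word embeddings of sentence 2
    sam1 : (n, n) self-attention matrix of sentence 1
    sam2 : (m, m) self-attention matrix of sentence 2
    alpha: trade-off between structural (GW) and feature (Wasserstein) costs
    """
    n, m = emb1.shape[0], emb2.shape[0]
    # Uniform word weights, as in vanilla WMD (WMD variants reweight these).
    p = np.full(n, 1.0 / n)
    q = np.full(m, 1.0 / m)
    # Feature cost: pairwise Euclidean distances between word embeddings,
    # i.e. the per-word transport cost used by WMD.
    M = ot.dist(emb1, emb2, metric="euclidean")
    # Symmetrize the attention matrices before handing them to the solver;
    # how to turn BERT attention into a structure matrix is a modeling
    # choice, and this is only one simple option (an assumption here).
    C1 = 0.5 * (sam1 + sam1.T)
    C2 = 0.5 * (sam2 + sam2.T)
    # FGW aligns words jointly by embedding similarity (M) and by how
    # similarly each word relates to the rest of its own sentence (C1, C2).
    return ot.gromov.fused_gromov_wasserstein2(
        M, C1, C2, p, q, loss_fun="square_loss", alpha=alpha
    )


# Toy usage with random stand-ins for BERT embeddings and attention.
rng = np.random.default_rng(0)
e1, e2 = rng.normal(size=(5, 8)), rng.normal(size=(6, 8))
a1, a2 = rng.random((5, 5)), rng.random((6, 6))
print(fgw_sentence_distance(e1, e2, a1, a2, alpha=0.5))
```

With alpha = 0 the cost reduces to pure embedding transport (WMD-style alignment); with alpha = 1 only the structural, attention-based term matters, so alpha controls how much sentence structure influences the alignment.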
Anthology ID:
2023.findings-emnlp.746
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
11160–11183
URL:
https://aclanthology.org/2023.findings-emnlp.746
DOI:
10.18653/v1/2023.findings-emnlp.746
Cite (ACL):
Hiroaki Yamagiwa, Sho Yokoi, and Hidetoshi Shimodaira. 2023. Improving word mover’s distance by leveraging self-attention matrix. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 11160–11183, Singapore. Association for Computational Linguistics.
Cite (Informal):
Improving word mover’s distance by leveraging self-attention matrix (Yamagiwa et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.746.pdf