Mingxiao Guo




2025

SGMEA: Structure-Guided Multimodal Entity Alignment
Jingwei Cheng | Mingxiao Guo | Fu Zhang
Proceedings of the 31st International Conference on Computational Linguistics

Multimodal Entity Alignment (MMEA) aims to identify equivalent entities across different multimodal knowledge graphs (MMKGs) by integrating structural information, entity attributes, and visual data, thereby promoting knowledge sharing and deep multimodal data integration. However, existing methods often overlook the deeper connections between multimodal data: they focus primarily on interactions between neighboring entities in the structural modality while neglecting interactions between entities in the visual and attribute modalities. To address this, we propose a structure-guided multimodal entity alignment method (SGMEA), which prioritizes structural information from knowledge graphs to enhance the visual and attribute modalities. By fusing the resulting multimodal representations, SGMEA improves the accuracy of entity alignment. Experimental results demonstrate that SGMEA achieves state-of-the-art performance across multiple datasets, validating its effectiveness and superiority in practical applications.
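The abstract describes structure-guided fusion only at a high level. Below is a minimal, hypothetical sketch of one way such fusion could look, assuming pre-computed per-modality entity embeddings; the module name, gating scheme, and dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructureGuidedFusion(nn.Module):
    """Illustrative sketch: structural embeddings gate the visual and
    attribute embeddings, then the three modalities are fused with
    learned weights. All names and choices here are assumptions."""

    def __init__(self, dim: int = 128):
        super().__init__()
        # Project structural embeddings into a space used to score
        # the other two modalities.
        self.query = nn.Linear(dim, dim)
        self.key_vis = nn.Linear(dim, dim)
        self.key_attr = nn.Linear(dim, dim)
        # Learnable scalar logits for the final modality weighting.
        self.modality_logits = nn.Parameter(torch.zeros(3))

    def forward(self, h_struct, h_vis, h_attr):
        # h_*: (num_entities, dim) pre-computed modality embeddings.
        q = self.query(h_struct)
        # Gate each modality by its agreement with the structural view
        # (a simple element-wise sigmoid gate; one choice among many).
        g_vis = torch.sigmoid(q * self.key_vis(h_vis)).mean(-1, keepdim=True)
        g_attr = torch.sigmoid(q * self.key_attr(h_attr)).mean(-1, keepdim=True)
        h_vis = g_vis * h_vis
        h_attr = g_attr * h_attr
        # Softmax-normalized weighted sum across the three modalities.
        w = F.softmax(self.modality_logits, dim=0)
        fused = w[0] * h_struct + w[1] * h_vis + w[2] * h_attr
        return F.normalize(fused, dim=-1)
```

In alignment settings like this, candidate entity pairs across the two MMKGs are then typically scored by cosine similarity between their fused embeddings; how SGMEA scores pairs specifically is described in the paper itself.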