Xinnian Mao

Also published as: Xin Mao


2022

An Effective and Efficient Entity Alignment Decoding Algorithm via Third-Order Tensor Isomorphism
Xin Mao | Meirong Ma | Hao Yuan | Jianchao Zhu | ZongYu Wang | Rui Xie | Wei Wu | Man Lan
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Entity alignment (EA) aims to discover the equivalent entity pairs between KGs, which is a crucial step for integrating multi-source KGs. For a long time, most researchers have regarded EA as a pure graph representation learning task, focusing on improving graph encoders while paying little attention to the decoding process. In this paper, we propose an effective and efficient EA Decoding Algorithm via Third-order Tensor Isomorphism (DATTI). Specifically, we derive two sets of isomorphism equations: (1) adjacency tensor isomorphism equations and (2) Gramian tensor isomorphism equations. By combining these equations, DATTI can effectively exploit the adjacency and inner-correlation isomorphisms of KGs to enhance the decoding process of EA. Extensive experiments on public datasets indicate that our decoding algorithm delivers significant performance improvements even on the most advanced EA methods, while the extra required time is less than 3 seconds.
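
The paper's actual third-order tensor formulation is not reproduced here; as a rough, hedged illustration of the underlying idea (structural isomorphism between two KGs can sharpen a similarity matrix before decoding), the Python sketch below refines a raw similarity matrix with adjacency propagation and then decodes it with Sinkhorn iterations. All names and the specific update rule are illustrative assumptions, not DATTI itself.

```python
import numpy as np

def sinkhorn(S, n_iters=10, tau=0.05):
    # Alternately normalize rows and columns of exp(S / tau) so the result
    # approaches a doubly stochastic (soft assignment) matrix.
    P = np.exp(S / tau)
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)
        P /= P.sum(axis=0, keepdims=True)
    return P

def decode_with_structure(S, A1, A2, n_rounds=2):
    # Illustrative refinement: if entity i in KG1 aligns with entity j in KG2,
    # their neighborhoods should align too, so A1 @ S @ A2.T reinforces
    # structurally consistent pairs (a crude stand-in for the paper's
    # adjacency/Gramian tensor isomorphism equations).
    for _ in range(n_rounds):
        S = S + A1 @ S @ A2.T
        S = S / np.abs(S).max()  # keep values numerically stable
    return sinkhorn(S)
```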

LightEA: A Scalable, Robust, and Interpretable Entity Alignment Framework via Three-view Label Propagation
Xin Mao | Wenting Wang | Yuanbin Wu | Man Lan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Entity Alignment (EA) aims to find equivalent entity pairs between KGs, which is the core step in bridging and integrating multi-source KGs. In this paper, we argue that existing complex EA methods inevitably inherit the inborn defects of their neural-network lineage: poor interpretability and weak scalability. Inspired by recent studies, we reinvent the classical label propagation algorithm to run effectively on KGs and propose a neural-free EA framework, LightEA, consisting of three efficient components: (i) Random Orthogonal Label Generation, (ii) Three-view Label Propagation, and (iii) Sparse Sinkhorn Operation. Extensive experiments on public datasets show that LightEA has impressive scalability, robustness, and interpretability. With a mere tenth of the time consumption, LightEA achieves results comparable to state-of-the-art methods across all datasets and even surpasses them on many. Moreover, because the computation of LightEA is entirely linear, we can trace the propagation process at each step and clearly explain how the entities are aligned.
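
As a hedged sketch of the first two components in the single-view case (the actual framework propagates over three views and applies a sparse Sinkhorn operation), the snippet below generates near-orthogonal random labels for seed entities and propagates them along a row-normalized adjacency matrix; all names and shapes are assumptions made for illustration.

```python
import numpy as np

def random_orthogonal_labels(n_entities, seed_indices, dim=256):
    # Give each seed entity a random unit vector; in high dimension, random
    # Gaussian vectors are nearly orthogonal, so distinct seeds receive
    # approximately non-interfering labels. Non-seed rows stay zero.
    labels = np.zeros((n_entities, dim))
    vecs = np.random.randn(len(seed_indices), dim)
    labels[seed_indices] = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    return labels

def propagate_labels(A, labels, n_steps=2):
    # Single-view label propagation: push label vectors along the
    # row-normalized adjacency matrix and keep the features of every step.
    A = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-9)
    steps = [labels]
    for _ in range(n_steps):
        labels = A @ labels
        steps.append(labels)
    return np.concatenate(steps, axis=1)  # (n_entities, dim * (n_steps + 1))
```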

A Simple Temporal Information Matching Mechanism for Entity Alignment between Temporal Knowledge Graphs
Li Cai | Xin Mao | Meirong Ma | Hao Yuan | Jianchao Zhu | Man Lan
Proceedings of the 29th International Conference on Computational Linguistics

Entity alignment (EA) aims to find entities in different knowledge graphs (KGs) that refer to the same real-world object. Recent studies incorporate temporal information to augment the representations of KGs. Existing methods for EA between temporal KGs (TKGs) use time-aware attention mechanisms to incorporate relational and temporal information into entity embeddings, and they outperform earlier methods by exploiting this temporal information. However, we argue that it is unnecessary to learn embeddings of temporal information, since most TKGs have uniform temporal representations. Therefore, we propose a simple GNN model combined with a temporal information matching mechanism, which achieves better performance with less time and fewer parameters. Furthermore, since alignment seeds are difficult to label in real-world applications, we also propose a method to generate unsupervised alignment seeds from the temporal information of TKGs. Extensive experiments on public datasets indicate that our supervised method significantly outperforms previous methods and that the unsupervised one achieves competitive performance.
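
A minimal, hedged sketch of the matching idea (compare the timestamp sets attached to entities directly instead of learning temporal embeddings) follows; the threshold, scoring function, and helper names are illustrative assumptions rather than the paper's exact mechanism.

```python
def temporal_similarity(times1, times2):
    # Jaccard overlap between the timestamp sets attached to two entities;
    # feasible because most TKGs use uniform temporal representations.
    s1, s2 = set(times1), set(times2)
    if not s1 or not s2:
        return 0.0
    return len(s1 & s2) / len(s1 | s2)

def generate_alignment_seeds(times_kg1, times_kg2, threshold=0.9):
    # Unsupervised seeds: pair an entity from KG1 with its best match in KG2
    # when their timestamp sets overlap almost completely.
    seeds = []
    for e1, t1 in times_kg1.items():
        e2, score = max(((e2, temporal_similarity(t1, t2))
                         for e2, t2 in times_kg2.items()),
                        key=lambda x: x[1], default=(None, 0.0))
        if e2 is not None and score >= threshold:
            seeds.append((e1, e2))
    return seeds
```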

2021

From Alignment to Assignment: Frustratingly Simple Unsupervised Entity Alignment
Xin Mao | Wenting Wang | Yuanbin Wu | Man Lan
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Cross-lingual entity alignment (EA) aims to find the equivalent entities between cross-lingual KGs (Knowledge Graphs), which is a crucial step for integrating KGs. Recently, many GNN-based EA methods have been proposed and show decent performance improvements on several public datasets. However, existing GNN-based EA methods inevitably inherit poor interpretability and low efficiency from neural networks. Motivated by the isomorphism assumption underlying GNN-based methods, we successfully transform the cross-lingual EA problem into an assignment problem. Based on this re-definition, we propose a frustratingly Simple but Effective Unsupervised entity alignment method (SEU) without neural networks. Extensive experiments show that our proposed unsupervised approach even beats advanced supervised methods across all public datasets while offering high efficiency, interpretability, and stability.
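
Since the paper recasts EA as an assignment problem, a minimal sketch of decoding a similarity matrix with an off-the-shelf assignment solver (SciPy's Hungarian-algorithm implementation) may clarify the idea; the paper's actual solver and similarity construction may differ.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_by_assignment(S):
    # Find the one-to-one matching between source and target entities that
    # maximizes total similarity; negate S because SciPy minimizes cost.
    rows, cols = linear_sum_assignment(-S)
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

# Toy similarity matrix between three source and three target entities.
S = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.8, 0.1],
              [0.0, 0.3, 0.7]])
print(align_by_assignment(S))  # [(0, 0), (1, 1), (2, 2)]
```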

2019

Scaling up Open Tagging from Tens to Thousands: Comprehension Empowered Attribute Value Extraction from Product Title
Huimin Xu | Wenting Wang | Xin Mao | Xinyu Jiang | Man Lan
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Supplementing product information by extracting attribute values from titles is a crucial task in the e-Commerce domain. Previous studies treat each attribute as an entity type and build a separate set of NER tags (e.g., BIO) for each of them, leading to a scalability issue that does not fit the large attribute systems of real-world e-Commerce. In this work, we propose a novel approach that scales value extraction up to thousands of attributes without losing performance: (1) we regard the attribute as a query and adopt a single global set of BIO tags for all attributes, avoiding the explosion of attribute-specific tags and models; (2) we explicitly model the semantic representations of the attribute and the title, and develop an attention mechanism to capture their interactive semantic relations, making our framework attribute-comprehensive. We conduct extensive experiments on real-life datasets. The results show that our model not only outperforms existing state-of-the-art NER tagging models, but is also robust and yields promising results for up to 8,906 attributes.
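
The core reformulation (treat the attribute as a query and reuse one global BIO tag set for every attribute) can be illustrated with a small, hedged data-formatting sketch; the field names and span convention below are assumptions, not the paper's released code.

```python
def build_example(title_tokens, attribute, value_span=None):
    # One training instance in the "attribute as query" setup: the attribute
    # string is the query, the title is the context, and a single global
    # B/I/O scheme marks the value span regardless of which attribute it is.
    # value_span is a (start, end) token range (end exclusive) or None.
    tags = ["O"] * len(title_tokens)
    if value_span is not None:
        start, end = value_span
        tags[start] = "B"
        for i in range(start + 1, end):
            tags[i] = "I"
    return {"query": attribute, "context": title_tokens, "tags": tags}

# Two different attributes share the same tag set, so adding a new attribute
# adds no new labels or models.
tokens = ["red", "cotton", "t-shirt", "size", "XL"]
print(build_example(tokens, "color", (0, 1)))   # tags: B O O O O
print(build_example(tokens, "size", (4, 5)))    # tags: O O O O B
```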

2018

ECNU at SemEval-2018 Task 2: Leverage Traditional NLP Features and Neural Networks Methods to Address Twitter Emoji Prediction Task
Xingwu Lu | Xin Mao | Man Lan | Yuanbin Wu
Proceedings of the 12th International Workshop on Semantic Evaluation

This paper describes our submissions to Task 2 of SemEval 2018, Multilingual Emoji Prediction. We first investigate several traditional Natural Language Processing (NLP) features and then design several deep learning models. For Subtask 1, Emoji Prediction in English, we combine two different methods of representing a tweet: a supervised model using traditional features and a deep learning model. For Subtask 2, Emoji Prediction in Spanish, we use only the deep learning model.
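
As a hedged sketch of how the two representations might be combined for Subtask 1 (a feature-based supervised model plus a deep learning model), the snippet below simply blends their class probabilities; the weight and the blending scheme are assumptions, not the system description.

```python
import numpy as np

def combine_predictions(p_traditional, p_neural, weight=0.5):
    # Blend class probabilities from a classifier over hand-crafted NLP
    # features with those from a deep learning model (illustrative 50/50 mix).
    return weight * np.asarray(p_traditional) + (1 - weight) * np.asarray(p_neural)

# Toy example with three emoji classes.
p_features = [0.6, 0.3, 0.1]
p_deep = [0.4, 0.5, 0.1]
print(int(combine_predictions(p_features, p_deep).argmax()))  # predicted emoji index
```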

2008

Chinese Word Segmentation and Named Entity Recognition Based on Conditional Random Fields
Xinnian Mao | Yuan Dong | Saike He | Sencheng Bao | Haila Wang
Proceedings of the Sixth SIGHAN Workshop on Chinese Language Processing

2007

Using Non-Local Features to Improve Named Entity Recognition Recall
Xinnian Mao | Wei Xu | Yuan Dong | Saike He | Haila Wang
Proceedings of the 21st Pacific Asia Conference on Language, Information and Computation

2005

Chinese Word Segmentation in FTRD Beijing
Heng Li | Yuan Dong | Xinnian Mao | Haila Wang | Wu Liu
Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing