Xin Li


2022

MELM: Data Augmentation with Masked Entity Language Modeling for Low-Resource NER
Ran Zhou | Xin Li | Ruidan He | Lidong Bing | Erik Cambria | Luo Si | Chunyan Miao
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Data augmentation is an effective solution to data scarcity in low-resource scenarios. However, when applied to token-level tasks such as NER, data augmentation methods often suffer from token-label misalignment, which leads to unsatisfactory performance. In this work, we propose Masked Entity Language Modeling (MELM) as a novel data augmentation framework for low-resource NER. To alleviate the token-label misalignment issue, we explicitly inject NER labels into the sentence context, so that the fine-tuned MELM is able to predict masked entity tokens by explicitly conditioning on their labels. MELM thereby generates high-quality augmented data with novel entities, which provides rich entity regularity knowledge and boosts NER performance. When training data from multiple languages are available, we also integrate MELM with code-mixing for further improvement. We demonstrate the effectiveness of MELM on monolingual, cross-lingual and multilingual NER across various low-resource levels. Experimental results show that MELM consistently outperforms the baseline methods.
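
A minimal sketch of the label-injection idea, assuming a BIO tag scheme and illustrative label markers; the exact marker format and masking rate are assumptions, not the paper's setup:

    # Hypothetical illustration: splice NER labels into the sentence as
    # markers and mask entity tokens, so a masked language model can
    # regenerate them while conditioning on the visible labels.
    import random

    def melm_mask(tokens, tags, mask_token="[MASK]", mask_prob=0.7):
        """tokens: words; tags: BIO labels such as ["B-PER", "O", ...]."""
        out = []
        for tok, tag in zip(tokens, tags):
            if tag == "O":
                out.append(tok)
                continue
            label = tag.split("-", 1)[1]        # "B-PER" -> "PER"
            piece = mask_token if random.random() < mask_prob else tok
            out.extend([f"<{label}>", piece, f"</{label}>"])
        return " ".join(out)

    print(melm_mask(["John", "lives", "in", "Paris"],
                    ["B-PER", "O", "O", "B-LOC"]))
    # e.g. "<PER> [MASK] </PER> lives in <LOC> [MASK] </LOC>"

Fine-tuning a masked language model on such sequences and sampling the masked positions yields augmented sentences whose new entities still match their labels.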

2021

Multilingual AMR Parsing with Noisy Knowledge Distillation
Deng Cai | Xin Li | Jackie Chun-Sing Ho | Lidong Bing | Wai Lam
Findings of the Association for Computational Linguistics: EMNLP 2021

We study multilingual AMR parsing from the perspective of knowledge distillation, where the aim is to learn and improve a multilingual AMR parser by using an existing English parser as its teacher. We constrain our exploration to a strict multilingual setting: there is but one model to parse all languages, including English. We identify noisy input and precise output as the keys to successful distillation. Together with extensive pre-training, we obtain an AMR parser whose performance surpasses all previously published results on four foreign languages, namely German, Spanish, Italian, and Chinese, by large margins (up to 18.8 Smatch points on Chinese and 11.3 Smatch points on average). On English, our parser achieves performance comparable to the latest state-of-the-art English-only parser.
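
A hedged sketch of the "noisy input, precise output" recipe as described: the student parses a noisy translation of an English sentence, while the target is the AMR graph the English teacher produces for the clean original. Function names here are placeholders, not the authors' API:

    def build_distillation_pair(english_sent, translate, english_teacher):
        noisy_foreign = translate(english_sent)      # noisy input (e.g. MT output)
        precise_amr = english_teacher(english_sent)  # precise output (teacher AMR)
        return noisy_foreign, precise_amr            # training pair for the student

    # Training the single multilingual student on such pairs teaches it to
    # map many input languages onto the English teacher's AMR graphs.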

Aspect-based Sentiment Analysis in Question Answering Forums
Wenxuan Zhang | Yang Deng | Xin Li | Lidong Bing | Wai Lam
Findings of the Association for Computational Linguistics: EMNLP 2021

Aspect-based sentiment analysis (ABSA) typically focuses on extracting aspects and predicting their sentiments in individual sentences such as customer reviews. Recently, another kind of opinion-sharing platform, the question answering (QA) forum, has grown increasingly popular, accumulating a large number of user opinions on various aspects. This motivates us to investigate ABSA on QA forums (ABSA-QA), aiming to jointly detect the discussed aspects and their sentiment polarities for a given QA pair. Unlike review sentences, a QA pair is composed of two parallel sentences, which requires interaction modeling to align the aspect mentioned in the question with the associated opinion clues in the answer. To this end, we propose a model with a specific design for cross-sentence aspect-opinion interaction modeling. The proposed method is evaluated on three real-world datasets, and the results show that our model outperforms several strong baselines adopted from related state-of-the-art models.
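
A minimal sketch of what cross-sentence interaction modeling could look like, assuming simple dot-product attention from question tokens to answer tokens (a simplification, not the paper's exact design):

    import torch

    def cross_attend(q_vecs, a_vecs):
        # q_vecs: (B, Tq, H) question token states (aspects live here)
        # a_vecs: (B, Ta, H) answer token states (opinion clues live here)
        scores = torch.bmm(q_vecs, a_vecs.transpose(1, 2))  # (B, Tq, Ta)
        attn = torch.softmax(scores, dim=-1)
        return torch.bmm(attn, a_vecs)  # opinion-aware question representations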

Limitations of Autoregressive Models and Their Alternatives
Chu-Cheng Lin | Aaron Jaech | Xin Li | Matthew R. Gormley | Jason Eisner
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Standard autoregressive language models perform only polynomial-time computation to compute the probability of the next symbol. While this is attractive, it means they cannot model distributions whose next-symbol probability is hard to compute. Indeed, they cannot even model them well enough to solve associated easy decision problems for which an engineer might want to consult a language model. These limitations apply no matter how much computation and data are used to train the model, unless the model is given access to oracle parameters that grow superpolynomially in sequence length. Thus, simply training larger autoregressive language models is not a panacea for NLP. Alternatives include energy-based models (which give up efficient sampling) and latent-variable autoregressive models (which give up efficient scoring of a given string). Both are powerful enough to escape the above limitations.
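
A toy illustration of the trade-off named above (an assumption-labeled sketch, not the paper's formal construction): an energy-based model can score any complete string cheaply, but turning scores into next-symbol probabilities requires a partition function over exponentially many strings:

    import itertools, math

    VOCAB = "ab"

    def energy(s):                    # any efficiently computable scorer
        return -s.count("ab")         # lower energy = more "ab" bigrams

    def unnormalized_prob(s):
        return math.exp(-energy(s))

    print(unnormalized_prob("abab"))  # scoring a given string: easy

    # Normalizing (needed for sampling or next-symbol probabilities) means
    # summing over all strings -- brute-forced here for length 4 only:
    Z = sum(unnormalized_prob("".join(t))
            for t in itertools.product(VOCAB, repeat=4))
    print(unnormalized_prob("abab") / Z)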

Towards Generative Aspect-Based Sentiment Analysis
Wenxuan Zhang | Xin Li | Yang Deng | Lidong Bing | Wai Lam
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Aspect-based sentiment analysis (ABSA) has received increasing attention recently. Most existing work tackles ABSA in a discriminative manner, designing various task-specific classification networks for the prediction. Despite their effectiveness, these methods ignore the rich label semantics in ABSA problems and require extensive task-specific designs. In this paper, we propose to tackle various ABSA tasks in a unified generative framework. Two paradigms, annotation-style and extraction-style modeling, are designed to enable training by formulating each ABSA task as a text generation problem. We conduct experiments on four ABSA tasks across multiple benchmark datasets, where our generative approach achieves new state-of-the-art results in almost all cases. This also validates the strong generality of the proposed framework, which can be easily adapted to arbitrary ABSA tasks without additional task-specific model design.
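
A hedged sketch of the two target formats the abstract names, with illustrative markers (the exact templates are assumptions):

    def annotation_style(sentence, aspects):
        # Annotate aspects in place: "The [pizza|positive] was great"
        for term, polarity in aspects:
            sentence = sentence.replace(term, f"[{term}|{polarity}]")
        return sentence

    def extraction_style(aspects):
        # Generate the labels directly as text: "(pizza, positive)"
        return "; ".join(f"({term}, {polarity})" for term, polarity in aspects)

    pairs = [("pizza", "positive")]
    print(annotation_style("The pizza was great", pairs))
    print(extraction_style(pairs))

Either string becomes the decoder target of a standard sequence-to-sequence model, so one architecture serves every ABSA task.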

Aspect Sentiment Quad Prediction as Paraphrase Generation
Wenxuan Zhang | Yang Deng | Xin Li | Yifei Yuan | Lidong Bing | Wai Lam
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Aspect-based sentiment analysis (ABSA) has been extensively studied in recent years and typically involves four fundamental sentiment elements: the aspect category, aspect term, opinion term, and sentiment polarity. Existing studies usually detect only a subset of these elements rather than predicting all four in one shot. In this work, we introduce the Aspect Sentiment Quad Prediction (ASQP) task, which aims to jointly detect all four sentiment elements as quads for a given opinionated sentence, revealing a more comprehensive and complete aspect-level sentiment structure. We further propose a novel Paraphrase modeling paradigm that casts ASQP as a paraphrase generation process. On one hand, the generation formulation allows solving ASQP in an end-to-end manner, alleviating the potential error propagation of pipeline solutions. On the other hand, the semantics of the sentiment elements can be fully exploited by learning to generate them in natural language form. Extensive experiments on benchmark datasets show the superiority of our method and the capacity for cross-task transfer with the proposed unified Paraphrase modeling framework.
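
A minimal sketch of casting a quad to a natural-language target, with an assumed template (the paper's exact wording may differ):

    def quad_to_paraphrase(category, polarity, aspect, opinion):
        aspect = aspect if aspect != "NULL" else "it"  # implicit aspect term
        return f"{category} is {polarity} because {aspect} is {opinion}"

    print(quad_to_paraphrase("food quality", "great", "pizza", "delicious"))
    # -> "food quality is great because pizza is delicious"

At inference time, the generated paraphrase is parsed back into the four elements by inverting the template.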

2020

A Chinese Corpus for Fine-grained Entity Typing
Chin Lee | Hongliang Dai | Yangqiu Song | Xin Li
Proceedings of the Twelfth Language Resources and Evaluation Conference

Fine-grained entity typing is a challenging task with wide applications. However, most existing datasets for this task are in English. In this paper, we introduce a corpus for Chinese fine-grained entity typing that contains 4,800 mentions manually labeled through crowdsourcing. Each mention is annotated with free-form entity types. To make our dataset useful in more scenarios, we also categorize all the fine-grained types into 10 general types. Finally, we conduct experiments with several neural models whose architectures are typical of fine-grained entity typing and show how well they perform on our dataset. We also show the possibility of improving Chinese fine-grained entity typing through cross-lingual transfer learning.

2019

Small and Practical BERT Models for Sequence Labeling
Henry Tsai | Jason Riesa | Melvin Johnson | Naveen Arivazhagan | Xin Li | Amelia Archer
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

We propose a practical scheme to train a single multilingual sequence labeling model that yields state-of-the-art results and is small and fast enough to run on a single CPU. Starting from a public multilingual BERT checkpoint, our final model is 6x smaller and 27x faster, and has higher accuracy than a state-of-the-art multilingual baseline. We show that our model performs especially well on low-resource languages and works on code-mixed input text without being explicitly trained on code-mixed examples. We showcase the effectiveness of our method by reporting results on part-of-speech tagging and morphological prediction across 70 treebanks and 48 languages.
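
A hedged sketch of the kind of teacher-student distillation such compression typically relies on (the details here are assumptions, not the paper's exact recipe): a small student matches the per-token label distribution of the large multilingual teacher:

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, T=2.0):
        """Both tensors: (batch, seq_len, num_labels)."""
        teacher_probs = F.softmax(teacher_logits / T, dim=-1)
        student_logp = F.log_softmax(student_logits / T, dim=-1)
        # Cross-entropy against the soft teacher distribution, per token.
        return -(teacher_probs * student_logp).sum(-1).mean() * T * T

    # In practice this soft loss is mixed with the usual hard-label loss
    # on the gold tags.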

Transferable End-to-End Aspect-based Sentiment Analysis with Selective Adversarial Learning
Zheng Li | Xin Li | Ying Wei | Lidong Bing | Yu Zhang | Qiang Yang
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Joint extraction of aspects and sentiments can be effectively formulated as a sequence labeling problem. However, such a formulation hinders the effectiveness of supervised methods due to the lack of annotated sequence data in many domains. To address this issue, we first explore an unsupervised domain adaptation setting for this task. Prior work can only use common syntactic relations between aspect and opinion words to bridge the domain gaps, which relies heavily on external linguistic resources. To remove this reliance, we propose a novel Selective Adversarial Learning (SAL) method to align inferred correlation vectors that automatically capture the latent relations between aspect and opinion words. SAL dynamically learns an alignment weight for each word, so that more important words receive higher alignment weights, achieving fine-grained (word-level) adaptation. Extensive experiments demonstrate the effectiveness of the proposed SAL method.
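
A hedged sketch of the word-level selectivity described above; the names and shapes are assumptions, and the gradient-reversal step of adversarial training is noted but omitted:

    import torch
    import torch.nn as nn

    class SelectiveAdversarial(nn.Module):
        def __init__(self, hidden):
            super().__init__()
            self.selector = nn.Linear(hidden, 1)    # word-level alignment weight
            self.domain_clf = nn.Linear(hidden, 2)  # source vs. target domain

        def forward(self, word_vecs, domain):       # domain: 0 source, 1 target
            # word_vecs: (B, T, H); in adversarial training they would pass
            # through a gradient-reversal layer before domain classification.
            weights = torch.sigmoid(self.selector(word_vecs)).squeeze(-1)
            logits = self.domain_clf(word_vecs)     # (B, T, 2)
            target = torch.full(weights.shape, domain, dtype=torch.long)
            per_word = nn.functional.cross_entropy(
                logits.transpose(1, 2), target, reduction="none")
            # Higher-weight words dominate the alignment signal.
            return (weights * per_word).mean()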

Improving Fine-grained Entity Typing with Entity Linking
Hongliang Dai | Donghong Du | Xin Li | Yangqiu Song
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Fine-grained entity typing is a challenging problem, since it usually involves a relatively large tag set and may require understanding the context of the entity mention. In this paper, we use entity linking to help with the fine-grained entity type classification process. We propose a deep neural model that makes predictions based on both the context and the information obtained from entity linking results. Experimental results on two commonly used datasets demonstrate the effectiveness of our approach. On both datasets, it achieves a more than 5% absolute improvement in strict accuracy over the state of the art.
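
A minimal sketch of fusing context-based type scores with types retrieved via entity linking; the additive fusion here is an illustrative assumption, not the paper's architecture:

    def combine_scores(context_scores, linked_types, boost=1.0):
        """context_scores: {type: score}; linked_types: types from the KB entry."""
        return {t: s + (boost if t in linked_types else 0.0)
                for t, s in context_scores.items()}

    scores = {"/person": 0.4, "/person/artist": 0.3, "/location": 0.5}
    print(combine_scores(scores, {"/person", "/person/artist"}))
    # Linking evidence lifts the person types above the spurious location score.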

Exploiting BERT for End-to-End Aspect-based Sentiment Analysis
Xin Li | Lidong Bing | Wenxuan Zhang | Wai Lam
Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)

In this paper, we investigate the modeling power of contextualized embeddings from pre-trained language models, e.g. BERT, on the E2E-ABSA task. Specifically, we build a series of simple yet insightful neural baselines for E2E-ABSA. The experimental results show that, even with a simple linear classification layer, our BERT-based architecture can outperform state-of-the-art works. We also standardize the comparative study by consistently using a hold-out validation set for model selection, a practice largely ignored by previous works. Our work can therefore serve as a BERT-based benchmark for E2E-ABSA.
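
A minimal sketch of the simplest baseline the abstract mentions: BERT topped with a single linear token-classification layer over E2E-ABSA tags (e.g. B-POS, I-NEG, O). The model name and tag set are assumptions:

    import torch.nn as nn
    from transformers import AutoModel

    class BertForE2EABSA(nn.Module):
        def __init__(self, num_tags, name="bert-base-uncased"):
            super().__init__()
            self.bert = AutoModel.from_pretrained(name)
            self.head = nn.Linear(self.bert.config.hidden_size, num_tags)

        def forward(self, input_ids, attention_mask):
            hidden = self.bert(input_ids,
                               attention_mask=attention_mask).last_hidden_state
            return self.head(hidden)  # (batch, seq_len, num_tags) tag logits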

2018

Transformation Networks for Target-Oriented Sentiment Classification
Xin Li | Lidong Bing | Wai Lam | Bei Shi
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Target-oriented sentiment classification aims at classifying sentiment polarities over individual opinion targets in a sentence. RNNs with attention seem a good fit for the characteristics of this task, and indeed achieve state-of-the-art performance. After re-examining the drawbacks of the attention mechanism and the obstacles that prevent CNNs from performing well in this classification task, we propose a new model that achieves new state-of-the-art results on a few benchmarks. Instead of attention, our model employs a CNN layer to extract salient features from transformed word representations originating from a bi-directional RNN layer. Between the two layers, we propose a component that first generates target-specific representations of the words in the sentence, and then incorporates a mechanism for preserving the original contextual information from the RNN layer.
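
A hedged sketch of the layering described above (BiLSTM, then a target-specific transformation, then a CNN); the transformation shown is a simplification, not the paper's exact component:

    import torch
    import torch.nn as nn

    class TargetCNN(nn.Module):
        def __init__(self, emb, hidden, n_filters=50, width=3):
            super().__init__()
            self.rnn = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
            self.transform = nn.Linear(4 * hidden, 2 * hidden)
            self.cnn = nn.Conv1d(2 * hidden, n_filters, width, padding=1)

        def forward(self, words, target_vec):
            # words: (B, T, emb); target_vec: (B, 2*hidden) pooled target
            ctx, _ = self.rnn(words)                      # (B, T, 2H)
            tgt = target_vec.unsqueeze(1).expand_as(ctx)  # broadcast target
            h = torch.tanh(self.transform(torch.cat([ctx, tgt], -1)))
            feats = torch.relu(self.cnn(h.transpose(1, 2)))
            return feats.max(dim=2).values                # salient features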

2017

Deep Multi-Task Learning for Aspect Term Extraction with Memory Interaction
Xin Li | Wai Lam
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

We propose a novel LSTM-based deep multi-task learning framework for aspect term extraction from user review sentences. Two LSTMs equipped with extended memories and neural memory operations are designed to jointly handle the extraction of aspects and opinions via memory interactions. A sentimental-sentence constraint, implemented with another LSTM, is also added for more accurate prediction. Experimental results on two benchmark datasets demonstrate the effectiveness of our framework.

2015

Topic Model for Identifying Suicidal Ideation in Chinese Microblog
Xiaolei Huang | Xin Li | Tianli Liu | David Chiu | Tingshao Zhu | Lei Zhang
Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation

2009

Empirical Exploitation of Click Data for Task Specific Ranking
Anlei Dong | Yi Chang | Shihao Ji | Ciya Liao | Xin Li | Zhaohui Zheng
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

2006

NetEase Automatic Chinese Word Segmentation
Xin Li | Shuaixiang Dai
Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing

2005

Question Classification using Multiple Classifiers
Xin Li | Xuan-Jing Huang | Li-de Wu
Proceedings of the Fifth Workshop on Asian Language Resources (ALR-05) and First Symposium on Asian Language Resources Network (ALRN)

Discriminative Training of Clustering Functions: Theory and Experiments with Entity Identification
Xin Li | Dan Roth
Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)

2004

Robust Reading: Identification and Tracing of Ambiguous Names
Xin Li | Paul Morie | Dan Roth
Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004

Using Gene Expression Programming to Construct Sentence Ranking Functions for Text Summarization
Zhuli Xie | Xin Li | Barbara Di Eugenio | Weimin Xiao | Thomas M. Tirpak | Peter C. Nelson
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

2003

Phrasenet: towards context sensitive lexical semantics
Xin Li | Dan Roth | Yuancheng Tu
Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003

2002

Learning Question Classifiers
Xin Li | Dan Roth
COLING 2002: The 19th International Conference on Computational Linguistics

2001

Exploring evidence for shallow parsing
Xin Li | Dan Roth
Proceedings of the ACL 2001 Workshop on Computational Natural Language Learning (CoNLL)