Yi Liu


2021

A Corpus-based Lexical Semantic Study of Mandarin Verbs of zhidao and liaojie
Yi Liu
Proceedings of the 35th Pacific Asia Conference on Language, Information and Computation

BioCopy: A Plug-And-Play Span Copy Mechanism in Seq2Seq Models
Yi Liu | Guoan Zhang | Puning Yu | Jianlin Su | Shengfeng Pan
Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing

Copy mechanisms explicitly obtain unchanged tokens from the source (input) sequence to generate the target (output) sequence under the neural seq2seq framework. However, most existing copy mechanisms consider only single-word copying from the source sentence, which loses essential tokens when copying long spans. In this work, we propose a plug-and-play architecture, BioCopy, to alleviate this problem. Specifically, in the training stage we construct a BIO tag for each token and train the original model jointly with the BIO tags. In the inference stage, the model first predicts the BIO tag at each time step and then applies a masking strategy based on the predicted BIO label to narrow the probability distribution over the vocabulary. Experimental results on two separate generative tasks show that adding BioCopy to the original model structure outperforms the baseline models on both.
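
As a rough sketch of the inference-time idea (not the authors' code), the predicted BIO tag can be turned into a vocabulary mask before the argmax: "B" restricts the next token to tokens that occur in the source, "I" to the token that continues the current copied span, and "O" leaves the distribution untouched. All names below (mask_logits, source_ids, copy_pos) are illustrative.

import numpy as np

def mask_logits(logits, tag, source_ids, copy_pos, vocab_size):
    """Narrow the next-token distribution according to the predicted BIO tag."""
    mask = np.full(vocab_size, -np.inf)
    if tag == "O":                       # free generation: no restriction
        return logits
    if tag == "B":                       # start of a copied span: any source token
        mask[list(set(source_ids))] = 0.0
    elif tag == "I":                     # continue the span: only the next source token
        if copy_pos is not None and copy_pos + 1 < len(source_ids):
            mask[source_ids[copy_pos + 1]] = 0.0
        else:
            return logits                # fall back if the span cannot continue
    return logits + mask

# toy decoding step over a 16-word vocabulary
source_ids = [7, 3, 9, 3]                # token ids of the source sentence
logits = np.random.randn(16)             # decoder scores at this time step
masked = mask_logits(logits, "B", source_ids, None, 16)
next_token = int(np.argmax(masked))      # guaranteed to be a source token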

2018

Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network
Xiangyang Zhou | Lu Li | Daxiang Dong | Yi Liu | Ying Chen | Wayne Xin Zhao | Dianhai Yu | Hua Wu
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Humans generate responses relying on semantic and functional dependencies, including coreference relations, among dialogue elements and their context. In this paper, we investigate matching a response with its multi-turn context using dependency information based entirely on attention. Our solution is inspired by the recently proposed Transformer in machine translation (Vaswani et al., 2017), and we extend the attention mechanism in two ways. First, we construct representations of text segments at different granularities solely with stacked self-attention. Second, we extract the truly matched segment pairs with attention across the context and response. We jointly introduce these two kinds of attention in one uniform neural network. Experiments on two large-scale multi-turn response selection tasks show that our proposed model significantly outperforms the state-of-the-art models.
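
As a minimal illustration of the two attention uses described above (a sketch, not the paper's implementation), the same scaled dot-product attention can serve both roles: applied within a sequence it builds the multi-granularity representations, and applied across context and response it extracts matched segment pairs. Shapes and variable names below are assumptions.

import numpy as np

def attention(query, key, value):
    """Scaled dot-product attention (Vaswani et al., 2017)."""
    scores = query @ key.T / np.sqrt(query.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ value

d = 8
utterance = np.random.randn(5, d)    # one context turn, 5 tokens
response = np.random.randn(4, d)     # candidate response, 4 tokens

# self-attention builds segment representations (stacked L times in the paper)
utt_rep = attention(utterance, utterance, utterance)
res_rep = attention(response, response, response)

# cross-attention aligns segments between context and response
utt_cross = attention(utt_rep, res_rep, res_rep)
res_cross = attention(res_rep, utt_rep, utt_rep)

# token-by-token similarities feed the final matching score
match = res_cross @ utt_cross.T      # shape (4, 5)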

A Multi-sentiment-resource Enhanced Attention Network for Sentiment Classification
Zeyang Lei | Yujiu Yang | Min Yang | Yi Liu
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Deep learning approaches to sentiment classification do not fully exploit sentiment linguistic knowledge. In this paper, we propose a Multi-sentiment-resource Enhanced Attention Network (MEAN) to alleviate this problem by integrating three kinds of sentiment linguistic knowledge (i.e., a sentiment lexicon, negation words, and intensity words) into the deep neural network via attention mechanisms. By using these sentiment resources, MEAN utilizes sentiment-relevant information from different representation sub-spaces, making it more effective at capturing the overall semantics of sentiment, negation, and intensity words for sentiment prediction. The experimental results demonstrate that MEAN robustly outperforms strong competitors.
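
A hypothetical sketch of the resource-enhanced attention (the paper's exact formulation may differ): each resource's word embeddings form a query that attends over the sentence, and the three resource-aware sentence vectors are concatenated for the classifier. All names and the mean-pooled query are assumptions.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def resource_attention(words, resource):
    """Attend over sentence words with the mean resource embedding as query."""
    query = resource.mean(axis=0)
    weights = softmax(words @ query)
    return weights @ words               # resource-aware sentence vector

d = 8
sentence = np.random.randn(6, d)         # word embeddings of one sentence
lexicon = np.random.randn(3, d)          # embeddings of sentiment-lexicon words
negation = np.random.randn(2, d)         # embeddings of negation words
intensity = np.random.randn(2, d)        # embeddings of intensity words

# one attention per sentiment resource, concatenated for prediction
features = np.concatenate([resource_attention(sentence, r)
                           for r in (lexicon, negation, intensity)])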

Multi-glance Reading Model for Text Understanding
Pengcheng Zhu | Yujiu Yang | Wenqiang Gao | Yi Liu
Proceedings of the Eighth Workshop on Cognitive Aspects of Computational Language Learning and Processing

In recent years, a variety of recurrent neural networks have been proposed, e.g., LSTM. However, existing models read a text only once and so cannot capture the repeated reading that occurs in reading comprehension. In fact, when reading or analyzing a text, we may read it several times rather than once if we cannot understand it well. How, then, can this kind of reading behavior be modeled? To address the issue, we propose a multi-glance mechanism (MGM) that models this reading habit. In the proposed framework, the actual reading process can be fully simulated, so that the obtained information is consistent with the task. Based on the multi-glance mechanism, we design two types of recurrent neural network models for repeated reading: the Glance Cell Model (GCM) and the Glance Gate Model (GGM). Visualization analysis of the GCM and the GGM demonstrates the effectiveness of the multi-glance mechanism, and experimental results on large-scale datasets show that the proposed methods achieve better performance.
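
At its simplest, the multi-glance idea amounts to running a recurrent encoder over the same text several times while carrying the hidden state across passes; the sketch below shows only this outer loop (the glance-cell and glance-gate internals of GCM and GGM are not reproduced, and all names are illustrative).

import numpy as np

def rnn_step(h, x, W_h, W_x):
    """One step of a plain tanh RNN."""
    return np.tanh(h @ W_h + x @ W_x)

d = 8
rng = np.random.default_rng(0)
W_h = rng.normal(size=(d, d)) * 0.1
W_x = rng.normal(size=(d, d)) * 0.1
text = rng.normal(size=(10, d))          # embeddings of a 10-word text

h = np.zeros(d)
for glance in range(3):                  # read the same text three times
    for x in text:
        h = rnn_step(h, x, W_h, W_x)     # state persists across glances
# h now accumulates information gathered over repeated reading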

Factors Affecting Accent of New and Similar Vowels in Hong Kong Cantonese Pronounced by Urdu Speakers from Secondary School
Yi Liu | Jinghong Ning
Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation

2015

Clustering Sentences with Density Peaks for Multi-document Summarization
Yang Zhang | Yunqing Xia | Yi Liu | Wenmin Wang
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2010

A Very Large Scale Mandarin Chinese Broadcast Corpus for GALE Project
Yi Liu | Pascale Fung | Yongsheng Yang | Denise DiPersio | Meghan Glenn | Stephanie Strassel | Christopher Cieri
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

In this paper, we present the design, collection, transcription and analysis of a Mandarin Chinese broadcast collection of over 3,000 hours. The data was collected by the Hong Kong University of Science and Technology (HKUST) in China on a cable TV and satellite transmission platform established in support of the DARPA Global Autonomous Language Exploitation (GALE) program. The collection includes broadcast news (BN) and broadcast conversation (BC), including talk shows, roundtable discussions, call-in shows, editorials and other conversational programs that focus on news and current events. HKUST also collected detailed information about all recorded programs. A subset of the BC and BN recordings was manually transcribed with standard Chinese characters in UTF-8 encoding, using specific mark-ups for a small set of spontaneous and conversational speech phenomena. The collection is among the largest of its kind, and the first, for Mandarin Chinese broadcast speech, providing abundant and diverse samples for Mandarin speech recognition and other application-dependent tasks, such as spontaneous speech processing and recognition, topic detection, information retrieval, and speaker recognition. HKUST's acoustic analysis of 500 hours of the speech and transcripts demonstrates the positive impact this data could have on system performance.

Query Rewriting Using Monolingual Statistical Machine Translation
Stefan Riezler | Yi Liu
Computational Linguistics, Volume 36, Issue 3 - September 2010

2008

Translating Queries into Snippets for Improved Query Expansion
Stefan Riezler | Yi Liu | Alexander Vasserman
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

2007

Automated Vocabulary Acquisition and Interpretation in Multimodal Conversational Systems
Yi Liu | Joyce Chai | Rong Jin
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

Statistical Machine Translation for Query Expansion in Answer Retrieval
Stefan Riezler | Alexander Vasserman | Ioannis Tsochantaridis | Vibhu Mittal | Yi Liu
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

2003

A Simplified Latent Semantic Indexing Approach for Multi-Linguistic Information Retrieval
Yi Liu | Haiming Lu | Zengxiang Lu | Pu Wang
Proceedings of the 17th Pacific Asia Conference on Language, Information and Computation