Yao Meng


2022

Learning Structural Information for Syntax-Controlled Paraphrase Generation
Erguang Yang | Chenglin Bai | Deyi Xiong | Yujie Zhang | Yao Meng | Jinan Xu | Yufeng Chen
Findings of the Association for Computational Linguistics: NAACL 2022

Syntax-controlled paraphrase generation aims to produce paraphrases that conform to given syntactic patterns. To address this task, recent works have started to use parse trees (or syntactic templates) to guide generation. A constituency parse tree contains abundant structural information, such as parent-child relations, sibling relations, and the alignment relation between words and nodes. Previous works have utilized only the parent-child and alignment relations, which may limit generation quality. To address this limitation, we propose a Structural Information-augmented Syntax-Controlled Paraphrasing (SI-SCP) model. In particular, we design a syntax encoder based on a tree-transformer to capture parent-child and sibling relations. To model the alignment relation between words and nodes, we propose an attention regularization objective, which makes the decoder accurately select the corresponding syntax nodes to guide word generation. Experiments show that SI-SCP achieves state-of-the-art performance in terms of semantic and syntactic quality on two popular benchmark datasets. Additionally, we propose a Syntactic Template Retriever (STR) and validate that it is capable of retrieving compatible syntactic structures. We further demonstrate that SI-SCP can generate diverse paraphrases with the retrieved syntactic structures.
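
The attention regularization objective described above can be pictured as a divergence penalty pulling the decoder's attention over syntax nodes toward the gold word-to-node alignment. A minimal PyTorch sketch follows; the tensor layout and the choice of KL divergence are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def attention_regularization(attn, alignment, eps=1e-8):
    """Penalize decoder attention that strays from the gold word-to-node
    alignment (illustrative; tensor layout is an assumption).

    attn:      (batch, tgt_len, num_nodes) decoder attention weights
    alignment: (batch, tgt_len, num_nodes) 0/1 gold alignment matrix
    """
    # Normalize each gold alignment row into a reference distribution.
    ref = alignment / (alignment.sum(dim=-1, keepdim=True) + eps)
    # KL(ref || attn), averaged over the batch; added to the main loss.
    return F.kl_div((attn + eps).log(), ref, reduction="batchmean")
```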

2021

Syntactically-Informed Unsupervised Paraphrasing with Non-Parallel Data
Erguang Yang | Mingtong Liu | Deyi Xiong | Yujie Zhang | Yao Meng | Changjian Hu | Jinan Xu | Yufeng Chen
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Previous works on syntactically controlled paraphrase generation heavily rely on large-scale parallel paraphrase data that is not easily available for many languages and domains. In this paper, we take this research direction to the extreme and investigate whether it is possible to learn syntactically controlled paraphrase generation from non-parallel data. We propose a syntactically-informed unsupervised paraphrasing model based on a conditional variational auto-encoder (VAE) that can generate text in a specified syntactic structure. In particular, we design a two-stage learning method to effectively train the model using non-parallel data. The conditional VAE is first trained to reconstruct the input sentence from the sentence itself and its syntactic structure. Then, to improve the syntactic controllability and semantic consistency of the pre-trained conditional VAE, we fine-tune it with syntax-controlling and cycle-reconstruction learning objectives, employing Gumbel-Softmax to combine these new objectives. Experimental results demonstrate that the proposed model, trained only on non-parallel data, is capable of generating diverse paraphrases with a specified syntactic structure. Additionally, we validate the effectiveness of our method for generating syntactically adversarial examples on the sentiment analysis task.
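
The Gumbel-Softmax trick mentioned in the abstract replaces the non-differentiable argmax over vocabulary logits with a continuous relaxation, so gradients from the cycle-reconstruction objective can flow through the intermediate paraphrase. A generic sketch of the standard formulation (not the paper's code):

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0, eps=1e-20):
    """Draw a 'soft' one-hot sample over the vocabulary; as tau -> 0 the
    sample approaches a discrete token while remaining differentiable."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + eps) + eps)
    return F.softmax((logits + gumbel) / tau, dim=-1)

# PyTorch also ships this as torch.nn.functional.gumbel_softmax(logits, tau).
```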

2020

A Learning-Exploring Method to Generate Diverse Paraphrases with Multi-Objective Deep Reinforcement Learning
Mingtong Liu | Erguang Yang | Deyi Xiong | Yujie Zhang | Yao Meng | Changjian Hu | Jinan Xu | Yufeng Chen
Proceedings of the 28th International Conference on Computational Linguistics

Paraphrase generation (PG) is of great importance to many downstream tasks in natural language processing. Diversity is an essential property of PG for enhancing the generalization capability and robustness of downstream applications. Recently, neural sequence-to-sequence (Seq2Seq) models have shown promising results in PG. However, traditional model training for PG optimizes predictions against a single reference with a cross-entropy loss, an objective that cannot encourage the model to generate diverse paraphrases. In this work, we present a novel multi-objective learning approach to PG. We propose a learning-exploring method that generates sentences as learning objectives from the learned data distribution, and employ reinforcement learning to combine these new learning objectives for model training. We first design a sample-based algorithm to explore diverse sentences. Then we introduce several reward functions that evaluate the sampled sentences as learning signals in terms of expressive diversity and semantic fidelity, aiming to generate diverse and high-quality paraphrases. To effectively optimize the model across these different evaluation aspects, we use a GradNorm-based algorithm that automatically balances the training objectives. Experiments and analyses on the Quora and Twitter datasets demonstrate that our proposed method not only gains significantly in diversity but also improves generation quality over several state-of-the-art baselines.
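
The GradNorm-based balancing the abstract refers to tunes per-objective loss weights so that each objective's gradient magnitude tracks its training rate. A condensed sketch of the published GradNorm recipe (Chen et al., 2018), with variable names that are assumptions:

```python
import torch

def gradnorm_loss(losses, initial_losses, weights, shared_layer, alpha=1.5):
    """One GradNorm balancing term over several reward-derived losses.

    losses:         list of current scalar losses, one per objective
    initial_losses: their values at the start of training (floats)
    weights:        learnable 1-D tensor of per-objective weights
    shared_layer:   shared parameter tensor on which gradients are measured
    """
    # Gradient norm of each weighted objective w.r.t. the shared layer.
    norms = torch.stack([
        torch.norm(torch.autograd.grad(w * l, shared_layer,
                                       retain_graph=True, create_graph=True)[0])
        for w, l in zip(weights, losses)
    ])
    # Relative inverse training rates: slower objectives get larger targets.
    ratios = norms.new_tensor([l.item() / l0
                               for l, l0 in zip(losses, initial_losses)])
    targets = (norms.mean() * (ratios / ratios.mean()) ** alpha).detach()
    # Backpropagating this term into `weights` rebalances the objectives.
    return torch.abs(norms - targets).sum()
```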

Balanced Joint Adversarial Training for Robust Intent Detection and Slot Filling
Xu Cao | Deyi Xiong | Chongyang Shi | Chao Wang | Yao Meng | Changjian Hu
Proceedings of the 28th International Conference on Computational Linguistics

Joint intent detection and slot filling has recently achieved tremendous success in advancing the performance of utterance understanding. However, many joint models still suffer from a robustness problem, especially on noisy inputs or rare/unseen events. To address this issue, we propose a Joint Adversarial Training (JAT) model to improve the robustness of joint intent detection and slot filling, which consists of two parts: (1) automatically generating joint adversarial examples to attack the joint model, and (2) training the model to defend against these joint adversarial examples so as to make it robust to small perturbations. As the generated joint adversarial examples have different impacts on the intent detection and slot filling losses, we further propose a Balanced Joint Adversarial Training (BJAT) model that applies a balance factor as a regularization term to the final loss function, which yields a stable training procedure. Extensive experiments and analyses on lightweight models show that our proposed methods achieve significantly higher scores and substantially improve the robustness of both intent detection and slot filling. In addition, combining BJAT with BERT-large achieves state-of-the-art results on two datasets.
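
A compact way to picture the balanced joint adversarial loss: attack the embeddings once along the joint gradient, then penalize the gap between the two adversarial losses. The sketch below assumes a purely illustrative model interface (model(batch) returning the two losses, model.embed holding the embedding table), not the paper's API.

```python
import torch

def bjat_loss(model, batch, eps=1.0, lam=0.1):
    """Balanced joint adversarial training under assumed interfaces;
    eps is the perturbation size, lam the balance factor."""
    intent_loss, slot_loss = model(batch)
    clean = intent_loss + slot_loss

    # FGM-style perturbation of the embedding table along the joint gradient.
    grad, = torch.autograd.grad(clean, model.embed.weight, retain_graph=True)
    delta = eps * grad / (grad.norm() + 1e-12)
    model.embed.weight.data.add_(delta)
    adv_intent, adv_slot = model(batch)
    model.embed.weight.data.sub_(delta)  # restore clean embeddings

    # Balance regularizer: keep the two adversarial losses from diverging.
    return clean + adv_intent + adv_slot + lam * (adv_intent - adv_slot).abs()
```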

Bootstrapping Named Entity Recognition in E-Commerce with Positive Unlabeled Learning
Hanchu Zhang | Leonhard Hennig | Christoph Alt | Changjian Hu | Yao Meng | Chao Wang
Proceedings of the 3rd Workshop on e-Commerce and NLP

In this work, we introduce a bootstrapped, iterative NER model that integrates a PU learning algorithm for recognizing named entities in a low-resource setting. Our approach combines dictionary-based labeling with syntactically-informed label expansion to efficiently enrich the seed dictionaries. Experimental results on a dataset of manually annotated e-commerce product descriptions demonstrate the effectiveness of the proposed framework.
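
For readers unfamiliar with PU learning: the non-negative PU risk of Kiryo et al. (2017) is one common estimator for training a classifier from dictionary-labeled positives plus unlabeled text. Whether this exact estimator matches the paper's choice is an assumption; the sketch is generic.

```python
import torch
import torch.nn.functional as F

def nn_pu_risk(scores_pos, scores_unl, prior):
    """Non-negative PU risk with a logistic loss.

    scores_pos: classifier scores on dictionary-labeled (positive) tokens
    scores_unl: scores on unlabeled tokens
    prior:      assumed class prior of positives among the unlabeled data
    """
    # softplus(-s) is the logistic loss for predicting positive,
    # softplus(s) for predicting negative.
    risk_pos = prior * F.softplus(-scores_pos).mean()
    risk_neg = F.softplus(scores_unl).mean() - prior * F.softplus(scores_pos).mean()
    return risk_pos + torch.clamp(risk_neg, min=0.0)  # non-negativity correction
```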

2016

A Distribution-based Model to Learn Bilingual Word Embeddings
Hailong Cao | Tiejun Zhao | Shu Zhang | Yao Meng
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

We introduce a distribution-based model to learn bilingual word embeddings from monolingual data. It is simple and effective, and it does not require any parallel data or seed lexicon. We take advantage of the fact that word embeddings are usually dense, real-valued, low-dimensional vectors, so their distribution can be accurately estimated. A novel cross-lingual learning objective is proposed which directly matches the distribution of word embeddings in one language with that in the other language. During joint learning, we dynamically estimate the distributions of word embeddings in the two languages and minimize the dissimilarity between them through the standard back-propagation algorithm. The learned bilingual word embeddings group each word and its translations together in the shared vector space. We demonstrate the utility of the learned embeddings on the task of finding word-to-word translations from monolingual corpora. Our model achieves encouraging performance on both related and substantially different language pairs.
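
One concrete instance of "matching the distributions" of two embedding spaces is to match their first and second moments, estimated from batches and minimized by back-propagation. The paper's actual dissimilarity measure may differ, so treat this sketch as illustrative.

```python
import torch

def moment_matching_loss(emb_src, emb_tgt):
    """Dissimilarity between two batches of word embeddings, measured
    on their means and covariances (an illustrative choice of measure).

    emb_src: (n, d) source-language embeddings
    emb_tgt: (m, d) target-language embeddings
    """
    mean_gap = (emb_src.mean(dim=0) - emb_tgt.mean(dim=0)).pow(2).sum()
    # torch.cov expects rows to be variables, hence the transposes.
    cov_gap = (torch.cov(emb_src.T) - torch.cov(emb_tgt.T)).pow(2).sum()
    return mean_gap + cov_gap
```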

Automatic Identifying Entity Type in Linked Data
Qingliang Miao | Ruiyu Fang | Shuangyong Song | Zhongguang Zheng | Lu Fang | Yao Meng | Jun Sun
Proceedings of the 30th Pacific Asia Conference on Language, Information and Computation: Posters

2013

Cross-Lingual Link Discovery between Chinese and English Wiki Knowledge Bases
Qingliang Miao | Huayu Lu | Shu Zhang | Yao Meng
Proceedings of the 27th Pacific Asia Conference on Language, Information, and Computation (PACLIC 27)

Semi-supervised Classification of Twitter Messages for Organization Name Disambiguation
Shu Zhang | Jianwei Wu | Dequan Zheng | Yao Meng | Hao Yu
Proceedings of the Sixth International Joint Conference on Natural Language Processing

2012

Improving Chinese-to-Japanese Patent Translation Using English as Pivot Language
Xianhua Li | Yao Meng | Hao Yu
Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation

An Adaptive Method for Organization Name Disambiguation with Feature Reinforcing
Shu Zhang | Jianwei Wu | Dequan Zheng | Yao Meng | Hao Yu
Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation

2011

Maximum Entropy Based Lexical Reordering Model for Hierarchical Phrase-based Machine Translation
Zhongguang Zheng | Yao Meng | Hao Yu
Proceedings of the 25th Pacific Asia Conference on Language, Information and Computation

Lexical-based Reordering Model for Hierarchical Phrase-based Machine Translation
Zhongguang Zheng | Yao Meng | Hao Yu
Proceedings of Machine Translation Summit XIII: Papers

Feedback Selecting of Manually Acquired Rules Using Automatic Evaluation
Xianhua Li | Yajuan Lü | Yao Meng | Qun Liu | Hao Yu
Proceedings of the 4th Workshop on Patent Translation

2010

Maximum Entropy Based Phrase Reordering for Hierarchical Phrase-Based Translation
Zhongjun He | Yao Meng | Hao Yu
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

Learning Phrase Boundaries for Hierarchical Phrase-based Translation
Zhongjun He | Yao Meng | Hao Yu
Coling 2010: Posters

Fault-Tolerant Learning for Term Extraction
Yuhang Yang | Hao Yu | Yao Meng | Yingliang Lu | Yingju Xia
Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation

Extracting Product Features and Sentiments from Chinese Customer Reviews
Shu Zhang | Wenjie Jia | Yingju Xia | Yao Meng | Hao Yu
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

With the growing interest in opinion mining from web data, more work has focused on mining English and Chinese reviews. Addressing the problem of product opinion mining, this paper describes our language resources in detail and applies them to the task of extracting product features and sentiments. Unlike traditional unsupervised methods, a supervised method is used to identify product features, combining domain knowledge and lexical information. Nearest-vicinity-match and syntactic-tree-based methods are proposed to identify the opinions expressed about the product features, and a multi-level analysis module is proposed to determine the sentiment orientation of those opinions. In experiments on the electronics reviews of COAE 2008, the validity of the product features identified by CRFs and of the two opinion-word identification methods is tested and compared. The results show that the resources are well utilized in this task and that our proposed method is effective.
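
The "nearest vicinity match" mentioned above can be read as choosing, for each extracted product feature, the lexicon opinion word closest to it in the sentence. A plain-Python sketch under that reading; the window size and tie-breaking are assumptions.

```python
def nearest_vicinity_match(tokens, feature_idx, opinion_lexicon, window=5):
    """Return the index of the opinion word nearest to the product
    feature at feature_idx, or None if no opinion word is in range."""
    candidates = [
        (abs(i - feature_idx), i)
        for i in range(max(0, feature_idx - window),
                       min(len(tokens), feature_idx + window + 1))
        if i != feature_idx and tokens[i] in opinion_lexicon
    ]
    return min(candidates)[1] if candidates else None
```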

Extending the Hierarchical Phrase Based Model with Maximum Entropy Based BTG
Zhongjun He | Yao Meng | Hao Yu
Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Research Papers

In the hierarchical phrase-based (HPB) translation model, in addition to hierarchical phrase pairs extracted from bi-text, glue rules are used to perform serial combination of phrases. However, this basic method of combining phrases is not sufficient for phrase reordering. In this paper, we extend the HPB model with a maximum entropy based bracketing transduction grammar (BTG), which provides content-dependent combination of neighboring phrases in two ways: serial or inverse. Experimental results show that the extended HPB system achieves absolute improvements of 0.9∼1.8 BLEU points over the baseline on large-scale translation tasks.
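
A maximum-entropy BTG reordering model is, in essence, a log-linear classifier over two outcomes (serial vs. inverse combination of neighboring phrases) conditioned on lexical features. The sketch below uses logistic regression, which is equivalent to a maximum-entropy model, with toy boundary-word features; the feature set is an assumption, not the paper's.

```python
from sklearn.linear_model import LogisticRegression

def boundary_features(left_phrase, right_phrase, vocab):
    """One-hot features over the boundary words of two neighboring
    phrases: first/last word of each (a toy feature set)."""
    feats = [0.0] * (4 * len(vocab))
    words = (left_phrase[0], left_phrase[-1], right_phrase[0], right_phrase[-1])
    for slot, word in enumerate(words):
        if word in vocab:
            feats[slot * len(vocab) + vocab[word]] = 1.0
    return feats

# Fit on phrase pairs labeled 0 (serial) or 1 (inverse); at decoding time,
# predict_proba scores each combination order of two neighboring phrases.
reorderer = LogisticRegression(max_iter=1000)
```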

2009

Reducing SMT Rule Table with Monolingual Key Phrase
Zhongjun He | Yao Meng | Yajuan Lü | Hao Yu | Qun Liu
Proceedings of the ACL-IJCNLP 2009 Conference Short Papers

A Bootstrapping Method for Finer-Grained Opinion Mining Using Graph Model
Shu Zhang | Yingju Xia | Yao Meng | Hao Yu
Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation, Volume 2

2007

CHAT to Your Destination
Fuliang Weng | Baoshi Yan | Zhe Feng | Florin Ratiu | Madhuri Raya | Brian Lathrop | Annie Lien | Sebastian Varges | Rohit Mishra | Feng Lin | Matthew Purver | Harry Bratt | Yao Meng | Stanley Peters | Tobias Scheideck | Badri Raghunathan | Zhaoxia Zhang
Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue

A Conversational In-Car Dialog System
Baoshi Yan | Fuliang Weng | Zhe Feng | Florin Ratiu | Madhuri Raya | Yao Meng | Sebastian Varges | Matthew Purver | Annie Lien | Tobias Scheideck | Badri Raghunathan | Feng Lin | Rohit Mishra | Brian Lathrop | Zhaoxia Zhang | Harry Bratt | Stanley Peters
Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT)

2005

A Lexicon-Constrained Character Model for Chinese Morphological Analysis
Yao Meng | Hao Yu | Fumihito Nishino
Second International Joint Conference on Natural Language Processing: Full Papers