Peng Yang


2022

Continual Learning for Natural Language Generations with Transformer Calibration
Peng Yang | Dingcheng Li | Ping Li
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)

Conventional natural language processing (NLP) generation models are trained offline on a given dataset for a particular task, a paradigm referred to as isolated learning. Research on sequence-to-sequence language generation instead aims at continual learning models that constantly learn from sequentially encountered tasks. However, continual learning methods often suffer from catastrophic forgetting, a persistent challenge for lifelong learning. In this paper, we present a novel NLP transformer model that attempts to mitigate catastrophic forgetting in online continual learning from a new perspective, i.e., attention calibration. We model the attention in the transformer as a calibrated unit in a general formulation, where attention calibration helps balance the stability and plasticity of continual learning algorithms by influencing both their forward inference path and their backward optimization path. Our empirical experiments, on paraphrase generation and dialog response generation, demonstrate that this work outperforms state-of-the-art models by a considerable margin and effectively mitigates forgetting.
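For a concrete picture, the following is a minimal PyTorch sketch of what a calibrated attention unit might look like. The class name CalibratedAttention and the per-head sigmoid gate are illustrative assumptions, not the authors' exact formulation; the point is only that a learned calibration term scales the attention map and therefore shapes both the forward inference path and the gradients flowing back through it.

# Hypothetical sketch of a calibrated attention unit (PyTorch).
# The gating mechanism and names are assumptions for illustration.
import torch
import torch.nn as nn

class CalibratedAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One learnable calibration gate per head; the sigmoid keeps it
        # in (0, 1) so it rescales attention without flipping signs.
        self.gate = nn.Parameter(torch.zeros(n_heads, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (batch, heads, time, d_head).
        q, k, v = (z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
                   for z in (q, k, v))
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5,
                             dim=-1)
        # Calibration: gate the attention map before it touches the values,
        # so the gate affects the forward pass and the backward gradients
        # that flow through the attention weights.
        attn = torch.sigmoid(self.gate) * attn
        y = (attn @ v).transpose(1, 2).reshape(b, t, d)
        return self.out(y)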

Keyphrase Generation via Soft and Hard Semantic Corrections
Guangzhen Zhao | Guoshun Yin | Peng Yang | Yu Yao
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Keyphrase generation aims to generate a set of condensed phrases for a given source document. Although maximum likelihood estimation (MLE) based keyphrase generation methods have shown impressive performance, they suffer from a bias on the source-prediction sequence pair and a bias on the prediction-target pair. To tackle these biases, we propose a novel correction model, CorrKG, on top of the MLE pipeline, where the biases are corrected via optimal transport (OT) and a frequency-based filtering-and-sorting (FreqFS) strategy. Specifically, OT is introduced as a soft correction to facilitate the alignment of salient information and rectify the semantic bias between the source document and the predicted keyphrases. An adaptive semantic mass learning scheme is applied to vanilla OT to achieve a proper pair-wise optimal transport procedure, improving OT learning by dynamically rectifying semantic masses. In addition, the FreqFS strategy is designed as a hard correction to reduce the bias between predicted and ground-truth keyphrases, and thus to generate accurate and sufficient keyphrases. Extensive experiments over multiple benchmark datasets show that our model achieves superior keyphrase generation compared with the state of the art.
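As a rough illustration of the hard-correction side, here is a hypothetical Python sketch of a frequency-based filtering-and-sorting step over keyphrase candidates pooled from several decoded samples. The counting unit, threshold, and the name freq_filter_sort are assumptions rather than the paper's specification; the OT soft correction (typically realized with Sinkhorn-style iterations) is omitted.

# Hypothetical FreqFS-style filtering-and-sorting over keyphrase
# candidates. Counting unit and threshold are assumptions.
from collections import Counter

def freq_filter_sort(candidate_lists, min_count=2, top_k=10):
    """candidate_lists: keyphrase lists from several decoded samples
    (e.g. beams or stochastic decodes) for one source document."""
    counts = Counter(
        phrase.strip().lower()
        for candidates in candidate_lists
        for phrase in candidates
    )
    # Filter: keep phrases that recur across samples (hard correction),
    # then sort by how often they were predicted.
    kept = [(p, c) for p, c in counts.items() if c >= min_count]
    kept.sort(key=lambda pc: pc[1], reverse=True)
    return [p for p, _ in kept[:top_k]]

# Example: three decoded candidate sets for one document.
samples = [
    ["optimal transport", "keyphrase generation", "semantic bias"],
    ["keyphrase generation", "optimal transport", "beam search"],
    ["keyphrase generation", "semantic bias"],
]
print(freq_filter_sort(samples))
# -> ['keyphrase generation', 'optimal transport', 'semantic bias']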

Table-based Fact Verification with Self-labeled Keypoint Alignment
Guangzhen Zhao | Peng Yang
Proceedings of the 29th International Conference on Computational Linguistics

Table-based fact verification aims to verify whether a statement sentence is trustworthy or fake. Most existing methods rely on graph features or data augmentation but fail to effectively investigate the evidence correlation between the statement and the table. In this paper, we propose a self-Labeled Keypoint Alignment model, named LKA, to explore the correlation between the two. Specifically, a dual-view alignment module based on the statement and table views is designed to discriminate the salient words through multiple interactions, where one regular and one adversarial alignment network cooperatively characterize the alignment discrepancy. Considering the interaction characteristic inherent in the alignment module, we introduce a novel mixture-of-experts block to elaborately integrate the interacted information to support the alignment and the final classification. Furthermore, a contrastive learning loss is utilized to learn precise representations of the structure-involved words, encouraging each word to be closer to words with the same table attribute and farther from words with unrelated attributes. Experimental results on three widely-studied datasets show that our model outperforms state-of-the-art baselines and captures interpretable evidence words.
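Below is a minimal PyTorch sketch of the attribute-level contrastive loss described above, assuming a standard supervised-contrastive form: words sharing a table attribute are treated as positives and all other words as negatives. The temperature and the exact choice of positive set are assumptions, not the paper's formulation.

# Hypothetical attribute-level contrastive loss (PyTorch).
# Positives/temperature are assumptions for illustration.
import torch
import torch.nn.functional as F

def attribute_contrastive_loss(embs, attr_ids, tau=0.1):
    """embs: (n_words, d) word representations; attr_ids: (n_words,)
    table-attribute label for each word."""
    z = F.normalize(embs, dim=-1)
    sim = z @ z.t() / tau                       # scaled cosine similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    # Positives: other words sharing the same table attribute.
    pos = (attr_ids[:, None] == attr_ids[None, :]) & ~eye
    # Log-softmax over all other words, then average the log-probability
    # of each word's positives (a standard supervised-contrastive form).
    logits = sim.masked_fill(eye, float('-inf'))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_count = pos.sum(1).clamp(min=1)
    loss = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos_count
    return loss[pos.any(1)].mean()              # skip words with no positive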

2016

Deceptive Review Spam Detection via Exploiting Task Relatedness and Unlabeled Data
Zhen Hai | Peilin Zhao | Peng Cheng | Peng Yang | Xiao-Li Li | Guangxia Li
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

2012

Recognition of Nonmanual Markers in American Sign Language (ASL) Using Non-Parametric Adaptive 2D-3D Face Tracking
Dimitris Metaxas | Bo Liu | Fei Yang | Peng Yang | Nicholas Michael | Carol Neidle
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

This paper addresses the problem of automatically recognizing linguistically significant nonmanual expressions in American Sign Language from video. We develop a fully automatic system that is able to track facial expressions and head movements, and to detect and recognize facial events continuously from video. The main contributions of the proposed framework are the following: (1) We have built a stochastic and adaptive ensemble of face trackers to address factors that can cause loss of face track; (2) We combine 2D and 3D deformable face models to warp input frames, thus correcting for any variation in facial appearance resulting from changes in 3D head pose; (3) We use a combination of geometric features and texture features extracted from a canonical frontal representation. The proposed framework makes it possible to detect grammatically significant nonmanual expressions from continuous signing and to differentiate successfully among linguistically significant expressions that involve subtle differences in appearance. We present results based on a dataset of 330 sentences from videos collected and linguistically annotated at Boston University.
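As a rough sketch of the feature-combination step (contribution 3), the snippet below concatenates geometric and texture descriptors computed per frame. The specific choices, pairwise landmark distances and HOG on the warped frontal face, are stand-in assumptions for the paper's actual geometric and texture features.

# Hypothetical per-frame feature combination; descriptor choices
# (pairwise distances, HOG) are assumptions, not the paper's features.
import numpy as np
from skimage.feature import hog

def frame_features(landmarks, frontal_gray):
    """landmarks: (k, 2) tracked 2D facial points after 3D-pose
    normalization; frontal_gray: warped frontal grayscale frame."""
    # Geometric features: pairwise landmark distances capture brow
    # raises, eye aperture, mouth shape, and similar configurations.
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    geo = np.linalg.norm(diffs, axis=-1)[np.triu_indices(len(landmarks), k=1)]
    # Texture features: HOG on the canonical frontal face, so the
    # descriptor is invariant to the head pose removed by warping.
    tex = hog(frontal_gray, orientations=9, pixels_per_cell=(8, 8),
              cells_per_block=(2, 2))
    return np.concatenate([geo, tex])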