Xu Jinan


2022

Towards Making the Most of Pre-trained Translation Model for Quality Estimation
Li Chunyou | Di Hui | Huang Hui | Ouchi Kazushige | Chen Yufeng | Liu Jian | Xu Jinan
Proceedings of the 21st Chinese National Conference on Computational Linguistics

Machine translation quality estimation (QE) aims to evaluate the quality of machine translation automatically without relying on any reference. One common practice is applying the translation model as a feature extractor. However, there exist several discrepancies between the translation model and the QE model. The translation model is trained in an autoregressive manner, while the QE model operates in a non-autoregressive manner. Besides, the translation model only learns to model human-crafted parallel data, while the QE model needs to model machine-translated noisy data. In order to bridge these discrepancies, we propose two strategies to post-train the translation model, namely Conditional Masked Language Modeling (CMLM) and Denoising Restoration (DR). Specifically, CMLM learns to predict masked tokens at the target side conditioned on the source sentence. DR first introduces noise to the target side of parallel data, and the model is trained to detect and recover the introduced noise. Both strategies can adapt the pre-trained translation model to the QE-style prediction task. Experimental results show that our model achieves impressive results, significantly outperforming the baseline model and verifying the effectiveness of our proposed methods.
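To make the CMLM post-training idea concrete, the following is a minimal Python sketch of how a conditional-masked-LM training example could be built: a random subset of target-side tokens is masked and becomes the prediction targets while the source sentence stays intact. The masking rate and helper name are illustrative assumptions, not the paper's exact setup.

```python
import random

MASK = "[MASK]"

def make_cmlm_example(src_tokens, tgt_tokens, mask_prob=0.15, seed=None):
    """Build one Conditional Masked LM training example (a sketch).

    The source sentence is kept intact; a random subset of target tokens is
    replaced with [MASK].  The model is then asked to predict the original
    tokens at the masked positions, conditioned on the source -- the
    non-autoregressive, QE-style prediction the abstract describes.
    """
    rng = random.Random(seed)
    masked_tgt, labels = [], []
    for tok in tgt_tokens:
        if rng.random() < mask_prob:
            masked_tgt.append(MASK)
            labels.append(tok)      # predict the original token here
        else:
            masked_tgt.append(tok)
            labels.append(None)     # position not scored
    return src_tokens, masked_tgt, labels


if __name__ == "__main__":
    src = "der Hund schläft".split()
    tgt = "the dog is sleeping".split()
    print(make_cmlm_example(src, tgt, mask_prob=0.3, seed=0))
```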

Supervised Contrastive Learning for Cross-lingual Transfer Learning
Wang Shuaibo | Di Hui | Huang Hui | Lai Siyu | Ouchi Kazushige | Chen Yufeng | Xu Jinan
Proceedings of the 21st Chinese National Conference on Computational Linguistics

Multilingual pre-trained representations are not well-aligned by nature, which harms their performance on cross-lingual tasks. Previous methods propose to post-align the multilingual pre-trained representations by multi-view alignment or contrastive learning. However, we argue that both methods are not suitable for the cross-lingual classification objective, and in this paper we propose a simple yet effective method to better align the pre-trained representations. On the basis of cross-lingual data augmentations, we make a minor modification to the canonical contrastive loss to remove false-negative examples which should not be contrasted. Augmentations with the same class are brought close to the anchor sample, and augmentations with different classes are pushed apart. Experimental results on three cross-lingual tasks from the XTREME benchmark show our method can improve the transfer performance by a large margin with no additional resources needed. We also provide a detailed analysis and comparison of different post-alignment strategies.
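The false-negative fix described above can be illustrated with a small PyTorch sketch of a supervised contrastive loss: examples sharing the anchor's class label are treated as positives rather than negatives, so same-class augmentations are no longer pushed apart. The temperature and batch layout are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over one batch (a sketch).

    features: (N, d) embeddings of anchors and their cross-lingual
    augmentations; labels: (N,) class ids.  Examples sharing a label are
    counted as positives, so same-class examples are never contrasted.
    """
    features = F.normalize(features, dim=1)
    sim = features @ features.t() / temperature          # (N, N) similarities
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float("-inf"))      # never contrast with self

    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    # average log-likelihood over each anchor's positives, then over anchors
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_counts
    return loss.mean()


if __name__ == "__main__":
    feats = torch.randn(8, 16)                           # e.g. 4 anchors + 4 augmentations
    labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
    print(supervised_contrastive_loss(feats, labels))
```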

2021

LRRA: A Transparent Neural-Symbolic Reasoning Framework for Real-World Visual Question Answering
Wan Zhang | Chen Keming | Zhang Yujie | Xu Jinan | Chen Yufeng
Proceedings of the 20th Chinese National Conference on Computational Linguistics

The predominant approach to visual question answering (VQA) relies on encoding the image and question with a "black box" neural encoder and decoding a single token into answers such as "yes" or "no". Despite this approach's strong quantitative results, it struggles to come up with human-readable forms of justification for the prediction process. To address this insufficiency, we propose LRRA [Look, Read, Reasoning, Answer], a transparent neural-symbolic framework for visual question answering that solves the complicated problem in the real world step by step like humans and provides a human-readable form of justification at each step. Specifically, LRRA learns to first convert an image into a scene graph and parse a question into multiple reasoning instructions. It then executes the reasoning instructions one at a time by traversing the scene graph using a recurrent neural-symbolic execution module. Finally, it generates answers to the given questions and makes corresponding marks on the image. Furthermore, we believe that the relations between objects in the question are of great significance for obtaining the correct answer, so we create a perturbed GQA test set by removing linguistic cues (attributes and relations) in the questions to analyze which part of the question contributes more to the answer. Our experiments on the GQA dataset show that LRRA is significantly better than the existing representative model (57.12% vs. 56.39%). Our experiments on the perturbed GQA test set show that the relations between objects are more important for answering complicated questions than the attributes of objects. Keywords: Visual Question Answering, Relations Between Objects, Neural-Symbolic Reasoning.
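As a rough illustration of the step-by-step execution the abstract describes, here is a toy Python sketch that runs a short instruction program over a symbolic scene graph. The graph format and instruction names are invented for illustration and are not the paper's actual operators.

```python
# A toy scene graph: objects with attributes, plus (subject, predicate, object) relations.
SCENE_GRAPH = {
    "objects": {
        0: {"name": "dog", "attributes": ["brown"]},
        1: {"name": "ball", "attributes": ["red"]},
    },
    "relations": [(0, "chasing", 1)],
}

def execute(instructions, graph):
    """Execute a list of (op, arg) reasoning instructions one at a time."""
    current = set(graph["objects"])                      # start from all objects
    for op, arg in instructions:
        if op == "filter_name":                          # keep objects with this name
            current = {i for i in current if graph["objects"][i]["name"] == arg}
        elif op == "relate":                             # follow a relation edge
            current = {o for s, p, o in graph["relations"]
                       if s in current and p == arg}
        elif op == "query_attribute":                    # read attributes of what remains
            return [graph["objects"][i]["attributes"] for i in sorted(current)]
    return sorted(current)

# "What colour is the thing the dog is chasing?"
program = [("filter_name", "dog"), ("relate", "chasing"), ("query_attribute", None)]
print(execute(program, SCENE_GRAPH))                     # -> [['red']]
```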

Improving Entity Linking by Encoding Type Information into Entity Embeddings
Li Tianran | Yang Erguang | Zhang Yujie | Chen Yufeng | Xu Jinan
Proceedings of the 20th Chinese National Conference on Computational Linguistics

Entity Linking (EL) refers to the task of linking entity mentions in the text to the correct entities in the Knowledge Base (KB), in which entity embeddings play a vital and challenging role because of the subtle differences between entities. However, existing pre-trained entity embeddings only learn the underlying semantic information in texts, while the fine-grained entity type information is ignored, which causes the type of the linked entity to be incompatible with the mention context. In order to solve this problem, we propose to encode fine-grained type information into entity embeddings. We first pre-train word vectors to inject type information by embedding words and fine-grained entity types into the same vector space. Then we retrain entity embeddings with word vectors containing fine-grained type information. By applying our entity embeddings to two existing EL models, our method achieves 0.82% and 0.42% improvement, respectively, on the average F1 score of the test sets. Meanwhile, our method is model-agnostic, which means it can also help other EL models.
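A minimal sketch of one way the abstract's first step could look in practice: before pre-training word vectors, each annotated mention is paired with a fine-grained type token so that words and type labels end up in the same embedding space. The span format and type labels below are assumptions for illustration, not the paper's exact procedure.

```python
def inject_type_tokens(tokens, mentions):
    """Insert a fine-grained type token after each annotated mention span.

    mentions: list of (start, end, fine_grained_type) spans over `tokens`.
    The resulting corpus lets ordinary word-vector training place words and
    type labels in the same vector space.
    """
    spans = {start: (end, f"<TYPE:{t}>") for start, end, t in mentions}
    out, i = [], 0
    while i < len(tokens):
        if i in spans:
            end, type_tok = spans[i]
            out.extend(tokens[i:end])
            out.append(type_tok)        # type token shares the mention's context window
            i = end
        else:
            out.append(tokens[i])
            i += 1
    return out


if __name__ == "__main__":
    sent = "Michael Jordan played for the Chicago Bulls".split()
    mentions = [(0, 2, "person.athlete"), (5, 7, "organization.sports_team")]
    print(inject_type_tokens(sent, mentions))
```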

Improving Low-Resource Named Entity Recognition via Label-Aware Data Augmentation and Curriculum Denoising
Zhu Wenjing | Liu Jian | Xu Jinan | Chen Yufeng | Zhang Yujie
Proceedings of the 20th Chinese National Conference on Computational Linguistics

Deep neural networks have achieved state-of-the-art performance on named entity recognition (NER) with sufficient training data, while they perform poorly in low-resource scenarios due to data scarcity. To solve this problem, we propose a novel data augmentation method based on a pre-trained language model (PLM) and a curriculum learning strategy. Concretely, we use the PLM to generate diverse training instances by predicting different masked words, and design a task-specific curriculum learning strategy to alleviate the influence of noise. We evaluate the effectiveness of our approach on three datasets: CoNLL-2003, OntoNotes 5.0, and MaScip, of which the first two are simulated low-resource scenarios and the last one is a real low-resource dataset in the material science domain. Experimental results show that our method consistently outperforms the baseline model. Specifically, our method achieves an absolute improvement of 3.46% F1 score on 1% of CoNLL-2003, 2.58% on 1% of OntoNotes 5.0, and 0.99% on the full MaScip.
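A hedged sketch of the masked-word augmentation idea using the Hugging Face fill-mask pipeline: one context token is masked, the PLM proposes substitutes, and the token-level NER labels are carried over unchanged. The model name and the restriction to non-entity (O) tokens are assumptions, not the paper's exact recipe.

```python
from transformers import pipeline

# Any masked LM works here; bert-base-cased is just an example choice.
unmasker = pipeline("fill-mask", model="bert-base-cased")

def augment(tokens, labels, position, top_k=3):
    """Return up to top_k new (tokens, labels) pairs with tokens[position] replaced."""
    assert labels[position] == "O", "this sketch only rewrites non-entity tokens"
    masked = tokens.copy()
    masked[position] = unmasker.tokenizer.mask_token
    augmented = []
    for pred in unmasker(" ".join(masked), top_k=top_k):
        new_tokens = tokens.copy()
        new_tokens[position] = pred["token_str"]
        augmented.append((new_tokens, labels))   # labels unchanged -> label-aware
    return augmented


if __name__ == "__main__":
    tokens = ["John", "visited", "Berlin", "yesterday"]
    labels = ["B-PER", "O", "B-LOC", "O"]
    for new_tokens, new_labels in augment(tokens, labels, position=1):
        print(new_tokens, new_labels)
```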