Yuchen Yan


2023

Noisy Positive-Unlabeled Learning with Self-Training for Speculative Knowledge Graph Reasoning
Ruijie Wang | Baoyu Li | Yichen Lu | Dachun Sun | Jinning Li | Yuchen Yan | Shengzhong Liu | Hanghang Tong | Tarek Abdelzaher
Findings of the Association for Computational Linguistics: ACL 2023

This paper studies the speculative reasoning task over real-world knowledge graphs (KGs) that suffer from both false negatives (i.e., potentially true facts that are excluded) and false positives (i.e., unreliable or outdated facts that are included). State-of-the-art methods fall short in speculative reasoning ability because they assume the correctness of a fact is determined solely by its presence in the KG, making them vulnerable to false negatives and false positives. We formulate this new reasoning task as a noisy Positive-Unlabeled (PU) learning problem and propose a variational framework, nPUGraph, that jointly estimates the correctness of both collected and uncollected facts (which we call the label posterior) and updates model parameters during training. The label posterior estimation facilitates speculative reasoning from two perspectives. First, it improves the robustness of a label posterior-aware graph encoder against false positive links. Second, it identifies missing facts that provide high-quality grounds for reasoning. The two are unified in a simple yet effective self-training procedure. Empirically, extensive experiments on three benchmark KGs and one Twitter dataset with varying degrees of false negatives and false positives demonstrate the effectiveness of nPUGraph.
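As a rough, hypothetical illustration of the alternating scheme the abstract describes (not the paper's implementation), the toy NumPy loop below estimates a label posterior for every fact, collected or unlabeled, uses it as a soft target for a parameter update, and promotes high-posterior unlabeled facts to positives in a self-training step. The one-parameter model, the assumed noise rates, and all names here are invented for the sketch.

```python
# Toy sketch of noisy PU self-training with label posterior estimation.
# NOT the nPUGraph implementation; all quantities are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n candidate facts; some are "collected" (positive-labeled),
# the rest unlabeled. The latent true/false state is hidden from training.
n = 1000
true_state = rng.random(n) < 0.5                       # latent correctness
collected = (true_state & (rng.random(n) < 0.6)) | \
            (~true_state & (rng.random(n) < 0.05))     # noisy collection

theta = 0.0                                            # toy 1-parameter model
features = true_state.astype(float) + rng.normal(0, 0.5, n)  # noisy evidence

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(20):
    # Posterior step: estimate correctness for EVERY fact. Collected facts
    # are not trusted blindly (false positives), and unlabeled facts are
    # not assumed false (false negatives).
    score = sigmoid(theta * features)
    prior = np.where(collected, 0.9, 0.3)   # assumed labeling reliability
    posterior = score * prior / (score * prior + (1 - score) * (1 - prior))

    # Parameter step: gradient ascent on a posterior-weighted logistic
    # likelihood, standing in for "update model parameters during training".
    grad = np.mean((posterior - score) * features)
    theta += 1.0 * grad

    # Self-training: promote high-posterior unlabeled facts to positives,
    # i.e., recover likely false negatives as new grounds for reasoning.
    collected = collected | (~collected & (posterior > 0.95))

print("learned theta:", round(theta, 3))
print("fraction of positive-labeled facts that are actually true:",
      np.mean(true_state[collected]).round(3))
```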

A Corpus for Named Entity Recognition in Chinese Novels with Multi-genres
Hanjie Zhao | Jinge Xie | Yuchen Yan | Yuxiang Jia | Yawen Ye | Hongying Zan
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation

2019

Efficient Bilingual Generalization from Neural Transduction Grammar Induction
Yuchen Yan | Dekai Wu | Serkan Kumyol
Proceedings of the 16th International Conference on Spoken Language Translation

We introduce (1) a novel neural network structure for bilingual modeling of sentence pairs that efficiently captures bilingual relationships via biconstituent composition, (2) the concept of neural network biparsing, which applies not only to machine translation (MT) but also to a variety of other bilingual research areas, and (3) the concept of a biparsing-backpropagation training loop, which we hypothesize can efficiently learn complex biparse tree patterns. Our work is distinguished from the sequential attention-based models more traditionally found in neural machine translation (NMT) in three respects. First, our model enforces compositional constraints. Second, it has a smaller search space for discovering bilingual relationships in bilingual sentence pairs. Third, it produces explicit biparse trees, which enable transparent error analysis during evaluation and external tree constraints during training.
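As a rough sketch of the biconstituent composition idea (not the authors' architecture; the composition function and all names are hypothetical), the snippet below assigns one vector to each bilingual constituent, i.e., a paired source/target span, and composes parent biconstituents recursively from their children, with separate parameters for straight and inverted orderings in the spirit of transduction grammars.

```python
# Hypothetical illustration of biconstituent composition; not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding size of one biconstituent

# Leaves: aligned word pairs, each represented by a single vector.
leaves = {("she", "elle"): rng.normal(size=d),
          ("sings", "chante"): rng.normal(size=d)}

# Separate composition parameters for straight and inverted orderings,
# echoing the two rule types of an inversion transduction grammar.
W = {"straight": rng.normal(scale=0.1, size=(d, 2 * d)),
     "inverted": rng.normal(scale=0.1, size=(d, 2 * d))}
b = {"straight": np.zeros(d), "inverted": np.zeros(d)}

def compose(left, right, orientation="straight"):
    """Compose two child biconstituent vectors into one parent vector
    with a shared recursive composition function."""
    x = np.concatenate([left, right])
    return np.tanh(W[orientation] @ x + b[orientation])

# Parent biconstituent ("she sings" / "elle chante") built from its children;
# an explicit biparse tree is this composition applied recursively.
parent = compose(leaves[("she", "elle")], leaves[("sings", "chante")])
print(parent.shape)  # (8,) -- one vector per bilingual constituent
```

A biparser in this spirit would search over bracketings and orientations for the highest-scoring tree, and the biparsing-backpropagation loop the abstract names would then backpropagate through the resulting biparse trees.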