Jing Zhu


2021

NegatER: Unsupervised Discovery of Negatives in Commonsense Knowledge Bases
Tara Safavi | Jing Zhu | Danai Koutra
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Codifying commonsense knowledge in machines is a longstanding goal of artificial intelligence. Recently, much progress toward this goal has been made with automatic knowledge base (KB) construction techniques. However, such techniques focus primarily on the acquisition of positive (true) KB statements, even though negative (false) statements are often also important for discriminative reasoning over commonsense KBs. As a first step toward the latter, this paper proposes NegatER, a framework that ranks potential negatives in commonsense KBs using a contextual language model (LM). Importantly, as most KBs do not contain negatives, NegatER relies only on the positive knowledge in the LM and does not require ground-truth negative examples. Experiments demonstrate that, compared to multiple contrastive data augmentation approaches, NegatER yields negatives that are more grammatical, coherent, and informative, leading to statistically significant accuracy improvements in a challenging KB completion task and confirming that the positive knowledge in LMs can be “re-purposed” to generate negative knowledge.
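
The abstract describes the framework only at a high level. As a rough illustration of the general idea (corrupt positive triples to produce candidate negatives, then rank the candidates by the score of an LM fine-tuned on the KB's positives), here is a minimal sketch. It is not the authors' implementation: the model name, prompt format, and classifier-based scoring are illustrative assumptions, and in practice the classifier would first be fine-tuned on the KB's positive statements.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint; a NegatER-style pipeline assumes this model was
# fine-tuned to classify positive KB statements as true.
MODEL_NAME = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def corrupt(triples, entities):
    """Generate candidate negatives by swapping the tail of each positive."""
    candidates = []
    for head, rel, tail in triples:
        for alt in entities:
            if alt != tail:
                candidates.append((head, rel, alt))
    return candidates

@torch.no_grad()
def score(triple):
    """Score a (head, relation, tail) statement with the LM classifier.
    A higher 'true' probability marks a more plausible-looking, i.e.
    harder and more informative, candidate negative."""
    head, rel, tail = triple
    inputs = tokenizer(f"{head} {rel} {tail}", return_tensors="pt")
    logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Toy positives and entity vocabulary, purely for illustration.
positives = [("bird", "capable of", "fly"), ("fish", "capable of", "swim")]
entities = ["fly", "swim", "sing"]
ranked = sorted(corrupt(positives, entities), key=score, reverse=True)
print(ranked[:3])  # top-ranked candidate negatives

The key design point the abstract emphasizes carries over to this sketch: no ground-truth negatives are required anywhere; the ranking signal comes entirely from the positive knowledge encoded in the fine-tuned LM.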

2006

An HMM-Based Approach to Automatic Phrasing for Mandarin Text-to-Speech Synthesis
Jing Zhu | Jian-Hua Li
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions