Shujian Zhang


2022

ALLSH: Active Learning Guided by Local Sensitivity and Hardness
Shujian Zhang | Chengyue Gong | Xingchao Liu | Pengcheng He | Weizhu Chen | Mingyuan Zhou
Findings of the Association for Computational Linguistics: NAACL 2022

Active learning, which effectively collects informative unlabeled data for annotation, reduces the demand for labeled data. In this work, we propose to retrieve unlabeled samples with a local-sensitivity- and hardness-aware acquisition function. The proposed method generates data copies through local perturbations and selects the data points whose predictive likelihoods diverge the most from those of their copies. We further strengthen the acquisition function by injecting a selected worst-case perturbation. Our method achieves consistent gains over commonly used active learning strategies on various classification tasks. Furthermore, we observe consistent improvements over the baselines in a study of prompt selection for prompt-based few-shot learning. These experiments demonstrate that acquisition guided by local sensitivity and hardness can be effective and beneficial for many NLP tasks.
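
A minimal sketch of this acquisition score, in NumPy: rank each unlabeled example by the average KL divergence between the model's predictive distribution on the original input and on a few locally perturbed copies. The model, perturb, pool, and budget names are hypothetical stand-ins, not the paper's implementation; the worst-case-perturbation variant would keep the copy with the largest divergence rather than averaging.

    # Hedged sketch: local-sensitivity acquisition via divergence between
    # predictions on an input and on its perturbed copies. `model` and
    # `perturb` are assumed callables, not the authors' actual code.
    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        # Row-wise KL(p || q) over predictive class probabilities.
        p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
        return np.sum(p * np.log(p / q), axis=-1)

    def sensitivity_scores(model, perturb, pool, n_copies=4):
        # Average divergence between each example's prediction and the
        # predictions on n_copies locally perturbed versions of it.
        p_orig = model(pool)                          # (N, num_classes)
        scores = np.zeros(len(pool))
        for _ in range(n_copies):
            p_copy = model([perturb(x) for x in pool])
            scores += kl_divergence(p_orig, p_copy)
        return scores / n_copies

    # Acquisition step: send the highest-scoring examples to annotators.
    # chosen = np.argsort(-sensitivity_scores(model, perturb, pool))[:budget]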

2021

Knowing More About Questions Can Help: Improving Calibration in Question Answering
Shujian Zhang | Chengyue Gong | Eunsol Choi
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Learning with Different Amounts of Annotation: From Zero to Many Labels
Shujian Zhang | Chengyue Gong | Eunsol Choi
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Training NLP systems typically assumes access to annotated data with a single human label per example. Given imperfect labeling from annotators and the inherent ambiguity of language, we hypothesize that a single label is not sufficient to learn the spectrum of language interpretation. We explore new annotation-distribution schemes, assigning multiple labels per example for a small subset of training examples. Introducing such multi-label examples at the cost of annotating fewer examples brings clear gains on natural language inference and entity typing tasks, even when we simply train first with single-label data and then fine-tune with multi-label examples. Extending the MixUp data augmentation framework, we propose a learning algorithm that can learn from training examples with different amounts of annotation (zero, one, or multiple labels). This algorithm efficiently combines signals from uneven training data and brings additional gains in low-annotation-budget and cross-domain settings. Together, our methods achieve consistent gains on both tasks, suggesting that distributing labels unevenly among training examples can be beneficial for many NLP tasks.
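
To illustrate the MixUp extension over examples with differing annotation, here is a minimal NumPy sketch (names are illustrative, not the paper's code): every example carries a soft label distribution, a one-hot vector for a single label, averaged annotator votes for multiple labels, and, as an assumption here, a model-predicted pseudo-distribution for zero-label examples, so MixUp can interpolate features and label distributions alike within one batch.

    # Hedged sketch: MixUp over label *distributions*, letting examples
    # with one label (one-hot), many labels (averaged votes), or zero
    # labels (e.g., a model-predicted pseudo-distribution -- an
    # assumption here) be mixed in the same batch.
    import numpy as np

    def mixup_batch(x, y, alpha=0.4, rng=None):
        # x: (B, D) input features; y: (B, C) soft label distributions.
        rng = rng or np.random.default_rng()
        lam = rng.beta(alpha, alpha)        # mixing coefficient in (0, 1)
        perm = rng.permutation(len(x))      # random partner for each row
        x_mix = lam * x + (1.0 - lam) * x[perm]
        y_mix = lam * y + (1.0 - lam) * y[perm]
        return x_mix, y_mix                 # train with cross-entropy on y_mix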