Rui Cao


2022

Exploring the Impact of Negative Samples of Contrastive Learning: A Case Study of Sentence Embedding
Rui Cao | Yihao Wang | Yuxin Liang | Ling Gao | Jie Zheng | Jie Ren | Zheng Wang
Findings of the Association for Computational Linguistics: ACL 2022

Contrastive learning is emerging as a powerful technique for extracting knowledge from unlabeled data. The technique requires a balanced mixture of two ingredients: positive (similar) and negative (dissimilar) samples, which is typically achieved by maintaining a queue of negative samples during training. Prior work in the area typically uses a fixed-length negative sample queue, but how the number of negative samples affects model performance remains unclear; this unresolved question motivated our in-depth exploration. This paper presents MoCoSE, a momentum contrastive learning model with a negative sample queue for sentence embedding. We add a prediction layer to the online branch to make the model asymmetric, which, together with the EMA update mechanism of the target branch, prevents the model from collapsing. We define a maximum traceable distance metric, through which we measure to what extent text contrastive learning benefits from the historical information of negative samples. Our experiments find that the best results are obtained when the maximum traceable distance lies within a certain range, demonstrating that there is an optimal range of historical information for a negative sample queue. We evaluate the proposed unsupervised MoCoSE on the semantic textual similarity (STS) task and obtain an average Spearman’s correlation of 77.27%. Source code is publicly available.
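As an illustration of the mechanics the abstract describes (a momentum-updated target branch, a prediction layer on the online branch, and a FIFO queue of negative samples), here is a minimal PyTorch-style sketch. The class name, queue size, momentum, and temperature are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

# Minimal MoCo-style contrastive step with a negative-sample queue and an
# EMA-updated target branch, loosely following the mechanics in the abstract.
# Encoders are assumed to map a batch of inputs to (B, dim) embeddings.
# Queue size, momentum, and temperature values are illustrative only.

class MomentumQueueContrast(torch.nn.Module):
    def __init__(self, online_encoder, target_encoder, dim=768,
                 queue_size=4096, momentum=0.999, temperature=0.05):
        super().__init__()
        self.online = online_encoder                  # updated by gradients
        self.target = target_encoder                  # updated by EMA only
        self.predictor = torch.nn.Linear(dim, dim)    # asymmetry on the online branch
        self.momentum = momentum
        self.temperature = temperature
        # Placeholder initialization of the negative-sample queue.
        self.register_buffer("queue", F.normalize(torch.randn(queue_size, dim), dim=1))

    @torch.no_grad()
    def _ema_update(self):
        # Target parameters track the online parameters with momentum m.
        for p_t, p_o in zip(self.target.parameters(), self.online.parameters()):
            p_t.data = self.momentum * p_t.data + (1.0 - self.momentum) * p_o.data

    def forward(self, view_a, view_b):
        q = F.normalize(self.predictor(self.online(view_a)), dim=1)   # queries
        with torch.no_grad():
            self._ema_update()
            k = F.normalize(self.target(view_b), dim=1)               # positive keys
        pos = (q * k).sum(dim=1, keepdim=True)                        # (B, 1)
        neg = q @ self.queue.t()                                      # (B, K) negatives
        logits = torch.cat([pos, neg], dim=1) / self.temperature
        labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
        loss = F.cross_entropy(logits, labels)
        # Enqueue the newest keys and drop the oldest (FIFO queue).
        self.queue = torch.cat([k.detach(), self.queue], dim=0)[: self.queue.size(0)]
        return loss
```

The queue length in such a setup determines how far back in training the stored negatives reach, which is the kind of "historical information" the maximum traceable distance metric is meant to capture.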

2021

Holistic interpretation in locative alternation – Evidence from self-paced reading
Rui Cao
Proceedings of the 35th Pacific Asia Conference on Language, Information and Computation

2020

HateGAN: Adversarial Generative-Based Data Augmentation for Hate Speech Detection
Rui Cao | Roy Ka-Wei Lee
Proceedings of the 28th International Conference on Computational Linguistics

Academia and industry have developed machine learning and natural language processing models to detect online hate speech automatically. However, most of these existing methods adopt a supervised approach that depends heavily on labeled datasets for training, and their detection performance on the hate speech class suffers because the training datasets are highly imbalanced. In this paper, we propose HateGAN, a deep generative reinforcement learning model that addresses the class imbalance challenge by augmenting the dataset with hateful tweets. We conduct extensive experiments that augment two commonly used hate speech detection datasets with HateGAN-generated tweets. Our results show that HateGAN improves detection performance on the hate speech class regardless of the classifiers and datasets used in the detection task. Specifically, we observe an average 5% improvement in hate-class F1 scores across all state-of-the-art hate speech classifiers. We also conduct case studies that empirically examine the HateGAN-generated hate speech and show that the generated tweets are diverse, coherent, and relevant to hate speech detection.
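The augmentation idea the abstract describes, generating additional hateful tweets to rebalance the training data before fitting a detector, can be sketched as follows. The generator call, classifier choice, and label convention (1 = hate, 0 = non-hate) are hypothetical placeholders, not the paper's actual pipeline.

```python
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative augmentation loop: synthetic hateful tweets are appended to the
# minority class before training a detector. `generate_hateful_tweet` stands in
# for a trained generator and is a hypothetical placeholder.

def augment_and_train(texts, labels, generate_hateful_tweet, hate_label=1):
    counts = Counter(labels)
    # Generate enough synthetic hate examples to close the class-size gap.
    deficit = counts[0] - counts[hate_label]
    synthetic = [generate_hateful_tweet() for _ in range(max(deficit, 0))]
    aug_texts = list(texts) + synthetic
    aug_labels = list(labels) + [hate_label] * len(synthetic)
    # Any downstream classifier can sit here; a TF-IDF + logistic regression
    # pipeline is used purely as a stand-in.
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression(max_iter=1000))
    clf.fit(aug_texts, aug_labels)
    return clf
```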

Evaluation of Pretrained BERT Model by Using Sentence Clustering
Naoki Shibayama | Rui Cao | Jing Bai | Wen Ma | Hiroyuki Shinnou
Proceedings of the 34th Pacific Asia Conference on Language, Information and Computation