Ran Li
2021
STANKER: Stacking Network based on Level-grained Attention-masked BERT for Rumor Detection on Social Media
Dongning Rao | Xin Miao | Zhihua Jiang | Ran Li
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Rumor detection on social media puts pre-trained language models (LMs), such as BERT, and auxiliary features, such as comments, into use. However, on the one hand, Chinese rumor detection datasets that include comments are rare; on the other hand, the intensive attention interaction in Transformer-based models like BERT may hinder performance improvement. To alleviate these problems, we build a new Chinese microblog dataset named Weibo20 by collecting posts and associated comments from Sina Weibo, and propose a new ensemble named STANKER (Stacking neTwork bAsed-on atteNtion-masKed BERT). STANKER adopts two level-grained attention-masked BERT (LGAM-BERT) models as base encoders. Unlike the original BERT, our new LGAM-BERT model takes comments as important auxiliary features and masks co-attention between posts and comments in the lower layers. Experiments on Weibo20 and three existing social media datasets showed that STANKER outperformed all compared models, especially beating the old state-of-the-art on the Weibo dataset.
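The abstract's central mechanism, blocking co-attention between post and comment tokens in the lower Transformer layers while allowing full attention higher up, can be illustrated with a minimal sketch. This is not the authors' implementation: the segment encoding (0 for post tokens, 1 for comment tokens), the function name `lgam_attention_mask`, and where the lower/upper layer split falls are all assumptions made here for illustration.

```python
# Minimal sketch (not the authors' code) of lower-layer attention masking
# as described in the STANKER abstract: post tokens and comment tokens
# may not attend to each other in the lower Transformer layers.
import torch

def lgam_attention_mask(segment_ids: torch.Tensor) -> torch.Tensor:
    """Build a [batch, seq, seq] boolean mask allowing attention only
    within the same segment (hypothetical: 0 = post, 1 = comment)."""
    same_segment = segment_ids.unsqueeze(1) == segment_ids.unsqueeze(2)
    return same_segment  # True where attention is permitted

# Example: a post of 3 tokens followed by a comment of 2 tokens.
segments = torch.tensor([[0, 0, 0, 1, 1]])
lower_layer_mask = lgam_attention_mask(segments)
# Lower layers would apply `lower_layer_mask`; upper layers would use
# full attention, e.g. torch.ones_like(lower_layer_mask, dtype=torch.bool).
```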
2016
AMR Parsing with an Incremental Joint Model
Junsheng Zhou | Feiyu Xu | Hans Uszkoreit | Weiguang Qu | Ran Li | Yanhui Gu
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
Co-authors
- Junsheng Zhou (周俊生) 1
- Feiyu Xu 1
- Hans Uszkoreit 1
- Weiguang Qu 1
- Yanhui Gu (顾彦慧) 1