Min Gui
2021
MinD at SemEval-2021 Task 6: Propaganda Detection using Transfer Learning and Multimodal Fusion
Junfeng Tian | Min Gui | Chenliang Li | Ming Yan | Wenming Xiao
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
We describe our systems for subtask 1 and subtask 3 of SemEval-2021 Task 6 on Detection of Persuasion Techniques in Texts and Images. The goal of subtask 1 is to identify propaganda techniques from textual content alone, and the goal of subtask 3 is to detect them from both textual and visual content. For subtask 1, we investigate transfer learning based on pre-trained language models (PLMs) such as BERT and RoBERTa to address data sparsity. For subtask 3, we extract heterogeneous visual representations (i.e., face features, OCR features, and multimodal representations) and explore various multimodal fusion strategies to combine the textual and visual representations. In the official evaluation, our ensemble model ranks 1st on subtask 1 and 2nd on subtask 3.
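As a rough illustration of one common fusion strategy mentioned in the abstract (combining textual and visual representations), here is a minimal sketch in PyTorch of late fusion by feature concatenation. The class name, dimensions, layer sizes, and label count are illustrative assumptions, not the system described in the paper.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Illustrative late-fusion classifier: concatenate a pooled text
    representation (e.g., from a pre-trained language model) with a
    visual feature vector, then score persuasion-technique labels.
    Hypothetical sketch; not the paper's architecture."""

    def __init__(self, text_dim=768, visual_dim=512, num_labels=20):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + visual_dim, 512),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(512, num_labels),
        )

    def forward(self, text_repr, visual_repr):
        # text_repr: (batch, text_dim), visual_repr: (batch, visual_dim)
        fused = torch.cat([text_repr, visual_repr], dim=-1)
        return self.fuse(fused)  # raw logits; apply sigmoid for multi-label probabilities
```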
2019
Attention Optimization for Abstractive Document Summarization
Min Gui | Junfeng Tian | Rui Wang | Zhenglu Yang
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Attention plays a key role in improving sequence-to-sequence document summarization models. To obtain powerful attention that helps reproduce the most salient information and avoid repetition, we augment the vanilla attention model from both local and global perspectives. We propose an attention refinement unit paired with a local variance loss to supervise the attention model at each decoding step, and a global variance loss to optimize the attention distributions of all decoding steps from a global perspective. Results on the CNN/Daily Mail dataset verify the effectiveness of our methods.
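To make the idea of regularizing attention across decoding steps concrete, below is a minimal sketch of one possible variance-based penalty on accumulated attention. The function name, tensor shapes, and the exact formulation are assumptions for illustration only; the abstract does not give the paper's loss definitions.

```python
import torch

def attention_variance_penalty(attn):
    """Illustrative variance-style penalty on attention distributions.

    attn: (batch, dec_steps, src_len) attention weights from a
    sequence-to-sequence summarizer. This sketch penalizes uneven total
    coverage of the source, one way to discourage repeatedly attending
    to the same positions. It is not the loss defined in the paper.
    """
    # Total attention each source token receives across decoding steps.
    coverage = attn.sum(dim=1)          # (batch, src_len)
    # High variance means a few source tokens dominate while others
    # are ignored; minimizing it spreads attention more evenly.
    return coverage.var(dim=-1).mean()
```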
Co-authors
- Junfeng Tian 2
- Rui Wang 1
- Zhenglu Yang 1
- Chenliang Li 1
- Ming Yan 1