Geng Zhao
2025
Cross-lingual Social Misinformation Detector based on Hierarchical Mixture-of-Experts Adapter
Haofang Fan | Xiran Hu | Geng Zhao
Proceedings of the 31st International Conference on Computational Linguistics
The spread of social misinformation has become a global concern, particularly affecting non-native-speaker users, who are more susceptible to misinformation on foreign social media platforms. In light of this, this study focuses on mitigating the difficulty that social misinformation detectors face in quickly regaining capability after crossing linguistic borders, especially for non-native users with only monolingual social media histories. By integrating sentiment analysis as an auxiliary, less sensitive task, we transform the challenging cross-lingual transfer into a manageable multi-task framework. We then propose HierMoE-Adpt, a novel, cost-effective parameter-efficient fine-tuning method based on hierarchical mixture-of-experts adaptation, to enhance cross-lingual social misinformation detection. HierMoE-Adpt includes a hierarchical routing strategy and an expert-mask mechanism, which effectively merge knowledge of understanding posts in the new language with misinformation detection capabilities, helping personal misinformation detectors recover their performance in sync with the dynamics of personal international travel.
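For intuition only, the following is a minimal PyTorch sketch of a hierarchical mixture-of-experts adapter with a two-level router and an expert mask. The module names, dimensions, grouping of experts, and routing scheme are illustrative assumptions, not the implementation described in the paper.

```python
# Hypothetical sketch: hierarchical MoE adapter with two-level routing and an
# expert mask. Not the paper's HierMoE-Adpt implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BottleneckAdapter(nn.Module):
    """Standard down-project / up-project adapter used as one expert."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(F.gelu(self.down(x)))


class HierMoEAdapterSketch(nn.Module):
    """Routes first over expert groups (e.g. language understanding vs.
    misinformation detection), then over experts within each group.
    An expert mask can disable individual experts during adaptation."""
    def __init__(self, hidden_dim: int, num_groups: int = 2, experts_per_group: int = 4):
        super().__init__()
        self.group_router = nn.Linear(hidden_dim, num_groups)
        self.expert_routers = nn.ModuleList(
            nn.Linear(hidden_dim, experts_per_group) for _ in range(num_groups)
        )
        self.experts = nn.ModuleList(
            nn.ModuleList(BottleneckAdapter(hidden_dim) for _ in range(experts_per_group))
            for _ in range(num_groups)
        )
        # 1 = expert active, 0 = masked out; keep at least one expert per group active
        self.register_buffer("expert_mask", torch.ones(num_groups, experts_per_group))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, hidden)
        group_weights = F.softmax(self.group_router(x), dim=-1)            # (B, S, G)
        out = torch.zeros_like(x)
        for g, (router, experts) in enumerate(zip(self.expert_routers, self.experts)):
            logits = router(x)                                             # (B, S, E)
            logits = logits.masked_fill(self.expert_mask[g] == 0, float("-inf"))
            expert_weights = F.softmax(logits, dim=-1)
            group_out = torch.zeros_like(x)
            for e, expert in enumerate(experts):
                group_out = group_out + expert_weights[..., e:e + 1] * expert(x)
            out = out + group_weights[..., g:g + 1] * group_out
        return x + out  # residual connection around the adapter
```

In such a design, masking a subset of experts is one way to keep language-understanding knowledge and detection knowledge separable while still merging them through the top-level router; whether this matches the paper's expert-mask mechanism is an open assumption here.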
Multilingual Federated Low-Rank Adaptation for Collaborative Content Anomaly Detection across Multilingual Social Media Participants
Jiaxin Li | Geng Zhao | Xiaoci Zhang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
The rapid development of multilingual social media platforms (SNS) has introduced new challenges for SNS content anomaly detection due to data islands and linguistic imbalance. While federated learning (FL) and parameter-efficient fine-tuning (PEFT) offer potential solutions in most cases, existing approaches struggle with multilingual heterogeneity when every client is multilingual: 1) entangled language-specific knowledge during aggregation, 2) noise from minority languages, and 3) unstable cross-platform collaboration. Building on the asymmetric nature of LoRA, we propose MuLA-F, a multilingual federated LoRA method that introduces SVD-based language-specific disentanglement of LoRA blocks and a local orthogonal tuning strategy. Evaluations across three SNS content anomaly detection tasks demonstrate MuLA-F's superiority in multilingual performance while reducing multilingual knowledge conflicts and communication rounds.
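As a rough illustration of the general idea, the sketch below decomposes a client's LoRA update with an SVD and splits it into a low-rank part that could be shared with the server and a remainder that stays local. The shared/local split, the rank threshold, and the function name are illustrative assumptions, not MuLA-F's actual procedure.

```python
# Hypothetical sketch: SVD-based split of a LoRA update before federated
# aggregation. Not MuLA-F's actual disentanglement algorithm.
import torch


def split_lora_update(lora_A: torch.Tensor, lora_B: torch.Tensor, shared_rank: int = 4):
    """lora_A: (r, in_dim), lora_B: (out_dim, r); delta_W = lora_B @ lora_A.

    Returns two low-rank factor pairs: the top-`shared_rank` singular
    directions (candidates for global aggregation) and the remainder
    (kept on the client as language-specific knowledge)."""
    delta_w = lora_B @ lora_A                          # (out_dim, in_dim)
    U, S, Vh = torch.linalg.svd(delta_w, full_matrices=False)

    shared_B = U[:, :shared_rank] * S[:shared_rank]    # absorb singular values into B
    shared_A = Vh[:shared_rank, :]
    local_B = U[:, shared_rank:] * S[shared_rank:]
    local_A = Vh[shared_rank:, :]
    return (shared_A, shared_B), (local_A, local_B)


# Usage: a client would upload only the shared factors for server-side
# averaging, while the language-specific factors never leave the client.
if __name__ == "__main__":
    r, in_dim, out_dim = 8, 128, 128
    A, B = torch.randn(r, in_dim), torch.randn(out_dim, r)
    (sA, sB), (lA, lB) = split_lora_update(A, B)
    recon = sB @ sA + lB @ lA
    print(torch.allclose(recon, B @ A, atol=1e-3))     # True: exact decomposition
```

Keeping only a few dominant singular directions for aggregation is one plausible way to reduce cross-language interference and communication cost; how MuLA-F assigns directions to languages is not specified by the abstract and is not modeled here.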