2025
Towards Multi-System Log Anomaly Detection
Boyang Wang | Runqiang Zang | Hongcheng Guo | Shun Zhang | Shaosheng Cao | Donglin Di | Zhoujun Li
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track)
Despite advances in unsupervised log anomaly detection, current models require dataset-specific training, which incurs costly procedures, limits scalability, and creates performance bottlenecks. Furthermore, many models lack cognitive reasoning abilities, limiting their transferability to similar systems. These models also fall into the **“identical shortcut”** predicament, erroneously predicting the normal class for rare anomaly logs due to reconstruction errors. To address these issues, we propose **MLAD**, a novel **M**ulti-system **L**og **A**nomaly **D**etection model incorporating semantic relational reasoning. Specifically, we extract cross-system semantic patterns and encode them as high-dimensional learnable vectors. We then revamp the attention formulas to weigh keyword significance and model the overall distribution through vector-space diffusion. Lastly, we employ a Gaussian mixture model to capture the uncertainty of rare words, optimizing the vector space with expectation maximization. Experiments on real-world datasets demonstrate the superiority of MLAD.
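The abstract gives no implementation details, but the rare-word uncertainty step can be illustrated with a standard Gaussian mixture fitted by expectation maximization. The sketch below is hypothetical, not the authors' code: it assumes per-token embeddings (`token_vecs`) from some pretrained encoder, and the component count and percentile threshold are illustrative choices rather than values from the paper.

```python
# Hypothetical sketch of GMM-based rare-word weighting in the spirit of MLAD.
# Assumes `token_vecs` holds one embedding per log token.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
token_vecs = rng.normal(size=(1000, 64))  # placeholder embeddings

# Fit a Gaussian mixture over the token-embedding space via EM.
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
gmm.fit(token_vecs)

# Per-token log-likelihood under the mixture: low values mark rare tokens
# whose low reconstruction error should not be trusted (the "identical shortcut").
log_liks = gmm.score_samples(token_vecs)
rare_mask = log_liks < np.percentile(log_liks, 5)  # illustrative threshold
print(f"{rare_mask.sum()} tokens flagged as rare")
```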
RedOne: Revealing Domain-specific LLM Post-Training in Social Networking Services
Fei Zhao | Chonggang Lu | Wangyue | Zheyong Xie | Ziyan Liu | Haofu Qian | Jianzhao Huang | Fangcheng Shi | Zijie Meng | Hongcheng Guo | Mingqian He | Xinze Lyu | Zheyu Ye | Weiting Liu | Boyang Wang | Shaosheng Cao
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
As a primary medium for modern information dissemination, social networking services (SNS) have experienced rapid growth, which poses significant challenges for platform content management and interaction quality improvement. Recently, the development of large language models (LLMs) has offered potential solutions, but existing studies focus on isolated tasks, which not only yields diminishing returns from data scaling within individual scenarios but also fails to adapt flexibly to diverse real-world contexts. To address these challenges, we introduce RedOne, a domain-specific LLM designed to break the performance bottleneck of single-task baselines and establish a comprehensive foundation for SNS. RedOne was developed through a three-stage training strategy consisting of continual pretraining, supervised fine-tuning, and preference optimization, using a large-scale real-world dataset. Through extensive experiments, we show that RedOne maintains strong general capabilities and achieves an average improvement of up to 14.02% across 8 major SNS tasks and 7.56% on an SNS bilingual evaluation benchmark, compared with base models. Furthermore, in online testing, RedOne reduced the exposure rate in harmful content detection by 11.23% and improved the click page rate in post-view search by 14.95% compared with single-task baseline models. These results establish RedOne as a robust domain-specific LLM for SNS, demonstrating excellent generalization across tasks and promising applicability in real-world scenarios.
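The abstract names the three training stages but not the algorithms behind them. As one plausible reading, the final preference-optimization stage could use a DPO-style objective; the sketch below implements that loss in PyTorch under that assumption, with all tensor names (`policy_*`, `ref_*`) hypothetical.

```python
# Hypothetical DPO-style preference loss for a third-stage preference
# optimization step; the abstract says only "preference optimization",
# so the exact objective here is an assumption, not RedOne's method.
import torch
import torch.nn.functional as F

def preference_loss(policy_chosen_logps: torch.Tensor,
                    policy_rejected_logps: torch.Tensor,
                    ref_chosen_logps: torch.Tensor,
                    ref_rejected_logps: torch.Tensor,
                    beta: float = 0.1) -> torch.Tensor:
    """Push the policy to prefer chosen over rejected responses more
    strongly than a frozen reference model does (DPO objective)."""
    policy_logratio = policy_chosen_logps - policy_rejected_logps
    ref_logratio = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (policy_logratio - ref_logratio)).mean()

# Toy usage with random sequence log-probabilities for a batch of 4 pairs.
batch = [torch.randn(4) for _ in range(4)]
print(preference_loss(*batch))
```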
2023
M2C: Towards Automatic Multimodal Manga Complement
Hongcheng Guo | Boyang Wang | Jiaqi Bai | Jiaheng Liu | Jian Yang | Zhoujun Li
Findings of the Association for Computational Linguistics: EMNLP 2023
Multimodal manga analysis focuses on enhancing manga understanding with visual and textual features, and has attracted considerable attention from both the natural language processing and computer vision communities. Currently, most comics are hand-drawn and prone to problems such as missing pages, text contamination, and text aging, which cause comic text content to be lost and seriously hinder human comprehension. In other words, the Multimodal Manga Complement (M2C) task, which aims to handle these issues by providing a shared semantic space for vision and language understanding, has not yet been investigated. To this end, we propose the Multimodal Manga Complement task and establish a new M2C benchmark dataset covering two languages. We first design a manga augmentation method, MCoT, to mine event knowledge in comics with large language models. Then, an effective baseline, FVP-M2, using fine-grained visual prompts is proposed to support manga complement. Extensive experimental results show the effectiveness of the FVP-M2 method for Multimodal Manga Complement.
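The abstract describes FVP-M2 as using fine-grained visual prompts but gives no architecture details. A common way to realize visual prompting is to project patch-level image features into the language model's embedding space and prepend them to the token embeddings; the PyTorch module below sketches that general pattern, with all dimensions and names hypothetical rather than taken from the paper.

```python
# Hypothetical sketch of fine-grained visual prompting in the spirit of
# FVP-M2: project per-patch image features into the text embedding space
# and prepend them as soft prompt tokens. Not the authors' architecture.
import torch
import torch.nn as nn

class VisualPromptPrefix(nn.Module):
    def __init__(self, vis_dim: int = 768, txt_dim: int = 512):
        super().__init__()
        # Maps each visual patch feature to a pseudo-token embedding.
        self.proj = nn.Linear(vis_dim, txt_dim)

    def forward(self, patch_feats: torch.Tensor,
                token_embeds: torch.Tensor) -> torch.Tensor:
        # patch_feats: (batch, n_patches, vis_dim) from a vision encoder
        # token_embeds: (batch, seq_len, txt_dim) from the text embedder
        prompts = self.proj(patch_feats)                  # (batch, n_patches, txt_dim)
        return torch.cat([prompts, token_embeds], dim=1)  # prefix the prompts

# Toy shapes: 2 manga panels, 49 patches each, 16 text tokens.
module = VisualPromptPrefix()
out = module(torch.randn(2, 49, 768), torch.randn(2, 16, 512))
print(out.shape)  # torch.Size([2, 65, 512])
```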