Shaojun Zhou


2025

UnifiedVisual: A Framework for Constructing Unified Vision-Language Datasets
Pengyu Wang | Shaojun Zhou | Chenkun Tan | Xinghao Wang | Wei Huang | Zhen Ye | Zhaowei Li | Botian Jiang | Dong Zhang | Xipeng Qiu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Unified vision large language models (VLLMs) have recently achieved impressive advancements in both multimodal understanding and generation, powering applications such as visual question answering and text-guided image synthesis. However, progress in unified VLLMs remains constrained by the lack of datasets that fully exploit the synergistic potential between these two core abilities. Existing datasets typically address understanding and generation in isolation, thereby limiting the performance of unified VLLMs. To bridge this critical gap, we introduce a novel dataset construction framework, UnifiedVisual, and present UnifiedVisual-240K, a high-quality dataset meticulously designed to facilitate mutual enhancement between multimodal understanding and generation. UnifiedVisual-240K seamlessly integrates diverse visual and textual inputs and outputs, enabling comprehensive cross-modal reasoning and precise text-to-image alignment. Our dataset encompasses a wide spectrum of tasks and data sources, ensuring rich diversity and addressing key shortcomings of prior resources. Extensive experiments demonstrate that models trained on UnifiedVisual-240K consistently achieve strong performance across a wide range of tasks. Notably, these models exhibit significant mutual reinforcement between multimodal understanding and generation, further validating the effectiveness of our framework and dataset. We believe UnifiedVisual opens a promising direction for advancing unified VLLMs and unlocking their full potential.

LongSafety: Enhance Safety for Long-Context LLMs
Mianqiu Huang | Xiaoran Liu | Shaojun Zhou | Mozhi Zhang | Qipeng Guo | Linyang Li | Pengyu Wang | Yang Gao | Chenkun Tan | Linlin Li | Qun Liu | Yaqian Zhou | Xipeng Qiu | Xuanjing Huang
Proceedings of the First Workshop on LLM Security (LLMSEC)

Recent advancements in model architectures and length extrapolation techniques have significantly extended the context length of large language models (LLMs), paving the way for their application in increasingly complex tasks. However, despite the growing capabilities of long-context LLMs, their safety in long-context scenarios remains underexplored: while safety alignment has been widely studied in short-context settings, the safety concerns of long-context LLMs have not been adequately addressed. In this work, we introduce $\textbf{LongSafety}$, a comprehensive safety alignment dataset for long-context LLMs, containing 10 tasks and 17k samples with an average length of 40.9k tokens. Our experiments demonstrate that training with LongSafety enhances long-context safety performance while also improving short-context safety and preserving general capabilities. Furthermore, we show that long-context safety is not achieved simply by performing long-context alignment with short-context safety data, and that LongSafety generalizes across context lengths and long-context safety scenarios.