Yushuo Chen
2025
GenSim: A General Social Simulation Platform with Large Language Model based Agents
Jiakai Tang | Heyang Gao | Xuchen Pan | Lei Wang | Haoran Tan | Dawei Gao | Yushuo Chen | Xu Chen | Yankai Lin | Yaliang Li | Bolin Ding | Jingren Zhou | Jun Wang | Ji-Rong Wen
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (System Demonstrations)
With the rapid advancement of large language models (LLMs), recent years have witnessed many promising studies on leveraging LLM-based agents to simulate human social behavior. While prior work has demonstrated significant potential across various domains, much of it has focused on specific scenarios involving a limited number of agents and has lacked the ability to adapt when errors occur during simulation. To overcome these limitations, we propose a novel LLM-agent-based simulation platform called GenSim, which: (1) Abstracts a set of general functions to simplify the simulation of customized social scenarios; (2) Supports one hundred thousand agents to better simulate large-scale populations in real-world contexts; (3) Incorporates error-correction mechanisms to ensure more reliable and long-term simulations. To evaluate our platform, we assess both the efficiency of large-scale agent simulations and the effectiveness of the error-correction mechanisms. To our knowledge, GenSim represents an initial step toward a general, large-scale, and correctable social simulation platform based on LLM agents, promising to further advance the field of social science.
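For illustration only, the sketch below shows the kind of error-corrected agent step the abstract alludes to: each agent asks an LLM for a structured action and, if the reply cannot be parsed, feeds the error back and retries before falling back to a no-op so a long-running simulation does not stall. None of these names (call_llm, agent_step, Agent profiles) come from GenSim; they are hypothetical stand-ins under assumed interfaces.

```python
# Hypothetical sketch of an error-corrected agent step loop; not GenSim's API.
import json
import random

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call; replace with a real client."""
    return '{"action": "post", "content": "hello"}'

def agent_step(profile: dict, observation: str, max_retries: int = 3) -> dict:
    """Ask the LLM for a JSON action; retry with feedback if parsing fails."""
    prompt = f"You are {profile['name']}. Observation: {observation}\nReply in JSON."
    for attempt in range(max_retries):
        raw = call_llm(prompt)
        try:
            return json.loads(raw)           # well-formed action: accept it
        except json.JSONDecodeError:
            # error correction: show the model its malformed reply and retry
            prompt += f"\nYour last reply was not valid JSON:\n{raw}\nReply again in JSON only."
    return {"action": "noop"}                # fall back so the simulation keeps running

if __name__ == "__main__":
    agents = [{"name": f"agent_{i}"} for i in range(100)]   # scale this list up for larger runs
    for round_id in range(3):
        for profile in random.sample(agents, 10):
            action = agent_step(profile, observation=f"round {round_id}")
```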
Towards Coarse-to-Fine Evaluation of Inference Efficiency for Large Language Models
Yushuo Chen | Tianyi Tang | Erge Xiang | Linjiang Li | Xin Zhao | Jing Wang | Yunpeng Chai | Ji-Rong Wen
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
"In real world, large language models (LLMs) can serve as the assistant to help users accomplish their jobs, and also support the development of advanced applications. For the wide application ofLLMs, the inference efficiency is an essential concern, which has been widely studied in existing work, and numerous optimization algorithms and code libraries have been proposed to improve it.Nonetheless, users still find it challenging to compare the effectiveness of all the above method sand understand the underlying mechanisms. In this work, we propose a coarse-to-fine method that encompasses both experimental and analytical components. This method can be applied across various models and inference libraries. Specifically, we examine four usage scenarios within two practical applications. We further provide both theoretical and empirical fine-grained analyses of each module in the Transformer architecture. Our methods can be a general and invaluable method for researchers to evaluate various code libraries and improve inference strategies across different LLMs. We open-source the supporting dataset, code, and evaluation scripts at the link:https://github.com/RUCAIBox/Inference-Efficiency-Evaluation."
2024
LLMBox: A Comprehensive Library for Large Language Models
Tianyi Tang | Hu Yiwen | Bingqian Li | Wenyang Luo | ZiJing Qin | Haoxiang Sun | Jiapeng Wang | Shiyi Xu | Xiaoxue Cheng | Geyang Guo | Han Peng | Bowen Zheng | Yiru Tang | Yingqian Min | Yushuo Chen | Jie Chen | Ranchi Zhao | Luran Ding | Yuhao Wang | Zican Dong | Xia Chunxuan | Junyi Li | Kun Zhou | Xin Zhao | Ji-Rong Wen
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)
To facilitate research on large language models (LLMs), this paper presents a comprehensive and unified library, LLMBox, to ease the development, use, and evaluation of LLMs. The library features three main merits: (1) a unified data interface that supports the flexible implementation of various training strategies, (2) comprehensive evaluation that covers extensive tasks, datasets, and models, and (3) more practical considerations, especially regarding user-friendliness and efficiency. With our library, users can easily reproduce existing methods, train new models, and conduct comprehensive performance comparisons. To rigorously test LLMBox, we conduct extensive experiments across a diverse coverage of evaluation settings, and the experimental results demonstrate the effectiveness and efficiency of our library in supporting various implementations related to LLMs. The detailed introduction and usage guidance can be found at https://github.com/RUCAIBox/LLMBox.
2023
Learning to Imagine: Visually-Augmented Natural Language Generation
Tianyi Tang | Yushuo Chen | Yifan Du | Junyi Li | Wayne Xin Zhao | Ji-Rong Wen
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
People often imagine relevant scenes to aid in the writing process. In this work, we aim to utilize visual information for composition in the same manner as humans. We propose a method, LIVE, that makes pre-trained language models (PLMs) Learn to Imagine for Visually-augmented natural language gEneration. First, we imagine the scene based on the text: we use a diffusion model to synthesize high-quality images conditioned on the input texts. Second, we use CLIP to determine whether the text can evoke the imagination in a posterior way. Finally, our imagination is dynamic, and we conduct synthesis for each sentence rather than generate only one image for an entire paragraph. Technically, we propose a novel plug-and-play fusion layer to obtain visually-augmented representations for each text. Our vision-text fusion layer is compatible with Transformer-based architecture. We have conducted extensive experiments on four generation tasks using BART and T5, and the automatic results and human evaluation demonstrate the effectiveness of our proposed method. We will release the code, model, and data at the link: https://github.com/RUCAIBox/LIVE.
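The pipeline described above (imagine an image for each sentence with a diffusion model, then use CLIP to judge whether the sentence is visually evocative enough to fuse) can be sketched roughly as follows. This is not the released LIVE code; the Stable Diffusion and CLIP checkpoints and the score threshold are assumptions chosen only for illustration.

```python
# A minimal sketch of the "imagine, then check" steps; not the released LIVE code.
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Step 1: imagine -- synthesize one image per input sentence with a diffusion model.
sd = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to(device)
# Step 2: check -- use CLIP to score how well the text matches the synthesized image.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

sentences = ["A lighthouse stands on a rocky shore at dusk.",
             "Therefore, the committee postponed its decision."]

for sentence in sentences:
    image = sd(sentence, num_inference_steps=25).images[0]
    inputs = clip_processor(text=[sentence], images=image,
                            return_tensors="pt", padding=True).to(device)
    with torch.no_grad():
        score = clip(**inputs).logits_per_image.item()
    # Only fuse images for sentences that are visually evocative enough;
    # the threshold here is arbitrary, not the paper's.
    use_image = score > 20.0
    print(f"{sentence[:40]!r}... CLIP score={score:.1f}, fuse image={use_image}")
```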
Enhancing Scalability of Pre-trained Language Models via Efficient Parameter Sharing
Peiyu Liu | Ze-Feng Gao | Yushuo Chen | Xin Zhao | Ji-Rong Wen
Findings of the Association for Computational Linguistics: EMNLP 2023
In this paper, we propose a highly parameter-efficient approach to scaling pre-trained language models (PLMs) to a deeper model depth. Unlike prior work that shares all parameters or uses extra blocks, we design a more capable parameter-sharing architecture based on matrix product operator (MPO), an efficient tensor decomposition method to factorize the parameter matrix into a set of local tensors. Based on such a decomposition, we share the important local tensor across all layers for reducing the model size and meanwhile keep layer-specific tensors (also using Adapters) for enhancing the adaptation flexibility. To improve the model training, we further propose a stable initialization algorithm tailored for the MPO-based architecture. Extensive experiments have demonstrated the effectiveness of our proposed model in enhancing scalability and achieving higher performance (i.e., with fewer parameters than BERT-base, we successfully scale the model depth by a factor of 4x and even achieve 0.1 points higher than BERT-large for GLUE score). The code to reproduce the results of this paper can be found at https://github.com/RUCAIBox/MPOBERT-code.
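To make the matrix product operator (MPO) idea concrete, the following numpy sketch factorizes a weight matrix into a chain of small local tensors via successive truncated SVDs. The shapes and truncation rank are illustrative, not taken from the paper, and the shared-versus-layer-specific split is only indicated in a comment.

```python
# Illustrative MPO (tensor-train) factorization of a weight matrix; shapes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
in_dims, out_dims = (4, 8, 8), (4, 8, 8)   # a 256 x 256 weight, factored three ways
W = rng.standard_normal((np.prod(in_dims), np.prod(out_dims)))

# Pair up input/output indices: W[i1 i2 i3, j1 j2 j3] -> T[(i1 j1), (i2 j2), (i3 j3)]
T = W.reshape(*in_dims, *out_dims).transpose(0, 3, 1, 4, 2, 5).reshape(
    in_dims[0] * out_dims[0], in_dims[1] * out_dims[1], in_dims[2] * out_dims[2])

# Factorize into a chain of local tensors by successive truncated SVDs.
max_rank = 64
local_tensors, carry, left_rank = [], T.reshape(T.shape[0], -1), 1
for k in range(T.ndim - 1):
    U, S, Vt = np.linalg.svd(carry, full_matrices=False)
    rank = min(max_rank, len(S))
    local_tensors.append(U[:, :rank].reshape(left_rank, T.shape[k], rank))
    carry = (np.diag(S[:rank]) @ Vt[:rank]).reshape(rank * T.shape[k + 1], -1)
    left_rank = rank
local_tensors.append(carry.reshape(left_rank, T.shape[-1], 1))

# The large middle tensor would be shared across layers; the outer ones stay layer-specific.
print([t.shape for t in local_tensors])      # [(1, 16, 16), (16, 64, 64), (64, 64, 1)]

# Sanity check: contracting the chain reproduces the original tensor (no truncation here).
recon = np.einsum("aib,bjc,ckd->ijk", *local_tensors)
print(np.allclose(recon, T))                 # True
```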
Co-authors
- Ji-Rong Wen 5
- Tianyi Tang 3
- Junyi Li 2
- Wayne Xin Zhao 2
- Xin Zhao 2
- Yunpeng Chai 1
- Xu Chen 1
- Jie Chen 1
- Xiaoxue Cheng 1
- Xia Chunxuan 1
- Bolin Ding 1
- Luran Ding 1
- Zican Dong 1
- Yifan Du 1
- Heyang Gao 1
- Dawei Gao 1
- Ze-Feng Gao 1
- Geyang Guo 1
- Yaliang Li 1
- Linjiang Li 1
- Bingqian Li 1
- Yankai Lin (林衍凯) 1
- Peiyu Liu 1
- Wenyang Luo 1
- Yingqian Min 1
- Xuchen Pan 1
- Han Peng 1
- ZiJing Qin 1
- Haoxiang Sun 1
- Haoran Tan 1
- Jiakai Tang 1
- Yiru Tang 1
- Lei Wang 1
- Jun Wang 1
- Jing Wang 1
- Jiapeng Wang 1
- Yuhao Wang 1
- Erge Xiang 1
- Shiyi Xu 1
- Hu Yiwen 1
- Ranchi Zhao 1
- Bowen Zheng 1
- Jingren Zhou 1
- Kun Zhou 1