Yiting Ran


2025

BOOKWORLD: From Novels to Interactive Agent Societies for Story Creation
Yiting Ran | Xintao Wang | Tian Qiu | Jiaqing Liang | Yanghua Xiao | Deqing Yang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Recent advances in large language models (LLMs) have enabled social simulation through multi-agent systems. Prior efforts focus on agent societies created from scratch, assigning agents newly defined personas. However, simulating established fictional worlds and characters remains largely underexplored, despite its significant practical value. In this paper, we introduce BookWorld, a comprehensive system for constructing and simulating book-based multi-agent societies. BookWorld’s design covers real-world intricacies, including diverse and dynamic characters, fictional worldviews, and geographical constraints and changes. BookWorld enables diverse applications including story generation, interactive games, and social simulation, offering novel ways to extend and explore beloved fictional works. Through extensive experiments, we demonstrate that BookWorld generates creative, high-quality stories while maintaining fidelity to the source books, surpassing previous methods with a win rate of 75.36%. The code and demo of this paper can be found at the project page: https://bookworld2025.github.io/.

2024

Capturing Minds, Not Just Words: Enhancing Role-Playing Language Models with Personality-Indicative Data
Yiting Ran | Xintao Wang | Rui Xu | Xinfeng Yuan | Jiaqing Liang | Yanghua Xiao | Deqing Yang
Findings of the Association for Computational Linguistics: EMNLP 2024

Role-playing agents (RPAs) have been a popular application area for large language models (LLMs), attracting significant interest from both industry and academia. While existing RPAs portray characters’ knowledge and tones well, they face challenges in capturing their minds, especially for small role-playing language models (RPLMs). In this paper, we propose to enhance RPLMs via personality-indicative data. Specifically, we leverage questions from psychological scales and distill advanced RPAs to generate dialogues that grasp the minds of characters. Experimental results validate that RPLMs trained with our dataset exhibit advanced role-playing capabilities in both general and personality-related evaluations.