Can Language Models Capture Human Writing Preferences for Domain-Specific Text Summarization?
Jingbao Luo, Ming Liu, Ran Liu, Yongpan Sheng, Xin Hu, Gang Li, WupengNjust WupengNjust
Abstract
With the popularity of large language models and their high-quality text generation capabilities, researchers are using them as auxiliary tools for writing text summaries. Although summaries generated by these large language models are fluent and capture key information well, their quality depends on the prompt, and the generated text tends to be formulaic. We construct LecSumm to verify whether language models truly capture human writing preferences: we recruit 200 college students to write summaries for lecture notes on ten machine-learning topics and analyze the writing preferences in real-world human summaries along the dimensions of length, content depth, tone & style, and summary format. We define capturing human writing preferences by language models as fine-tuning pre-trained models on our data and designing prompts to optimize the output of large language models. Experiments in which the analyzed human writing preferences are translated into prompts show that both approaches still fail to capture human writing preferences effectively. Our LecSumm dataset poses new challenges to fine-tuned and prompt-based large language models on the task of human-centered text summarization.
- Anthology ID:
- 2025.findings-acl.315
- Volume:
- Findings of the Association for Computational Linguistics: ACL 2025
- Month:
- July
- Year:
- 2025
- Address:
- Vienna, Austria
- Editors:
- Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 6073–6091
- URL:
- https://preview.aclanthology.org/landing_page/2025.findings-acl.315/
- Cite (ACL):
- Jingbao Luo, Ming Liu, Ran Liu, Yongpan Sheng, Xin Hu, Gang Li, and WupengNjust WupengNjust. 2025. Can Language Models Capture Human Writing Preferences for Domain-Specific Text Summarization?. In Findings of the Association for Computational Linguistics: ACL 2025, pages 6073–6091, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal):
- Can Language Models Capture Human Writing Preferences for Domain-Specific Text Summarization? (Luo et al., Findings 2025)
- PDF:
- https://preview.aclanthology.org/landing_page/2025.findings-acl.315.pdf