Abstract
A quote tweet enables users to share others’ content while adding their own commentary. To enhance public engagement through quote tweets, we investigate the task of generating popular quote tweets: producing quote tweets that garner higher popularity, as indicated by more likes, replies, and retweets. Despite the impressive language generation capabilities of large language models (LLMs), there has been limited research on how LLMs can effectively learn the popularity of text to better engage the public. We therefore introduce a novel approach called Response-augmented Popularity-Aligned Language Model (RePALM), which aligns language generation with popularity by leveraging insights from augmented auto-responses provided by readers. We use the Proximal Policy Optimization framework with a dual-reward mechanism that jointly optimizes a quote tweet’s popularity and its consistency with the auto-responses. For our experiments, we collected two datasets: one of quote tweets containing external links and one of quote tweets referencing others’ tweets. Extensive results demonstrate the superiority of RePALM over advanced language models that do not incorporate response augmentation.
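The dual-reward mechanism can be pictured as a weighted combination of the two signals named in the abstract. Below is a minimal sketch of that idea, not the authors’ implementation: the scorer classes, their placeholder heuristics, and the weight `ALPHA` are all illustrative assumptions; in RePALM the resulting scalar would feed a PPO update.

```python
# Minimal sketch (assumptions labeled): combine a popularity reward and a
# consistency-with-auto-responses reward into one scalar for a PPO-style update.

ALPHA = 0.5  # assumed trade-off between the popularity and consistency rewards


class PopularityScorer:
    """Stub: in practice, a model predicting engagement (likes, replies, retweets)."""

    def score(self, quote_tweet: str) -> float:
        # Placeholder heuristic, not a real popularity model.
        return min(len(quote_tweet) / 280.0, 1.0)


class ConsistencyScorer:
    """Stub: in practice, a model scoring agreement with readers' auto-responses."""

    def score(self, quote_tweet: str, auto_responses: list[str]) -> float:
        # Placeholder: token overlap between the tweet and the pooled responses.
        tweet_tokens = set(quote_tweet.lower().split())
        resp_tokens = set(" ".join(auto_responses).lower().split())
        return len(tweet_tokens & resp_tokens) / max(len(tweet_tokens), 1)


def dual_reward(
    quote_tweet: str,
    auto_responses: list[str],
    popularity: PopularityScorer,
    consistency: ConsistencyScorer,
) -> float:
    """Weighted dual reward: the scalar a PPO trainer would maximize."""
    r_pop = popularity.score(quote_tweet)
    r_con = consistency.score(quote_tweet, auto_responses)
    return ALPHA * r_pop + (1.0 - ALPHA) * r_con


if __name__ == "__main__":
    reward = dual_reward(
        "This analysis nails why the policy matters.",
        ["Great point about the policy!", "The analysis is spot on."],
        PopularityScorer(),
        ConsistencyScorer(),
    )
    print(f"dual reward: {reward:.3f}")
```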
- Anthology ID:
- 2024.findings-acl.570
- Volume:
- Findings of the Association for Computational Linguistics ACL 2024
- Month:
- August
- Year:
- 2024
- Address:
- Bangkok, Thailand and virtual meeting
- Editors:
- Lun-Wei Ku, Andre Martins, Vivek Srikumar
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 9566–9579
- URL:
- https://aclanthology.org/2024.findings-acl.570
- Cite (ACL):
- Erxin Yu, Jing Li, and Chunpu Xu. 2024. RePALM: Popular Quote Tweet Generation via Auto-Response Augmentation. In Findings of the Association for Computational Linguistics ACL 2024, pages 9566–9579, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
- Cite (Informal):
- RePALM: Popular Quote Tweet Generation via Auto-Response Augmentation (Yu et al., Findings 2024)
- PDF:
- https://aclanthology.org/2024.findings-acl.570.pdf