Improving Neutral Point-of-View Generation with Data- and Parameter-Efficient RL
Jessica Hoffmann | Christiane Ahlheim | Zac Yu | Aria Walfrand | Jarvis Jin | Marie Tano | Ahmad Beirami | Erin MacMurray van Liemt | Nithum Thain | Hakim Sidahmed | Lucas Dixon
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
The paper shows that parameter-efficient reinforcement learning (PE-RL) is a highly effective training regime for improving large language models' (LLMs) ability to answer queries on sensitive topics with a Neutral Point of View (NPOV), i.e., to provide significantly more informative, diverse, and impartial answers. This is shown by evaluating PE-RL against multiple strong baselines, including LoRA finetuning (the strongest baseline), SFT, and RLHF. PE-RL not only improves overall NPOV quality compared to the strongest baseline (97.06% → 99.08%), but also scores much higher on features linguists identify as separating good answers from the best answers (60.25% → 85.21% for presence of supportive details, 68.74% → 91.43% for absence of oversimplification). A qualitative analysis corroborates this. Finally, our evaluation finds no statistical difference between results on topics that appear in the training dataset and results on held-out evaluation topics, which provides strong evidence that our approach to training PE-RL generalizes very effectively to unseen topics. To enable this study and further future studies, we also release the dataset, SHQ-NPOV, and provide a methodology for creating such datasets through iterative rounds of human peer-critique and annotator training.
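To make the idea of parameter-efficient RL concrete, below is a minimal, illustrative sketch: a small causal LM is wrapped with LoRA adapters so that only the adapter weights are trained, and a single REINFORCE-style policy-gradient step is driven by a scalar reward. This is not the paper's actual setup; the base model, the `npov_reward` function, and all hyperparameters are placeholder assumptions used only to show the mechanics.

```python
# Hedged sketch of parameter-efficient RL (PE-RL): LoRA adapters on a frozen
# base LM, updated with a REINFORCE-style step from a toy reward.
# `npov_reward` is a hypothetical stand-in, not the paper's reward signal.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("gpt2")
policy = AutoModelForCausalLM.from_pretrained("gpt2")

# Add low-rank adapters; only these parameters receive gradients.
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"],
                      task_type="CAUSAL_LM")
policy = get_peft_model(policy, lora_cfg)
optimizer = torch.optim.AdamW(
    [p for p in policy.parameters() if p.requires_grad], lr=1e-5)

def npov_reward(text: str) -> float:
    """Placeholder scalar reward for answer quality (illustrative only)."""
    return float("opinion" not in text.lower())

prompt = "Should voting be mandatory?"
inputs = tokenizer(prompt, return_tensors="pt")
prompt_len = inputs["input_ids"].shape[1]

# Sample a continuation from the current policy.
gen = policy.generate(**inputs, max_new_tokens=40, do_sample=True,
                      pad_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(gen[0, prompt_len:], skip_special_tokens=True)
reward = npov_reward(response)

# REINFORCE-style step: weight the log-likelihood of the sampled
# response tokens by the reward and take a gradient step on the adapters.
logits = policy(gen).logits[:, :-1, :]
logprobs = torch.log_softmax(logits, dim=-1)
token_logprobs = logprobs.gather(-1, gen[:, 1:].unsqueeze(-1)).squeeze(-1)
response_logprob = token_logprobs[:, prompt_len - 1:].sum()
loss = -reward * response_logprob
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In practice, a PE-RL setup of this kind would use a learned reward model, batched sampling, and a regularized RL objective rather than a single vanilla REINFORCE step; the sketch only illustrates that the RL update touches adapter parameters, not the full model.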