John J. Nay
Also published as: John J Nay
2025
Language Models can Subtly Deceive Without Lying: A Case Study on Strategic Phrasing in Legislation
Atharvan Dogra
|
Krishna Pillutla
|
Ameet Deshpande
|
Ananya B. Sai
|
John J Nay
|
Tanmay Rajpurohit
|
Ashwin Kalyan
|
Balaraman Ravindran
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
We explore the ability of large language models (LLMs) to engage in subtle deception through strategically phrasing and intentionally manipulating information. This harmful behavior can be hard to detect, unlike blatant lying or unintentional hallucination. We build a simple testbed mimicking a legislative environment where a corporate lobbyist module is proposing amendments to bills that benefit a specific company while evading identification of this benefactor. We use real-world legislative bills matched with potentially affected companies to ground these interactions. Our results show that LLM lobbyists can draft subtle phrasing to avoid such identification by strong LLM-based detectors. Further optimization of the phrasing using LLM-based re-planning and re-sampling increases deception rates by up to 40 percentage points.Our human evaluations to verify the quality of deceptive generations and their retention of self-serving intent show significant coherence with our automated metrics and also help in identifying certain strategies of deceptive phrasing.This study highlights the risk of LLMs’ capabilities for strategic phrasing through seemingly neutral language to attain self-serving goals. This calls for future research to uncover and protect against such subtle deception.
2016
Gov2Vec: Learning Distributed Representations of Institutions and Their Legal Text
John J. Nay
Proceedings of the First Workshop on NLP and Computational Social Science
Search
Fix author
Co-authors
- Ameet Deshpande 1
- Atharvan Dogra 1
- Ashwin Kalyan 1
- Krishna Pillutla 1
- Tanmay Rajpurohit 1
- show all...