Hefan Zhang
2025
Communication Makes Perfect: Persuasion Dataset Construction via Multi-LLM Communication
Weicheng Ma | Hefan Zhang | Ivory Yang | Shiyu Ji | Joice Chen | Farnoosh Hashemi | Shubham Mohole | Ethan Gearey | Michael Macy | Saeed Hassanpour | Soroush Vosoughi
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Large Language Models (LLMs) have shown proficiency in generating persuasive dialogue, yet concerns about the fluency and sophistication of their outputs persist. This paper presents a multi-LLM communication framework designed to enhance the generation of persuasive data automatically. This framework facilitates the efficient production of high-quality, diverse linguistic content with minimal human oversight. Through extensive evaluations, we demonstrate that the generated data excels in naturalness, linguistic diversity, and the strategic use of persuasion, even in complex scenarios involving social taboos. The framework also proves adept at generalizing across novel contexts. Our results highlight the framework’s potential to significantly advance research in both computational and social science domains concerning persuasive communication.
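The abstract describes a framework in which multiple LLMs converse to produce persuasive dialogue data. A minimal sketch of such a two-agent communication loop is below; the agent roles, prompts, and the stubbed `generate` method are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    role: str           # e.g. "persuader" or "persuadee"
    system_prompt: str  # role-conditioning instruction for the underlying LLM
    history: list = field(default_factory=list)

    def generate(self, last_utterance: str) -> str:
        # Placeholder for an actual LLM call conditioned on
        # self.system_prompt, self.history, and last_utterance.
        self.history.append(last_utterance)
        return f"[{self.role}] reply to: {last_utterance}"

def run_dialogue(persuader: Agent, persuadee: Agent, topic: str, turns: int = 3):
    """Alternate turns between the two agents and collect the transcript."""
    transcript = []
    utterance = topic
    for _ in range(turns):
        utterance = persuader.generate(utterance)
        transcript.append(("persuader", utterance))
        utterance = persuadee.generate(utterance)
        transcript.append(("persuadee", utterance))
    return transcript

transcript = run_dialogue(
    Agent("persuader", "Argue for donating to charity."),
    Agent("persuadee", "Respond skeptically but in good faith."),
    topic="Donating to charity",
)
```

In a real pipeline, each `generate` call would query a separate LLM, and the collected transcripts would form the persuasion dataset with minimal human oversight.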
2023
Improving Syntactic Probing Correctness and Robustness with Control Tasks
Weicheng Ma | Brian Wang | Hefan Zhang | Lili Wang | Rolando Coto-Solano | Saeed Hassanpour | Soroush Vosoughi
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Syntactic probing methods have been used to examine whether and how pre-trained language models (PLMs) encode syntactic features. However, the probing methods are usually biased by the PLMs’ memorization of common word co-occurrences, even if they do not form syntactic relations. This paper presents a random-word-substitution and random-label-matching control task to reduce these biases and improve the robustness of syntactic probing methods. Our control tasks are also shown to notably improve the consistency of probing results between different probing methods and make the methods more robust with respect to the text attributes of the probing instances. Our control tasks make syntactic probing methods better at reconstructing syntactic features and more generalizable to unseen text domains. Our experiments show that our proposed control tasks are effective on different PLMs, probing methods, and syntactic features.
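The abstract's random-word-substitution control task breaks memorized word co-occurrences while preserving the probed syntactic relation. A minimal sketch of generating one such control instance is below; the vocabulary, substitution rate, and `keep_indices` convention are illustrative assumptions, not the paper's exact procedure.

```python
import random

# Illustrative replacement vocabulary (an assumption for this sketch).
VOCAB = ["table", "river", "quickly", "blue", "sing", "idea", "walk", "stone"]

def substitute_words(tokens, keep_indices, rate=0.5, seed=0):
    """Randomly replace tokens, except those at `keep_indices`
    (e.g. the head and dependent of the probed syntactic relation),
    so that common co-occurrence cues are disrupted."""
    rng = random.Random(seed)
    out = []
    for i, tok in enumerate(tokens):
        if i not in keep_indices and rng.random() < rate:
            out.append(rng.choice(VOCAB))
        else:
            out.append(tok)
    return out

sent = ["The", "cat", "chased", "the", "mouse"]
control = substitute_words(sent, keep_indices={1, 2})  # keep subject and verb
```

A probe that still recovers the relation on such control instances is relying on positional or structural signal rather than memorized co-occurrences, which is the bias the control task is designed to expose.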