Bojun Jin
2025
A Multi-persona Framework for Argument Quality Assessment
Bojun Jin | Jianzhu Bao | Yufang Hou | Yang Sun | Yice Zhang | Huajie Wang | Bin Liang | Ruifeng Xu
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Argument quality assessment faces inherent challenges due to its subjective nature: different evaluators may assign varying quality scores to the same argument based on their personal perspectives. Although existing datasets collect opinions from multiple annotators to model this subjectivity, most computational methods fail to consider multi-perspective evaluation. To address this issue, we propose MPAQ, a multi-persona framework for argument quality assessment that simulates diverse evaluator perspectives through large language models. It first dynamically generates targeted personas tailored to an input argument, then simulates each persona's reasoning process to evaluate the argument's quality from multiple perspectives. To generate fine-grained quality scores effectively, we develop a coarse-to-fine scoring strategy that first produces a coarse-grained integer score and then refines it into a fine-grained decimal score. Experiments on the IBM-Rank-30k and IBM-ArgQ-5.3kArgs datasets demonstrate that MPAQ consistently outperforms strong baselines while providing comprehensive multi-perspective rationales.
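The abstract describes a two-stage, multi-persona pipeline; the sketch below shows one way it could look in code. The llm() stub, the prompt wording, the 1-5 scale, and the mean aggregation are all assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch of an MPAQ-style pipeline, following the abstract's
# description. The llm() stub, prompt wording, 1-5 scale, and mean
# aggregation are assumptions, not the authors' implementation.
from statistics import mean


def llm(prompt: str) -> str:
    """Placeholder for any chat-completion call; swap in a real client."""
    raise NotImplementedError("plug in an LLM client here")


def generate_personas(argument: str, k: int = 3) -> list[str]:
    # Step 1: dynamically generate k evaluator personas tailored to the argument.
    out = llm(f"List {k} distinct evaluator personas for this argument:\n{argument}")
    return [line.lstrip("- ").strip() for line in out.splitlines() if line.strip()][:k]


def coarse_to_fine_score(argument: str, persona: str) -> float:
    # Step 2a: coarse-grained integer score from this persona's perspective.
    coarse = int(llm(f"As {persona}, rate this argument 1-5 (integer only):\n{argument}"))
    # Step 2b: refine the integer into a fine-grained decimal score nearby.
    fine = float(llm(
        f"As {persona}, refine the score {coarse} to one decimal place, "
        f"staying within [{coarse - 0.5}, {coarse + 0.5}]:\n{argument}"
    ))
    return fine


def assess(argument: str) -> float:
    # Step 3: evaluate from every persona's perspective and aggregate.
    personas = generate_personas(argument)
    return mean(coarse_to_fine_score(argument, p) for p in personas)
```

Each persona's intermediate outputs double as the multi-perspective rationales the abstract mentions; averaging is only the simplest possible aggregation choice here.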
Exploring Quality and Diversity in Synthetic Data Generation for Argument Mining
Jianzhu Bao | Yuqi Huang | Yang Sun | Wenya Wang | Yice Zhang | Bojun Jin | Ruifeng Xu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
The advancement of Argument Mining (AM) is hindered by a critical bottleneck: the scarcity of structure-annotated datasets, which are expensive to create manually. Inspired by recent successes in synthetic data generation across various NLP tasks, this paper explores methodologies for LLMs to generate synthetic data for AM. We investigate two complementary synthesis perspectives: a quality-oriented synthesis approach, which employs structure-aware paraphrasing to preserve annotation quality, and a diversity-oriented synthesis approach, which generates novel argumentative texts with diverse topics and argument structures. Experiments on three datasets show that augmenting original training data with our synthetic data, particularly when combining both quality- and diversity-oriented instances, significantly enhances the performance of existing AM models in both full-data and low-resource settings. Moreover, the positive correlation between synthetic data volume and model performance highlights the scalability of our methods.
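The two synthesis perspectives lend themselves to a simple illustration. The sketch below shows hypothetical prompt wrappers for each; the llm() stub and all prompt wording are assumptions, not the paper's actual prompts or pipeline.

```python
# A minimal sketch of the two synthesis perspectives named in the abstract:
# quality-oriented (structure-aware paraphrasing) and diversity-oriented
# (novel text generation). The llm() stub and all prompt wording are
# hypothetical; the paper's actual prompts and pipelines will differ.

def llm(prompt: str) -> str:
    """Placeholder for any chat-completion call; swap in a real client."""
    raise NotImplementedError("plug in an LLM client here")


def quality_oriented(text: str, annotations: str) -> str:
    # Paraphrase while keeping annotated spans and relations intact, so the
    # original structure labels transfer directly to the synthetic instance.
    return llm(
        "Paraphrase the text without altering the span boundaries or "
        "argumentative roles listed in the annotations.\n"
        f"Text: {text}\nAnnotations: {annotations}"
    )


def diversity_oriented(topic: str, structure: str) -> str:
    # Generate a brand-new argumentative text for a fresh topic and a target
    # argument structure (e.g., one claim supported by two premises).
    return llm(
        f"Write an argumentative paragraph on '{topic}' realizing this "
        f"structure: {structure}. Mark each claim and premise explicitly."
    )
```

The quality-oriented path reuses existing gold annotations on rewritten text, while the diversity-oriented path trades guaranteed label fidelity for coverage of new topics and structures; the abstract reports that combining both works best.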
Co-authors
- Jianzhu Bao 2
- Yang Sun 2
- Ruifeng Xu (徐睿峰) 2
- Yice Zhang 2
- Yufang Hou 1