Performance of BERT on Persuasion for Good

Saumajit Saha, Kanika Kalra, Manasi Patwardhan, Shirish Karande

Abstract
We consider the task of automatically classifying the persuasion strategy employed by an utterance in a dialog. We base our work on the PERSUASION-FOR-GOOD dataset, which is composed of conversations between crowdworkers trying to convince each other to donate to a charity. The best previously reported performance on this dataset for classifying the persuader's strategy does not come from pretrained language models such as BERT, and we observe that straightforward fine-tuning of BERT does not provide a significant performance gain. Nevertheless, nonuniform sampling to account for the class imbalance, together with a cost function enforcing a hierarchical probabilistic structure on the classes, yields an absolute improvement of 10.79% F1 over the previously reported results. On the same dataset, we replicate the framework for classifying the persuadee's response.
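The abstract names two ingredients, nonuniform sampling for class imbalance and a hierarchy-aware cost function, without spelling out their formulation. Below is a minimal PyTorch/Transformers sketch of how such a setup can look, assuming a two-level label hierarchy. The class grouping in COARSE_OF_FINE, the model name bert-base-uncased, the sum-of-cross-entropies cost, and all hyperparameters are illustrative assumptions, not the paper's implementation.

# Hypothetical sketch: BERT fine-tuning with class-balanced sampling and a
# hierarchical loss. The label hierarchy and all settings are illustrative.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, WeightedRandomSampler
from transformers import BertModel

# Hypothetical hierarchy: 7 fine-grained strategies under 2 coarse groups.
COARSE_OF_FINE = torch.tensor([0, 0, 0, 1, 1, 1, 1])
NUM_FINE = 7
NUM_COARSE = 2

class HierarchicalBert(torch.nn.Module):
    """BERT encoder with one classification head per level of the hierarchy."""
    def __init__(self):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        self.coarse_head = torch.nn.Linear(hidden, NUM_COARSE)
        self.fine_head = torch.nn.Linear(hidden, NUM_FINE)

    def forward(self, input_ids, attention_mask):
        pooled = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).pooler_output
        return self.coarse_head(pooled), self.fine_head(pooled)

def hierarchical_loss(coarse_logits, fine_logits, fine_labels):
    # One plausible hierarchical cost: penalize the prediction at both levels,
    # so a confusion inside the correct coarse group costs less than a
    # confusion that also crosses coarse-group boundaries.
    coarse_labels = COARSE_OF_FINE.to(fine_labels.device)[fine_labels]
    return (F.cross_entropy(coarse_logits, coarse_labels)
            + F.cross_entropy(fine_logits, fine_labels))

def balanced_loader(dataset, fine_labels, batch_size=16):
    # Nonuniform sampling: each example is drawn with probability inversely
    # proportional to the frequency of its class, so rare strategies appear
    # about as often as frequent ones within an epoch.
    counts = torch.bincount(fine_labels, minlength=NUM_FINE).float()
    example_weights = (1.0 / counts)[fine_labels]
    sampler = WeightedRandomSampler(example_weights,
                                    num_samples=len(fine_labels),
                                    replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)

The paper may instead factor the class distribution directly as p(fine) = p(coarse) * p(fine | coarse); either way, the point the abstract makes is that the imbalance-aware sampler and the hierarchy-enforcing cost together account for the reported 10.79% absolute F1 gain.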
Anthology ID:
2021.icon-main.38
Volume:
Proceedings of the 18th International Conference on Natural Language Processing (ICON)
Month:
December
Year:
2021
Address:
National Institute of Technology Silchar, Silchar, India
Editors:
Sivaji Bandyopadhyay, Sobha Lalitha Devi, Pushpak Bhattacharyya
Venue:
ICON
Publisher:
NLP Association of India (NLPAI)
Pages:
313–323
URL:
https://aclanthology.org/2021.icon-main.38
Cite (ACL):
Saumajit Saha, Kanika Kalra, Manasi Patwardhan, and Shirish Karande. 2021. Performance of BERT on Persuasion for Good. In Proceedings of the 18th International Conference on Natural Language Processing (ICON), pages 313–323, National Institute of Technology Silchar, Silchar, India. NLP Association of India (NLPAI).
Cite (Informal):
Performance of BERT on Persuasion for Good (Saha et al., ICON 2021)
PDF:
https://aclanthology.org/2021.icon-main.38.pdf