Xinru Yan
2021
ResPer: Computationally Modelling Resisting Strategies in Persuasive Conversations
Ritam Dutt | Sayan Sinha | Rishabh Joshi | Surya Shekhar Chakraborty | Meredith Riggs | Xinru Yan | Haogang Bao | Carolyn Rose
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Modelling persuasion strategies as predictors of task outcome has several real-world applications and has received considerable attention from the computational linguistics community. However, previous research has failed to account for the resisting strategies employed by an individual to foil such persuasion attempts. Grounded in prior literature in cognitive and social psychology, we propose a generalised framework for identifying resisting strategies in persuasive conversations. We instantiate our framework on two distinct datasets comprising persuasion and negotiation conversations. We also leverage a hierarchical sequence-labelling neural architecture to infer the aforementioned resisting strategies automatically. Our experiments reveal the asymmetry of power roles in non-collaborative goal-directed conversations and the benefits of incorporating resisting strategies when predicting the final conversation outcome. We further investigate the effect of different resisting strategies on the conversation outcome and glean insights that corroborate past findings. The code and dataset of this work are publicly available at https://github.com/americast/resper.
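The abstract refers to a hierarchical sequence-labelling neural architecture for inferring resisting strategies from dialogue. The sketch below is only a rough illustration of that general kind of model, not the released ResPer code (see the repository above for the authors' implementation): a word-level encoder builds an utterance vector and a conversation-level encoder labels each utterance with a strategy. All module names, dimensions, and the number of strategy classes are illustrative assumptions.

```python
# Minimal sketch of a hierarchical utterance-level sequence labeller (illustrative only).
import torch
import torch.nn as nn

class HierarchicalLabeller(nn.Module):
    def __init__(self, vocab_size, num_strategies, emb_dim=100, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.word_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.conv_rnn = nn.GRU(2 * hid_dim, hid_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hid_dim, num_strategies)

    def forward(self, conversation):
        # conversation: (num_utterances, max_words) token ids for one dialogue
        _, h = self.word_rnn(self.embed(conversation))           # encode words within each utterance
        utt_vecs = torch.cat([h[0], h[1]], dim=-1).unsqueeze(0)  # (1, num_utt, 2*hid_dim)
        ctx, _ = self.conv_rnn(utt_vecs)                         # contextualise utterances across turns
        return self.classifier(ctx).squeeze(0)                   # per-utterance strategy logits

model = HierarchicalLabeller(vocab_size=5000, num_strategies=8)
fake_dialogue = torch.randint(1, 5000, (6, 20))  # 6 utterances, 20 tokens each
print(model(fake_dialogue).shape)                # torch.Size([6, 8])
```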
2019
Using Functional Schemas to Understand Social Media Narratives
Xinru Yan | Aakanksha Naik | Yohan Jo | Carolyn Rose
Proceedings of the Second Workshop on Storytelling
We propose a novel take on understanding narratives in social media, focusing on learning "functional story schemas", which consist of sets of stereotypical functional structures. We develop an unsupervised pipeline to extract schemas and apply our method to Reddit posts to detect schematic structures that are characteristic of different subreddits. We validate our schemas through human interpretation and evaluate their utility via a text classification task. Our experiments show that extracted schemas capture distinctive structural patterns in different subreddits, improving classification performance of several models by 2.4% on average. We also observe that these schemas serve as lenses that reveal community norms.
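For intuition only, one plausible shape for an unsupervised schema-extraction pipeline of the kind the abstract names (this is an assumption-laden sketch, not the paper's actual method) is to cluster sentences into functional states and represent each post as a sequence of state ids; frequent state sequences across a subreddit would then act as candidate functional schemas.

```python
# Illustrative sketch: sentences -> TF-IDF -> KMeans "functional states" -> per-post state sequences.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

posts = [
    ["I need advice.", "My landlord raised the rent.", "What should I do?"],
    ["Quick update.", "The rent issue got resolved.", "Thanks everyone."],
]

sentences = [s for post in posts for s in post]
vectors = TfidfVectorizer().fit_transform(sentences)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Re-assemble each post as a sequence of functional-state ids.
it = iter(labels)
state_sequences = [[int(next(it)) for _ in post] for post in posts]
print(state_sequences)
```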
2017
Duluth at SemEval-2017 Task 6: Language Models in Humor Detection
Xinru Yan | Ted Pedersen
Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)
This paper describes the Duluth system that participated in SemEval-2017 Task 6 #HashtagWars: Learning a Sense of Humor. The system participated in Subtasks A and B using N-gram language models, ranking highly in the task evaluation. This paper discusses the results of our system in the development and evaluation stages and from two post-evaluation runs.
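As a hedged illustration of the general technique named here (scoring candidate tweets with an N-gram language model), the sketch below trains a small add-one-smoothed bigram model and ranks candidates by length-normalised log probability. It is not the Duluth system itself; the toy training data and the ranking criterion are invented for the example.

```python
# Illustrative bigram language model for ranking tweets (not the Duluth SemEval system).
from collections import defaultdict
import math

class BigramLM:
    def __init__(self):
        self.bigram = defaultdict(int)
        self.unigram = defaultdict(int)
        self.vocab = set()

    def train(self, sentences):
        for tokens in sentences:
            padded = ["<s>"] + tokens + ["</s>"]
            for w1, w2 in zip(padded, padded[1:]):
                self.bigram[(w1, w2)] += 1
                self.unigram[w1] += 1
                self.vocab.update((w1, w2))

    def logprob(self, tokens):
        # Add-one smoothed log probability, normalised by length so short and
        # long tweets can be compared on the same scale.
        padded = ["<s>"] + tokens + ["</s>"]
        v = len(self.vocab)
        lp = 0.0
        for w1, w2 in zip(padded, padded[1:]):
            lp += math.log((self.bigram[(w1, w2)] + 1) / (self.unigram[w1] + v))
        return lp / (len(padded) - 1)

# Usage: rank candidate tweets by similarity to previously judged humorous tweets.
lm = BigramLM()
lm.train([["cats", "rule", "the", "internet"], ["my", "cat", "ignores", "me"]])
candidates = [["dogs", "rule", "the", "internet"], ["taxes", "are", "due"]]
print(sorted(candidates, key=lm.logprob, reverse=True))
```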
Co-authors
- Carolyn Rose 2
- Aakanksha Naik 1
- Yohan Jo 1
- Ted Pedersen 1
- Ritam Dutt 1