Deyu Li


Jointly Identifying Rhetoric and Implicit Emotions via Multi-Task Learning
Xin Chen | Zhen Hai | Deyu Li | Suge Wang | Dian Wang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Emotion Inference in Multi-Turn Conversations with Addressee-Aware Module and Ensemble Strategy
Dayu Li | Xiaodan Zhu | Yang Li | Suge Wang | Deyu Li | Jian Liao | Jianxing Zheng
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Emotion inference in multi-turn conversations aims to predict a participant’s emotion in the upcoming turn before the participant’s response is available, and is a necessary step for applications such as dialogue planning. However, perceiving and reasoning about participants’ future feelings is challenging because no utterance information from the future is available. Moreover, it is crucial for emotion inference to capture the characteristics of emotional propagation in conversations, such as persistence and contagiousness. In this study, we investigate emotion inference in multi-turn conversations by modeling the propagation of emotional states among participants in the conversation history, and propose an addressee-aware module that automatically learns whether a participant keeps their historical emotional state or is affected by others in the upcoming turn. In addition, we propose an ensemble strategy to further enhance model performance. Empirical studies on three benchmark conversation datasets demonstrate the effectiveness of the proposed model over several strong baselines.
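
A minimal sketch of the gating idea described in the abstract (hypothetical names and shapes; not the authors' actual architecture): a learned gate decides, per speaker, how much of the speaker's own historical emotional state to keep versus how much influence to take from the other participants.

import torch
import torch.nn as nn

class AddresseeAwareGate(nn.Module):
    # Hypothetical module: mixes a speaker's own historical emotional
    # state with the aggregated state of the other participants.
    def __init__(self, hidden_size: int):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, self_state: torch.Tensor, other_state: torch.Tensor) -> torch.Tensor:
        # g close to 1: the speaker keeps their historical emotional state;
        # g close to 0: the speaker is swayed by the other participants.
        g = torch.sigmoid(self.gate(torch.cat([self_state, other_state], dim=-1)))
        return g * self_state + (1.0 - g) * other_state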


Public Sentiment Drift Analysis Based on Hierarchical Variational Auto-encoder
Wenyue Zhang | Xiaoli Li | Yang Li | Suge Wang | Deyu Li | Jian Liao | Jianxing Zheng
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Detecting public sentiment drift is a challenging task because public sentiment changes over time. Existing methods first build a classification model using historical data and subsequently detect drift if the model performs much worse on new data. In this paper, we focus on distribution learning by proposing a novel Hierarchical Variational Auto-Encoder (HVAE) model to learn better distribution representations, and design a new drift measure to directly evaluate distribution changes between historical data and new data. Our experimental results demonstrate that the proposed model achieves better results than three existing state-of-the-art methods.
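
As a loose illustration of measuring drift directly between learned distributions (an assumption for illustration, not the paper's exact measure), one can compare the Gaussian posteriors a VAE encoder produces on historical versus new data, for example with a symmetric KL divergence:

import torch
from torch.distributions import Normal, kl_divergence

def drift_score(mu_hist, sigma_hist, mu_new, sigma_new):
    # Symmetric KL between the two encoder posteriors; larger values
    # indicate a bigger distribution shift between historical and new data.
    p = Normal(mu_hist, sigma_hist)
    q = Normal(mu_new, sigma_new)
    return 0.5 * (kl_divergence(p, q) + kl_divergence(q, p)).sum(-1).mean()

# Example usage with posterior parameters from a (hypothetical) encoder:
# score = drift_score(mu_hist, sigma_hist, mu_new, sigma_new)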