Hamed Khanpour


2018

Fine-Grained Emotion Detection in Health-Related Online Posts
Hamed Khanpour | Cornelia Caragea
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Detecting fine-grained emotions in online health communities provides insightful information about patients’ emotional states. However, current computational approaches to emotion detection in health-related posts rely on handcrafted features and focus only on identifying messages that contain emotions, without distinguishing the emotion type. In this paper, we take a step further and propose to detect fine-grained emotion types in health-related posts, showing how high-level, abstract features derived from deep neural networks can be combined with lexicon-based features to detect emotions.
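
A minimal sketch of the feature-fusion idea described in the abstract: abstract features from a neural text encoder are concatenated with a lexicon-based feature vector before classification. The LSTM encoder, all dimensions, the six-way emotion label set, and the variable names are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class EmotionClassifier(nn.Module):
    """Fuses abstract encoder features with lexicon-based features for emotion typing."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128,
                 lexicon_dim=10, num_emotions=6):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Deep features and lexicon features are concatenated before the output layer.
        self.classifier = nn.Linear(hidden_dim + lexicon_dim, num_emotions)

    def forward(self, token_ids, lexicon_feats):
        embedded = self.embedding(token_ids)      # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)      # hidden: (1, batch, hidden_dim)
        deep_feats = hidden.squeeze(0)            # (batch, hidden_dim)
        fused = torch.cat([deep_feats, lexicon_feats], dim=1)
        return self.classifier(fused)             # logits over emotion types

# Toy usage: 2 posts of 12 tokens each, with 10 lexicon features per post.
model = EmotionClassifier(vocab_size=5000)
tokens = torch.randint(1, 5000, (2, 12))
lexicon = torch.rand(2, 10)
print(model(tokens, lexicon).shape)               # torch.Size([2, 6])
```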

2017

Identifying Empathetic Messages in Online Health Communities
Hamed Khanpour | Cornelia Caragea | Prakhar Biyani
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Empathy captures one’s ability to relate to and understand others’ emotional states and experiences. Messages with empathetic content are considered one of the main benefits of joining online health communities, owing to their potential to improve people’s moods. Unfortunately, to date, no computational studies exist that automatically identify empathetic messages in online health communities. We propose a combination of Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks, and show that the proposed model outperforms each individual model (CNN and LSTM) as well as several baselines.
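
A minimal sketch of one way to combine a CNN and an LSTM for message-level classification: a 1-D convolution extracts local n-gram features, an LSTM aggregates them into a message representation, and a linear layer decides empathetic vs. non-empathetic. The CNN-before-LSTM ordering, filter sizes, and hidden dimensions are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

class CNNLSTMEmpathy(nn.Module):
    """1-D convolution over word embeddings, followed by an LSTM and a binary output."""
    def __init__(self, vocab_size, embed_dim=100, num_filters=64,
                 kernel_size=3, hidden_dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size, padding=1)
        self.lstm = nn.LSTM(num_filters, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 2)   # empathetic vs. non-empathetic

    def forward(self, token_ids):
        x = self.embedding(token_ids)         # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                 # Conv1d expects (batch, channels, seq_len)
        x = torch.relu(self.conv(x))          # local n-gram features
        x = x.transpose(1, 2)                 # back to (batch, seq_len, num_filters)
        _, (hidden, _) = self.lstm(x)         # message-level representation
        return self.out(hidden.squeeze(0))    # class logits

model = CNNLSTMEmpathy(vocab_size=5000)
logits = model(torch.randint(1, 5000, (4, 20)))   # 4 messages, 20 tokens each
print(logits.shape)                                # torch.Size([4, 2])
```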

2016

Dialogue Act Classification in Domain-Independent Conversations Using a Deep Recurrent Neural Network
Hamed Khanpour | Nishitha Guntakandla | Rodney Nielsen
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

In this study, we applied a deep LSTM structure to classify dialogue acts (DAs) in open-domain conversations. We found that the word embedding parameters, dropout regularization, decay rate, and number of layers have the largest effect on the final system accuracy. Using the findings of these experiments, we trained a deep LSTM network that outperforms the state of the art on the Switchboard corpus by 3.11% and on MRDA by 2.2%.
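
A minimal sketch of a stacked (deep) LSTM utterance classifier touching the hyperparameters highlighted in the abstract: word embeddings, dropout, number of layers, and a decay rate (interpreted here as a learning-rate decay schedule). The specific layer counts, dimensions, optimizer, and the 42-label tag set are assumptions for illustration, not the reported configuration.

```python
import torch
import torch.nn as nn

class DeepLSTMDATagger(nn.Module):
    """Stacked LSTM over word embeddings; the final hidden state labels the utterance."""
    def __init__(self, vocab_size, num_acts, embed_dim=300,
                 hidden_dim=128, num_layers=2, dropout=0.5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers,
                            dropout=dropout, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_acts)

    def forward(self, token_ids):
        x = self.embedding(token_ids)          # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(x)          # hidden: (num_layers, batch, hidden_dim)
        return self.out(hidden[-1])            # logits over dialogue-act labels

model = DeepLSTMDATagger(vocab_size=20000, num_acts=42)  # assumed 42-tag DA label set
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)
# The "decay rate" would map to a learning-rate schedule, e.g. exponential decay:
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)
```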