@inproceedings{choi-etal-2018-convolutional,
    title = "Convolutional Attention Networks for Multimodal Emotion Recognition from Speech and Text Data",
    author = "Choi, Woo Yong  and
      Song, Kyu Ye  and
      Lee, Chan Woo",
    editor = "Zadeh, Amir  and
      Liang, Paul Pu  and
      Morency, Louis-Philippe  and
      Poria, Soujanya  and
      Cambria, Erik  and
      Scherer, Stefan",
    booktitle = "Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-{HML})",
    month = jul,
    year = "2018",
    address = "Melbourne, Australia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/W18-3304/",
    doi = "10.18653/v1/W18-3304",
    pages = "28--34",
    abstract = "Emotion recognition has become a popular topic of interest, especially in the field of human computer interaction. Previous works involve unimodal analysis of emotion, while recent efforts focus on multimodal emotion recognition from vision and speech. In this paper, we propose a new method of learning about the hidden representations between just speech and text data using convolutional attention networks. Compared to the shallow model which employs simple concatenation of feature vectors, the proposed attention model performs much better in classifying emotion from speech and text data contained in the CMU-MOSEI dataset."
}