Comparison of Gender- and Speaker-adaptive Emotion Recognition

Maxim Sidorov, Stefan Ultes, Alexander Schmitt


Abstract
Deriving the emotion of a human speaker is a hard task, especially if only the audio stream is taken into account. While state-of-the-art approaches already provide good results, adaptive methods have been proposed to further improve recognition accuracy. A recent approach is to add characteristics of the speaker, e.g., the speaker's gender. In this contribution, we argue that adding information unique to each speaker, i.e., obtained via speaker identification techniques, improves emotion recognition simply by appending this information to the feature vector of the statistical classification algorithm. Moreover, we compare this approach to emotion recognition that adds only the speaker's gender, a non-unique speaker attribute. We support this claim by performing adaptive emotion recognition with both gender and speaker information on four corpora of different languages containing both acted and non-acted speech. The results show that adding speaker information significantly outperforms both adding gender information and a purely generic speaker-independent approach.
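
To make the feature-augmentation idea concrete, below is a minimal sketch of feature-level gender and speaker adaptation, assuming scikit-learn and synthetic data. The paper does not specify a particular classifier or acoustic feature set, so the RandomForestClassifier, the array shapes, and all variable names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: augment acoustic feature vectors with gender or speaker
# information before classification. Synthetic data stands in for real
# corpora; all dimensions and the classifier choice are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

n_utterances, n_acoustic = 200, 40                    # hypothetical acoustic features
X_acoustic = rng.normal(size=(n_utterances, n_acoustic))
speaker_id = rng.integers(0, 10, size=n_utterances)   # 10 hypothetical speakers
gender = rng.integers(0, 2, size=n_utterances)        # 0 = female, 1 = male
y_emotion = rng.integers(0, 4, size=n_utterances)     # 4 emotion classes

# Gender-adaptive variant: append the (non-unique) gender attribute.
X_gender = np.hstack([X_acoustic, gender[:, None]])

# Speaker-adaptive variant: append a one-hot encoding of the speaker
# identity, i.e., information unique to each speaker.
speaker_onehot = np.eye(10)[speaker_id]
X_speaker = np.hstack([X_acoustic, speaker_onehot])

for name, X in [("speaker-independent", X_acoustic),
                ("gender-adaptive", X_gender),
                ("speaker-adaptive", X_speaker)]:
    clf = RandomForestClassifier(random_state=0).fit(X, y_emotion)
    print(name, "training accuracy:", clf.score(X, y_emotion))
```

On real data one would evaluate with a held-out split rather than training accuracy; the point of the sketch is only that adaptation here amounts to concatenating extra columns onto the feature matrix.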
Anthology ID:
L14-1286
Volume:
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
Month:
May
Year:
2014
Address:
Reykjavik, Iceland
Venue:
LREC
Publisher:
European Language Resources Association (ELRA)
Pages:
3476–3480
URL:
http://www.lrec-conf.org/proceedings/lrec2014/pdf/322_Paper.pdf
Cite (ACL):
Maxim Sidorov, Stefan Ultes, and Alexander Schmitt. 2014. Comparison of Gender- and Speaker-adaptive Emotion Recognition. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3476–3480, Reykjavik, Iceland. European Language Resources Association (ELRA).
Cite (Informal):
Comparison of Gender- and Speaker-adaptive Emotion Recognition (Sidorov et al., LREC 2014)