Emotion Recognition in Multi-Speaker Conversations through Speaker Identification, Knowledge Distillation, and Hierarchical Fusion

Li Xiao, Kotaro Funakoshi, Manabu Okumura


Abstract
Emotion recognition in multi-speaker conversations faces significant challenges due to speaker ambiguity and severe class imbalance. We propose a novel framework that addresses these issues through three key innovations: (1) a speaker identification module that leverages audio-visual synchronization to accurately identify the active speaker, (2) a knowledge distillation strategy that transfers superior textual emotion understanding to audio and visual modalities, and (3) hierarchical attention fusion with composite loss functions to handle class imbalance. Comprehensive evaluations on MELD and IEMOCAP datasets demonstrate superior performance, achieving 67.75% and 72.44% weighted F1 scores respectively, with particularly notable improvements on minority emotion classes.
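The distillation in (2) and the imbalance handling in (3) follow well-known loss formulations. A minimal sketch of the generic versions, assuming a Hinton-style temperature-softened KL objective for distillation and a focal-style term for class imbalance (the paper's exact composite loss may differ):

```python
import math

def _softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 -- the standard knowledge-distillation objective.
    Here the teacher is the text modality, the student audio/visual."""
    p = _softmax(teacher_logits, temperature)
    q = _softmax(student_logits, temperature)
    return (temperature ** 2) * sum(
        pi * math.log(pi / qi) for pi, qi in zip(p, q))

def focal_loss(logits, target, gamma=2.0, class_weights=None):
    """Focal loss: down-weights easy examples so minority emotion
    classes contribute more to the gradient. `class_weights` is an
    optional per-class weight list (illustrative, not from the paper)."""
    probs = _softmax(logits)
    pt = probs[target]
    w = class_weights[target] if class_weights else 1.0
    return -w * (1.0 - pt) ** gamma * math.log(pt)
```

A composite objective of this kind is typically a weighted sum, e.g. `focal_loss(...) + alpha * distillation_loss(...)`, with `alpha` tuned on a development set.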
Anthology ID:
2026.findings-eacl.212
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4091–4106
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.212/
Cite (ACL):
Li Xiao, Kotaro Funakoshi, and Manabu Okumura. 2026. Emotion Recognition in Multi-Speaker Conversations through Speaker Identification, Knowledge Distillation, and Hierarchical Fusion. In Findings of the Association for Computational Linguistics: EACL 2026, pages 4091–4106, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Emotion Recognition in Multi-Speaker Conversations through Speaker Identification, Knowledge Distillation, and Hierarchical Fusion (Xiao et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.212.pdf
Checklist:
2026.findings-eacl.212.checklist.pdf