Keep Meeting Summaries on Topic: Abstractive Multi-Modal Meeting Summarization

Manling Li, Lingyu Zhang, Heng Ji, Richard J. Radke


Abstract
Transcripts of natural, multi-person meetings differ significantly from documents such as news articles, which can leave natural language generation models for summarization unfocused. We develop an abstractive meeting summarizer that draws on both the video and audio of meeting recordings. Specifically, we propose multi-modal hierarchical attention across three levels: segment, utterance, and word. To narrow the focus to topically relevant segments, we jointly model topic segmentation and summarization. In addition to traditional text features, we introduce new multi-modal features derived from the visual focus of attention, based on the assumption that an utterance is more important if its speaker receives more attention. Experiments show that our model significantly outperforms the state of the art on both BLEU and ROUGE measures.
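The three-level attention described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; all shapes and scores below are hypothetical stand-ins for learned alignment scores, and the only point shown is how segment-, utterance-, and word-level attention weights compose into a single distribution over words.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical sizes: 2 topic segments, 3 utterances each, 4 words each.
rng = np.random.default_rng(0)
seg_scores = rng.normal(size=2)            # one score per segment
utt_scores = rng.normal(size=(2, 3))       # per-utterance scores within segments
word_scores = rng.normal(size=(2, 3, 4))   # per-word scores within utterances

# Normalize at each level of the hierarchy.
seg_attn = softmax(seg_scores)     # sums to 1 over segments
utt_attn = softmax(utt_scores)     # each row sums to 1 over utterances
word_attn = softmax(word_scores)   # each row sums to 1 over words

# The effective weight of a word is the product of the three levels,
# so the weights over all words in the meeting again sum to 1.
final = seg_attn[:, None, None] * utt_attn[:, :, None] * word_attn
print(final.shape, final.sum())
```

In the paper's setting, the utterance-level scores would additionally incorporate the multi-modal visual-focus-of-attention features, boosting utterances whose speakers draw more gaze from other participants.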
Anthology ID:
P19-1210
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2190–2196
URL:
https://aclanthology.org/P19-1210
DOI:
10.18653/v1/P19-1210
Cite (ACL):
Manling Li, Lingyu Zhang, Heng Ji, and Richard J. Radke. 2019. Keep Meeting Summaries on Topic: Abstractive Multi-Modal Meeting Summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2190–2196, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Keep Meeting Summaries on Topic: Abstractive Multi-Modal Meeting Summarization (Li et al., ACL 2019)
PDF:
https://preview.aclanthology.org/remove-xml-comments/P19-1210.pdf