AMRTVSumm: AMR-augmented Hierarchical Network for TV Transcript Summarization

Yilun Hua, Zhaoyuan Deng, Zhijie Xu


Abstract
This paper describes our AMRTVSumm system for the SummScreen datasets in the Automatic Summarization for Creative Writing shared task (Creative-Summ 2022). To capture the complicated entity interactions and dialogue structures in transcripts of TV series, we introduce a new Abstract Meaning Representation (AMR) (Banarescu et al., 2013), specifically designed to represent individual scenes in an episode. We also propose a new cross-level cross-attention mechanism to incorporate these scene AMRs into a hierarchical encoder-decoder baseline. On both the ForeverDreaming and TVMegaSite datasets of SummScreen, our system consistently outperforms the hierarchical transformer baseline. It still trails the state-of-the-art DialogLM (Zhong et al., 2021), primarily because our system is pretrained only on out-of-domain news data, whereas DialogLM benefits from extensive in-domain pretraining on dialogue and TV show data. Overall, our work suggests that graph representations are a promising way to capture complicated long-dialogue structures, and that such representations should be combined with powerful pretrained language models.
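The paper itself specifies the cross-level cross-attention mechanism; as a rough, hypothetical illustration only, the PyTorch sketch below shows one plausible form such a layer could take, with lower-level transcript states attending over scene-AMR node embeddings. All names and shapes here (CrossLevelAttention, d_model, n_heads, the toy tensors) are assumptions for illustration, not taken from the paper.

```python
# Hypothetical sketch (not the authors' code): a cross-level cross-attention
# layer in which token/utterance-level hidden states (queries) attend over
# scene-level AMR node embeddings (keys/values), one plausible way to inject
# scene-graph information into a hierarchical encoder.
import torch
import torch.nn as nn

class CrossLevelAttention(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, utt_states, amr_nodes, amr_mask=None):
        # utt_states: (batch, n_tokens, d_model)  lower-level encoder output
        # amr_nodes:  (batch, n_nodes, d_model)   scene-AMR node embeddings
        # amr_mask:   (batch, n_nodes) bool       True where a node is padding
        fused, _ = self.attn(query=utt_states, key=amr_nodes, value=amr_nodes,
                             key_padding_mask=amr_mask)
        # Residual connection preserves the original transcript representation.
        return self.norm(utt_states + fused)

# Toy usage with random tensors.
layer = CrossLevelAttention()
utt = torch.randn(2, 10, 512)   # two scenes, ten tokens each
nodes = torch.randn(2, 6, 512)  # six AMR nodes per scene
out = layer(utt, nodes)
print(out.shape)                # torch.Size([2, 10, 512])
```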
Anthology ID: 2022.creativesumm-1.6
Volume: Proceedings of The Workshop on Automatic Summarization for Creative Writing
Month: October
Year: 2022
Address: Gyeongju, Republic of Korea
Editor: Kathleen McKeown
Venue: CreativeSumm
Publisher: Association for Computational Linguistics
Pages: 36–43
URL: https://aclanthology.org/2022.creativesumm-1.6
Cite (ACL): Yilun Hua, Zhaoyuan Deng, and Zhijie Xu. 2022. AMRTVSumm: AMR-augmented Hierarchical Network for TV Transcript Summarization. In Proceedings of The Workshop on Automatic Summarization for Creative Writing, pages 36–43, Gyeongju, Republic of Korea. Association for Computational Linguistics.
Cite (Informal): AMRTVSumm: AMR-augmented Hierarchical Network for TV Transcript Summarization (Hua et al., CreativeSumm 2022)
PDF: https://preview.aclanthology.org/ingest-2024-clasp/2022.creativesumm-1.6.pdf