Cross-Modal Commentator: Automatic Machine Commenting Based on Cross-Modal Information

Pengcheng Yang, Zhihan Zhang, Fuli Luo, Lei Li, Chengyang Huang, Xu Sun



Abstract
Automatic commenting on online articles can provide additional opinions and facts to the reader, improving user experience and engagement on social media platforms. Previous work focuses on automatic commenting based solely on textual content. However, in real scenarios, online articles usually contain content in multiple modalities; for instance, illustrated news articles contain many images in addition to text. Non-textual content is also vital: it is not only more attractive to the reader but may also provide critical information. To remedy this, we propose a new task, cross-modal automatic commenting (CMAC), which aims to generate comments by integrating content from multiple modalities. We construct a large-scale dataset for this task and explore several representative methods. Going a step further, we present an effective co-attention model to capture the dependencies between textual and visual information. Evaluation results show that our proposed model achieves better performance than competitive baselines.
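
The co-attention idea mentioned in the abstract can be made concrete with a short sketch. Below is a minimal, generic text-image co-attention layer in PyTorch; it illustrates the general technique only, not the paper's exact architecture, and all names and dimensions are hypothetical (the authors' actual code is linked under Code as lancopku/CMAC).

# Minimal sketch of a generic text-image co-attention layer (PyTorch).
# This illustrates the general co-attention technique, not the paper's
# exact model; names and dimensions are hypothetical assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttention(nn.Module):
    def __init__(self, text_dim: int, img_dim: int, hidden_dim: int):
        super().__init__()
        # Project both modalities into a shared space before comparing them.
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.img_proj = nn.Linear(img_dim, hidden_dim)

    def forward(self, text_feats, img_feats):
        # text_feats: (batch, n_words, text_dim), e.g. encoder hidden states
        # img_feats:  (batch, n_regions, img_dim), e.g. CNN region features
        t = self.text_proj(text_feats)               # (batch, n_words, hidden)
        v = self.img_proj(img_feats)                 # (batch, n_regions, hidden)
        # Affinity between every word and every image region.
        affinity = torch.bmm(t, v.transpose(1, 2))   # (batch, n_words, n_regions)
        # Attend over regions for each word, and over words for each region.
        attn_t2v = F.softmax(affinity, dim=2)        # word -> regions
        attn_v2t = F.softmax(affinity, dim=1)        # region -> words
        text_ctx = torch.bmm(attn_t2v, v)            # image-aware text features
        img_ctx = torch.bmm(attn_v2t.transpose(1, 2), t)  # text-aware image features
        return text_ctx, img_ctx

# Usage: the fused, modality-aware features would feed a comment decoder.
# layer = CoAttention(text_dim=512, img_dim=2048, hidden_dim=256)
# text_ctx, img_ctx = layer(torch.randn(2, 20, 512), torch.randn(2, 36, 2048))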
Anthology ID:
P19-1257
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2680–2686
URL:
https://aclanthology.org/P19-1257
DOI:
10.18653/v1/P19-1257
Cite (ACL):
Pengcheng Yang, Zhihan Zhang, Fuli Luo, Lei Li, Chengyang Huang, and Xu Sun. 2019. Cross-Modal Commentator: Automatic Machine Commenting Based on Cross-Modal Information. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2680–2686, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Cross-Modal Commentator: Automatic Machine Commenting Based on Cross-Modal Information (Yang et al., ACL 2019)
PDF:
https://preview.aclanthology.org/teach-a-man-to-fish/P19-1257.pdf
Video:
https://preview.aclanthology.org/teach-a-man-to-fish/P19-1257.mp4
Code:
lancopku/CMAC
Data:
Cross-Modal Comments Dataset