Summarizing Lengthy Questions

Tatsuya Ishigaki, Hiroya Takamura, Manabu Okumura


Abstract
In this research, we propose the task of question summarization. We first analyzed question-summary pairs extracted from a Community Question Answering (CQA) site, and found that a proportion of questions cannot be summarized by extractive approaches but instead require abstractive approaches. We created a dataset by regarding the question-title pairs posted on the CQA site as question-summary pairs. Using this data, we trained extractive and abstractive summarization models and compared them based on ROUGE scores and manual evaluations. Our experimental results show that an abstractive method using an encoder-decoder model with a copying mechanism achieves better scores both for ROUGE-2 F-measure and in evaluations by human judges.
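The models above are compared on ROUGE-2 F-measure, which scores a generated summary by its bigram overlap with the reference (here, the question title). As background, here is a minimal Python sketch of that metric; it is a simplification of the official ROUGE toolkit, which additionally handles stemming, stopword options, and multiple references:

```python
from collections import Counter

def rouge2_f(candidate: str, reference: str) -> float:
    """ROUGE-2 F-measure: bigram-overlap F1 between a candidate
    summary and a single reference, both whitespace-tokenized.
    A simplified sketch, not the official ROUGE implementation."""
    def bigrams(text: str) -> Counter:
        toks = text.split()
        return Counter(zip(toks, toks[1:]))

    cand, ref = bigrams(candidate), bigrams(reference)
    # Clipped overlap: each bigram counts at most as often as it
    # appears in the reference.
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, an identical candidate and reference score 1.0, while summaries sharing no bigram score 0.0.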
Anthology ID:
I17-1080
Volume:
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Month:
November
Year:
2017
Address:
Taipei, Taiwan
Editors:
Greg Kondrak, Taro Watanabe
Venue:
IJCNLP
Publisher:
Asian Federation of Natural Language Processing
Pages:
792–800
URL:
https://aclanthology.org/I17-1080
Cite (ACL):
Tatsuya Ishigaki, Hiroya Takamura, and Manabu Okumura. 2017. Summarizing Lengthy Questions. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 792–800, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Cite (Informal):
Summarizing Lengthy Questions (Ishigaki et al., IJCNLP 2017)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/I17-1080.pdf