A Proposal: Interactively Learning to Summarise Timelines by Reinforcement Learning

Yuxuan Ye, Edwin Simpson


Abstract
Timeline Summarisation (TLS) aims to generate a concise, time-ordered list of the events described in sources such as news articles. However, current systems provide no adequate way to adapt to new domains or to focus on the aspects of interest to a particular user. We therefore propose a method for interactively learning abstractive TLS using Reinforcement Learning (RL). We define a compound reward function and use RL to fine-tune an abstractive Multi-document Summarisation (MDS) model, which avoids the need for reference summaries during training. One of the sub-reward functions is learned interactively from user feedback, to keep the generated timeline consistent with the user's demands; the other sub-reward functions promote topical coherence and linguistic fluency. We plan experiments to evaluate whether our approach can generate accurate and precise timelines tailored to each user.
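The compound reward described above could be sketched as a weighted combination of sub-rewards: one learned from user feedback, plus coherence and fluency terms. The sketch below is purely illustrative; the sub-reward functions, their signatures, and the weights are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch of a compound reward for RL fine-tuning of an
# abstractive summariser. All sub-rewards here are stand-in placeholders.

def user_preference_reward(summary: str) -> float:
    # Placeholder: in the proposal, this would be a reward model trained
    # interactively from user feedback on candidate timeline entries.
    return 1.0 if "election" in summary.lower() else 0.0

def coherence_reward(summary: str, date: str) -> float:
    # Placeholder: topical coherence between the summary and the
    # source articles published around the given date.
    return 0.5

def fluency_reward(summary: str) -> float:
    # Placeholder: e.g. a language-model score normalised to [0, 1].
    return 0.8

def compound_reward(summary: str, date: str,
                    weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted sum of sub-rewards, used as the RL training signal."""
    w_user, w_coh, w_flu = weights
    return (w_user * user_preference_reward(summary)
            + w_coh * coherence_reward(summary, date)
            + w_flu * fluency_reward(summary))

print(compound_reward("Election results announced.", "2021-08-01"))
```

Training would then maximise the expected compound reward over generated timeline entries, so that user preferences shape the summariser without reference timelines.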
Anthology ID:
2021.internlp-1.4
Volume:
Proceedings of the First Workshop on Interactive Learning for Natural Language Processing
Month:
August
Year:
2021
Address:
Online
Editors:
Kianté Brantley, Soham Dan, Iryna Gurevych, Ji-Ung Lee, Filip Radlinski, Hinrich Schütze, Edwin Simpson, Lili Yu
Venue:
InterNLP
Publisher:
Association for Computational Linguistics
Pages:
25–31
URL:
https://aclanthology.org/2021.internlp-1.4
DOI:
10.18653/v1/2021.internlp-1.4
Cite (ACL):
Yuxuan Ye and Edwin Simpson. 2021. A Proposal: Interactively Learning to Summarise Timelines by Reinforcement Learning. In Proceedings of the First Workshop on Interactive Learning for Natural Language Processing, pages 25–31, Online. Association for Computational Linguistics.
Cite (Informal):
A Proposal: Interactively Learning to Summarise Timelines by Reinforcement Learning (Ye & Simpson, InterNLP 2021)
PDF:
https://preview.aclanthology.org/nschneid-patch-3/2021.internlp-1.4.pdf