Task-Oriented Query Reformulation with Reinforcement Learning

Rodrigo Nogueira, Kyunghyun Cho


Abstract
Search engines play an important role in our everyday lives by assisting us in finding the information we need. When we input a complex query, however, results are often far from satisfactory. In this work, we introduce a query reformulation system based on a neural network that rewrites a query to maximize the number of relevant documents returned. We train this neural network with reinforcement learning. The actions correspond to selecting terms to build a reformulated query, and the reward is the document recall. We evaluate our approach on three datasets against strong baselines and show a relative improvement of 5-20% in terms of recall. Furthermore, we present a simple method to estimate a conservative upper-bound performance of a model in a particular environment and verify that there is still large room for improvements.
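To make the action/reward setup in the abstract concrete, the following is a minimal illustrative PyTorch sketch of a REINFORCE-style update in which a policy selects candidate terms to append to the original query and document recall serves as the reward. It is not the authors' released implementation (see the Code link below); names such as CandidateScorer, reinforce_step, and search_fn are hypothetical placeholders, and the search engine is treated as a black box.

import torch
import torch.nn as nn

class CandidateScorer(nn.Module):
    """Scores each candidate term for inclusion in the reformulated query."""
    def __init__(self, emb_dim=100):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, query_vec, cand_vecs):
        # query_vec: (emb_dim,), cand_vecs: (num_candidates, emb_dim)
        q = query_vec.expand(cand_vecs.size(0), -1)
        return torch.sigmoid(self.mlp(torch.cat([q, cand_vecs], dim=-1))).squeeze(-1)

def recall(retrieved_ids, relevant_ids):
    """Fraction of the relevant documents that the search engine returned."""
    relevant = set(relevant_ids)
    return len(relevant & set(retrieved_ids)) / max(len(relevant), 1)

def reinforce_step(scorer, optimizer, query_vec, cand_vecs, cand_terms,
                   original_query, search_fn, relevant_ids, baseline=0.0):
    probs = scorer(query_vec, cand_vecs)            # P(select term_i)
    picks = torch.bernoulli(probs.detach())         # sample term selections
    chosen = [t for t, p in zip(cand_terms, picks.tolist()) if p > 0.5]
    reformulated = original_query + " " + " ".join(chosen)
    reward = recall(search_fn(reformulated), relevant_ids)

    # REINFORCE: log-likelihood of the sampled selections, weighted by
    # (reward - baseline) to reduce gradient variance.
    log_probs = (picks * torch.log(probs + 1e-8)
                 + (1 - picks) * torch.log(1 - probs + 1e-8))
    loss = -(reward - baseline) * log_probs.sum()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward

# Example wiring (hypothetical):
#   scorer = CandidateScorer()
#   optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-3)
#   reward = reinforce_step(scorer, optimizer, q_vec, c_vecs, c_terms,
#                           "original query", my_search_engine, gold_doc_ids)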
Anthology ID:
D17-1061
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
574–583
URL:
https://aclanthology.org/D17-1061
DOI:
10.18653/v1/D17-1061
Cite (ACL):
Rodrigo Nogueira and Kyunghyun Cho. 2017. Task-Oriented Query Reformulation with Reinforcement Learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 574–583, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Task-Oriented Query Reformulation with Reinforcement Learning (Nogueira & Cho, EMNLP 2017)
PDF:
https://preview.aclanthology.org/teach-a-man-to-fish/D17-1061.pdf
Video:
https://preview.aclanthology.org/teach-a-man-to-fish/D17-1061.mp4
Code:
nyu-dl/QueryReformulator (+ additional community code)