Abstract
The Covid-19 pandemic urged the scientific community to join efforts at an unprecedented scale, leading to faster-than-ever dissemination of data and results, which in turn motivated further research. This paper presents and discusses information retrieval models aimed at addressing the challenge of searching the large number of publications that stem from these studies. The model presented, based on classical baselines followed by an interaction-based neural ranking model, was evaluated and evolved within the TREC-COVID challenge setting. Results on this dataset show that, when starting from a strong baseline, our lightweight neural ranking model can achieve results comparable to those of other model architectures that use a very large number of parameters.
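To make the two-stage design concrete, the sketch below shows a minimal retrieve-then-rerank pipeline in the spirit the abstract describes: a classical baseline produces candidates, and a small interaction-based scorer reranks them. This is an illustrative assumption, not the authors' implementation; it uses the `rank_bm25` package for the baseline, random placeholder embeddings, and a toy max-similarity interaction score instead of a trained neural reranker.

```python
# Hedged sketch (not the paper's code): BM25 candidate retrieval followed by
# a toy interaction-based rescoring step with placeholder embeddings.
import numpy as np
from rank_bm25 import BM25Okapi

corpus = [
    "covid-19 transmission in indoor environments",
    "neural ranking models for literature search",
    "bm25 remains a strong retrieval baseline",
]
tokenized = [doc.split() for doc in corpus]
bm25 = BM25Okapi(tokenized)

query = "neural ranking for covid-19 literature".split()

# Stage 1: classical baseline retrieval.
baseline_scores = bm25.get_scores(query)
top_k = np.argsort(baseline_scores)[::-1][:2]

# Placeholder embedding table (a real system would use trained vectors).
vocab = {tok for doc in tokenized for tok in doc} | set(query)
rng = np.random.default_rng(0)
emb = {tok: rng.normal(size=32) for tok in vocab}

def interaction_score(query_toks, doc_toks):
    """Toy interaction-based score: cosine-similarity matrix between query
    and document terms, max-pooled over document terms, summed over query."""
    q = np.stack([emb[t] / np.linalg.norm(emb[t]) for t in query_toks])
    d = np.stack([emb[t] / np.linalg.norm(emb[t]) for t in doc_toks])
    sim = q @ d.T  # query x document interaction matrix
    return float(sim.max(axis=1).sum())

# Stage 2: rerank only the baseline's top candidates.
reranked = sorted(top_k, key=lambda i: interaction_score(query, tokenized[i]),
                  reverse=True)
for i in reranked:
    print(corpus[i])
```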
- Anthology ID: 2020.nlpcovid19-2.3
- Volume: Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020
- Month: December
- Year: 2020
- Address: Online
- Editors: Karin Verspoor, Kevin Bretonnel Cohen, Michael Conway, Berry de Bruijn, Mark Dredze, Rada Mihalcea, Byron Wallace
- Venue: NLP-COVID19
- Publisher: Association for Computational Linguistics
- URL: https://aclanthology.org/2020.nlpcovid19-2.3
- DOI: 10.18653/v1/2020.nlpcovid19-2.3
- Cite (ACL): Tiago Almeida and Sérgio Matos. 2020. Frugal neural reranking: evaluation on the Covid-19 literature. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020, Online. Association for Computational Linguistics.
- Cite (Informal): Frugal neural reranking: evaluation on the Covid-19 literature (Almeida & Matos, NLP-COVID19 2020)
- PDF: https://preview.aclanthology.org/add_acl24_videos/2020.nlpcovid19-2.3.pdf
- Data: CORD-19, TREC-COVID