Abstract
We present a new dataset for machine comprehension in the medical domain. Our dataset uses clinical case reports with around 100,000 gap-filling queries about these cases. We apply several baselines and state-of-the-art neural readers to the dataset, and observe a considerable gap in performance (20% F1) between the best human and machine readers. We analyze the skills required for successful answering and show how reader performance varies depending on the applicable skills. We find that inferences using domain knowledge and object tracking are the most frequently required skills, and that recognizing omitted information and spatio-temporal reasoning are the most difficult for the machines.
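The reported 20% F1 gap refers to token-overlap scoring of predicted answers against gold answers. Below is a minimal sketch of how such a token-level F1 might be computed, in the style of the standard SQuAD metric; the function name and the whitespace tokenization are illustrative assumptions, and the official evaluation code in clips/clicr may normalize answers differently.

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-level F1 between a predicted and a gold answer string.

    Illustrative sketch only: lowercases and splits on whitespace,
    then scores the multiset overlap of tokens.
    """
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# A partially correct answer receives partial credit:
print(token_f1("community acquired pneumonia", "pneumonia"))  # 0.5
```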
- Anthology ID: N18-1140
- Volume: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
- Month: June
- Year: 2018
- Address: New Orleans, Louisiana
- Editors: Marilyn Walker, Heng Ji, Amanda Stent
- Venue: NAACL
- Publisher: Association for Computational Linguistics
- Pages: 1551–1563
- URL: https://aclanthology.org/N18-1140
- DOI: 10.18653/v1/N18-1140
- Cite (ACL): Simon Šuster and Walter Daelemans. 2018. CliCR: a Dataset of Clinical Case Reports for Machine Reading Comprehension. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1551–1563, New Orleans, Louisiana. Association for Computational Linguistics.
- Cite (Informal): CliCR: a Dataset of Clinical Case Reports for Machine Reading Comprehension (Šuster & Daelemans, NAACL 2018)
- PDF: https://aclanthology.org/N18-1140.pdf
- Code: clips/clicr
- Data: CliCR, BookTest, CBT, Children's Book Test, QUASAR, QUASAR-S, QUASAR-T, SQuAD, SciQ, Who-did-What