Abstract
Automatic medical image report generation has attracted growing attention due to its potential to alleviate radiologists’ workload. Existing work on report generation often trains encoder-decoder networks to generate complete reports. However, such models are affected by data bias (e.g., label imbalance) and face issues common to text generation models (e.g., repetition). In this work, we focus on reporting abnormal findings on radiology images; instead of training on complete radiology reports, we propose a method to identify abnormal findings from the reports and to group them with unsupervised clustering and minimal rules. We formulate the task as cross-modal retrieval and propose Conditional Visual-Semantic Embeddings to align images and fine-grained abnormal findings in a joint embedding space. We demonstrate that our method is able to retrieve abnormal findings and outperforms existing generation models on both clinical correctness and text generation metrics.
- Anthology ID: 2020.findings-emnlp.176
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2020
- Month: November
- Year: 2020
- Address: Online
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 1954–1960
- URL: https://aclanthology.org/2020.findings-emnlp.176
- DOI: 10.18653/v1/2020.findings-emnlp.176
- Cite (ACL): Jianmo Ni, Chun-Nan Hsu, Amilcare Gentili, and Julian McAuley. 2020. Learning Visual-Semantic Embeddings for Reporting Abnormal Findings on Chest X-rays. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1954–1960, Online. Association for Computational Linguistics.
- Cite (Informal): Learning Visual-Semantic Embeddings for Reporting Abnormal Findings on Chest X-rays (Ni et al., Findings 2020)
- PDF: https://aclanthology.org/2020.findings-emnlp.176.pdf
- Data: CheXpert
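The abstract's core idea — aligning images and abnormal-finding sentences in a joint embedding space and casting report generation as cross-modal retrieval — can be illustrated with a generic visual-semantic embedding sketch. The snippet below uses a standard max-margin (triplet) ranking loss over matched image/text pairs, as is common in visual-semantic embedding work; it is not the paper's Conditional Visual-Semantic Embedding model, and the function names and margin value are illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Project embeddings onto the unit sphere so a dot product equals cosine similarity."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def triplet_ranking_loss(img_emb, txt_emb, margin=0.2):
    """Max-margin ranking loss over a batch of (image, finding-text) embedding pairs.

    img_emb, txt_emb: (batch, dim) arrays; row i of each is a matched pair.
    Sums hinge violations where a mismatched pair scores within `margin`
    of the matched pair (a simple variant without hard-negative mining).
    """
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    sims = img @ txt.T                    # (batch, batch) cosine similarities
    pos = np.diag(sims)                   # similarity of matched pairs
    # Hinge cost for ranking a mismatched text/image above the matched one.
    cost_txt = np.maximum(0, margin + sims - pos[:, None])  # image -> wrong text
    cost_img = np.maximum(0, margin + sims - pos[None, :])  # text  -> wrong image
    np.fill_diagonal(cost_txt, 0)
    np.fill_diagonal(cost_img, 0)
    return cost_txt.sum() + cost_img.sum()

def retrieve_findings(img_emb, finding_embs, k=3):
    """At test time, rank candidate abnormal-finding embeddings for one image."""
    sims = l2_normalize(finding_embs) @ l2_normalize(img_emb)
    return np.argsort(-sims)[:k]          # indices of top-k findings by cosine similarity
```

Once the two encoders are trained so that matched pairs score above mismatched ones, reporting reduces to `retrieve_findings`: embed the X-ray once and return the top-ranked abnormal-finding clusters, sidestepping the repetition issues of free-form decoding.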