Yuki Tagawa
2020
Reinforcement Learning with Imbalanced Dataset for Data-to-Text Medical Report Generation
Toru Nishino | Ryota Ozaki | Yohei Momoki | Tomoki Taniguchi | Ryuji Kano | Norihisa Nakano | Yuki Tagawa | Motoki Taniguchi | Tomoko Ohkuma | Keigo Nakamura
Findings of the Association for Computational Linguistics: EMNLP 2020
Automated generation of medical reports that describe the findings in medical images helps radiologists by alleviating their workload. A medical report generation system should generate correct and concise reports. However, data imbalance makes it difficult to train models accurately. Medical datasets are commonly imbalanced in their finding labels because incidence rates differ among diseases; moreover, the ratios of abnormalities to normalities are significantly imbalanced. To train the data-to-text module on a highly imbalanced dataset, we propose a novel reinforcement learning method with a reconstructor that improves the clinical correctness of generated reports. Moreover, we introduce a novel data augmentation strategy for reinforcement learning that additionally trains the model on infrequent findings. From a practical standpoint, we employ a Two-Stage Medical Report Generator (TS-MRGen) for controllable report generation from input images. TS-MRGen consists of two separate stages: an image diagnosis module and a data-to-text module. Radiologists can modify the results of the image diagnosis module to control the reports that the data-to-text module generates. We conduct experiments on two medical datasets to assess the data-to-text module and the entire two-stage model. The results demonstrate that the reports generated by our model describe the findings in the input image more correctly.
2019
Relation Prediction for Unseen-Entities Using Entity-Word Graphs
Yuki Tagawa | Motoki Taniguchi | Yasuhide Miura | Tomoki Taniguchi | Tomoko Ohkuma | Takayuki Yamamoto | Keiichi Nemoto
Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)
Knowledge graphs (KGs) are widely used for various NLP tasks. However, because KGs are still missing some information, Knowledge Graph Completion (KGC) methods need to be developed. Most KGC research does not focus on out-of-KG entities (Unseen-entities), yet a method that can predict relations for entity pairs containing Unseen-entities is needed to automatically add new entities to KGs. In this study, we focus on relation prediction and propose a method to learn entity representations via a graph structure that uses Seen-entities, Unseen-entities, and words as nodes, created from the descriptions of all entities. In our experiments, the proposed method shows a significant improvement in relation prediction for entity pairs containing Unseen-entities.
Co-authors
- Tomoki Taniguchi 2
- Motoki Taniguchi 2
- Tomoko Ohkuma 2
- Toru Nishino 1
- Ryota Ozaki 1