Kohei Uehara
2026
DEJIMA: A Novel Large-scale Japanese Dataset for Image Captioning and Visual Question Answering
Toshiki Katsube | Fukuhara Taiga | Kenichiro Ando | Yusuke Mukuta | Kohei Uehara | Tatsuya Harada
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Vision-and-Language (V&L) models depend on large-scale, high-quality datasets, yet most resources are English-centric, and existing Japanese V&L datasets face a fundamental trade-off: manually annotated corpora offer quality but limited scale, translated datasets introduce unnatural phrasing and cultural bias, and web-crawled collections achieve scale but suffer from noise and poor grounding. To resolve this trade-off, we propose DEJIMA, a novel pipeline whose key idea is detection-guided LLM refinement: object detection first extracts visually verifiable evidence (labels and bounding boxes), then an LLM generates or refines Japanese text conditioned on this evidence, ensuring both factual grounding and linguistic naturalness without costly human annotation. Using this pipeline, we build two resources: an image–caption dataset (DEJIMA-Cap) and a VQA dataset (DEJIMA-VQA), each containing approximately 3.88M image–text pairs—over 20 times larger than existing Japanese V&L datasets. Human evaluations demonstrate that DEJIMA achieves substantially higher Japaneseness and linguistic naturalness than translation- or annotation-based baselines, while maintaining factual correctness comparable to human-annotated corpora. Models trained on DEJIMA show consistent improvements across multiple Japanese multimodal benchmarks, confirming that culturally grounded, large-scale resources play a key role in enhancing model performance. All pipeline components are commercially licensed, and we publicly release the dataset and metadata to support further research and applications. Our project page is available at https://mil-tokyo.github.io/DEJIMA-dataset/.
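The key idea of detection-guided LLM refinement is to hand the LLM only visually verifiable evidence alongside the draft text. As a rough illustration (not the paper's actual pipeline; the detector, LLM, and prompt wording here are all hypothetical), the conditioning step might look like:

```python
from dataclasses import dataclass


@dataclass
class Detection:
    """A piece of visually verifiable evidence: an object label and its box."""
    label: str
    bbox: tuple  # (x1, y1, x2, y2) in pixels


def build_refinement_prompt(detections, draft_caption):
    """Condition an LLM on detector evidence so the refined caption
    mentions only objects actually found in the image."""
    evidence = "\n".join(
        f"- {d.label} at {d.bbox}" for d in detections
    )
    return (
        "Detected objects (label and bounding box):\n"
        f"{evidence}\n\n"
        f"Draft caption: {draft_caption}\n\n"
        "Rewrite the caption in natural Japanese, mentioning only the "
        "objects listed above."
    )


prompt = build_refinement_prompt(
    [Detection("dog", (10, 20, 110, 220)),
     Detection("bicycle", (150, 40, 400, 300))],
    "A dog next to a bike on the street.",
)
```

The resulting prompt would then be sent to the LLM; grounding comes from restricting the model to the detector's labels rather than from human review.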
2024
Content-Specific Humorous Image Captioning Using Incongruity Resolution Chain-of-Thought
Kohtaro Tanaka | Kohei Uehara | Lin Gu | Yusuke Mukuta | Tatsuya Harada
Findings of the Association for Computational Linguistics: NAACL 2024
Although automated image captioning methods have benefited considerably from the development of large language models (LLMs), generating humorous captions remains a challenging task. Humorous captions written by humans are unique to the image and reflect its content, whereas captions generated by previous captioning models tend to be generic. We therefore propose incongruity-resolution chain-of-thought (IRCoT), a novel prompting framework that creates content-specific resolutions from fine details extracted from an image. Furthermore, we integrate logit bias and negative sampling to suppress the output of generic resolutions. Experiments with GPT-4V demonstrate that our framework effectively generates humorous captions tailored to the content of specific input images.
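Logit bias steers decoding by adding a constant to the logits of chosen tokens before sampling. A minimal sketch of how generic-resolution tokens could be penalized (the token IDs and penalty value here are illustrative, not taken from the paper; real IDs would come from the model's tokenizer):

```python
def build_logit_bias(generic_token_ids, penalty=-8.0, floor=-100.0):
    """Map token IDs of generic resolution words to a negative bias.

    Adding a negative value to a token's logit lowers its sampling
    probability; a bias at the floor effectively bans the token.
    """
    bias = max(floor, penalty)  # clamp so the bias never exceeds the floor
    return {tid: bias for tid in generic_token_ids}


# Hypothetical IDs for tokens of overused punchline words.
generic_ids = [1234, 5678, 91011]
logit_bias = build_logit_bias(generic_ids)
```

Such a dictionary maps directly onto the `logit_bias` parameter exposed by several LLM APIs; negative sampling complements it by harvesting which words count as "generic" from previously generated captions.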
2020
Unsupervised Keyword Extraction for Full-Sentence VQA
Kohei Uehara | Tatsuya Harada
Proceedings of the First International Workshop on Natural Language Processing Beyond Text
In the majority of the existing Visual Question Answering (VQA) research, the answers consist of short, often single words, as per instructions given to the annotators during dataset construction. This study envisions a VQA task for natural situations, where the answers are more likely to be sentences rather than single words. To bridge the gap between this natural VQA and existing VQA approaches, a novel unsupervised keyword extraction method is proposed. The method is based on the principle that the full-sentence answers can be decomposed into two parts: one that contains new information answering the question (i.e. keywords), and one that contains information already included in the question. Discriminative decoders were designed to achieve such decomposition, and the method was experimentally implemented on VQA datasets containing full-sentence answers. The results show that the proposed model can accurately extract the keywords without being given explicit annotations describing them.
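The decomposition principle can be made concrete with a toy heuristic: words in the answer that already appear in the question carry old information, and the remainder are keyword candidates. The paper learns this split with discriminative decoders; the set-difference sketch below only illustrates the underlying idea.

```python
def extract_keywords(question, answer):
    """Split a full-sentence answer into keyword candidates.

    Words shared with the question are treated as old information;
    everything else is kept as a candidate keyword. A crude stand-in
    for the learned decomposition described in the paper.
    """
    q_words = set(question.lower().rstrip("?.!").split())
    return [
        w for w in answer.lower().rstrip("?.!").split()
        if w not in q_words
    ]


kws = extract_keywords(
    "What is the man holding in his hand?",
    "The man is holding a red umbrella in his hand",
)
# kws contains the new-information words, e.g. "umbrella"
```

A real system must also handle inflection, paraphrase, and function words ("a", "red" survive here), which is precisely what the learned discriminative decoders address.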