Kimihiro Hasegawa
2026
ProMQA-Assembly: Multimodal Procedural QA Dataset on Assembly
Kimihiro Hasegawa | Wiradee Imrattanatrai | Masaki Asada | Susan E. Holm | Yuran Wang | Xuanang Zhou | Ken Fukuda | Teruko Mitamura
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Assistants for assembly tasks show great potential to benefit humans, from helping with everyday tasks to supporting work in industrial settings. However, evaluation resources for assembly activities remain underexplored. To foster system development, we propose a new multimodal QA evaluation dataset on assembly activities. Our dataset, ProMQA-Assembly, consists of 646 QA pairs that require multimodal understanding of human activity videos and their instruction manuals in an online-style manner. For cost-effective data creation, we adopt a semi-automated QA annotation approach, where LLMs generate candidate QA pairs and humans verify them. We further improve QA generation by integrating fine-grained action labels to diversify question types. Additionally, we create 81 instruction task graphs for our target assembly tasks. These newly created task graphs are used in our benchmarking experiment, as well as to facilitate the human verification process. With our dataset, we benchmark models, including competitive proprietary multimodal models. We find that ProMQA-Assembly contains challenging multimodal questions, on which reasoning models show promising results. We believe our new evaluation dataset will contribute to the further development of procedural-activity assistants.
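A minimal sketch of the semi-automated annotation loop described in the abstract: an LLM drafts candidate QA pairs from the instruction manual and fine-grained action labels, and humans later verify the drafts. All names here (call_llm, ActionLabel, the prompt wording) are illustrative assumptions, not details from the paper.

```python
# Hedged sketch: LLM-generated candidate QA pairs conditioned on the manual and
# fine-grained action labels, followed by human verification (not the paper's
# exact prompts or schema).
import json
from dataclasses import dataclass

@dataclass
class ActionLabel:
    start_sec: float
    end_sec: float
    description: str  # e.g. "attach left bracket to base plate"

def build_prompt(manual_text: str, actions: list[ActionLabel]) -> str:
    timeline = "\n".join(
        f"[{a.start_sec:.0f}-{a.end_sec:.0f}s] {a.description}" for a in actions
    )
    return (
        "You are writing quiz questions about an assembly recording.\n"
        f"Instruction manual:\n{manual_text}\n\n"
        f"Observed actions so far:\n{timeline}\n\n"
        "Propose diverse QA pairs (next step, mistakes, state of parts) "
        'as a JSON list of {"question": ..., "answer": ...} objects.'
    )

def generate_candidates(manual_text: str, actions: list[ActionLabel], call_llm) -> list[dict]:
    """call_llm: any text-in/text-out LLM client supplied by the caller."""
    raw = call_llm(build_prompt(manual_text, actions))
    candidates = json.loads(raw)
    # Candidates are only drafts; each one is queued for human verification,
    # where annotators accept, edit, or reject it before it enters the dataset.
    return [c for c in candidates if c.get("question") and c.get("answer")]
```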
VDAct 2.0: Scaling Video-Grounded Dialogue for Event-driven Activity Understanding with LLM-Assisted Filtering
Wiradee Imrattanatrai | Masaki Asada | Kimihiro Hasegawa | Ken Fukuda | Teruko Mitamura
Proceedings of the Fifteenth Language Resources and Evaluation Conference
We present VDAct 2.0, an enhanced benchmark for video-grounded dialogue that builds upon the original VDAct by expanding dialogue coverage and introducing a scalable LLM-assisted filtering pipeline to ensure high-quality, grounded QA pairs. VDAct 2.0 comprises 6,356 human-annotated dialogues with a total of 63,958 turns, grounded in 2,975 household activity videos, with undesirable dialogue turns systematically identified and removed. To achieve this, we design a trigger-based quality framework and assemble a panel of high-agreement LLMs calibrated with humans in the loop, enabling scalable QA-turn-level filtering. We benchmark a wide range of pretrained and fine-tuned models, both open-source and proprietary, across standard text generation metrics and LLM-based evaluations. The results highlight both recent advances and remaining challenges in video-grounded dialogue modeling, positioning VDAct 2.0 as a high-fidelity testbed for evaluating and advancing multimodal reasoning in interactive settings.
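A minimal sketch of QA-turn-level filtering with a panel of LLM judges, assuming each judge answers a yes/no prompt about whether a turn exhibits a quality trigger. The judge clients, trigger names, and agreement threshold are assumptions for illustration, not the paper's exact framework.

```python
# Hedged sketch: keep a QA turn only if a calibrated fraction of LLM judges
# reports that no quality trigger fires (the real trigger taxonomy and
# calibration procedure belong to the paper, not this snippet).
from typing import Callable

TRIGGERS = ["not grounded in the video", "ambiguous question", "inconsistent answer"]

def turn_is_clean(question: str, answer: str, video_summary: str,
                  judges: list[Callable[[str], str]], min_agreement: float = 0.8) -> bool:
    prompt = (
        f"Video summary: {video_summary}\n"
        f"Q: {question}\nA: {answer}\n"
        f"Does this QA turn exhibit any of: {', '.join(TRIGGERS)}? Reply YES or NO."
    )
    votes = [judge(prompt).strip().upper().startswith("NO") for judge in judges]
    # The agreement threshold would be tuned against human labels
    # (human-in-the-loop calibration) before running the filter at scale.
    return sum(votes) / len(votes) >= min_agreement
```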
2025
ProMQA: Question Answering Dataset for Multimodal Procedural Activity Understanding
Kimihiro Hasegawa | Wiradee Imrattanatrai | Zhi-Qi Cheng | Masaki Asada | Susan Holm | Yuran Wang | Ken Fukuda | Teruko Mitamura
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Multimodal systems have great potential to assist humans in procedural activities, where people follow instructions to achieve their goals. Despite diverse application scenarios, systems are typically evaluated on traditional classification tasks, e.g., action recognition or temporal action localization. In this paper, we present a novel evaluation dataset, ProMQA, to measure the advancement of systems in application-oriented scenarios. ProMQA consists of 401 multimodal procedural QA pairs on user recordings of procedural activities, i.e., cooking, coupled with their corresponding instructions. For QA annotation, we take a cost-effective human-LLM collaborative approach, where the existing annotation is augmented with LLM-generated QA pairs that are later verified by humans. We then provide benchmark results to establish baseline performance on ProMQA. Our experiment reveals a significant gap between human performance and that of current systems, including competitive proprietary multimodal models. We hope our dataset sheds light on new aspects of models’ multimodal understanding capabilities.
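A minimal sketch of how one might score system predictions against QA pairs like ProMQA's to report a baseline. Token-level F1 is shown only as a generic open-ended QA metric; the paper's actual evaluation protocol (e.g., LLM-based grading) may differ.

```python
# Hedged sketch: token-level F1 over predicted vs. gold answers, averaged over
# the QA ids present in both (a generic baseline metric, not necessarily the
# metric used in the paper).
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def benchmark(model_answers: dict[str, str], gold: dict[str, str]) -> float:
    """Average F1 over QA ids shared by predictions and gold answers."""
    ids = gold.keys() & model_answers.keys()
    return sum(token_f1(model_answers[i], gold[i]) for i in ids) / max(len(ids), 1)
```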
2021
Cross-document Event Identity via Dense Annotation
Adithya Pratapa | Zhengzhong Liu | Kimihiro Hasegawa | Linwei Li | Yukari Yamakawa | Shikun Zhang | Teruko Mitamura
Proceedings of the 25th Conference on Computational Natural Language Learning
In this paper, we study the identity of textual events from different documents. While the complex nature of event identity has been studied previously (Hovy et al., 2013), the case of events across documents remains unclear. Prior work on cross-document event coreference has two main drawbacks. First, it restricts annotations to a limited set of event types. Second, it insufficiently tackles the concept of event identity. Such an annotation setup reduces the pool of event mentions and prevents one from considering the possibility of quasi-identity relations. We propose a dense annotation approach for cross-document event coreference, comprising a rich source of event mentions and a dense annotation effort between related document pairs. To this end, we design a new annotation workflow with careful quality control and an easy-to-use annotation interface. In addition to the links, we further collect overlapping event contexts, including time, location, and participants, to shed some light on the relation between identity decisions and context. We present an open-access dataset for cross-document event coreference, CDEC-WN, collected from English Wikinews, and open-source our annotation toolkit to encourage further research on cross-document tasks.
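A minimal sketch of the dense annotation setup described above: for each related document pair, every cross-document pair of event mentions becomes a candidate link for annotators, who can then mark full identity, quasi-identity, or no relation. The data structures and label names are illustrative assumptions, not the CDEC-WN schema.

```python
# Hedged sketch: enumerate all cross-document event-mention pairs for each
# related document pair, so annotators can densely label identity relations.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Mention:
    doc_id: str
    span: tuple[int, int]   # character offsets of the event trigger
    text: str

def candidate_links(related_pairs: list[tuple[str, str]],
                    mentions_by_doc: dict[str, list[Mention]]):
    """Yield every cross-document mention pair for each related document pair."""
    for doc_a, doc_b in related_pairs:
        for m_a, m_b in product(mentions_by_doc.get(doc_a, []),
                                mentions_by_doc.get(doc_b, [])):
            yield (m_a, m_b)  # annotator decides: identical / quasi-identical / unrelated
```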