Akshay Kumar
2021
CLEVR_HYP: A Challenge Dataset and Baselines for Visual Question Answering with Hypothetical Actions over Images
Shailaja Keyur Sampat | Akshay Kumar | Yezhou Yang | Chitta Baral
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Most existing research on visual question answering (VQA) is limited to information explicitly present in an image or a video. In this paper, we take visual understanding to a higher level where systems are challenged to answer questions that involve mentally simulating the hypothetical consequences of performing specific actions in a given scenario. Towards that end, we formulate a vision-language question answering task based on the CLEVR (Johnson et al., 2017) dataset. We then modify the best existing VQA methods and propose baseline solvers for this task. Finally, we motivate the development of better vision-language models by providing insights about the capability of diverse architectures to perform joint reasoning over the image-text modality. Our dataset, setup scripts, and code will be made publicly available at https://github.com/shailaja183/clevr_hyp.
2015
Analyzing Newspaper Crime Reports for Identification of Safe Transit Paths
Vasu Sharma | Rajat Kulshreshtha | Puneet Singh | Nishant Agrawal | Akshay Kumar
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop