Pradyumna Narayana


2021

Multimodal Text Style Transfer for Outdoor Vision-and-Language Navigation
Wanrong Zhu | Xin Wang | Tsu-Jui Fu | An Yan | Pradyumna Narayana | Kazoo Sone | Sugato Basu | William Yang Wang
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

One of the most challenging topics in Natural Language Processing (NLP) is visually grounded language understanding and reasoning. Outdoor vision-and-language navigation (VLN) is such a task, where an agent follows natural language instructions and navigates in real-life urban environments. Because human-annotated instructions that describe these intricate urban scenes are scarce, outdoor VLN remains a challenging task to solve. In this paper, we introduce a Multimodal Text Style Transfer (MTST) learning approach that leverages external multimodal resources to mitigate data scarcity in outdoor navigation tasks. We first enrich the navigation data by transferring the style of instructions generated by the Google Maps API, then pre-train the navigator with the augmented external outdoor navigation dataset. Experimental results show that the MTST learning approach is model-agnostic and significantly outperforms baseline models on the outdoor VLN task, yielding a relative improvement of 8.7% in task completion rate on the test set.

2020

Towards Understanding Sample Variance in Visually Grounded Language Generation: Evaluations and Observations
Wanrong Zhu | Xin Wang | Pradyumna Narayana | Kazoo Sone | Sugato Basu | William Yang Wang
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

A major challenge in visually grounded language generation is to build robust benchmark datasets and models that generalize well in real-world settings. To do this, it is critical to ensure that our evaluation protocols are correct and our benchmarks are reliable. In this work, we design a set of experiments to understand an important but often ignored problem in visually grounded language generation: given that humans differ in utility and visual attention, how does the sample variance in multi-reference datasets affect model performance? Empirically, we study several multi-reference datasets and their corresponding vision-and-language tasks. We show that it is of paramount importance to report variance in experiments; that human-generated references can vary drastically across datasets and tasks, revealing the nature of each task; and that, among metrics, CIDEr shows systematically larger variance than the others. Our evaluations of the number of references per instance shed light on the design of reliable datasets in the future.

2017

Communicating and Acting: Understanding Gesture in Simulation Semantics
Nikhil Krishnaswamy | Pradyumna Narayana | Isaac Wang | Kyeongmin Rim | Rahul Bangar | Dhruva Patil | Gururaj Mulay | Ross Beveridge | Jaime Ruiz | Bruce Draper | James Pustejovsky
IWCS 2017 — 12th International Conference on Computational Semantics — Short papers

Creating Common Ground through Multimodal Simulations
James Pustejovsky | Nikhil Krishnaswamy | Bruce Draper | Pradyumna Narayana | Rahul Bangar
Proceedings of the IWCS workshop on Foundations of Situated and Multimodal Communication