A Recipe for Creating Multimodal Aligned Datasets for Sequential Tasks

Angela Lin | Sudha Rao | Asli Celikyilmaz | Elnaz Nouri | Chris Brockett | Debadeepta Dey | Bill Dolan

Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020
Many high-level procedural tasks can be decomposed into sequences of instructions that vary in their order and choice of tools. In the cooking domain, the web offers many partially overlapping text and video recipes (i.e., procedures) that describe how to make the same dish (i.e., high-level task). Aligning instructions for the same dish across different sources can yield descriptive visual explanations that are far richer semantically than conventional textual instructions, providing commonsense insight into how real-world procedures are structured. Learning to align these different instruction sets is challenging because: (a) different recipes vary in their order of instructions and use of ingredients; and (b) video instructions can be noisy and tend to contain far more information than text instructions. To address these challenges, we use an unsupervised alignment algorithm that learns pairwise alignments between instructions of different recipes for the same dish. We then use a graph algorithm to derive a joint alignment between multiple text and multiple video recipes for the same dish. We release the Microsoft Research Multimodal Aligned Recipe Corpus, containing ~150K pairwise alignments between recipes across 4,262 dishes with rich commonsense information.
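To make the notion of pairwise instruction alignment concrete, here is a minimal sketch that aligns two recipes for the same dish with classic dynamic programming (Needleman-Wunsch-style) over a similarity matrix. This is an illustration only, not the paper's method: the paper learns alignments in an unsupervised fashion, whereas this sketch uses a simple token-overlap (Jaccard) similarity, and the `gap` penalty is an assumed hyperparameter.

```python
def jaccard(a, b):
    """Token-overlap similarity between two instruction strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def align(recipe_a, recipe_b, gap=-0.1):
    """Align two instruction sequences; return matched (i, j) index pairs.

    Instructions left unmatched on either side absorb the gap penalty,
    which tolerates reordering/extra steps between recipes.
    """
    n, m = len(recipe_a), len(recipe_b)
    # score[i][j]: best score aligning recipe_a[:i] with recipe_b[:j];
    # back[i][j] records which move produced it, for traceback.
    score = [[0.0] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = score[i - 1][0] + gap
        back[i][0] = "up"
    for j in range(1, m + 1):
        score[0][j] = score[0][j - 1] + gap
        back[0][j] = "left"
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            candidates = [
                (score[i - 1][j - 1] + jaccard(recipe_a[i - 1], recipe_b[j - 1]), "diag"),
                (score[i - 1][j] + gap, "up"),    # skip an instruction in recipe_a
                (score[i][j - 1] + gap, "left"),  # skip an instruction in recipe_b
            ]
            score[i][j], back[i][j] = max(candidates, key=lambda c: c[0])
    # Trace back from the bottom-right corner to recover matched pairs.
    pairs, i, j = [], n, m
    while back[i][j] is not None:
        if back[i][j] == "diag":
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif back[i][j] == "up":
            i -= 1
        else:
            j -= 1
    return list(reversed(pairs))

a = ["chop the onions", "fry onions in butter", "add tomatoes and simmer"]
b = ["dice the onions", "saute onions", "stir in tomatoes", "simmer and serve"]
print(align(a, b))  # → [(0, 0), (1, 1), (2, 3)]
```

Deriving the joint alignment would then treat these pairwise matches as edges in a graph over instructions from many recipes of the same dish, as the abstract describes.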