2023
A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding
Andrea Burns | Krishna Srinivasan | Joshua Ainslie | Geoff Brown | Bryan Plummer | Kate Saenko | Jianmo Ni | Mandy Guo
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Webpages have been a rich, scalable resource for vision-language and language-only tasks. Yet only pieces of webpages are kept in existing datasets: image-caption pairs, long text articles, or raw HTML, never all in one place. As a result, webpage tasks have received little attention and structured image-text data has been left underused. To study multimodal webpage understanding, we introduce the Wikipedia Webpage suite (WikiWeb2M) containing 2M pages with all of the associated image, text, and structure data. We verify its utility on three generative tasks: page description generation, section summarization, and contextual image captioning. We design a novel attention mechanism, Prefix Global, which selects the most relevant image and text content as global tokens that attend to the rest of the webpage for context. By using page structure to separate such tokens, it performs better than full attention with lower computational complexity. Extensive experiments show that the new data in WikiWeb2M improves task performance compared to prior work.
2022
Transforming Sequence Tagging Into A Seq2Seq Task
Karthik Raman | Iftekhar Naim | Jiecao Chen | Kazuma Hashimoto | Kiran Yalasangi | Krishna Srinivasan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Pretrained, large, generative language models (LMs) have had great success in a wide range of sequence tagging and structured prediction tasks. Casting a sequence tagging task as a Seq2Seq one requires deciding the formats of the input and output sequences. However, we lack a principled understanding of the trade-offs associated with these formats (such as the effect on model accuracy, sequence length, multilingual generalization, and hallucination). In this paper, we rigorously study different formats one could use for casting input text sentences and their output labels into the input and target (i.e., output) of a Seq2Seq model. Along the way, we introduce a new format, which we show to be both simpler and more effective. Additionally, the new format demonstrates significant gains in multilingual settings – both zero-shot transfer learning and joint training. Lastly, we find that the new format is more robust and almost completely devoid of hallucination – an issue we find common in existing formats. With well over 1000 experiments studying 14 different formats over 7 diverse public benchmarks – including 3 multilingual datasets spanning 7 languages – we believe our findings provide a strong empirical basis for understanding how we should tackle sequence tagging tasks.
QUILL: Query Intent with Large Language Models using Retrieval Augmentation and Multi-stage Distillation
Krishna Srinivasan | Karthik Raman | Anupam Samanta | Lingrui Liao | Luca Bertelli | Michael Bendersky
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
Large Language Models (LLMs) have shown impressive results on a variety of text understanding tasks. Search queries, though, pose a unique challenge given their short length and lack of nuance or context. Complicated feature engineering efforts do not always lead to downstream improvements, as their performance benefits may be offset by the increased complexity of knowledge distillation. Thus, in this paper we make the following contributions: (1) We demonstrate that retrieval augmentation of queries provides LLMs with valuable additional context, enabling improved understanding. While retrieval augmentation typically increases the latency of LLMs (thus hurting distillation efficacy), (2) we provide a practical and effective way of distilling retrieval-augmented LLMs. Specifically, we use a novel two-stage distillation approach that allows us to carry over the gains of retrieval augmentation without suffering the increased compute typically associated with it. (3) We demonstrate the benefits of the proposed approach (QUILL) on a billion-scale, real-world query understanding system, resulting in huge gains. Via extensive experiments, including on public benchmarks, we believe this work offers a recipe for practical use of retrieval-augmented query understanding.
Proceedings of the Workshop on Multilingual Multimodal Learning
Emanuele Bugliarello | Kai-Wei Chang | Desmond Elliott | Spandana Gella | Aishwarya Kamath | Liunian Harold Li | Fangyu Liu | Jonas Pfeiffer | Edoardo Maria Ponti | Krishna Srinivasan | Ivan Vulić | Yinfei Yang | Da Yin
Proceedings of the Workshop on Multilingual Multimodal Learning
2021
MURAL: Multimodal, Multitask Representations Across Languages
Aashi Jain | Mandy Guo | Krishna Srinivasan | Ting Chen | Sneha Kudugunta | Chao Jia | Yinfei Yang | Jason Baldridge
Findings of the Association for Computational Linguistics: EMNLP 2021
Both image-caption pairs and translation pairs provide the means to learn deep representations of and connections between languages. We use both types of pairs in MURAL (MUltimodal, MUltitask Representations Across Languages), a dual encoder that solves two tasks: 1) image-text matching and 2) translation pair matching. By incorporating billions of translation pairs, MURAL extends ALIGN (Jia et al.), a state-of-the-art dual encoder learned from 1.8 billion noisy image-text pairs. When using the same encoders, MURAL’s performance matches or exceeds ALIGN’s cross-modal retrieval performance on well-resourced languages across several datasets. More importantly, it considerably improves performance on under-resourced languages, showing that text-text learning can overcome a paucity of image-caption examples for these languages. On the Wikipedia Image-Text dataset, for example, MURAL-base improves zero-shot mean recall by 8.1% on average for eight under-resourced languages and by 6.8% on average when fine-tuning. We additionally show that MURAL’s text representations cluster not only with respect to genealogical connections but also based on areal linguistics, such as the Balkan Sprachbund.