Shweti Mahajan
2024
Automatic Pair Construction for Contrastive Post-training
Canwen Xu | Corby Rosset | Ethan Chau | Luciano Del Corro | Shweti Mahajan | Julian McAuley | Jennifer Neville | Ahmed Awadallah | Nikhil Rao
Findings of the Association for Computational Linguistics: NAACL 2024
Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we propose an automatic way to construct contrastive data for LLM alignment, using preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continuing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts from “easier” pairs and transitions to “harder” ones, further improving alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, our automatic contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to outperform ChatGPT.
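As a rough illustration of the pair-construction idea in this abstract (not the paper's released code), the sketch below pairs responses from stronger models against responses from weaker ones and scores the resulting (chosen, rejected) pairs with the standard DPO objective. The strength ranking, function names, and beta value are illustrative assumptions.

```python
# Minimal sketch: build contrastive pairs by model strength, score with a DPO-style loss.
# STRENGTH ordering, build_pairs, and beta=0.1 are assumptions for illustration only.
import torch
import torch.nn.functional as F

# Assumed ranking: responses from stronger models are treated as "chosen".
STRENGTH = {"gpt-4": 3, "chatgpt": 2, "instructgpt": 1}

def build_pairs(prompt, responses):
    """responses: dict mapping model name -> response text.
    Returns (prompt, chosen, rejected) triples for every strength-ordered pair."""
    ranked = sorted(responses, key=lambda m: STRENGTH[m], reverse=True)
    pairs = []
    for i, strong in enumerate(ranked):
        for weak in ranked[i + 1:]:
            pairs.append((prompt, responses[strong], responses[weak]))
    return pairs

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO objective on sequence log-probabilities (tensors)."""
    chosen_ratio = logp_chosen - ref_logp_chosen
    rejected_ratio = logp_rejected - ref_logp_rejected
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Toy usage: one prompt answered by three models of different strengths.
pairs = build_pairs("Summarize DPO.", {
    "gpt-4": "Strong answer...", "chatgpt": "Okay answer...", "instructgpt": "Weak answer...",
})
print(len(pairs))  # 3 pairs, ordered stronger-vs-weaker

# Toy log-probabilities for a batch of two pairs (policy vs. frozen reference model).
lp_c, lp_r = torch.tensor([-5.0, -6.0]), torch.tensor([-9.0, -8.5])
ref_c, ref_r = torch.tensor([-5.5, -6.2]), torch.tensor([-8.0, -8.0])
print(dpo_loss(lp_c, lp_r, ref_c, ref_r).item())
```

A curriculum in the spirit of the abstract could order these pairs by how far apart the source models are, presenting larger-gap (“easier”) pairs before smaller-gap (“harder”) ones; the exact ordering used in the paper is not specified here.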
2022
Lexi: Self-Supervised Learning of the UI Language
Pratyay Banerjee | Shweti Mahajan | Kushal Arora | Chitta Baral | Oriana Riva
Findings of the Association for Computational Linguistics: EMNLP 2022
Humans can learn to operate the user interface (UI) of an application by reading an instruction manual or how-to guide. Along with text, these resources include visual content such as UI screenshots and images of application icons referenced in the text. We explore how to leverage this data to learn generic visio-linguistic representations of UI screens and their components. These representations are useful in many real applications, such as accessibility, voice navigation, and task automation. Prior UI representation models rely on UI metadata (UI trees and accessibility labels), which is often missing, incompletely defined, or not accessible. We avoid such a dependency, and propose Lexi, a pre-trained vision and language model designed to handle the unique features of UI screens, including their text richness and context sensitivity. To train Lexi we curate the UICaption dataset consisting of 114k UI images paired with descriptions of their functionality. We evaluate Lexi on four tasks: UI action entailment, instruction-based UI image retrieval, grounding referring expressions, and UI entity recognition.
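One of the evaluation tasks named in this abstract, instruction-based UI image retrieval, can be sketched under assumptions: given embeddings from hypothetical image and text encoders (stand-ins for Lexi's actual encoders, which are not shown here), screenshots are ranked by cosine similarity to an instruction.

```python
# Minimal retrieval sketch (not Lexi's code): rank UI screenshots by cosine similarity
# between an instruction embedding and image embeddings.
# Real encoder outputs would replace the random vectors used below.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def retrieve(instruction_vec, screen_vecs, top_k=5):
    """screen_vecs: list of (screen_id, embedding). Returns top_k ids with scores."""
    scored = [(sid, cosine(instruction_vec, vec)) for sid, vec in screen_vecs]
    scored.sort(key=lambda x: x[1], reverse=True)
    return scored[:top_k]

# Toy usage with random vectors standing in for encoder outputs.
rng = np.random.default_rng(0)
screens = [(f"screen_{i}", rng.normal(size=128)) for i in range(10)]
query = rng.normal(size=128)
print(retrieve(query, screens, top_k=3))
```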