2024
TESS: Text-to-Text Self-Conditioned Simplex Diffusion
Rabeeh Karimi Mahabadi | Hamish Ivison | Jaesung Tae | James Henderson | Iz Beltagy | Matthew Peters | Arman Cohan
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Diffusion models have emerged as a powerful paradigm for generation, obtaining strong performance in various continuous domains. However, applying continuous diffusion models to natural language remains challenging due to its discrete nature and the need for a large number of diffusion steps to generate text, making diffusion-based generation expensive. In this work, we propose Text-to-text Self-conditioned Simplex Diffusion (TESS), a text diffusion model that is fully non-autoregressive, employs a new form of self-conditioning, and applies the diffusion process on the logit simplex space rather than the learned embedding space. Through extensive experiments on natural language understanding and generation tasks including summarization, text simplification, paraphrase generation, and question generation, we demonstrate that TESS outperforms state-of-the-art non-autoregressive models, requires fewer diffusion steps with minimal drop in performance, and is competitive with pretrained autoregressive sequence-to-sequence models.
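The central idea described above, running diffusion on the vocabulary logit simplex rather than on learned embeddings, can be illustrated with a small sketch. The scale k, the cosine-style noise schedule, and the toy vocabulary below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a logit-simplex noising step (NumPy only).
# k, the schedule, and the vocabulary size are assumptions for illustration.
import numpy as np

def tokens_to_simplex_logits(token_ids, vocab_size, k=5.0):
    """Map discrete tokens to almost-one-hot logits: +k at the gold token, -k elsewhere."""
    logits = np.full((len(token_ids), vocab_size), -k)
    logits[np.arange(len(token_ids)), token_ids] = k
    return logits

def add_simplex_noise(logits, t, num_steps, rng):
    """Forward diffusion on the logits: interpolate toward Gaussian noise as t grows."""
    alpha = np.cos(0.5 * np.pi * t / num_steps) ** 2     # assumed cosine-style schedule
    noise = rng.normal(size=logits.shape) * np.abs(logits).max()
    return np.sqrt(alpha) * logits + np.sqrt(1.0 - alpha) * noise

def simplex_to_probs(noisy_logits):
    """Project noisy logits back onto the probability simplex with a softmax."""
    z = noisy_logits - noisy_logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
token_ids = np.array([3, 1, 4])                           # toy "sentence" of 3 tokens
logits = tokens_to_simplex_logits(token_ids, vocab_size=8)
noisy = add_simplex_noise(logits, t=50, num_steps=100, rng=rng)
print(simplex_to_probs(noisy).round(3))                   # soft distributions a denoiser would refine
```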
2023
Enhancing Text-to-SQL Capabilities of Large Language Models: A Study on Prompt Design Strategies
Linyong Nan | Yilun Zhao | Weijin Zou | Narutatsu Ri | Jaesung Tae | Ellen Zhang | Arman Cohan | Dragomir Radev
Findings of the Association for Computational Linguistics: EMNLP 2023
In-context learning (ICL) has emerged as a new approach to various natural language processing tasks, utilizing large language models (LLMs) to make predictions based on context that has been supplemented with a few examples or task-specific instructions. In this paper, we aim to extend this method to question answering tasks that utilize structured knowledge sources, and improve Text-to-SQL systems by exploring various prompt design strategies for employing LLMs. We conduct a systematic investigation into different demonstration selection methods and optimal instruction formats for prompting LLMs in the Text-to-SQL task. Our approach involves leveraging the syntactic structure of an example’s SQL query to retrieve demonstrations, and we demonstrate that pursuing both diversity and similarity in demonstration selection leads to enhanced performance. Furthermore, we show that LLMs benefit from database-related knowledge augmentations. Our most effective strategy outperforms the state-of-the-art system by 2.5 points (Execution Accuracy) and the best fine-tuned system by 5.1 points on the Spider dataset. These results highlight the effectiveness of our approach in adapting LLMs to the Text-to-SQL task, and we present an analysis of the factors contributing to the success of our strategy.
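The abstract's idea of balancing similarity and diversity when retrieving demonstrations based on SQL structure can be sketched roughly as follows. The keyword "skeleton" representation, the Jaccard similarity, and the MMR-style trade-off parameter are assumptions for illustration, not the paper's exact selection method.

```python
# Hedged sketch: pick demonstrations that are similar to a draft SQL query
# yet diverse among themselves (MMR-style greedy selection).
import re

SQL_KEYWORDS = {"select", "from", "where", "group", "order", "having",
                "join", "limit", "count", "avg", "max", "min"}

def sql_skeleton(sql: str) -> frozenset:
    """Reduce a SQL query to the set of structural keywords it uses."""
    return frozenset(t for t in re.findall(r"[a-z]+", sql.lower()) if t in SQL_KEYWORDS)

def jaccard(a: frozenset, b: frozenset) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def select_demonstrations(draft_sql, pool, k=3, lam=0.7):
    """Greedy selection: lam weighs similarity to the draft, (1 - lam) penalizes redundancy."""
    target = sql_skeleton(draft_sql)
    chosen, candidates = [], [(q, sql_skeleton(sql)) for q, sql in pool]
    while candidates and len(chosen) < k:
        def score(item):
            sim = jaccard(item[1], target)
            red = max((jaccard(item[1], c[1]) for c in chosen), default=0.0)
            return lam * sim - (1 - lam) * red
        best = max(candidates, key=score)
        chosen.append(best)
        candidates.remove(best)
    return [q for q, _ in chosen]

pool = [
    ("How many singers are there?", "SELECT count(*) FROM singer"),
    ("List stadium names by capacity.", "SELECT name FROM stadium ORDER BY capacity"),
    ("Average age per country?", "SELECT country, avg(age) FROM singer GROUP BY country"),
]
print(select_demonstrations("SELECT count(*) FROM concert GROUP BY year", pool, k=2))
```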
2022
What Language Model to Train if You Have One Million GPU Hours?
Teven Le Scao | Thomas Wang | Daniel Hesslow | Stas Bekman | M Saiful Bari | Stella Biderman | Hady Elsahar | Niklas Muennighoff | Jason Phang | Ofir Press | Colin Raffel | Victor Sanh | Sheng Shen | Lintang Sutawika | Jaesung Tae | Zheng Xin Yong | Julien Launay | Iz Beltagy
Findings of the Association for Computational Linguistics: EMNLP 2022
The crystallization of modeling methods around the Transformer architecture has been a boon for practitioners. Simple, well-motivated architectural variations can transfer across tasks and scale, increasing the impact of modeling research. However, with the emergence of state-of-the-art 100B+ parameter models, large language models are increasingly expensive to accurately design and train. Notably, it can be difficult to evaluate how modeling decisions may impact emergent capabilities, given that these capabilities arise mainly from sheer scale alone. In the process of building BLOOM, the BigScience Large Open-science Open-access Multilingual language model, our goal is to identify an architecture and training setup that makes the best use of our 1,000,000 A100-GPU-hours budget. Specifically, we perform an ablation study at the billion-parameter scale comparing different modeling practices and their impact on zero-shot generalization. In addition, we study the impact of various popular pre-training corpora on zero-shot generalization. We also study the performance of a multilingual model and how it compares to the English-only one. Finally, we consider the scaling behaviour of Transformers to choose the target model size, shape, and training setup. All our models and code are open-sourced at https://huggingface.co/bigscience.
Surfer100: Generating Surveys From Web Resources, Wikipedia-style
Irene Li | Alex Fabbri | Rina Kawamura | Yixin Liu | Xiangru Tang | Jaesung Tae | Chang Shen | Sally Ma | Tomoe Mizutani | Dragomir Radev
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Fast-developing fields such as Artificial Intelligence (AI) often outpace the efforts of encyclopedic sources such as Wikipedia, which either do not completely cover recently-introduced topics or lack such content entirely. As a result, methods for automatically producing content are valuable tools to address this information overload. We show that recent advances in pretrained language modeling can be combined in a two-stage extractive and abstractive approach to Wikipedia lead paragraph generation. We extend this approach to generate longer Wikipedia-style summaries with sections and examine how such methods struggle in this application through detailed studies with 100 reference human-collected surveys. To the best of our knowledge, this is the first study on utilizing web resources for long Wikipedia-style summaries.
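A minimal sketch of the two-stage extract-then-abstract idea described above, assuming the Hugging Face transformers library and a generic BART summarization checkpoint; the term-overlap extractive scorer, the chosen model, and the single-paragraph scope are illustrative assumptions, not the paper's exact setup.

```python
# Stage 1: keep web sentences most relevant to the topic (extractive).
# Stage 2: rewrite the extracted evidence into a lead paragraph (abstractive).
from transformers import pipeline

def extract_relevant(topic: str, sentences: list[str], top_k: int = 2) -> str:
    """Extractive step: keep the sentences sharing the most terms with the topic."""
    topic_terms = set(topic.lower().split())
    scored = sorted(sentences, key=lambda s: -len(topic_terms & set(s.lower().split())))
    return " ".join(scored[:top_k])

def generate_lead(topic: str, web_sentences: list[str]) -> str:
    """Abstractive step: summarize the extracted evidence with a pretrained model."""
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    evidence = extract_relevant(topic, web_sentences)
    return summarizer(evidence, max_length=80, min_length=20)[0]["summary_text"]

sentences = [
    "Transfer learning reuses a model trained on one task to improve another task.",
    "It is widely used in natural language processing with pretrained language models.",
    "The weather in Paris is mild in spring.",
]
print(generate_lead("Transfer learning", sentences))
```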
You reap what you sow: On the Challenges of Bias Evaluation Under Multilingual Settings
Zeerak Talat | Aurélie Névéol | Stella Biderman | Miruna Clinciu | Manan Dey | Shayne Longpre | Sasha Luccioni | Maraim Masoud | Margaret Mitchell | Dragomir Radev | Shanya Sharma | Arjun Subramonian | Jaesung Tae | Samson Tan | Deepak Tunuguntla | Oskar Van Der Wal
Proceedings of BigScience Episode #5 -- Workshop on Challenges & Perspectives in Creating Large Language Models
Evaluating bias, fairness, and social impact in monolingual language models is a difficult task. This challenge is further compounded when language modeling occurs in a multilingual context. Considering the implication of evaluation biases for large multilingual language models, we situate the discussion of bias evaluation within a wider context of social scientific research with computational work. We highlight three dimensions of developing multilingual bias evaluation frameworks: (1) increasing transparency through documentation, (2) expanding targets of bias beyond gender, and (3) addressing cultural differences that exist between languages. We further discuss the power dynamics and consequences of training large language models and recommend that researchers remain cognizant of the ramifications of developing such technologies.