Andrew Wilson

Also published as: Andrew T. Wilson


2024

Calibration-Tuning: Teaching Large Language Models to Know What They Don’t Know
Sanyam Kapoor | Nate Gruver | Manley Roberts | Arka Pal | Samuel Dooley | Micah Goldblum | Andrew Wilson
Proceedings of the 1st Workshop on Uncertainty-Aware NLP (UncertaiNLP 2024)

Large language models are increasingly deployed for high-stakes decision making, for example in financial and medical applications. In such applications, it is imperative that we be able to estimate our confidence in the answers output by a language model in order to assess risks. Although we can easily compute the probability assigned by a language model to the sequence of tokens that make up an answer, we cannot easily compute the probability of the answer itself, which could be phrased in numerous ways. While other works have engineered ways of assigning such probabilities to LLM outputs, a key problem remains: existing language models are poorly calibrated, often confident when they are wrong or unsure when they are correct. In this work, we devise a protocol called *calibration tuning* for finetuning LLMs to output calibrated probabilities. Calibration-tuned models demonstrate superior calibration performance compared to existing language models on a variety of question-answering tasks, including open-ended generation, without affecting accuracy. We further show that this ability transfers to new domains outside of the calibration-tuning training set.
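As a point of reference for what "calibrated" means here, a common summary statistic is the expected calibration error (ECE): group predictions by confidence, then compare each bin's average confidence to its empirical accuracy. Below is a minimal sketch of that metric; the binning scheme, bin count, and function name are illustrative choices, not taken from the paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Equal-width-bin ECE: mean |accuracy - confidence| gap, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight the gap by the bin's share of samples
    return ece

# Example: an overconfident model (high confidence on a wrong answer) raises the ECE.
print(expected_calibration_error([0.9, 0.8, 0.95, 0.6], [1, 0, 1, 1]))
```

A well-calibrated model drives this gap toward zero, which is the property the paper's fine-tuning protocol targets.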

2023

Automated Few-Shot Classification with Instruction-Finetuned Language Models
Rami Aly | Xingjian Shi | Kaixiang Lin | Aston Zhang | Andrew Wilson
Findings of the Association for Computational Linguistics: EMNLP 2023

A particularly successful class of approaches for few-shot learning combines language models with prompts: hand-crafted task descriptions that complement data samples. However, designing prompts by hand for each task commonly requires domain knowledge and substantial guesswork. We observe, in the context of classification tasks, that instruction-finetuned language models are remarkably robust to some dimensions of a prompt’s design. We subsequently propose a simple method to eliminate the need for handcrafted prompts, named AuT-Few. This approach consists of (i) a prompt retrieval module that selects suitable task instructions from the instruction-tuning knowledge base, and (ii) the generation of two distinct, semantically meaningful class descriptions and a selection mechanism via cross-validation. Over 12 datasets, spanning 8 classification tasks, we show that AuT-Few outperforms current state-of-the-art few-shot learning methods. Moreover, AuT-Few is the best-ranking method across datasets on the RAFT few-shot benchmark. Notably, these results are achieved without task-specific handcrafted prompts on unseen tasks.
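The selection step in (ii) can be pictured as ordinary cross-validation over candidate class-description sets. The sketch below is a hypothetical illustration of that idea only: `score_fn` stands in for however the language model turns descriptions plus examples into predictions, and none of the names, defaults, or fold counts come from the paper.

```python
import numpy as np

def select_class_descriptions(candidates, examples, labels, score_fn, n_folds=2, seed=0):
    """Pick the candidate set of class descriptions with the best
    cross-validated accuracy on the few-shot examples.

    score_fn(descriptions, train_x, train_y, test_x) should return predicted
    labels for test_x; how it queries the language model is left abstract here.
    """
    rng = np.random.default_rng(seed)
    examples = np.asarray(examples, dtype=object)
    labels = np.asarray(labels)
    folds = np.array_split(rng.permutation(len(examples)), n_folds)
    best, best_acc = None, -1.0
    for cand in candidates:
        accs = []
        for i in range(n_folds):
            test_idx = folds[i]
            train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != i])
            preds = score_fn(cand, examples[train_idx], labels[train_idx], examples[test_idx])
            accs.append(np.mean(np.asarray(preds) == labels[test_idx]))
        if np.mean(accs) > best_acc:
            best, best_acc = cand, float(np.mean(accs))
    return best, best_acc
```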

2018

Probabilistic FastText for Multi-Sense Word Embeddings
Ben Athiwaratkun | Andrew Wilson | Anima Anandkumar
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We introduce Probabilistic FastText, a new model for word embeddings that can capture multiple word senses, sub-word structure, and uncertainty information. In particular, we represent each word with a Gaussian mixture density, where the mean of a mixture component is given by the sum of n-gram vectors. This representation allows the model to share “strength” across sub-word structures (e.g. Latin roots), producing accurate representations of rare, misspelt, or even unseen words. Moreover, each component of the mixture can capture a different word sense. Probabilistic FastText outperforms both FastText, which has no probabilistic model, and dictionary-level probabilistic embeddings, which do not incorporate subword structures, on several word-similarity benchmarks, including English RareWord and foreign-language datasets. We also achieve state-of-the-art performance on benchmarks that measure the ability to discern different meanings. Thus, our model is the first to achieve the best of both worlds: multi-sense representations with enriched semantics on rare words.
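Concretely, "the mean of a mixture component is given by the sum of n-gram vectors" can be read as: break the word into character n-grams and sum their embeddings, so rare or unseen words still land near words that share subword structure. The sketch below shows that composition under simplifying assumptions (no hashing of n-grams into buckets, a single component per word); the function names and dimensionality are illustrative, not the paper's.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word with boundary markers, FastText-style."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def component_mean(word, ngram_vectors, dim=50):
    """Mean of one mixture component: the sum of the word's n-gram vectors.
    N-grams missing from the (hypothetical) lookup table default to zero."""
    total = np.zeros(dim)
    for g in char_ngrams(word):
        total += ngram_vectors.get(g, np.zeros(dim))
    return total

# "misspelt" and "misspelled" share most n-grams, so their component means stay
# close even if one of the two forms was never seen during training.
```

A full model would keep one such mean per sense, together with a covariance, giving the Gaussian mixture density the abstract describes.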

2017

Multimodal Word Distributions
Ben Athiwaratkun | Andrew Wilson
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Word embeddings provide point representations of words that carry useful semantic information. We introduce multimodal word distributions formed from Gaussian mixtures, which capture multiple word meanings, entailment, and rich uncertainty information. To learn these distributions, we propose an energy-based max-margin objective. We show that the resulting approach captures uniquely expressive semantic information and outperforms alternatives such as word2vec skip-grams and Gaussian embeddings on benchmark datasets for word similarity and entailment.
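For intuition about the energy-based max-margin objective: one standard choice of energy between two Gaussian densities is the log expected likelihood kernel (the log of the integral of their product), and the margin loss pushes observed word–context pairs to higher energy than negatively sampled ones. The sketch below simplifies the paper's Gaussian mixtures down to single diagonal Gaussians; the margin value and function names are illustrative assumptions.

```python
import numpy as np

def log_elk(mu1, var1, mu2, var2):
    """Log expected likelihood kernel between two diagonal Gaussians:
    log of the integral over x of N(x; mu1, var1) * N(x; mu2, var2)."""
    var = var1 + var2
    diff = mu1 - mu2
    return -0.5 * np.sum(np.log(2 * np.pi * var) + diff ** 2 / var)

def max_margin_loss(word, pos_ctx, neg_ctx, margin=1.0):
    """Require the observed (word, context) pair to score at least `margin`
    higher than a negatively sampled pair; each argument is a
    (mean, variance) pair of NumPy arrays."""
    e_pos = log_elk(*word, *pos_ctx)
    e_neg = log_elk(*word, *neg_ctx)
    return max(0.0, margin - e_pos + e_neg)

# Example with 2-dimensional embeddings and unit variances: the distant
# negative context yields lower energy, so the margin is already satisfied.
d = 2
w = (np.zeros(d), np.ones(d))
c_pos = (np.zeros(d), np.ones(d))
c_neg = (np.full(d, 3.0), np.ones(d))
print(max_margin_loss(w, c_pos, c_neg))
```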

2010

Term Weighting Schemes for Latent Dirichlet Allocation
Andrew T. Wilson | Peter A. Chew
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

2006

Measuring MWE Compositionality Using Semantic Annotation
Scott S.L. Piao | Paul Rayson | Olga Mudraya | Andrew Wilson | Roger Garside
Proceedings of the Workshop on Multiword Expressions: Identifying and Exploiting Underlying Properties

2003

Extracting Multiword Expressions with A Semantic Tagger
Scott S. L. Piao | Paul Rayson | Dawn Archer | Andrew Wilson | Tony McEnery
Proceedings of the ACL 2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment