Vaden Masrani
2024
GOLD: Generalized Knowledge Distillation via Out-of-Distribution-Guided Language Data Generation
Mohsen Gholami | Mohammad Akbari | Tianxi Hu | Vaden Masrani | Z. Wang | Yong Zhang
Findings of the Association for Computational Linguistics: NAACL 2024
Knowledge distillation from LLMs is essential for the efficient deployment of language models. Prior works have proposed data generation using LLMs for preparing distilled models. We argue that generating data with LLMs is prone to sampling mainly from the center of the original content distribution. This limitation hinders the distilled model from learning the true underlying data distribution and causes it to forget the tails of the distribution (samples with lower probability). To this end, we propose GOLD, a task-agnostic data generation and knowledge distillation framework, which employs an iterative out-of-distribution-guided feedback mechanism for the LLM. As a result, the generated data improves the generalizability of distilled models. An energy-based OOD evaluation approach is also introduced to deal with noisy generated data. Our extensive experiments on 10 different classification and sequence-to-sequence tasks in NLP show that GOLD outperforms prior art and the LLM with average improvements of 5% and 14%, respectively. We also show that the proposed method is applicable to less explored and novel tasks. Code is available in the Appendix.
2017
Detecting Dementia through Retrospective Analysis of Routine Blog Posts by Bloggers with Dementia
Vaden Masrani | Gabriel Murray | Thalia Field | Giuseppe Carenini
BioNLP 2017
We investigate if writers with dementia can be automatically distinguished from those without by analyzing linguistic markers in written text, in the form of blog posts. We have built a corpus of several thousand blog posts, some by people with dementia and others by people with loved ones with dementia. We use this dataset to train and test several machine learning methods, and achieve prediction performance at a level far above the baseline.
Generating and Evaluating Summaries for Partial Email Threads: Conversational Bayesian Surprise and Silver Standards
Jordon Johnson | Vaden Masrani | Giuseppe Carenini | Raymond Ng
Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue
We define and motivate the problem of summarizing partial email threads. This problem introduces the challenge of generating reference summaries for partial threads when human annotation is only available for the threads as a whole, particularly when the human-selected sentences are not uniformly distributed within the threads. We propose an oracular algorithm for generating these reference summaries with arbitrary length, and we are making the resulting dataset publicly available. In addition, we apply a recent unsupervised method based on Bayesian Surprise that incorporates background knowledge into partial thread summarization, extend it with conversational features, and modify the mechanism by which it handles redundancy. Experiments with our method indicate improved performance over the baseline for shorter partial threads, and our results suggest that the potential benefits of background knowledge to partial thread summarization should be further investigated with larger datasets.