2023
The economic trade-offs of large language models: A case study
Kristen Howell | Gwen Christian | Pavel Fomitchov | Gitit Kehat | Julianne Marzulla | Leanne Rolston | Jadin Tredup | Ilana Zimmerman | Ethan Selfridge | Joseph Bradley
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)
Contacting customer service via chat is a common practice. Because employing customer service agents is expensive, many companies are turning to NLP models that assist human agents by auto-generating responses that can be used directly or with modifications. With their ability to handle large context windows, Large Language Models (LLMs) are a natural fit for this use case. However, their efficacy must be balanced with the cost of training and serving them. This paper assesses the practical cost and impact of LLMs for the enterprise as a function of the usefulness of the responses that they generate. We present a cost framework for evaluating an NLP model’s utility for this use case and apply it to a single brand as a case study in the context of an existing agent assistance product. We compare three strategies for specializing an LLM (prompt engineering, fine-tuning, and knowledge distillation) using feedback from the brand’s customer service agents. We find that the usability of a model’s responses can make up for a large difference in inference cost for our case study brand, and we extrapolate our findings to the broader enterprise space.
2021
Neural Metaphor Detection with Visibility Embeddings
Gitit Kehat | James Pustejovsky
Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics
We present new results for the problem of sequence metaphor labeling, using the recently developed Visibility Embeddings. We show that concatenating such embeddings to the input of a BiLSTM yields consistent and significant improvements at almost no cost, and we present further improved results when visibility embeddings are combined with BERT.
2020
Improving Neural Metaphor Detection with Visual Datasets
Gitit Kehat | James Pustejovsky
Proceedings of the Twelfth Language Resources and Evaluation Conference
We present new results on Metaphor Detection by using text from visual datasets. Using a straightforward technique for sampling text from Vision-Language datasets, we create a data structure we term a visibility word embedding. We then combine these embeddings with contextualized word representations (ELMo) in a relatively simple BiLSTM module, and show improvement over previous state-of-the-art approaches that use more complex neural network architectures and richer linguistic features, for the task of verb classification.
2017
Integrating Vision and Language Datasets to Measure Word Concreteness
Gitit Kehat | James Pustejovsky
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
We present and take advantage of the inherent visualizability properties of words in visual corpora (the textual components of vision-language datasets) to compute concreteness scores for words. Our simple method does not require hand-annotated concreteness score lists for training, and yields state-of-the-art results when evaluated against concreteness score lists and previously derived scores, as well as when used for metaphor detection.
2016
The Development of Multimodal Lexical Resources
James Pustejovsky | Tuan Do | Gitit Kehat | Nikhil Krishnaswamy
Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces (GramLex)
Human communication is a multimodal activity, involving not only speech and written expressions, but intonation, images, gestures, visual clues, and the interpretation of actions through perception. In this paper, we describe the design of a multimodal lexicon that is able to accommodate the diverse modalities that present themselves in NLP applications. We have been developing a multimodal semantic representation, VoxML, that integrates the encoding of semantic, visual, gestural, and action-based features associated with linguistic expressions.