2024
Label-Efficient Model Selection for Text Generation
Shir Ashury Tahan | Ariel Gera | Benjamin Sznajder | Leshem Choshen | Liat Ein-Dor | Eyal Shnarch
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Model selection for a given target task can be costly, as it may entail extensive annotation of the quality of outputs of different models. We introduce DiffUse, an efficient method to make an informed decision between candidate text generation models based on preference annotations. DiffUse reduces the required amount of annotations, thus saving valuable time and resources in performing evaluation. DiffUse intelligently selects instances by clustering embeddings that represent the semantic differences between model outputs. Thus, it is able to identify a subset of examples that are more informative for preference decisions. Our method is model-agnostic, and can be applied to any text generation model for selecting between models, prompts and configurations. Moreover, we propose a practical iterative approach for dynamically determining how many instances to annotate. In a series of experiments over hundreds of model pairs, we demonstrate that DiffUse can dramatically reduce the required number of annotations, by up to 75%, while maintaining high evaluation reliability.
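To make the selection step concrete, here is a minimal sketch of the core idea from the abstract: embed both models' outputs, cluster the difference vectors, and annotate one representative per cluster. This is an illustration, not the authors' implementation; the encoder choice, the use of k-means, and all names are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sentence_transformers import SentenceTransformer

def select_for_annotation(outputs_a, outputs_b, n_annotations=10):
    """Pick instances whose output differences best represent the data."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice
    emb_a = encoder.encode(outputs_a)                  # (n, d) array
    emb_b = encoder.encode(outputs_b)                  # (n, d) array
    diffs = emb_a - emb_b                              # semantic difference vectors
    km = KMeans(n_clusters=n_annotations, n_init=10).fit(diffs)
    selected = []
    for c in range(n_annotations):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(diffs[members] - km.cluster_centers_[c], axis=1)
        selected.append(int(members[np.argmin(dists)]))  # closest to centroid
    return selected  # indices to send for human preference annotation
```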
2023
The Benefits of Bad Advice: Autocontrastive Decoding across Model Layers
Ariel Gera | Roni Friedman | Ofir Arviv | Chulaka Gunasekara | Benjamin Sznajder | Noam Slonim | Eyal Shnarch
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Applying language models to natural language processing tasks typically relies on the representations in the final model layer, as intermediate hidden layer representations are presumed to be less informative. In this work, we argue that due to the gradual improvement across model layers, additional information can be gleaned from the contrast between higher and lower layers during inference. Specifically, in choosing between the probable next token predictions of a generative model, the predictions of lower layers can be used to highlight which candidates are best avoided. We propose a novel approach that utilizes the contrast between layers to improve text generation outputs, and show that it mitigates degenerative behaviors of the model in open-ended generation, significantly improving the quality of generated texts. Furthermore, our results indicate that contrasting between model layers at inference time can yield substantial benefits to certain aspects of general language model capabilities, more effectively extracting knowledge during inference from a given set of model parameters.
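A hedged sketch of layer-contrastive scoring in the spirit of the abstract (the paper's exact formulation may differ): restrict attention to the final layer's top next-token candidates, then favor tokens whose probability grows the most between a lower layer and the final layer. The function name and the top-k restriction are illustrative assumptions.

```python
import torch

def contrastive_next_token_scores(final_logits, lower_logits, top_k=10):
    """final_logits, lower_logits: (vocab_size,) logits from two model layers."""
    log_p_final = torch.log_softmax(final_logits, dim=-1)
    log_p_lower = torch.log_softmax(lower_logits, dim=-1)
    contrast = log_p_final - log_p_lower  # reward tokens the model "learned late"
    # Only consider candidates the final layer already finds plausible,
    # so the lower layer serves purely as a negative signal.
    scores = torch.full_like(contrast, float("-inf"))
    candidates = torch.topk(log_p_final, top_k).indices
    scores[candidates] = contrast[candidates]
    return scores  # argmax or sample over these to choose the next token
```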
2021
Summary Grounded Conversation Generation
Chulaka Gunasekara | Guy Feigenblat | Benjamin Sznajder | Sachindra Joshi | David Konopnicki
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
TWEETSUMM - A Dialog Summarization Dataset for Customer Service
Guy Feigenblat | Chulaka Gunasekara | Benjamin Sznajder | Sachindra Joshi | David Konopnicki | Ranit Aharonov
Findings of the Association for Computational Linguistics: EMNLP 2021
In a typical customer service chat scenario, customers contact a support center to ask for help or raise complaints, and human agents try to solve the issues. In most cases, at the end of the conversation, agents are asked to write a short summary emphasizing the problem and the proposed solution, usually for the benefit of other agents that may have to deal with the same customer or issue. The goal of the present article is to advance the automation of this task. We introduce the first large-scale, high-quality customer care dialog summarization dataset, with close to 6500 human-annotated summaries. The data is based on real-world customer support dialogs and includes both extractive and abstractive summaries. We also introduce a new unsupervised, extractive summarization method specific to dialogs.
Using Question Answering Rewards to Improve Abstractive Summarization
Chulaka Gunasekara | Guy Feigenblat | Benjamin Sznajder | Ranit Aharonov | Sachindra Joshi
Findings of the Association for Computational Linguistics: EMNLP 2021
Neural abstractive summarization models have improved drastically in recent years. However, the summaries generated by these models generally suffer from issues such as failing to capture the critical facts in source documents, and containing facts that are inconsistent with the source documents. In this work, we present a general framework to train abstractive summarization models to alleviate such issues. We first train a sequence-to-sequence model to summarize documents, and then further train this model in a Reinforcement Learning setting with question-answering based rewards. We evaluate the summaries generated by this framework using multiple automatic measures and human judgements. The experimental results show that question-answering rewards can be used as a general framework to improve neural abstractive summarization. In particular, the results from human evaluations show that the summaries generated by our approach are preferred more than 30% of the time over the summaries generated by general abstractive summarization models.
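A minimal sketch of what a question-answering based reward could look like, assuming hypothetical `generate_questions` and `answer` models; this illustrates the reward signal described in the abstract rather than the authors' exact setup.

```python
def qa_reward(source_doc, summary, generate_questions, answer):
    """Reward a summary by how many source-grounded questions it answers."""
    qa_pairs = generate_questions(source_doc)  # -> [(question, gold_answer), ...]
    if not qa_pairs:
        return 0.0
    correct = sum(answer(q, context=summary) == gold for q, gold in qa_pairs)
    return correct / len(qa_pairs)  # used as the reward in the RL stage
```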
2019
Financial Event Extraction Using Wikipedia-Based Weak Supervision
Liat Ein-Dor | Ariel Gera | Orith Toledo-Ronen | Alon Halfon | Benjamin Sznajder | Lena Dankin | Yonatan Bilu | Yoav Katz | Noam Slonim
Proceedings of the Second Workshop on Economics and Natural Language Processing
Extraction of financial and economic events from text has previously been done mostly using rule-based methods, with more recent works employing machine learning techniques. This work follows the latter approach, leveraging relevant Wikipedia sections to extract weak labels for sentences describing economic events. Whereas previous weakly supervised approaches required a knowledge-base of such events, or corresponding financial figures, our approach requires no such additional data, and can be employed to extract economic events related to companies that are not even mentioned in the training data.
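As an illustration of the weak-labeling idea (a sketch under assumed section headings, not the paper's actual Wikipedia sections): sentences under event-like headings of a company's article become weak positives, and sentences under descriptive headings become weak negatives.

```python
# Toy heading lists -- assumptions for illustration only.
EVENT_SECTIONS = {"History", "Acquisitions", "Bankruptcy", "Legal issues"}
NEUTRAL_SECTIONS = {"Products", "Operations", "See also"}

def weak_label(section_title, sentence):
    """Assign a weak event/non-event label based on the enclosing section."""
    if section_title in EVENT_SECTIONS:
        return (sentence, 1)  # weak positive: likely describes an event
    if section_title in NEUTRAL_SECTIONS:
        return (sentence, 0)  # weak negative
    return None               # ambiguous section: skip
```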
Argument Invention from First Principles
Yonatan Bilu | Ariel Gera | Daniel Hershcovich | Benjamin Sznajder | Dan Lahav | Guy Moshkowich | Anael Malet | Assaf Gavron | Noam Slonim
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Competitive debaters often find themselves facing a challenging task: how can they debate a topic they know very little about, with only minutes to prepare and without access to books or the Internet? What they often do is rely on "first principles", commonplace arguments which are relevant to many topics and which they have refined in past debates. In this work we aim to explicitly define a taxonomy of such principled recurring arguments, and, given a controversial topic, to automatically identify which of these arguments are relevant to the topic. As far as we know, this is the first time that this approach to argument invention is formalized and made explicit in the context of NLP. The main goal of this work is to show that it is possible to define such a taxonomy. While the taxonomy suggested here should be thought of as a "first attempt", it is nonetheless coherent, covers the relevant topics well, coincides with what professional debaters actually argue in their speeches, and facilitates automatic argument invention for new topics.
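The relevance-identification step could look roughly like the following sketch, which matches a topic against a toy taxonomy via embedding similarity; the taxonomy entries, model name, and matching method are illustrative assumptions rather than the paper's approach.

```python
from sentence_transformers import SentenceTransformer, util

# Toy taxonomy entries -- invented placeholders, not the paper's taxonomy.
PRINCIPLED_ARGUMENTS = [
    "Bans push the activity to an unregulated black market.",
    "Individual freedom should only be limited to prevent harm to others.",
    "Subsidies distort markets and create dependence.",
]

def relevant_arguments(topic, top_k=2):
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    topic_emb = encoder.encode(topic, convert_to_tensor=True)
    arg_embs = encoder.encode(PRINCIPLED_ARGUMENTS, convert_to_tensor=True)
    sims = util.cos_sim(topic_emb, arg_embs)[0]  # cosine similarities
    best = sims.topk(min(top_k, len(PRINCIPLED_ARGUMENTS))).indices
    return [PRINCIPLED_ARGUMENTS[i] for i in best]

# e.g. relevant_arguments("We should legalize cannabis") might surface the
# black-market and individual-freedom arguments.
```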
2018
Learning Concept Abstractness Using Weak Supervision
Ella Rabinovich | Benjamin Sznajder | Artem Spector | Ilya Shnayderman | Ranit Aharonov | David Konopnicki | Noam Slonim
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
We introduce a weakly supervised approach for inferring the property of abstractness of words and expressions in the complete absence of labeled data. Exploiting only minimal linguistic clues and the contextual usage of a concept as manifested in textual data, we train sufficiently powerful classifiers, obtaining high correlation with human labels. The results imply the applicability of this approach to additional properties of concepts, additional languages, and resource-scarce scenarios.
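One way to realize such a recipe, sketched under assumed linguistic clues (the suffix list and seed words below are toy examples, not the paper's), is to derive noisy seed labels and train a standard classifier on word embeddings.

```python
from sklearn.linear_model import LogisticRegression

ABSTRACT_SUFFIXES = ("ness", "ity", "ism", "tion")    # toy clue list
CONCRETE_SEEDS = {"table", "river", "dog", "hammer"}  # toy seed words

def seed_label(word):
    """Noisy heuristic label: 1 = abstract, 0 = concrete, None = unknown."""
    if word.endswith(ABSTRACT_SUFFIXES):
        return 1
    if word in CONCRETE_SEEDS:
        return 0
    return None

def train_abstractness_classifier(vocabulary, embed):
    """`embed(word) -> vector` is an assumed word-embedding lookup."""
    labeled = [(embed(w), seed_label(w)) for w in vocabulary
               if seed_label(w) is not None]
    X, y = zip(*labeled)
    return LogisticRegression(max_iter=1000).fit(list(X), list(y))
```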
2017
Unsupervised corpus-wide claim detection
Ran Levy | Shai Gretz | Benjamin Sznajder | Shay Hummel | Ranit Aharonov | Noam Slonim
Proceedings of the 4th Workshop on Argument Mining
Automatic claim detection is a fundamental argument mining task that aims to automatically mine claims regarding a topic of consideration. Previous works on mining argumentative content have assumed that a set of relevant documents is given in advance. Here, we present the first corpus-wide claim detection framework, which can be directly applied to massive corpora. Using simple and intuitive empirical observations, we derive a claim sentence query by which we are able to directly retrieve sentences in which the prior probability of including topic-relevant claims is greatly enhanced. Next, we employ simple heuristics to rank the sentences, leading to an unsupervised corpus-wide claim detection system, with precision that outperforms previously reported results on the task of claim detection given relevant documents and labeled data.
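As a rough illustration of a claim sentence query (a sketch consistent with the abstract, not necessarily the exact query derived in the paper): retrieve sentences in which the token "that" directly precedes a mention of the topic's main concept, a pattern in which topic-relevant claims are more likely to appear.

```python
import re

def claim_query_hits(sentences, main_concept):
    """Keep sentences where 'that' immediately precedes the topic concept."""
    pattern = re.compile(r"\bthat\s+" + re.escape(main_concept), re.IGNORECASE)
    return [s for s in sentences if pattern.search(s)]

# e.g. claim_query_hits(corpus, "affirmative action") retains sentences like
# "... critics argue that affirmative action is discriminatory ...".
```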