Peter Izsak


2022

Transformer Language Models without Positional Encodings Still Learn Positional Information
Adi Haviv | Ori Ram | Ofir Press | Peter Izsak | Omer Levy
Findings of the Association for Computational Linguistics: EMNLP 2022

Causal transformer language models (LMs), such as GPT-3, typically require some form of positional encoding, such as positional embeddings. However, we show that LMs without any explicit positional encoding are still competitive with standard models and that this phenomenon is robust across different datasets, model sizes, and sequence lengths. Probing experiments reveal that such models acquire an implicit notion of absolute positions throughout the network, effectively compensating for the missing information. We conjecture that causal attention enables the model to infer the number of predecessors that each token can attend to, thereby approximating its absolute position. Our findings indicate that causal LMs might derive positional awareness not only from the explicit positioning mechanism but also from the effects of the causal mask.
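As a rough illustration of the conjectured mechanism (my sketch, not the paper's code), the toy example below runs a single causal self-attention layer over random token embeddings with no positional encoding. Because position i aggregates an attention-weighted average over only its i+1 predecessors, simple statistics of the hidden state (here, its norm) already vary systematically with absolute position.

```python
# Toy sketch: causal self-attention with NO positional embeddings.
# The per-position hidden-state norm shrinks as the attended prefix grows,
# so absolute position leaks into the representations via the causal mask.
import torch

torch.manual_seed(0)
dim, seq_len, batch = 32, 16, 512

Wq, Wk, Wv = (torch.nn.Linear(dim, dim) for _ in range(3))
x = torch.randn(batch, seq_len, dim)          # random "token embeddings", no position added

q, k, v = Wq(x), Wk(x), Wv(x)
scores = q @ k.transpose(-1, -2) / dim ** 0.5
mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
h = torch.softmax(scores.masked_fill(mask, float("-inf")), dim=-1) @ v

# Averaged over the batch, the norm decreases with position: a linear probe
# on h could therefore recover each token's absolute index.
print(h.norm(dim=-1).mean(dim=0))
```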

2021

How to Train BERT with an Academic Budget
Peter Izsak | Moshe Berchansky | Omer Levy
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

While large language models à la BERT are used ubiquitously in NLP, pretraining them is considered a luxury that only a few well-funded industry labs can afford. How can one train such models with a more modest budget? We present a recipe for pretraining a masked language model in 24 hours using a single low-end deep learning server. We demonstrate that through a combination of software optimizations, design choices, and hyperparameter tuning, it is possible to produce models that are competitive with BERT-base on GLUE tasks at a fraction of the original pretraining cost.
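For readers unfamiliar with the objective being pretrained, the snippet below sketches the masked language modeling loss in PyTorch. It is an illustration only; the paper's actual software optimizations, hyperparameters, and training setup are those described in the publication.

```python
# Minimal masked language modeling (MLM) objective sketch.
import torch
import torch.nn.functional as F

def mask_tokens(input_ids, mask_token_id, mlm_prob=0.15):
    """Randomly select ~15% of tokens; replace them with [MASK] and keep the
    originals as prediction targets (non-selected positions are ignored)."""
    labels = input_ids.clone()
    selected = torch.bernoulli(torch.full(labels.shape, mlm_prob)).bool()
    labels[~selected] = -100                   # ignored by cross-entropy
    masked_inputs = input_ids.clone()
    masked_inputs[selected] = mask_token_id
    return masked_inputs, labels

# toy usage with random data and a stand-in for the model's output logits
vocab_size, mask_token_id = 30522, 103
batch = torch.randint(0, vocab_size, (4, 128))
inputs, labels = mask_tokens(batch, mask_token_id)
logits = torch.randn(4, 128, vocab_size, requires_grad=True)   # stand-in for model(inputs)
loss = F.cross_entropy(logits.view(-1, vocab_size), labels.view(-1), ignore_index=-100)
loss.backward()
```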

2020

Exploring the Boundaries of Low-Resource BERT Distillation
Moshe Wasserblat | Oren Pereg | Peter Izsak
Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing

In recent years, large pre-trained models have demonstrated state-of-the-art performance in many NLP tasks. However, the deployment of these models on devices with limited resources is challenging due to the models’ large computational consumption and memory requirements. Moreover, the need for a considerable amount of labeled training data also hinders real-world deployment scenarios. Model distillation has shown promising results for reducing model size and computational load and for improving data efficiency. In this paper we test the boundaries of BERT model distillation in terms of model compression, inference efficiency and data scarcity. We show that classification tasks that require capturing general lexical semantics can be successfully distilled by very simple and efficient models and require relatively small amounts of labeled training data. We also show that the distillation of large pre-trained models is more effective in real-life scenarios where limited amounts of labeled training data are available.
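The sketch below shows a standard soft-label distillation objective (temperature-scaled KL divergence to the teacher blended with hard-label cross-entropy) as a concrete example of the general technique; the paper's specific student models and losses may differ.

```python
# Hinton-style knowledge distillation loss (illustrative sketch).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend KL divergence to the temperature-softened teacher distribution
    with the usual cross-entropy on the hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# toy usage: 8 examples, 3 classes
student = torch.randn(8, 3, requires_grad=True)
teacher = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
distillation_loss(student, teacher, labels).backward()
```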

2018

SetExpander: End-to-end Term Set Expansion Based on Multi-Context Term Embeddings
Jonathan Mamou | Oren Pereg | Moshe Wasserblat | Ido Dagan | Yoav Goldberg | Alon Eirew | Yael Green | Shira Guskin | Peter Izsak | Daniel Korat
Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations

We present SetExpander, a corpus-based system for expanding a seed set of terms into a more complete set of terms that belong to the same semantic class. SetExpander implements an iterative end-to-end workflow for term set expansion. It enables users to easily select a seed set of terms, expand it, view the expanded set, validate it, re-expand the validated set and store it, thus simplifying the extraction of domain-specific fine-grained semantic classes. SetExpander has been used for solving real-life use cases including integration in an automated recruitment system and an issues and defects resolution system. A video demo of SetExpander is available at https://drive.google.com/open?id=1e545bB87Autsch36DjnJHmq3HWfSd1Rv .
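As a simplified illustration of embedding-based term set expansion (not the SetExpander implementation, which combines multiple context-type term embeddings and the iterative validate/re-expand workflow described above), the snippet below scores candidate terms by cosine similarity to the centroid of the seed terms:

```python
# Simplified term set expansion: nearest neighbors of the seed-set centroid.
import numpy as np

def expand(seed_terms, embeddings, k=5):
    """embeddings: dict mapping term -> unit-normalized vector."""
    centroid = np.mean([embeddings[t] for t in seed_terms], axis=0)
    centroid /= np.linalg.norm(centroid)
    scored = [
        (term, float(vec @ centroid))
        for term, vec in embeddings.items()
        if term not in seed_terms
    ]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:k]

# toy usage with random unit vectors standing in for learned term embeddings
rng = np.random.default_rng(0)
vocab = ["paris", "london", "rome", "apple", "bicycle", "berlin", "madrid"]
emb = {}
for t in vocab:
    v = rng.normal(size=50)
    emb[t] = v / np.linalg.norm(v)
print(expand(["paris", "london"], emb, k=3))
```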

Term Set Expansion based NLP Architect by Intel AI Lab
Jonathan Mamou | Oren Pereg | Moshe Wasserblat | Alon Eirew | Yael Green | Shira Guskin | Peter Izsak | Daniel Korat
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations

We present SetExpander, a corpus-based system for expanding a seed set of terms into a more complete set of terms that belong to the same semantic class. SetExpander implements an iterative end-to-end workflow. It enables users to easily select a seed set of terms, expand it, view the expanded set, validate it, re-expand the validated set and store it, thus simplifying the extraction of domain-specific fine-grained semantic classes. SetExpander has been used successfully in real-life use cases including integration into an automated recruitment system and an issues and defects resolution system.