Proceedings of the First Workshop on Ever Evolving NLP (EvoNLP)
Francesco Barbieri | Jose Camacho-Collados | Bhuwan Dhingra | Luis Espinosa-Anke | Elena Gribovskaya | Angeliki Lazaridou | Daniel Loureiro | Leonardo Neves
MLLabs-LIG at TempoWiC 2022: A Generative Approach for Examining Temporal Meaning Shift
Chenyang Lyu | Yongxin Zhou | Tianbo Ji
In this paper, we present our system for the EvoNLP 2022 shared task on Temporal Meaning Shift (TempoWiC). Departing from the typically used discriminative models, we propose a generative approach based on pre-trained generation models. The basic architecture of our system is a seq2seq model whose input sequence consists of two documents followed by a question asking whether the meaning of the target word changed, and whose target output sequence is a declarative sentence stating whether the meaning of the target word changed. The experimental results on the TempoWiC test set show that our best system (with time information) obtained an accuracy of 68.09% and a macro-F1 score of 62.59%, ranking 12th among all submitted systems. The results show the plausibility of using generative models for WiC tasks, while also indicating that there is still room for further improvement.
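To make the formulation concrete, here is a minimal sketch of the generative seq2seq setup, assuming a T5-style model from Hugging Face Transformers; the model name, prompt wording, and target phrasing are illustrative, not the authors' exact configuration.

```python
# Sketch of the generative WiC formulation: two documents plus a question
# as input, a declarative answer sentence as the training target.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")  # illustrative backbone
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

def build_example(doc1: str, doc2: str, target: str, changed: bool):
    # Input: both documents followed by a question about the target word.
    source = (
        f"document 1: {doc1} document 2: {doc2} "
        f"question: did the meaning of '{target}' change?"
    )
    # Target: a declarative sentence stating whether the meaning changed.
    answer = (
        f"the meaning of '{target}' changed"
        if changed
        else f"the meaning of '{target}' did not change"
    )
    return source, answer

source, answer = build_example(
    "I love this new phone.", "That party was fire.", "fire", True
)
inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(answer, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # standard seq2seq training loss
```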
pdf
bib
abs
Using Deep Mixture-of-Experts to Detect Word Meaning Shift for TempoWiC
Ze Chen | Kangxu Wang | Zijian Cai | Jiewen Zheng | Jiarong He | Max Gao | Jason Zhang
This paper describes the dma submission to the TempoWiC task, which achieves a macro-F1 score of 77.05% and attains first place in this task. We first explore the impact of different pre-trained language models. Then we adopt data cleaning, data augmentation, and adversarial training strategies to enhance the model's generalization and robustness. For further improvement, we integrate POS information and word semantic representations using a Mixture-of-Experts (MoE) approach. The experimental results show that MoE can overcome the feature-overuse issue and combine the context, POS, and word semantic features well. Additionally, we use a model ensemble method for the final prediction, which has been proven effective in prior work.
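A minimal sketch of how a Mixture-of-Experts layer could combine context, POS, and word-semantic feature vectors as described; the dimensions, one-expert-per-view layout, and gating design are assumptions for illustration, not the dma team's implementation.

```python
# Sketch of an MoE layer fusing three feature views with a learned gate,
# so no single view can dominate (the "feature overuse" issue).
import torch
import torch.nn as nn

class FeatureMoE(nn.Module):
    def __init__(self, dim: int = 768, n_experts: int = 3):
        super().__init__()
        # One expert per feature view: context, POS, word semantics.
        self.experts = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(n_experts)]
        )
        # The gate scores each view from the concatenated features.
        self.gate = nn.Linear(dim * n_experts, n_experts)

    def forward(self, context, pos, word):
        views = [context, pos, word]
        weights = torch.softmax(self.gate(torch.cat(views, dim=-1)), dim=-1)
        expert_out = torch.stack(
            [expert(v) for expert, v in zip(self.experts, views)], dim=1
        )  # (batch, n_experts, dim)
        # Gated weighted sum over the expert outputs.
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)

moe = FeatureMoE()
fused = moe(torch.randn(4, 768), torch.randn(4, 768), torch.randn(4, 768))
```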
Using Two Losses and Two Datasets Simultaneously to Improve TempoWiC Accuracy
Mohammad Javad Pirhadi | Motahhare Mirzaei | Sauleh Eetemadi
WSD (word sense disambiguation) is the task of identifying which sense of a word is meant in a sentence or other segment of text. Researchers have worked on this task for years (e.g., Pustejovsky, 2002), but it is still challenging even for SOTA (state-of-the-art) LMs (language models). The TempoWiC dataset, introduced by Loureiro et al. (2022b), focuses on the fact that word meanings change over time. Their best baseline achieves 70.33% macro-F1. In this work, we use two different losses simultaneously. We also improve our model by additionally training on another, similar dataset, helping it generalize better. Our best configuration beats their best baseline by 4.23%.
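A minimal sketch of optimizing two losses over two datasets at once; the choice of cross-entropy for both terms and the weighting scheme are assumptions, since the abstract does not specify them.

```python
# Sketch: a weighted sum of a loss on the TempoWiC batch and a loss on a
# batch from a second, similar WiC-style dataset, optimized jointly.
import torch
import torch.nn.functional as F

def combined_loss(logits_tempowic, labels_tempowic,
                  logits_aux, labels_aux, alpha: float = 0.5):
    # Main task loss on TempoWiC examples.
    main = F.cross_entropy(logits_tempowic, labels_tempowic)
    # Auxiliary loss on the second dataset, to improve generalization.
    aux = F.cross_entropy(logits_aux, labels_aux)
    return alpha * main + (1 - alpha) * aux

loss = combined_loss(torch.randn(8, 2), torch.randint(0, 2, (8,)),
                     torch.randn(8, 2), torch.randint(0, 2, (8,)))
```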
Class Incremental Learning for Intent Classification with Limited or No Old Data
Debjit Paul | Daniil Sorokin | Judith Gaspers
In this paper, we explore class-incremental learning for intent classification (IC) in a setting with limited old data available. IC is the task of mapping user utterances to their corresponding intents. Even though class-incremental learning without storing the old data offers high potential for reducing human and computational resources in industry NLP model releases, to the best of our knowledge it has not been studied for NLP classification tasks in the literature before. In this work, we compare several contemporary class-incremental learning methods, i.e., BERT warm start, L2, Elastic Weight Consolidation, RecAdam, and Knowledge Distillation, within two realistic class-incremental learning scenarios: one where only the previous model is assumed to be available, with no data corresponding to old classes, and one in which limited unlabeled data for old classes is assumed to be available. Our results indicate that, among the investigated continual learning methods, Knowledge Distillation worked best for our class-incremental learning tasks, and adding limited unlabeled data helps the model in both adaptability and stability.
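A minimal sketch of a knowledge-distillation objective for class-incremental intent classification, in which the new model matches the previous model's distribution over old intents while learning the new ones; the temperature and loss weighting are illustrative assumptions.

```python
# Sketch: cross-entropy over all intents plus a KL distillation term that
# keeps the student close to the old model on the old classes.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels,
            n_old_classes: int, T: float = 2.0, lam: float = 0.5):
    # Standard cross-entropy over all (old + new) intents.
    ce = F.cross_entropy(student_logits, labels)
    # Distillation: KL between temperature-softened distributions,
    # restricted to the old classes the previous model knows about.
    p_teacher = F.softmax(teacher_logits[:, :n_old_classes] / T, dim=-1)
    log_p_student = F.log_softmax(student_logits[:, :n_old_classes] / T, dim=-1)
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
    return ce + lam * kd

# 10 old intents, 2 new ones; the teacher only scores the old 10.
loss = kd_loss(torch.randn(8, 12), torch.randn(8, 10),
               torch.randint(0, 12, (8,)), n_old_classes=10)
```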
CC-Top: Constrained Clustering for Dynamic Topic Discovery
Jann Goschenhofer | Pranav Ragupathy | Christian Heumann | Bernd Bischl | Matthias Aßenmacher
Research on multi-class text classification of short texts mainly focuses on supervised (transfer) learning approaches, which require a finite set of pre-defined classes that is constant over time. This work explores deep constrained clustering (CC) as an alternative to supervised learning approaches in a setting with a dynamically changing number of classes, a task we introduce as dynamic topic discovery (DTD). We do so by using pairwise similarity constraints instead of instance-level class labels, which allow for a flexible number of classes while exhibiting competitive performance compared to supervised approaches. First, we substantiate this through a series of experiments and show that CC algorithms exhibit predictive performance similar to state-of-the-art supervised learning algorithms while requiring less annotation effort. Second, we demonstrate the overclustering capabilities of deep CC for detecting topics in short-text datasets in the absence of the ground-truth class cardinality during model training. Third, we showcase that these capabilities can be leveraged for the DTD setting as a step towards dynamic learning over time, and finally, we release our codebase to nurture further research in this area.
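A minimal sketch of learning from pairwise similarity constraints instead of instance-level class labels; the "same cluster" probability under independence and the binary cross-entropy objective are one common formulation of deep constrained clustering, assumed here for illustration.

```python
# Sketch: supervise soft cluster assignments with must-link / cannot-link
# pair annotations rather than class labels.
import torch
import torch.nn.functional as F

def pairwise_constraint_loss(probs_a, probs_b, must_link):
    # probs_*: soft cluster assignments (batch, n_clusters) for two texts.
    # must_link: 1.0 if the pair shares a topic, 0.0 otherwise.
    p_same = (probs_a * probs_b).sum(dim=-1).clamp(1e-7, 1 - 1e-7)
    # BCE between the predicted "same cluster" probability and the
    # annotated pairwise constraint.
    return F.binary_cross_entropy(p_same, must_link)

# Choosing n_clusters larger than the expected topic count permits the
# overclustering behavior described above.
probs_a = torch.softmax(torch.randn(16, 20), dim=-1)
probs_b = torch.softmax(torch.randn(16, 20), dim=-1)
loss = pairwise_constraint_loss(
    probs_a, probs_b, torch.randint(0, 2, (16,)).float()
)
```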
HSE at TempoWiC: Detecting Meaning Shift in Social Media with Diachronic Language Models
Elizaveta Tukhtina | Kseniia Kashleva | Svetlana Vydrina
This paper describes our methods for temporal meaning shift detection, implemented during the TempoWiC shared task. We present two systems: with and without time span data usage. Our approaches are based on language models fine-tuned for the Twitter domain. Both systems outperformed all the competition's baselines except TimeLMs-SIM. Our best submission achieved a macro-F1 score of 70.09% and took 7th place. This result was achieved by using diachronic language models from the TimeLMs project.
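A minimal sketch of using a diachronic TimeLMs checkpoint to compare contextual embeddings of a target word in two tweets; the checkpoint name is a published TimeLMs model on the Hugging Face hub, but the embed-and-compare recipe is an illustrative assumption, not the authors' exact system.

```python
# Sketch: embed the target word with a Twitter RoBERTa checkpoint trained
# on a given time period, then score meaning shift by cosine distance.
import torch
from transformers import AutoModel, AutoTokenizer

name = "cardiffnlp/twitter-roberta-base-2021-124m"  # diachronic TimeLMs model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def target_embedding(tweet: str, target: str) -> torch.Tensor:
    inputs = tokenizer(tweet, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    # The leading space matters for RoBERTa-style BPE vocabularies.
    target_ids = torch.tensor(
        tokenizer(" " + target, add_special_tokens=False).input_ids
    )
    mask = torch.isin(inputs.input_ids[0], target_ids)
    # Average the hidden states of the target word's subword tokens.
    return hidden[mask].mean(dim=0)

e1 = target_embedding("this mixtape is fire", "fire")
e2 = target_embedding("the fire spread quickly", "fire")
shift_score = 1 - torch.cosine_similarity(e1, e2, dim=0)  # higher = more shift
```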
Leveraging time-dependent lexical features for offensive language detection
Barbara McGillivray | Malithi Alahapperuma | Jonathan Cook | Chiara Di Bonaventura | Albert Meroño-Peñuela | Gareth Tyson | Steven Wilson
We present a study on the integration of time-sensitive information into lexicon-based offensive language detection systems. Our focus is on Offenseval sub-task A, aimed at detecting offensive tweets. We apply a semantic change detection algorithm over a short time span of two years to detect words whose semantics have changed, and we focus particularly on those words that acquired or lost an offensive meaning between 2019 and 2020. Using the output of this semantic change detection approach, we train an SVM classifier on the Offenseval 2019 training set. We build on the already competitive SINAI system submitted to Offenseval 2019 by adding new lexical features, including ones that capture the change in usage of words and their association with emerging offensive usages. We discuss the challenges, opportunities, and limitations of integrating semantic change detection into offensive language detection models. Our work draws attention to an often neglected aspect of offensive language, namely that the meanings of words are constantly evolving and that NLP systems that account for this change can achieve good performance even when not trained on the most recent training data.
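A minimal sketch of adding a time-dependent lexical feature to an SVM tweet classifier; the flagged-word set, the `change_features` helper, and the pipeline layout are hypothetical illustrations of the idea, not the SINAI system.

```python
# Sketch: standard n-gram features plus one handcrafted feature counting
# words flagged by a semantic change detection step.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import LinearSVC

# Hypothetical output of semantic change detection: words whose usage
# acquired an offensive meaning between 2019 and 2020.
emerging_offensive = {"karen", "simp"}

def change_features(texts):
    # One feature per tweet: count of semantically shifted words.
    return np.array(
        [[sum(w in emerging_offensive for w in t.lower().split())]
         for t in texts]
    )

clf = Pipeline([
    ("features", FeatureUnion([
        ("ngrams", TfidfVectorizer(ngram_range=(1, 2))),
        ("shift", FunctionTransformer(change_features)),
    ])),
    ("svm", LinearSVC()),
])
clf.fit(["you are such a karen", "have a nice day"], [1, 0])
```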
Temporal Word Meaning Disambiguation using TimeLMs
Mihir Godbole | Parth Dandavate | Aditya Kane
The meanings of words constantly change, given the events in modern civilization. Large language models use word embeddings, which are often static and thus cannot cope with this semantic change. It is therefore important to resolve ambiguity in word meanings. This paper is an effort in this direction, in which we explore methods for word sense disambiguation for the EvoNLP shared task. We conduct rigorous ablations for two solutions to this problem. We find that an approach using time-aware language models helps this task. Furthermore, we explore possible future directions for this problem.
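A minimal sketch of a time-aware cross-encoder for this kind of task: encode a tweet pair with a TimeLMs checkpoint and classify whether the target word's meaning differs. The pairing scheme, the appended time hints, and the freshly initialized classification head are assumptions for illustration; the head would need fine-tuning on the shared-task data before its predictions mean anything.

```python
# Sketch: pair-encode two tweets with a time-aware Twitter model and
# classify "meaning changed" vs "meaning unchanged".
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "cardiffnlp/twitter-roberta-base-2021-124m"  # TimeLMs checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

tweet1 = "that drop was fire <2020-03>"   # time hint appended as plain text
tweet2 = "wildfire season started early <2020-08>"
inputs = tokenizer(tweet1, tweet2, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()  # 1 = meaning changed (after fine-tuning)
```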