2024
(Chat)GPT v BERT Dawn of Justice for Semantic Change Detection
Francesco Periti | Haim Dubossarsky | Nina Tahmasebi
Findings of the Association for Computational Linguistics: EACL 2024
In the universe of Natural Language Processing, Transformer-based language models like BERT and (Chat)GPT have emerged as lexical superheroes with great power to solve open research problems. In this paper, we specifically focus on the temporal problem of semantic change, and evaluate their ability to solve two diachronic extensions of the Word-in-Context (WiC) task: TempoWiC and HistoWiC. In particular, we investigate the potential of a novel, off-the-shelf technology like ChatGPT (and GPT) 3.5 compared to BERT, which represents a family of models that currently stands as the state of the art for modeling semantic change. Our experiments represent the first attempt to assess the use of (Chat)GPT for studying semantic change. Our results indicate that ChatGPT performs significantly worse than the foundational GPT version. They further demonstrate that (Chat)GPT achieves slightly lower performance than BERT in detecting long-term changes, but performs significantly worse in detecting short-term changes.
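As a rough illustration (our own sketch, not the paper's code), the WiC-style decision underlying TempoWiC and HistoWiC can be framed as a similarity test between two contextual embeddings of the same target word: if the embeddings are close enough, the word is judged to carry the same meaning in both contexts. The vectors and threshold below are toy values standing in for BERT outputs.

```python
import numpy as np

def wic_same_meaning(emb_a, emb_b, threshold=0.5):
    """Binary WiC-style decision: do two contextual embeddings of the
    same target word encode the same meaning? The cosine threshold is
    illustrative; in practice it would be tuned on labeled WiC data."""
    cos = float(np.dot(emb_a, emb_b) /
                (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
    return cos >= threshold

# Toy 2-d vectors standing in for BERT embeddings of "cell" in
# "prison cell" vs. "jail cell" (similar) and "cell biology" (different).
same = wic_same_meaning(np.array([0.9, 0.1]), np.array([0.85, 0.2]))
diff = wic_same_meaning(np.array([0.9, 0.1]), np.array([-0.2, 0.95]))
```

With the toy values above, the first pair is accepted as same-meaning and the second rejected; a diachronic extension simply draws the two contexts from different time periods.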
Computational modeling of semantic change
Pierluigi Cassotti | Francesco Periti | Stefano de Pascale | Haim Dubossarsky | Nina Tahmasebi
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Tutorial Abstracts
Languages change constantly over time, influenced by social, technological, cultural and political factors that affect how people express themselves. In particular, words can undergo the process of semantic change, which can be subtle and significantly impact the interpretation of texts. For example, the word terrific used to mean ‘causing terror’ and was as such synonymous with terrifying. Nowadays, speakers use the word in the sense of ‘excessive’ and even ‘amazing’. In Historical Linguistics, tools and methods have been developed to analyse this phenomenon, including systematic categorisations of the types of change, the causes and the mechanisms underlying the different types of change. However, traditional linguistic methods, while informative, are often based on small, carefully curated samples. Thanks to the availability of large diachronic corpora, the computational means to model word meaning in an unsupervised fashion, and evaluation benchmarks, we are seeing an increasing interest in the computational modelling of semantic change. This is evidenced by the increasing number of publications in this new domain as well as the organisation of initiatives and events related to this topic, such as four editions of the International Workshop on Computational Approaches to Historical Language Change (LChange), and several evaluation campaigns (Schlechtweg et al., 2020a; Basile et al., 2020b; Kutuzov et al.; Zamora-Reina et al., 2022).
2023
Proceedings of the 4th Workshop on Computational Approaches to Historical Language Change
Nina Tahmasebi | Syrielle Montariol | Haim Dubossarsky | Andrey Kutuzov | Simon Hengchen | David Alfter | Francesco Periti | Pierluigi Cassotti
Proceedings of the 4th Workshop on Computational Approaches to Historical Language Change
2022
What is Done is Done: an Incremental Approach to Semantic Shift Detection
Francesco Periti | Alfio Ferrara | Stefano Montanelli | Martin Ruskov
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
Contextual word embedding techniques for semantic shift detection are receiving increasing attention. In this paper, we present What is Done is Done (WiDiD), an incremental approach to semantic shift detection based on incremental clustering techniques and contextual embedding methods to capture changes in the meanings of a target word across a diachronic corpus. In WiDiD, the word contexts observed in the past are consolidated as a set of clusters that constitute the “memory” of the word meanings observed so far. Such a memory is exploited as a basis for subsequent word observations, so that the meanings observed in the present are stratified over the past ones.
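The abstract's memory-of-clusters idea can be sketched as an online nearest-centroid clusterer (our own reading of the abstract, not the authors' implementation): each new contextual embedding either consolidates into an existing sense cluster or opens a new one, and the clusters persist across time periods. The similarity threshold and toy vectors below are assumptions for illustration.

```python
import numpy as np

class IncrementalSenses:
    """Toy sketch of WiDiD-style incremental clustering: centroids of
    past contextual embeddings act as a 'memory' of word meanings."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold  # illustrative cosine cutoff
        self.centroids = []         # one centroid per sense cluster
        self.counts = []            # occurrences consolidated per cluster

    def add(self, emb):
        """Assign one occurrence to a cluster, creating one if needed;
        returns the cluster index."""
        emb = emb / np.linalg.norm(emb)
        for i, c in enumerate(self.centroids):
            if float(np.dot(emb, c / np.linalg.norm(c))) >= self.threshold:
                # Consolidate into the existing sense via a running mean,
                # stratifying the present observation over past ones.
                self.counts[i] += 1
                self.centroids[i] = c + (emb - c) / self.counts[i]
                return i
        # No sufficiently similar past meaning: open a new sense cluster.
        self.centroids.append(emb)
        self.counts.append(1)
        return len(self.centroids) - 1

# Feed occurrences period by period; a new cluster appearing in a later
# period is the signal of a candidate semantic shift.
model = IncrementalSenses()
a = model.add(np.array([1.0, 0.0]))    # period 1, first sense
b = model.add(np.array([0.95, 0.1]))   # period 1, same sense
c = model.add(np.array([0.0, 1.0]))    # period 2, novel sense
```

The incremental design is what makes the approach "what is done is done": past clusters are never recomputed from scratch, only extended by new observations.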