2023
Unsupervised Semantic Variation Prediction using the Distribution of Sibling Embeddings
Taichi Aida | Danushka Bollegala
Findings of the Association for Computational Linguistics: ACL 2023
Languages are dynamic entities, where the meanings associated with words constantly change with time. Detecting the semantic variation of words is an important task for various NLP applications that must make time-sensitive predictions. Existing work on semantic variation prediction has predominantly focused on comparing some form of an averaged contextualised representation of a target word computed from a given corpus. However, some of the previously associated meanings of a target word can become obsolete over time (e.g. the meaning of gay as happy), while novel usages of existing words are observed (e.g. the meaning of cell as a mobile phone). We argue that mean representations alone cannot accurately capture such semantic variations and propose a method that uses the entire cohort of the contextualised embeddings of the target word, which we refer to as the sibling distribution. Experimental results on the SemEval-2020 Task 1 benchmark dataset for semantic variation prediction show that our method outperforms prior work that considers only the mean embeddings, and is comparable to the current state-of-the-art. Moreover, a qualitative analysis shows that our method detects important semantic changes in words that are not captured by the existing methods.
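A minimal sketch of this idea, not the paper's exact method: collect the contextualised embeddings (siblings) of a target word from each corpus with an off-the-shelf MLM, fit a diagonal Gaussian to each sibling distribution, and score change with a symmetrised KL divergence. The model name, the single-token-target simplification, and the choice of divergence are all assumptions made for illustration.

```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel

MODEL = "bert-base-uncased"  # assumed MLM; the paper may use a different one
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
model.eval()

def sibling_embeddings(sentences, target):
    """Collect the contextualised vectors of `target` across its occurrences."""
    vecs = []
    for sent in sentences:
        enc = tokenizer(sent, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
        tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
        for i, tok in enumerate(tokens):
            if tok == target:  # simplification: single-token targets only
                vecs.append(hidden[i].numpy())
    return np.stack(vecs)

def gaussian_kl(mu1, var1, mu2, var2):
    """KL divergence between two diagonal Gaussians."""
    return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def semantic_change_score(sents1, sents2, target):
    """Symmetrised KL between the two fitted sibling distributions."""
    e1, e2 = sibling_embeddings(sents1, target), sibling_embeddings(sents2, target)
    mu1, var1 = e1.mean(0), e1.var(0) + 1e-6
    mu2, var2 = e2.mean(0), e2.var(0) + 1e-6
    return gaussian_kl(mu1, var1, mu2, var2) + gaussian_kl(mu2, var2, mu1, var1)
```

Unlike a mean-only comparison, the variance terms let the score react when a word gains or loses senses even if its average embedding barely moves.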
Can Word Sense Distribution Detect Semantic Changes of Words?
Xiaohang Tang | Yi Zhou | Taichi Aida | Procheta Sen | Danushka Bollegala
Findings of the Association for Computational Linguistics: EMNLP 2023
Semantic Change Detection (SCD) of words is an important task for various NLP applications that must make time-sensitive predictions. Some words are used over time in novel ways to express new meanings, and these new meanings establish themselves as novel senses of existing words. On the other hand, Word Sense Disambiguation (WSD) methods associate ambiguous words with sense ids, depending on the context in which they occur. Given this relationship between WSD and SCD, we explore the possibility of predicting whether a target word has its meaning changed between two corpora collected at different time steps, by comparing the distributions of senses of that word in each corpus. For this purpose, we use pretrained static sense embeddings to automatically annotate each occurrence of the target word in a corpus with a sense id. Next, we compute the distribution of sense ids of a target word in a given corpus. Finally, we use different divergence or distance measures to quantify the semantic change of the target word across the two given corpora. Our experimental results on the SemEval-2020 Task 1 dataset show that word sense distributions can be accurately used to predict semantic changes of words in English, German, Swedish and Latin.
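A minimal sketch of this pipeline appears below. Here `assign_sense` is a hypothetical stand-in for the sense annotation step with pretrained static sense embeddings (e.g. nearest sense vector to an occurrence), and Jensen-Shannon is only one of the divergence measures one could plug in.

```python
from collections import Counter
import numpy as np
from scipy.spatial.distance import jensenshannon

def sense_distribution(occurrences, assign_sense, all_senses):
    """Tag each occurrence with a sense id and normalise the counts."""
    counts = Counter(assign_sense(occ) for occ in occurrences)
    total = sum(counts.values())
    return np.array([counts[s] / total for s in all_senses])

def sense_change_score(occ1, occ2, assign_sense, all_senses):
    """Compare the per-corpus sense distributions of a target word."""
    p = sense_distribution(occ1, assign_sense, all_senses)
    q = sense_distribution(occ2, assign_sense, all_senses)
    # Jensen-Shannon distance; other divergences are equally possible
    return jensenshannon(p, q)
```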
Swap and Predict – Predicting the Semantic Changes in Words across Corpora by Context Swapping
Taichi Aida | Danushka Bollegala
Findings of the Association for Computational Linguistics: EMNLP 2023
Meanings of words change over time and across domains. Detecting the semantic changes of words is an important task for various NLP applications that must make time-sensitive predictions. We consider the problem of predicting whether a given target word, w, changes its meaning between two different text corpora, 𝒞1 and 𝒞2. For this purpose, we propose Swapping-based Semantic Change Detection (SSCD), an unsupervised method that randomly swaps contexts between 𝒞1 and 𝒞2 where w occurs. We then look at the distribution of contextualised word embeddings of w, obtained from a pretrained masked language model (MLM), representing the meaning of w in its occurrence contexts in 𝒞1 and 𝒞2. Intuitively, if the meaning of w does not change between 𝒞1 and 𝒞2, we would expect the distributions of contextualised word embeddings of w to remain the same before and after this random swapping process. Despite its simplicity, we demonstrate that even by using pretrained MLMs without any fine-tuning, our proposed context swapping method accurately predicts the semantic changes of words in four languages (English, German, Swedish, and Latin) and across different time spans (over 50 years and about five years). Moreover, our method achieves significant performance improvements compared to strong baselines for the English semantic change prediction task. Source code is available at https://github.com/a1da4/svp-swap.
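The sketch below illustrates the swapping idea under simplifying assumptions: `embed` stands for any routine returning the contextualised embeddings of the target word in a set of sentences (e.g. the helper sketched earlier), and the mean-distance score is an illustrative simplification, not the paper's exact statistic.

```python
import random
import numpy as np

def random_swap(sents1, sents2, rate=0.5, seed=0):
    """Return copies of the two context sets with a fraction of contexts swapped."""
    rng = random.Random(seed)
    s1, s2 = list(sents1), list(sents2)
    n = int(min(len(s1), len(s2)) * rate)
    idx1 = rng.sample(range(len(s1)), n)
    idx2 = rng.sample(range(len(s2)), n)
    for i, j in zip(idx1, idx2):
        s1[i], s2[j] = s2[j], s1[i]
    return s1, s2

def sscd_score(sents1, sents2, target, embed):
    """Distance between the corpora before vs. after context swapping."""
    before = np.linalg.norm(
        embed(sents1, target).mean(0) - embed(sents2, target).mean(0))
    s1, s2 = random_swap(sents1, sents2)
    after = np.linalg.norm(
        embed(s1, target).mean(0) - embed(s2, target).mean(0))
    # if the meaning is stable, swapping should barely move the distributions,
    # so a large gap between the two distances signals semantic change
    return abs(before - after)
```

In practice one would average the score over several random swaps to reduce the variance introduced by a single seed.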
Construction of Evaluation Dataset for Japanese Lexical Semantic Change Detection
Zhidong Ling | Taichi Aida | Teruaki Oka | Mamoru Komachi
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation
2022
Construction of a Quality Estimation Dataset for Automatic Evaluation of Japanese Grammatical Error Correction
Daisuke Suzuki | Yujin Takahashi | Ikumi Yamashita | Taichi Aida | Tosho Hirasawa | Michitaka Nakatsuji | Masato Mita | Mamoru Komachi
Proceedings of the Thirteenth Language Resources and Evaluation Conference
In grammatical error correction (GEC), automatic evaluation is considered an important factor in the research and development of GEC systems. Previous studies on automatic evaluation have shown that quality estimation models built from datasets with manual evaluation can achieve high performance in the automatic evaluation of English GEC. However, quality estimation models have not yet been studied for Japanese, because no datasets exist for constructing such models. In this study, we therefore created a quality estimation dataset with manual evaluation to build an automatic evaluation model for Japanese GEC. By building a quality estimation model using this dataset and conducting a meta-evaluation, we verified the usefulness of the quality estimation model for Japanese GEC.
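As a rough illustration of what such a quality estimation model might look like, the sketch below fine-tunes a pretrained encoder to regress a manual quality score for a (source, correction) sentence pair. The encoder name and the scoring setup are assumptions for illustration, not the paper's reported configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "cl-tohoku/bert-base-japanese"  # assumed Japanese encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# num_labels=1 puts the model in regression mode (MSE loss on float labels)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=1)

def qe_loss(source, corrected, score):
    """One training step's loss: MSE between predicted and manual score."""
    enc = tokenizer(source, corrected, return_tensors="pt", truncation=True)
    out = model(**enc, labels=torch.tensor([float(score)]))
    return out.loss
```

At evaluation time, the model's scalar output for each system correction can be correlated with human judgements, which is the meta-evaluation the abstract describes.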
2021
Modeling Text using the Continuous Space Topic Model with Pre-Trained Word Embeddings
Seiichi Inoue | Taichi Aida | Mamoru Komachi | Manabu Asai
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop
In this study, we propose a model that extends the continuous space topic model (CSTM), which flexibly controls word probability in a document, using pre-trained word embeddings. To develop the proposed model, we pre-train word embeddings that capture the semantics of words and plug them into the CSTM. Intrinsic experimental results show that the proposed model outperforms the CSTM in terms of perplexity and convergence speed. Furthermore, extrinsic experimental results show that the proposed model is useful for a document classification task when compared with the baseline model. We qualitatively show that the latent coordinates obtained by training the proposed model are better than those of the baseline model.
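The sketch below shows one common parameterisation of this idea, under the assumption that a CSTM-style model scales each word's unigram probability by the exponentiated inner product of its word vector and a latent document coordinate, with the word vectors fixed to pre-trained embeddings rather than learned. The actual CSTM parameterisation is more involved; this is a simplification.

```python
import numpy as np

def word_probs(unigram, word_vecs, doc_vec):
    """unigram: (V,) corpus unigram probabilities;
    word_vecs: (V, d) pre-trained word embeddings (held fixed);
    doc_vec: (d,) latent document coordinate."""
    scores = unigram * np.exp(word_vecs @ doc_vec)
    return scores / scores.sum()  # normalise over the vocabulary
```

Because the word vectors carry pre-trained semantics, only the low-dimensional document coordinates need to be inferred, which is consistent with the faster convergence the abstract reports.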
A Comprehensive Analysis of PMI-based Models for Measuring Semantic Differences
Taichi Aida | Mamoru Komachi | Toshinobu Ogiso | Hiroya Takamura | Daichi Mochihashi
Proceedings of the 35th Pacific Asia Conference on Language, Information and Computation
Analyzing Semantic Changes in Japanese Words Using BERT
Kazuma Kobayashi | Taichi Aida | Mamoru Komachi
Proceedings of the 35th Pacific Asia Conference on Language, Information and Computation