Nina Tahmasebi


2022

pdf bib
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
Nina Tahmasebi | Syrielle Montariol | Andrey Kutuzov | Simon Hengchen | Haim Dubossarsky | Lars Borin
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change

2021

pdf
DWUG: A large Resource of Diachronic Word Usage Graphs in Four Languages
Dominik Schlechtweg | Nina Tahmasebi | Simon Hengchen | Haim Dubossarsky | Barbara McGillivray
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Word meaning is notoriously difficult to capture, both synchronically and diachronically. In this paper, we describe the creation of the largest resource of graded, contextualized, diachronic word meaning annotation in four different languages, based on 100,000 human semantic proximity judgments. We describe in detail the multi-round incremental annotation process, the choice of clustering algorithm used to group usages into senses, and possible diachronic and synchronic uses for this dataset.
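
The dataset groups usages into senses by clustering a weighted word usage graph built from the pairwise proximity judgments. The sketch below only illustrates that general idea with a simple threshold-plus-connected-components heuristic; it is not the clustering algorithm chosen in the paper, and the judgment values, threshold, and use of networkx are assumptions for illustration.

```python
# Illustrative sketch only: group word usages into "senses" from pairwise
# human proximity judgments by keeping strongly related edges and taking
# connected components. NOT the clustering algorithm selected in the paper.
import networkx as nx

# Hypothetical judgments: (usage_id_1, usage_id_2, median relatedness on a 1-4 scale)
judgments = [
    ("u1", "u2", 4.0),
    ("u2", "u3", 3.5),
    ("u3", "u4", 1.0),
    ("u4", "u5", 4.0),
]

RELATED_THRESHOLD = 2.5  # assumed cut-off between "related" and "unrelated"

G = nx.Graph()
for u, v, score in judgments:
    G.add_node(u)
    G.add_node(v)
    if score >= RELATED_THRESHOLD:
        G.add_edge(u, v, weight=score)

senses = [sorted(component) for component in nx.connected_components(G)]
print(senses)  # e.g. [['u1', 'u2', 'u3'], ['u4', 'u5']]
```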

pdf
SuperSim: a test set for word similarity and relatedness in Swedish
Simon Hengchen | Nina Tahmasebi
Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)

Language models are notoriously difficult to evaluate. We release SuperSim, a large-scale similarity and relatedness test set for Swedish built with expert human judgements. The test set is composed of 1,360 word-pairs independently judged for both relatedness and similarity by five annotators. We evaluate three different models (Word2Vec, fastText, and GloVe) trained on two separate Swedish datasets, namely the Swedish Gigaword corpus and a Swedish Wikipedia dump, to provide a baseline for future comparison. We will release the fully annotated test set, code, models, and data.
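
A baseline evaluation of the kind described above typically correlates a model's cosine similarities with the human ratings. The sketch below assumes a plain word-to-vector dictionary and SciPy's Spearman correlation; the vectors, word pairs, and ratings are toy values, not actual SuperSim entries.

```python
# Sketch: score an embedding model against human similarity judgments
# using Spearman correlation. Toy data only; the real SuperSim test set
# has 1,360 pairs with separate similarity and relatedness ratings.
import numpy as np
from scipy.stats import spearmanr

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings (in practice: Word2Vec, fastText, or GloVe vectors).
vectors = {
    "bil": np.array([0.9, 0.1, 0.0]),
    "lastbil": np.array([0.8, 0.2, 0.1]),
    "katt": np.array([0.1, 0.9, 0.3]),
}

# (word1, word2, human rating) -- hypothetical judgments.
pairs = [("bil", "lastbil", 8.5), ("bil", "katt", 1.0), ("lastbil", "katt", 1.5)]

model_scores = [cosine(vectors[w1], vectors[w2]) for w1, w2, _ in pairs]
human_scores = [rating for _, _, rating in pairs]

rho, p_value = spearmanr(model_scores, human_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```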

pdf bib
Proceedings of the 2nd International Workshop on Computational Approaches to Historical Language Change 2021
Nina Tahmasebi | Adam Jatowt | Yang Xu | Simon Hengchen | Syrielle Montariol | Haim Dubossarsky
Proceedings of the 2nd International Workshop on Computational Approaches to Historical Language Change 2021

2020

pdf bib
SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection
Dominik Schlechtweg | Barbara McGillivray | Simon Hengchen | Haim Dubossarsky | Nina Tahmasebi
Proceedings of the Fourteenth Workshop on Semantic Evaluation

Lexical Semantic Change detection, i.e., the task of identifying words that change meaning over time, is a very active research area, with applications in NLP, lexicography, and linguistics. Evaluation is currently the most pressing problem in Lexical Semantic Change detection, as no gold standards are available to the community, which hinders progress. We present the results of the first shared task that addresses this gap by providing researchers with an evaluation framework and manually annotated, high-quality datasets for English, German, Latin, and Swedish. A total of 33 teams submitted 186 systems, which were evaluated on two subtasks.

2019

pdf bib
Proceedings of the 1st International Workshop on Computational Approaches to Historical Language Change
Nina Tahmasebi | Lars Borin | Adam Jatowt | Yang Xu
Proceedings of the 1st International Workshop on Computational Approaches to Historical Language Change

pdf
Time-Out: Temporal Referencing for Robust Modeling of Lexical Semantic Change
Haim Dubossarsky | Simon Hengchen | Nina Tahmasebi | Dominik Schlechtweg
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

State-of-the-art models of lexical semantic change detection suffer from noise stemming from vector space alignment. We empirically test the Temporal Referencing method for lexical semantic change and show that, by avoiding alignment, it is less affected by this noise. We show that, trained on a diachronic corpus, the skip-gram with negative sampling architecture with temporal referencing outperforms alignment models on a synthetic task as well as on a manually annotated test set. We introduce a principled way to simulate lexical semantic change and to systematically control for possible biases.
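
The core idea of Temporal Referencing is to train a single embedding space over the whole diachronic corpus, replacing only the target words with time-indexed tokens so that their vectors can be compared directly without alignment. The sketch below is a minimal illustration using gensim's skip-gram with negative sampling on toy sentences; the corpus, the `word_t1` naming scheme, and the hyperparameters are assumptions, not the paper's exact setup.

```python
# Minimal sketch of Temporal Referencing: replace occurrences of a target
# word with time-indexed tokens (e.g. "cell_t1", "cell_t2"), train one
# skip-gram-with-negative-sampling model on the pooled corpus, and compare
# the time-indexed vectors directly -- no vector space alignment needed.
from gensim.models import Word2Vec

target = "cell"

# Toy diachronic corpus: (time slice, tokenized sentence).
corpus = [
    ("t1", ["the", "prisoner", "sat", "in", "the", "cell"]),
    ("t1", ["a", "small", "cell", "with", "iron", "bars"]),
    ("t2", ["call", "me", "on", "my", "cell", "phone"]),
    ("t2", ["the", "cell", "battery", "died", "again"]),
]

def temporal_reference(sentence, time_slice):
    # Only the target word is time-indexed; all other words stay shared.
    return [f"{tok}_{time_slice}" if tok == target else tok for tok in sentence]

sentences = [temporal_reference(sent, t) for t, sent in corpus]

model = Word2Vec(
    sentences,
    vector_size=50,   # toy dimensionality
    sg=1,             # skip-gram
    negative=5,       # negative sampling
    window=3,
    min_count=1,
    seed=0,
)

# A low similarity between the time-indexed vectors suggests semantic change.
print(model.wv.similarity(f"{target}_t1", f"{target}_t2"))
```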

2018

pdf
Generating a Gold Standard for a Swedish Sentiment Lexicon
Jacobo Rouces | Nina Tahmasebi | Lars Borin | Stian Rødven Eide
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

pdf
SenSALDO: Creating a Sentiment Lexicon for Swedish
Jacobo Rouces | Nina Tahmasebi | Lars Borin | Stian Rødven Eide
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2017

pdf bib
Proceedings of the 21st Nordic Conference on Computational Linguistics
Jörg Tiedemann | Nina Tahmasebi
Proceedings of the 21st Nordic Conference on Computational Linguistics

pdf bib
Parameter Transfer across Domains for Word Sense Disambiguation
Sallam Abualhaija | Nina Tahmasebi | Diane Forin | Karl-Heinz Zimmermann
Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017

Word sense disambiguation is the task of finding the correct sense for a target word in a given context, a major step in many text-processing applications. Recently, it has been addressed as an optimization problem: the idea is to find the sequence of senses that corresponds to the words in a given context with maximum semantic similarity. Metaheuristics like simulated annealing and D-Bees provide approximate, good-enough solutions, but they are usually sensitive to their starting parameters. In this paper, we study parameter tuning for both algorithms within the word sense disambiguation problem. The experiments are conducted on different datasets to cover different disambiguation scenarios. We show that D-Bees is robust and less sensitive to its initial parameters than simulated annealing; hence, it is sufficient to tune its parameters once and reuse them for different datasets, domains, or languages.
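
The optimization view described above searches for the sense assignment that maximizes the total semantic similarity among the chosen senses. The sketch below illustrates that generic formulation with a simulated annealing loop over a toy sense inventory; it is not the D-Bees algorithm, and the sense labels, similarity scores, and cooling schedule are all illustrative assumptions.

```python
# Sketch of WSD as combinatorial optimization with simulated annealing:
# pick one sense per word so that the summed pairwise similarity of the
# chosen senses is maximal. Toy sense inventory and similarity values.
import math
import random

random.seed(0)

# Candidate senses per context word (hypothetical sense labels).
senses = {
    "bank":  ["bank#finance", "bank#river"],
    "money": ["money#currency"],
    "loan":  ["loan#credit", "loan#lend"],
}

# Toy pairwise sense similarities; unknown pairs default to 0.
sim = {
    frozenset({"bank#finance", "money#currency"}): 0.9,
    frozenset({"bank#finance", "loan#credit"}): 0.8,
    frozenset({"money#currency", "loan#credit"}): 0.7,
    frozenset({"bank#river", "loan#lend"}): 0.2,
}

def score(assignment):
    chosen = list(assignment.values())
    return sum(
        sim.get(frozenset({a, b}), 0.0)
        for i, a in enumerate(chosen)
        for b in chosen[i + 1:]
    )

# Start from a random assignment and anneal.
current = {word: random.choice(options) for word, options in senses.items()}
temperature = 1.0
for step in range(1000):
    word = random.choice(list(senses))
    candidate = dict(current)
    candidate[word] = random.choice(senses[word])
    delta = score(candidate) - score(current)
    # Accept improvements always, worse moves with a temperature-dependent probability.
    if delta >= 0 or random.random() < math.exp(delta / temperature):
        current = candidate
    temperature *= 0.995  # simple geometric cooling schedule

print(current, score(current))
```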

pdf
Finding Individual Word Sense Changes and their Delay in Appearance
Nina Tahmasebi | Thomas Risse
Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017

We present a method for detecting word sense changes by utilizing automatically induced word senses. Our method works on the level of individual senses and allows a word, for example, to have one stable sense while acquiring a novel sense that later undergoes change. Senses are grouped based on polysemy to find linguistic concepts, and we can detect broadening and narrowing as well as novel (polysemous and homonymic) senses. We evaluate on a test set and report recall as well as estimates of the time between expected and detected change.
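
The notion of tracking individual senses and measuring the delay between expected and detected change can be illustrated with per-period sense frequencies: a sense counts as established in the first period where its relative frequency crosses a threshold, and the delay is the gap to the period where the change was expected. The sketch below is a simplified illustration with made-up counts, periods, and thresholds, not the induction pipeline used in the paper.

```python
# Simplified illustration: detect when an individual induced sense becomes
# active and measure the delay against an expected change point.
# Counts, threshold, and periods are hypothetical.

# Per-period usage counts of the induced senses of one word.
sense_counts = {
    1990: {"sense_1": 120, "sense_2": 0},
    1995: {"sense_1": 110, "sense_2": 3},
    2000: {"sense_1": 100, "sense_2": 25},
    2005: {"sense_1": 90,  "sense_2": 60},
}

NOVELTY_THRESHOLD = 0.10  # sense must cover >= 10% of usages to count as established

def first_active_period(sense, counts_by_period, threshold):
    for period in sorted(counts_by_period):
        counts = counts_by_period[period]
        total = sum(counts.values())
        if total and counts.get(sense, 0) / total >= threshold:
            return period
    return None

expected_change = 1995          # e.g. first attestation in an external source
found_change = first_active_period("sense_2", sense_counts, NOVELTY_THRESHOLD)
delay = found_change - expected_change if found_change is not None else None
print(f"novel sense detected in {found_change}, delay = {delay} years")
```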

2015

pdf
A case study on supervised classification of Swedish pseudo-coordination
Malin Ahlberg | Peter Andersson | Markus Forsberg | Nina Tahmasebi
Proceedings of the 20th Nordic Conference of Computational Linguistics (NODALIDA 2015)

2014

pdf
Extractive Summarization using Continuous Vector Space Models
Mikael Kågebäck | Olof Mogren | Nina Tahmasebi | Devdatt Dubhashi
Proceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC)

2012

pdf
NEER: An Unsupervised Method for Named Entity Evolution Recognition
Nina Tahmasebi | Gerhard Gossen | Nattiya Kanhabua | Helge Holzmann | Thomas Risse
Proceedings of COLING 2012

pdf
fokas: Formerly Known As – A Search Engine Incorporating Named Entity Evolution
Helge Holzmann | Gerhard Gossen | Nina Tahmasebi
Proceedings of COLING 2012: Demonstration Papers