Réka Cserháti
2022
Codenames as a Game of Co-occurrence Counting
Réka Cserháti | Istvan Kollath | András Kicsi | Gábor Berend
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Codenames is a popular board game in which knowledge of and cooperation between players play an important role. The task of a player acting as spymaster is to find words (clues) that a teammate will find related to as many of a set of given words as possible, but not to other specified words. This is a hard challenge even with today’s advanced language technology methods. In our study, we create spymaster agents using four types of relatedness measures that can be produced from a raw text corpus alone. These include newly introduced measures based on co-occurrences, which outperform FastText cosine similarity on gold-standard relatedness data. To generate clues in Codenames, we combine the relatedness measures with four different scoring functions, for two languages, English and Hungarian. For testing, we collect the decisions of human guesser players in an online game, and our configurations outperform previous agents among methods that use only raw corpora.
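As a rough illustration of the approach the abstract describes, here is a minimal Python sketch of a corpus-only spymaster: relatedness is estimated as the PMI of sentence-level co-occurrence, and candidate clues are ranked by one plausible scoring rule (the margin between the weakest target and the strongest avoid word). The function names and this particular scoring rule are illustrative assumptions; the paper introduces several co-occurrence measures and four scoring functions, which differ in detail.

```python
import math
from collections import Counter
from itertools import combinations

def cooccurrence_pmi(sentences):
    """Estimate word relatedness as PMI of sentence-level co-occurrence.
    sentences: iterable of token lists from a raw corpus."""
    word_counts = Counter()
    pair_counts = Counter()
    for tokens in sentences:
        types = set(tokens)
        word_counts.update(types)
        pair_counts.update(frozenset(p) for p in combinations(sorted(types), 2))
    n = len(sentences)

    def pmi(a, b):
        joint = pair_counts[frozenset((a, b))]
        if joint == 0 or word_counts[a] == 0 or word_counts[b] == 0:
            return float("-inf")  # never co-occurred: treat as unrelated
        # log( P(a,b) / (P(a) * P(b)) ) with sentence-level probabilities
        return math.log(joint * n / (word_counts[a] * word_counts[b]))

    return pmi

def score_clue(clue, targets, avoid, relatedness):
    """One hypothetical scoring rule: the clue's margin between its weakest
    target and its strongest avoid word (higher is safer)."""
    weakest_target = min((relatedness(clue, t) for t in targets), default=float("-inf"))
    strongest_avoid = max((relatedness(clue, a) for a in avoid), default=float("-inf"))
    return weakest_target - strongest_avoid

# Toy usage: an agent would take the argmax of score_clue over a clue
# vocabulary, excluding the board words themselves.
sentences = [["dog", "bone", "park"], ["cat", "dog", "vet"], ["bone", "fracture", "vet"]]
pmi = cooccurrence_pmi(sentences)
print(score_clue("vet", targets=["dog", "cat"], avoid=["bone"], relatedness=pmi))
```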
EENLP: Cross-lingual Eastern European NLP Index
Alexey Tikhonov | Alex Malkhasov | Andrey Manoshin | George-Andrei Dima | Réka Cserháti | Md. Sadek Hossain Asif | Matt Sárdi
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Motivated by the sparsity of NLP resources for Eastern European languages, we present a broad index of existing Eastern European language resources (90+ datasets and 45+ models), published as a GitHub repository open to updates from the community. Furthermore, to support the evaluation of commonsense reasoning tasks, we provide hand-crafted cross-lingual datasets for five different semantic tasks (namely news categorization, paraphrase detection, Natural Language Inference (NLI), tweet sentiment detection, and news sentiment detection) for some of the Eastern European languages. We perform several experiments with existing multilingual models on these datasets to establish performance baselines and compare them to existing results for other languages.
2021
Identifying the Importance of Content Overlap for Better Cross-lingual Embedding Mappings
Réka Cserháti | Gábor Berend
Proceedings of the 1st Workshop on Multilingual Representation Learning
In this work, we analyze the performance and properties of cross-lingual word embedding models created by mapping-based alignment methods. We use several measures of corpus and embedding similarity to predict the BLI scores of cross-lingual embedding mappings over three types of corpora, three embedding methods, and 55 language pairs. Our experimental results corroborate that it is not mere size but the amount of common content in the training corpora that is essential. This phenomenon manifests in two ways: i) despite the smaller corpus sizes, using only the comparable parts of Wikipedia for training the monolingual embedding spaces to be mapped is often more effective than relying on the full contents of Wikipedia, and ii) the smaller but less diversified Spanish Wikipedia almost always works much better as a training corpus for bilingual mappings than the ubiquitously used English Wikipedia.
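For context, here is a minimal sketch of the standard mapping-based alignment pipeline that such analyses build on: an orthogonal (Procrustes) map fit on a seed dictionary, with BLI measured as precision@1 under cosine nearest-neighbor retrieval. This is the textbook baseline, not necessarily the paper's exact setup; real evaluations often use CSLS retrieval rather than plain cosine, and the function names here are illustrative.

```python
import numpy as np

def fit_procrustes(X_src, Y_tgt):
    """Orthogonal Procrustes: find W minimizing ||X W - Y||_F with W^T W = I.
    X_src, Y_tgt: (n, d) matrices of embedding pairs from a seed dictionary."""
    U, _, Vt = np.linalg.svd(X_src.T @ Y_tgt)
    return U @ Vt

def bli_accuracy(W, src_vecs, tgt_vecs, test_pairs):
    """Precision@1 for bilingual lexicon induction via cosine nearest neighbor.
    src_vecs, tgt_vecs: dicts word -> vector; test_pairs: list of (src, gold)."""
    tgt_words = list(tgt_vecs)
    T = np.stack([tgt_vecs[w] for w in tgt_words])
    T = T / np.linalg.norm(T, axis=1, keepdims=True)  # normalize for cosine
    hits = 0
    for src, gold in test_pairs:
        q = src_vecs[src] @ W           # map source vector into target space
        q = q / np.linalg.norm(q)
        pred = tgt_words[int(np.argmax(T @ q))]  # nearest target word
        hits += pred == gold
    return hits / len(test_pairs)
```

The orthogonality constraint is what makes the closed-form SVD solution possible, and it preserves monolingual distances, which is why corpus content (rather than size alone) dominates mapping quality.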