Aleksandrs Berdičevskis

Also published as: Aleksandrs Berdicevskis


2023

Superlim: A Swedish Language Understanding Evaluation Benchmark
Aleksandrs Berdicevskis | Gerlof Bouma | Robin Kurtz | Felix Morger | Joey Öhman | Yvonne Adesam | Lars Borin | Dana Dannélls | Markus Forsberg | Tim Isbister | Anna Lindahl | Martin Malmsten | Faton Rekathati | Magnus Sahlgren | Elena Volodina | Love Börjeson | Simon Hengchen | Nina Tahmasebi
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

We present Superlim, a multi-task NLP benchmark and analysis platform for evaluating Swedish language models, a counterpart to the English-language (Super)GLUE suite. We describe the dataset, the tasks and the leaderboard, and report the baseline results produced by a reference implementation. The tested models do not approach ceiling performance on any of the tasks, which suggests that Superlim is truly difficult, a desirable quality for a benchmark. We address methodological challenges, such as mitigating the Anglocentric bias when creating datasets for a less-resourced language, choosing the most appropriate measures, documenting the datasets, and making the leaderboard convenient and transparent. We also highlight other potential uses of the dataset, such as the evaluation of cross-lingual transfer learning.

Preparing a corpus of spoken Xhosa
Eva-Marie Bloom Ström | Onelisa Slater | Aron Zahran | Aleksandrs Berdicevskis | Anne Schumacher
Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD)

The aim of this paper is to describe ongoing work on an annotated corpus of spoken Xhosa. The data consists of natural spoken language and includes regional and social variation. We discuss the challenges encountered in preparing such data from a lower-resourced language for corpus use. We describe the annotation, the search interface and pilot experiments on automatic glossing of this highly agglutinative language.

DaLAJ-GED - a dataset for Grammatical Error Detection tasks on Swedish
Elena Volodina | Yousuf Ali Mohammed | Aleksandrs Berdicevskis | Gerlof Bouma | Joey Öhman
Proceedings of the 12th Workshop on NLP for Computer Assisted Language Learning

You say tomato, I say the same: A large-scale study of linguistic accommodation in online communities
Aleksandrs Berdicevskis | Viktor Erbro
Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)

An important assumption in sociolinguistics and cognitive psychology is that human beings adjust their language use to their interlocutors. Put simply, the more often people talk (or write) to each other, the more similar their speech becomes. Such accommodation has often been observed in small-scale observational studies and experiments, but large-scale longitudinal studies that systematically test whether accommodation occurs are scarce. We use data from a very large Swedish online discussion forum to show that the linguistic production of users who write in the same subforum usually does become more similar over time. Moreover, the results suggest that this trend tends to be stronger for pairs of users who actively interact than for pairs who do not. Our data thus support the accommodation hypothesis.
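
As an illustration of the kind of measurement such a study involves, the minimal sketch below tracks the Jensen-Shannon distance between two users' word-frequency distributions across time slices; a downward trend would indicate accommodation. The data format and function names are hypothetical, and the paper's actual similarity measure may well differ.

```python
# A minimal sketch (assumed setup, not the paper's exact method):
# measure how similar two users' word choices are in each time slice.
from collections import Counter
import numpy as np
from scipy.spatial.distance import jensenshannon

def freq_vector(tokens, vocab):
    """Relative frequency vector of `tokens` over a fixed vocabulary."""
    counts = Counter(tokens)
    vec = np.array([counts[w] for w in vocab], dtype=float)
    return vec / vec.sum() if vec.sum() else vec

def distances_over_time(posts_a, posts_b):
    """posts_a, posts_b: lists of token lists, one per time slice.
    Returns one JS distance per slice; decreasing values suggest
    that the two users' production is converging."""
    distances = []
    for tokens_a, tokens_b in zip(posts_a, posts_b):
        vocab = sorted(set(tokens_a) | set(tokens_b))
        d = jensenshannon(freq_vector(tokens_a, vocab),
                          freq_vector(tokens_b, vocab))
        distances.append(d)
    return distances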

2021

Part-of-speech tagging of Swedish texts in the neural era
Yvonne Adesam | Aleksandrs Berdicevskis
Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)

We train and test five open-source taggers, which use different methods, on three Swedish corpora, which are of comparable size but use different tagsets. The KB-Bert tagger achieves the highest accuracy for part-of-speech and morphological tagging, while being fast enough for practical use. We also compare performance across tagsets and across different genres in one of the corpora. We perform a manual error analysis and a statistical analysis of the factors that affect how difficult specific tags are. Finally, we test ensemble methods, showing that a small (but not significant) improvement over the best-performing tagger can be achieved.
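
For illustration, here is a minimal sketch of one common ensemble method, per-token majority voting over the taggers' outputs. This shows the general technique only and is not necessarily the exact scheme tested in the paper.

```python
# A minimal sketch of majority-vote ensembling over POS taggers
# (an assumed scheme, not necessarily the paper's).
from collections import Counter

def vote(tag_sequences):
    """tag_sequences: one tag list per tagger, all over the same tokens.
    Ties are broken in favour of the earliest-listed tagger."""
    ensemble = []
    for position_tags in zip(*tag_sequences):
        most_common, _ = Counter(position_tags).most_common(1)[0]
        ensemble.append(most_common)
    return ensemble

# Example: three taggers disagree on the second token.
print(vote([["DT", "NN", "VB"],
            ["DT", "JJ", "VB"],
            ["DT", "NN", "VB"]]))  # ['DT', 'NN', 'VB']
```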

Successes and failures of Menzerath’s law at the syntactic level
Aleksandrs Berdicevskis
Proceedings of the Second Workshop on Quantitative Syntax (Quasy, SyntaxFest 2021)

2020

Foreigner-directed speech is simpler than native-directed: Evidence from social media
Aleksandrs Berdicevskis
Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science

I test two hypotheses that play an important role in modern sociolinguistics and language evolution studies: first, that non-native production is simpler than native; second, that production addressed to non-native speakers is simpler than that addressed to natives. The second hypothesis is particularly important for theories about contact-induced simplification, since the accommodation to non-natives may explain how the simplification can spread from adult learners to the whole community. To test the hypotheses, I create a very large corpus of native and non-native written speech in four languages (English, French, Italian, Spanish), extracting data from an internet forum where the native languages of the participants are known and the structure of the interactions can be inferred. The corpus data yield inconsistent evidence with respect to the first hypothesis, but largely support the second one, suggesting that foreigner-directed speech is indeed simpler than native-directed. Importantly, when testing the first hypothesis, I contrast the production of different speakers, which can introduce confounds and is a likely reason for the inconsistencies. When testing the second hypothesis, the comparison is always within the production of the same speaker (but with different addressees), which makes it more reliable.

A Diachronic Treebank of Russian Spanning More Than a Thousand Years
Aleksandrs Berdicevskis | Hanne Eckhoff
Proceedings of the Twelfth Language Resources and Evaluation Conference

We describe the Tromsø Old Russian and Old Church Slavonic Treebank (TOROT), which spans from the earliest Old Church Slavonic to modern Russian texts, covering more than a thousand years of continuous language history. We focus on the latest additions to the treebank, above all the modern subcorpus, which was created by a high-quality conversion of the existing treebank of contemporary standard Russian (SynTagRus).

Cross-lingual Embeddings Reveal Universal and Lineage-Specific Patterns in Grammatical Gender Assignment
Hartger Veeman | Marc Allassonnière-Tang | Aleksandrs Berdicevskis | Ali Basirat
Proceedings of the 24th Conference on Computational Natural Language Learning

Grammatical gender is assigned to nouns differently in different languages. Are all factors that influence gender assignment idiosyncratic to languages or are there any that are universal? Using cross-lingual aligned word embeddings, we perform two experiments to address these questions about language typology and human cognition. In both experiments, we predict the gender of nouns in language X using a classifier trained on the nouns of language Y, and take the classifier’s accuracy as a measure of transferability of gender systems. First, we show that for 22 Indo-European languages the transferability decreases as the phylogenetic distance increases. This correlation supports the claim that some gender assignment factors are idiosyncratic, and as the languages diverge, the proportion of shared inherited idiosyncrasies diminishes. Second, we show that when the classifier is trained on two Afro-Asiatic languages and tested on the same 22 Indo-European languages (or vice versa), its performance is still significantly above the chance baseline, thus showing that universal factors exist and, moreover, can be captured by word embeddings. When the classifier is tested across families and on inanimate nouns only, the performance is still above baseline, indicating that the universal factors are not limited to biological sex.
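
A minimal sketch of the transfer experiment's core step follows: train a gender classifier on nouns of language Y and test it on nouns of language X, both represented by cross-lingually aligned embeddings. The arrays below are random placeholders, and the choice of classifier is an assumption; the paper's actual setup may differ.

```python
# A minimal sketch of cross-lingual gender-classifier transfer
# (placeholder data; classifier choice is an assumption).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical data: 300-dim aligned noun vectors, gender labels 0/1/2.
X_train = rng.normal(size=(1000, 300)); y_train = rng.integers(0, 3, 1000)
X_test = rng.normal(size=(500, 300));   y_test = rng.integers(0, 3, 500)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)  # transferability of the gender system
# Compare against a chance baseline, e.g. the majority-class proportion.
baseline = np.bincount(y_test).max() / len(y_test)
print(f"transfer accuracy {accuracy:.3f} vs. baseline {baseline:.3f}")
```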

Corpus evidence for word order freezing in Russian and German
Aleksandrs Berdicevskis | Alexander Piperski
Proceedings of the Fourth Workshop on Universal Dependencies (UDW 2020)

We use Universal Dependencies treebanks to test whether a well-known typological trade-off between word order freedom and richness of morphological marking of core arguments holds within individual languages. Using Russian and German treebank data, we show that the following phenomenon (sometimes dubbed word order freezing) does occur: sentences whose core arguments cannot be distinguished by morphological means (due to case syncretism or other kinds of ambiguity) have a more rigid order of subject, verb and object than those where unambiguous morphological marking is present. In ambiguous clauses, the word order also more often matches the default or dominant (most frequent) order of the language. While Russian and German differ with respect to how exactly they mark core arguments, the effect of morphological ambiguity is significant in both languages. It is, however, small, suggesting that languages do adapt to the evolutionary pressure towards communicative efficiency and avoidance of redundancy, but that this pressure is weak in this particular respect.
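
A minimal sketch of how such counts can be extracted from a UD treebank with the `conllu` package is given below. The ambiguity test is a crude placeholder (identical Case feature values) and the file path is hypothetical; the paper's actual criteria for morphological ambiguity are necessarily more elaborate.

```python
# A minimal sketch: collect subject/object linear order from a UD
# treebank, split by a (placeholder) morphological-ambiguity test.
from conllu import parse_incr

def is_ambiguous(subj, obj):
    # Placeholder criterion: identical Case values count as ambiguous.
    feats_s = subj["feats"] or {}
    feats_o = obj["feats"] or {}
    return feats_s.get("Case") == feats_o.get("Case")

orders = {"ambiguous": [], "unambiguous": []}
with open("ru_syntagrus-ud-train.conllu", encoding="utf-8") as f:  # hypothetical path
    for sentence in parse_incr(f):
        verbs = {}
        for tok in sentence:
            if tok["deprel"] in ("nsubj", "obj"):
                verbs.setdefault(tok["head"], {})[tok["deprel"]] = tok
        for deps in verbs.values():
            if "nsubj" in deps and "obj" in deps:
                s, o = deps["nsubj"], deps["obj"]
                order = "SO" if s["id"] < o["id"] else "OS"
                key = "ambiguous" if is_ambiguous(s, o) else "unambiguous"
                orders[key].append(order)
```

Comparing the proportion of the dominant order between the two groups then tests for freezing.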

Subjects tend to be coded only once: Corpus-based and grammar-based evidence for an efficiency-driven trade-off
Aleksandrs Berdicevskis | Karsten Schmidtke-Bode | Ilja Seržant
Proceedings of the 19th International Workshop on Treebanks and Linguistic Theories

2018

Using Universal Dependencies in cross-linguistic complexity research
Aleksandrs Berdicevskis | Çağrı Çöltekin | Katharina Ehret | Kilu von Prince | Daniel Ross | Bill Thompson | Chunxiao Yan | Vera Demberg | Gary Lupyan | Taraka Rama | Christian Bentz
Proceedings of the Second Workshop on Universal Dependencies (UDW 2018)

We evaluate corpus-based measures of linguistic complexity obtained using Universal Dependencies (UD) treebanks. We propose a method of estimating the robustness of the complexity values obtained with a given measure and a given treebank. The results indicate that measures of syntactic complexity might be on average less robust than those of morphological complexity. We also estimate the validity of complexity measures by comparing the results for very similar languages and checking for unexpected differences. We show that some of the differences that arise can be diminished by using parallel treebanks and, more importantly from the practical point of view, by harmonizing language-specific solutions in the UD annotation.
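
As an example of what a corpus-based complexity measure can look like, the sketch below computes a simple morphological proxy from a UD treebank: the ratio of distinct word forms to distinct lemmas (more inflection means more forms per lemma). This measure is chosen here for illustration and is not necessarily one of those evaluated in the paper.

```python
# A minimal sketch of a simple morphological complexity proxy
# over a UD treebank (illustrative measure, assumed here).
from conllu import parse_incr

def form_lemma_ratio(path):
    forms, lemmas = set(), set()
    with open(path, encoding="utf-8") as f:
        for sentence in parse_incr(f):
            for tok in sentence:
                if isinstance(tok["id"], int):  # skip multiword-token ranges
                    forms.add(tok["form"].lower())
                    lemmas.add((tok["lemma"] or "").lower())
    return len(forms) / len(lemmas)

# Comparing the value across treebanks of very similar languages is one
# way to check a measure's validity, as the abstract describes.
```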

2016

Automatic parsing as an efficient pre-annotation tool for historical texts
Hanne Martine Eckhoff | Aleksandrs Berdičevskis
Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities (LT4DH)

Historical treebanks tend to be manually annotated, which is not surprising, since state-of-the-art parsers are not accurate enough to ensure high-quality annotation of historical texts. We test whether automatic parsing can serve as an efficient pre-annotation tool for Old East Slavic texts. We use the TOROT treebank from the PROIEL treebank family, converting the PROIEL format to the CoNLL format and using MaltParser to create the syntactic pre-annotation. Under the most conservative evaluation method, which takes PROIEL-specific features into account, MaltParser by itself yields an unlabelled attachment score of 0.845, a labelled attachment score of 0.779 and a secondary dependency accuracy of 0.741 (note, though, that the test set comes from a relatively simple genre and contains rather short sentences). Experiments with human annotators show that preparsing, if limited to sentences where no changes to word or sentence boundaries are required, increases their annotation rate. For experienced annotators the speed gain ranges from 5.80% to 16.57%, for inexperienced annotators from 14.61% to 32.17% (using conservative estimates). We find no strong or reliable differences in annotation accuracy, which means there is no reason to suspect that preparsing might lower the final annotation quality.
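
For reference, the attachment-score metrics mentioned above can be computed as below from parallel gold and predicted (head, deprel) pairs. This is the standard definition of UAS/LAS; the PROIEL-specific evaluation in the paper is stricter.

```python
# A minimal sketch of the standard attachment-score metrics.
def attachment_scores(gold, pred):
    """gold, pred: lists of (head, deprel) tuples, one per token."""
    assert len(gold) == len(pred)
    uas_hits = sum(g[0] == p[0] for g, p in zip(gold, pred))  # head only
    las_hits = sum(g == p for g, p in zip(gold, pred))        # head + label
    n = len(gold)
    return uas_hits / n, las_hits / n

gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (2, "obl")]
print(attachment_scores(gold, pred))  # (1.0, 0.666...)
```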

Learning pressures reduce morphological complexity: Linking corpus, computational and experimental evidence
Christian Bentz | Aleksandrs Berdicevskis
Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity (CL4LC)

The morphological complexity of languages differs widely and changes over time. Pathways of change are often driven by the interplay of multiple competing factors and are hard to disentangle. Here we focus on a paradigmatic scenario of language change: the reduction of morphological complexity from Latin towards the Romance languages. To establish a causal explanation for this phenomenon, we employ three lines of evidence: 1) analyses of parallel corpora to measure the complexity of words in actual language production, 2) applications of NLP tools to further tease apart the contribution of inflectional morphology to word complexity, and 3) experimental data from artificial language learning, which illustrate the learning pressures at play when morphology simplifies. These three lines of evidence converge to show that pressures associated with imperfect language learning are good candidates to causally explain the reduction in morphological complexity in the Latin-to-Romance scenario. More generally, we argue that combining corpus, computational and experimental evidence is the way forward in historical linguistics and linguistic typology.
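
As an illustration of line of evidence 1), a common way to quantify word-level complexity in parallel corpora is unigram word entropy (morphologically richer languages tend to show higher values), sketched below on toy data. This is an example in the spirit of the paper's corpus analyses, not its exact measure.

```python
# A minimal sketch of unigram word entropy as a complexity proxy
# (illustrative measure on toy data, assumed here).
import math
from collections import Counter

def word_entropy(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

latin = "puella puellam puellae rosam rosa rosarum".split()    # toy data
italian = "la ragazza la ragazza delle rose la rosa".split()   # toy data
print(word_entropy(latin), word_entropy(italian))
```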

2015

Estimating Grammeme Redundancy by Measuring Their Importance for Syntactic Parser Performance
Aleksandrs Berdicevskis
Proceedings of the Sixth Workshop on Cognitive Aspects of Computational Language Learning