Arturs Znotins

Also published as: Artūrs Znotiņš


2022

Latvian National Corpora Collection – Korpuss.lv
Baiba Saulite | Roberts Darģis | Normunds Gruzitis | Ilze Auzina | Kristīne Levāne-Petrova | Lauma Pretkalniņa | Laura Rituma | Peteris Paikens | Arturs Znotins | Laine Strankale | Kristīne Pokratniece | Ilmārs Poikāns | Guntis Barzdins | Inguna Skadiņa | Anda Baklāne | Valdis Saulespurēns | Jānis Ziediņš
Proceedings of the Thirteenth Language Resources and Evaluation Conference

LNCC is a diverse collection of Latvian language corpora representing both written and spoken language, useful for both linguistic research and language modelling. The collection is intended to cover diverse Latvian language use cases and all the important text types and genres (e.g. news, social media, blogs, books, scientific texts, debates, essays, etc.), taking into account both quality and size. To reach this objective, LNCC is a continuous multi-institutional and multi-project effort, supported by the Digital Humanities and Language Technology communities in Latvia. LNCC includes a broad range of Latvian texts from the Latvian National Library, the Culture Information Systems Centre, the Latvian National News Agency, the Latvian Parliament, a Latvian web crawl, various Latvian publishers, and the Latvian language corpora created by the Institute of Mathematics and Computer Science and its partners, including spoken language corpora. All corpora of LNCC are re-annotated with a uniform morpho-syntactic annotation scheme, which enables federated search and consistent linguistic analysis across all LNCC corpora, and makes it easier to select and mix various corpora for pre-training large Latvian language models such as BERT and GPT.

2021

Domain Expert Platform for Goal-Oriented Dialog Collection
Didzis Goško | Arturs Znotins | Inguna Skadina | Normunds Gruzitis | Gunta Nešpore-Bērzkalne
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations

Today, most dialogue systems are fully or partly built using neural network architectures. A crucial prerequisite for creating a goal-oriented neural network dialogue system is a dataset that represents typical dialogue scenarios and includes various semantic annotations, e.g. intents, slots and dialogue actions, that are necessary for training a particular neural network architecture. In this demonstration paper, we present an easy-to-use interface, oriented to domain experts, and its back-end for the collection of goal-oriented dialogue samples. The platform not only allows sample dialogues to be collected or written in a structured way, but also provides a means for simple annotation and interpretation of the dialogues. The platform itself is language-independent; it depends only on the availability of the required language processing components for a specific language. It is currently being used to collect dialogue samples in Latvian (a highly inflected language) that represent typical communication between students and the student service.
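
The abstract mentions dialogue samples annotated with intents, slots and dialogue actions. Below is a minimal sketch of how one such sample could be represented in code; the class and field names, the intent label and the Latvian example utterances are illustrative assumptions, not the platform's actual schema.

# Hypothetical representation of one collected goal-oriented dialogue sample
# with intent, slot and dialogue-action annotations (illustrative only).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Turn:
    speaker: str                                          # "user" or "system"
    text: str                                             # raw utterance
    intent: str = ""                                      # e.g. "ask_exam_date" (made-up label)
    slots: Dict[str, str] = field(default_factory=dict)   # slot name -> value
    dialogue_action: str = ""                             # e.g. "request", "inform"

@dataclass
class DialogueSample:
    domain: str
    turns: List[Turn] = field(default_factory=list)

sample = DialogueSample(
    domain="student_service",
    turns=[
        Turn("user", "Kad notiks eksāmens datu bāzēs?",
             intent="ask_exam_date", slots={"course": "datu bāzes"},
             dialogue_action="request"),
        Turn("system", "Eksāmens notiks 14. jūnijā.", dialogue_action="inform"),
    ],
)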

2018

Creation of a Balanced State-of-the-Art Multilayer Corpus for NLU
Normunds Gruzitis | Lauma Pretkalnina | Baiba Saulite | Laura Rituma | Gunta Nespore-Berzkalne | Arturs Znotins | Peteris Paikens
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

Multilingual Clustering of Streaming News
Sebastião Miranda | Artūrs Znotiņš | Shay B. Cohen | Guntis Barzdins
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Clustering news across languages enables efficient media monitoring by aggregating articles from multilingual sources into coherent stories. Doing so in an online setting allows scalable processing of massive news streams. To this end, we describe a novel method for clustering an incoming stream of multilingual documents into monolingual and crosslingual clusters. Unlike typical clustering approaches that report results on datasets with a small and known number of labels, we tackle the problem of discovering an ever-growing number of cluster labels in an online fashion, using real news datasets in multiple languages. In our formulation, the monolingual clusters group together documents, while the crosslingual clusters group together monolingual clusters, one per language that appears in the stream. Our method is simple to implement, computationally efficient, and produces state-of-the-art results on datasets in German, English and Spanish.
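
To make the two-level online formulation concrete, here is a minimal sketch under simplifying assumptions: documents arrive as cross-lingually comparable embedding vectors, similarity is plain cosine, and fixed thresholds decide when to open a new cluster or story. The paper's actual model uses richer features and learned scoring; all names and thresholds below are assumptions for illustration only.

import numpy as np

mono = []     # monolingual clusters: {"lang", "centroid", "n", "story"}
stories = []  # crosslingual clusters (stories): {lang: index of its monolingual cluster}

def _cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def add_document(vec, lang, t_mono=0.6, t_cross=0.5):
    # Monolingual step: attach the document to the most similar cluster of the
    # same language, or open a new cluster if nothing is similar enough.
    cands = [(i, _cos(vec, c["centroid"])) for i, c in enumerate(mono)
             if c["lang"] == lang]
    idx, sim = max(cands, key=lambda x: x[1], default=(None, -1.0))
    if sim >= t_mono:
        c = mono[idx]
        c["centroid"] = (c["centroid"] * c["n"] + vec) / (c["n"] + 1)
        c["n"] += 1
    else:
        mono.append({"lang": lang, "centroid": vec, "n": 1, "story": None})
        idx = len(mono) - 1

    # Crosslingual step: a story holds at most one monolingual cluster per
    # language; link an unassigned cluster to the most similar story, or
    # start a new story if no story is similar enough.
    if mono[idx]["story"] is None:
        scores = [(s_i, max(_cos(mono[idx]["centroid"], mono[j]["centroid"])
                            for j in s.values()))
                  for s_i, s in enumerate(stories) if lang not in s]
        s_idx, s_sim = max(scores, key=lambda x: x[1], default=(None, -1.0))
        if s_sim >= t_cross:
            stories[s_idx][lang] = idx
            mono[idx]["story"] = s_idx
        else:
            stories.append({lang: idx})
            mono[idx]["story"] = len(stories) - 1
    return idx, mono[idx]["story"]

# Usage: feed documents one by one, e.g. add_document(np.random.rand(300), "en").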

2014

Coreference Resolution for Latvian
Artūrs Znotiņš | Pēteris Paikens
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

Coreference resolution (CR) is a current problem in natural language processing (NLP) research and a key task in applications such as question answering, text summarization and information extraction, for which text understanding is of crucial importance. We describe an implementation of coreference resolution tools for the Latvian language, developed as part of a tool chain for newswire text analysis but also usable as a separate, publicly available module. LVCoref is a rule-based CR system that uses an entity-centric model, which encourages the sharing of information across all mentions that point to the same real-world entity. The system is intended to provide a starting point for further experiments and a reference baseline against which more advanced rule-based and machine learning based coreference resolvers can be compared. It currently reaches a 66.6% F-score using predicted mentions and a 78.1% F-score using gold mentions. This paper describes current efforts to create a CR system and to improve NER performance for Latvian. The task also includes the creation of a corpus of manually annotated coreference relations.
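
The entity-centric idea mentioned above, pooling information across all mentions already linked to the same entity so that later rules see the whole entity rather than a single mention, can be sketched as follows. This is an illustrative toy, not LVCoref itself; the rule, attribute names and example mentions are assumptions.

# Toy entity-centric, rule-based resolution: mentions are merged into entities,
# and rules compare pooled entity attributes rather than isolated mentions.
class Entity:
    def __init__(self, mention):
        self.mentions = [mention]
        self.attributes = dict(mention)   # pooled attributes of all mentions

    def merge(self, other):
        self.mentions.extend(other.mentions)
        for key, value in other.attributes.items():
            self.attributes.setdefault(key, value)

def exact_head_match(e1, e2):
    return e1.attributes.get("head") == e2.attributes.get("head")

def resolve(mentions, rules):
    entities = [Entity(m) for m in mentions]
    for rule in rules:                    # rules applied in precision order
        i = 0
        while i < len(entities):
            j = i + 1
            while j < len(entities):
                if rule(entities[i], entities[j]):
                    entities[i].merge(entities[j])
                    del entities[j]
                else:
                    j += 1
            i += 1
    return entities

# Usage: each mention is a small attribute dict extracted from parsed text.
ments = [{"head": "Riga", "type": "NE"}, {"head": "pilsēta"}, {"head": "Riga"}]
print(len(resolve(ments, [exact_head_match])))   # -> 2 entities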

Dependency parsing representation effects on the accuracy of semantic applications — an example of an inflective language
Lauma Pretkalniņa | Artūrs Znotiņš | Laura Rituma | Didzis Goško
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

In this paper we investigate how different dependency representations of a treebank influence the accuracy of a dependency parser trained on that treebank, as well as the impact on several downstream applications: named entity recognition, coreference resolution and limited semantic role labeling. For these experiments we use the Latvian Treebank, whose native annotation format is a dependency-based hybrid augmented with phrase-like elements. We explore different representations of coordination, complex predicates and punctuation mark attachment. Our experiments show that parsers trained on the variously transformed treebanks vary significantly in accuracy, but the parser that performs best as measured by attachment score does not always lead to the best accuracy for an end application.
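
As a concrete illustration of what "different representations of coordination" means in practice, the same phrase can be encoded with different head choices before parser training. The sketch below is a toy example under common conventions, not the paper's exact transformation schemes, using the Latvian phrase "kaķi un suņi" ("cats and dogs").

# Tokens: 1 kaķi, 2 un, 3 suņi; each dict maps token id -> head id (0 = root).
# Option A: the conjunction "un" heads both conjuncts.
conjunction_headed = {1: 2, 2: 0, 3: 2}
# Option B: the first conjunct heads both the conjunction and the second conjunct.
first_conjunct_headed = {1: 0, 2: 1, 3: 1}
# A treebank transformation rewrites all coordinations from one convention to the
# other before training, and parser accuracy is then compared across variants.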