Ekaterina Goliakova


2025

Metric assessment protocol in the context of answer fluctuation on MCQ tasks
Ekaterina Goliakova | Xavier Renard | Marie-Jeanne Lesot | Thibault Laugel | Christophe Marsala | Marcin Detyniecki
Proceedings of the Fourth Workshop on Generation, Evaluation and Metrics (GEM²)

Using multiple-choice questions (MCQs) has become a standard way to assess LLM capabilities efficiently. A variety of metrics can be employed for this task, but previous research has not assessed them thoroughly. At the same time, MCQ evaluation suffers from answer fluctuation: models produce different results given slight changes in prompts. We propose a metric assessment protocol in which evaluation methodologies are analyzed through their connection with fluctuation rates, as well as with original performance. Our results show a strong link between existing metrics and answer fluctuation, even when the metrics are computed without any additional prompt variants. The highest association under the protocol is achieved by a novel metric, worst accuracy.
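The abstract does not spell out how worst accuracy is computed; one plausible reading, sketched below under that assumption, is that a question counts as correct only if the model answers it correctly under every prompt variant. The function and data names here are illustrative, not from the paper.

```python
# Hypothetical sketch of a "worst accuracy" metric over prompt variants:
# a question is scored correct only if the model's answer matches the
# gold answer under ALL prompt variants.

def worst_accuracy(answers, gold):
    """answers: dict question_id -> list of answers (one per prompt variant);
    gold: dict question_id -> correct answer."""
    correct = sum(
        all(a == gold[q] for a in variant_answers)
        for q, variant_answers in answers.items()
    )
    return correct / len(gold)

# Toy example with three questions and three prompt variants each.
answers = {
    "q1": ["B", "B", "B"],  # stable and correct
    "q2": ["A", "C", "A"],  # fluctuates across variants
    "q3": ["D", "D", "D"],  # stable but wrong
}
gold = {"q1": "B", "q2": "A", "q3": "A"}
print(worst_accuracy(answers, gold))  # 1/3 under this definition
```

Under this reading, the metric penalizes fluctuation directly, which would be consistent with its strong association with fluctuation rates.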

2024

What do BERT Word Embeddings Learn about the French Language?
Ekaterina Goliakova | David Langlois
Proceedings of the Sixth International Conference on Computational Linguistics in Bulgaria (CLIB 2024)

Pre-trained word embeddings (for example, from BERT-like models) have been used successfully in a variety of downstream tasks. However, do all embeddings obtained from models of the same architecture encode information in the same way? Does the size of the model correlate with the quality of the encoding? In this paper, we dissect the dimensions of several BERT-like models trained on French to find where grammatical information (gender, plurality, part of speech) and semantic features might be encoded. In addition, we propose a framework for comparing the quality of encoding in different models.
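The abstract does not specify how individual dimensions are probed; a minimal sketch of one common approach, assuming synthetic data and a simple effect-size score (not the paper's actual method), ranks each dimension by how well it separates two values of a binary grammatical feature such as gender.

```python
import numpy as np

# Hypothetical sketch: score each embedding dimension for a binary
# grammatical feature by the difference in class means, normalised by
# the pooled standard deviation. Embeddings and labels are synthetic.

rng = np.random.default_rng(0)
n, d = 200, 16                       # 200 tokens, 16-dim toy embeddings
emb = rng.normal(size=(n, d))
labels = rng.integers(0, 2, size=n)  # 0 = masculine, 1 = feminine (toy)
emb[labels == 1, 3] += 2.0           # plant a gender signal in dimension 3

def dimension_scores(emb, labels):
    a, b = emb[labels == 0], emb[labels == 1]
    pooled = np.sqrt((a.var(axis=0) + b.var(axis=0)) / 2) + 1e-9
    return np.abs(a.mean(axis=0) - b.mean(axis=0)) / pooled

scores = dimension_scores(emb, labels)
print(int(scores.argmax()))  # recovers dimension 3, where the signal was planted
```

Comparing such per-dimension score profiles across models of different sizes is one way the paper's cross-model comparison could be framed.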