Mikaela Keller


2024

To Word Senses and Beyond: Inducing Concepts with Contextualized Language Models
Bastien Liétard | Pascal Denis | Mikaela Keller
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Polysemy and synonymy are two crucial, interrelated facets of lexical ambiguity. While both phenomena are widely documented in lexical resources and have been studied extensively in NLP, leading to dedicated systems, they are often considered independently in practical problems. While many tasks dealing with polysemy (e.g., Word Sense Disambiguation or Induction) highlight the role of a word's senses, the study of synonymy is rooted in the study of concepts, i.e., meanings shared across the lexicon. In this paper, we introduce Concept Induction, the unsupervised task of learning a soft clustering among words that defines a set of concepts directly from data. This task generalizes Word Sense Induction. We propose a bi-level approach to Concept Induction that leverages both a local lemma-centric view and a global cross-lexicon view to induce concepts. We evaluate the obtained clustering on SemCor's annotated data and obtain good performance (BCubed F1 above 0.60). We find that the local and the global levels are mutually beneficial for inducing concepts, and also senses, in our setting. Finally, we create static embeddings representing our induced concepts and use them on the Word-in-Context task, obtaining performance competitive with the state of the art.
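
A note on the evaluation metric mentioned in the abstract: BCubed scores each item against the other members of its predicted cluster and of its gold class, then averages. Below is a minimal sketch for hard clusterings (the paper's soft-clustering setting calls for the extended BCubed variant); the word-occurrence items and labels are invented for illustration.

def bcubed_f1(pred, gold):
    # pred and gold map each item (e.g., a word occurrence) to a label.
    items = list(pred)
    precisions, recalls = [], []
    for e in items:
        cluster = [x for x in items if pred[x] == pred[e]]   # e's predicted cluster
        klass = [x for x in items if gold[x] == gold[e]]     # e's gold class
        correct = sum(1 for x in cluster if gold[x] == gold[e])
        precisions.append(correct / len(cluster))
        recalls.append(correct / len(klass))
    p = sum(precisions) / len(items)
    r = sum(recalls) / len(items)
    return 2 * p * r / (p + r)

# Toy items and labels, invented for illustration:
pred = {"bank.1": 0, "bank.2": 1, "shore.1": 0}
gold = {"bank.1": "riverside", "bank.2": "finance", "shore.1": "riverside"}
print(bcubed_f1(pred, gold))  # 1.0: this toy clustering matches the gold classes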

Towards an Onomasiological Study of Lexical Semantic Change Through the Induction of Concepts
Bastien Liétard | Mikaela Keller | Pascal Denis
Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change

2023

Fair Without Leveling Down: A New Intersectional Fairness Definition
Gaurav Maheshwari | Aurélien Bellet | Pascal Denis | Mikaela Keller
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

In this work, we consider the problem of intersectional group fairness in the classification setting, where the objective is to learn discrimination-free models in the presence of several intersecting sensitive groups. First, we illustrate various shortcomings of existing fairness measures commonly used to capture intersectional fairness. Then, we propose a new definition called the 𝛼-Intersectional Fairness, which combines the absolute and the relative performance across sensitive groups and can be seen as a generalization of the notion of differential fairness. We highlight several desirable properties of the proposed definition and analyze its relation to other fairness measures. Finally, we benchmark multiple popular in-processing fair machine learning approaches using our new fairness definition and show that they do not achieve any improvement over a simple baseline. Our results reveal that the increase in fairness measured by previous definitions hides a “leveling down” effect, i.e., degrading the best performance over groups rather than improving the worst one.
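
As background for the definition discussed above: the abstract positions 𝛼-Intersectional Fairness as a generalization of differential fairness (Foulds et al.), which, in its usual formulation, requires that for some 𝜀 ≥ 0 the outcome probabilities of a model M be multiplicatively close across all pairs of intersectional groups:

e^{-\epsilon} \le \frac{P(M(X) = y \mid s_i)}{P(M(X) = y \mid s_j)} \le e^{\epsilon} \quad \text{for all groups } s_i, s_j \text{ and outcomes } y.

Note this bounds only relative performance; the paper's 𝛼 definition additionally accounts for absolute performance across groups, and its exact form is given in the paper rather than reproduced here.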

A Tale of Two Laws of Semantic Change: Predicting Synonym Changes with Distributional Semantic Models
Bastien Liétard | Mikaela Keller | Pascal Denis
Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)

Lexical Semantic Change is the study of how the meaning of words evolves through time. A related question is whether and how lexical relations over pairs of words, such as synonymy, change over time. There are currently two competing, apparently opposite hypotheses in the historical linguistics literature regarding how synonymous words evolve: the Law of Differentiation (LD) argues that synonyms tend to take on different meanings over time, whereas the Law of Parallel Change (LPC) claims that synonyms tend to undergo the same semantic change and therefore remain synonyms. So far, there has been little research using distributional models to assess to what extent these laws apply to historical corpora. In this work, we take a first step toward detecting whether LD or LPC operates for given word pairs. After recasting the problem into a more tractable task, we combine two linguistic resources to propose the first complete evaluation framework for this problem, and we provide empirical evidence in favor of a dominance of LD. We then propose various computational approaches to the problem that use Distributional Semantic Models and are grounded in recent literature on Lexical Semantic Change detection. Our best approaches achieve a balanced accuracy above 0.6 on our dataset. We discuss challenges still faced by these approaches, such as polysemy or the potential confusion between synonymy and hypernymy.
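
To make the LD/LPC distinction concrete, here is a deliberately naive decision rule, not the approach of the paper: compare the embedding similarity of a synonym pair in two time slices and predict LD if the pair drifts apart, LPC otherwise. This assumes the two embedding spaces have already been aligned (e.g., via Orthogonal Procrustes); all names and the random vectors are placeholders.

import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def ld_or_lpc(emb_t1, emb_t2, w1, w2, threshold=0.0):
    # Toy rule: falling pair similarity between time slices -> LD
    # (differentiation); stable or rising similarity -> LPC.
    delta = cosine(emb_t2[w1], emb_t2[w2]) - cosine(emb_t1[w1], emb_t1[w2])
    return "LPC" if delta >= threshold else "LD"

# Placeholder random vectors; real use needs aligned diachronic embeddings.
rng = np.random.default_rng(0)
emb_t1 = {"mail": rng.normal(size=50), "post": rng.normal(size=50)}
emb_t2 = {"mail": rng.normal(size=50), "post": rng.normal(size=50)}
print(ld_or_lpc(emb_t1, emb_t2, "mail", "post"))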

2022

Fair NLP Models with Differentially Private Text Encoders
Gaurav Maheshwari | Pascal Denis | Mikaela Keller | Aurélien Bellet
Findings of the Association for Computational Linguistics: EMNLP 2022

Encoded text representations often capture sensitive attributes about individuals (e.g., race or gender), which raises privacy concerns and can make downstream models unfair to certain groups. In this work, we propose FEDERATE, an approach that combines ideas from differential privacy and adversarial training to learn private text representations that also induce fairer models. We empirically evaluate the trade-off between the privacy of the representations and the fairness and accuracy of the downstream model on four NLP datasets. Our results show that FEDERATE consistently improves upon previous methods, and thus suggest that privacy and fairness can positively reinforce each other.
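
For intuition only, here is a generic sketch of the two ingredients the abstract names: noise injection on the encoded representation (the differential-privacy idea) and an adversarial head trained to recover the sensitive attribute through gradient reversal. This is not the FEDERATE implementation; the class names, dimensions, and noise scale are placeholders, and calibrating the noise for a formal privacy guarantee is omitted.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; flips the gradient sign in the
    # backward pass, so the encoder learns to fool the adversary.
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class PrivateFairEncoder(nn.Module):  # hypothetical class, for illustration
    def __init__(self, dim_in=768, dim_z=128, n_classes=2, n_groups=2, sigma=0.1):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, dim_z), nn.ReLU())
        self.task_head = nn.Linear(dim_z, n_classes)  # downstream label
        self.adv_head = nn.Linear(dim_z, n_groups)    # sensitive attribute
        self.sigma = sigma

    def forward(self, x):
        z = self.encoder(x)
        z = z + self.sigma * torch.randn_like(z)  # noise on the representation
        return self.task_head(z), self.adv_head(GradReverse.apply(z))

model = PrivateFairEncoder()
task_logits, adv_logits = model(torch.randn(4, 768))
# Training would minimize CE(task_logits, y) + CE(adv_logits, s);
# the reversal pushes the encoder to strip information about s.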

2021

What Musical Knowledge Does Self-Attention Learn?
Gabriel Loiseau | Mikaela Keller | Louis Bigo
Proceedings of the 2nd Workshop on NLP for Music and Spoken Audio (NLP4MusA)

2006

Investigating Lexical Substitution Scoring for Subtitle Generation
Oren Glickman | Ido Dagan | Walter Daelemans | Mikaela Keller | Samy Bengio
Proceedings of the Tenth Conference on Computational Natural Language Learning (CoNLL-X)