Lina Conti


2025

The Unheard Alternative: Contrastive Explanations for Speech-to-Text Models
Lina Conti | Dennis Fucci | Marco Gaido | Matteo Negri | Guillaume Wisniewski | Luisa Bentivogli
Proceedings of the 8th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP

Contrastive explanations, which indicate why an AI system produced one output (the target) instead of another (the foil), are widely recognized in explainable AI as more informative and interpretable than standard explanations. However, obtaining such explanations for speech-to-text (S2T) generative models remains an open challenge. Adopting a feature attribution framework, we propose the first method to obtain contrastive explanations in S2T by analyzing how specific regions of the input spectrogram influence the choice between alternative outputs. Through a case study on gender translation in speech translation, we show that our method accurately identifies the audio features that drive the selection of one gender over another.
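The abstract does not spell out which attribution technique is used; as a rough, hypothetical illustration of contrastive feature attribution over a spectrogram, the sketch below scores the contrast logit(target) − logit(foil) at one decoding step with gradient × input. The model, function, and variable names (ToyS2T, contrastive_grad_x_input) are illustrative stand-ins, not the paper's implementation.

```python
import torch


def contrastive_grad_x_input(model, spectrogram, decoder_input_ids,
                             target_id, foil_id, step):
    """Gradient x input attribution for the contrast between two candidate
    tokens (target vs. foil) at decoding position `step`.

    `model` is any callable mapping a (1, n_mels, frames) spectrogram and
    decoder input ids to next-token logits of shape (1, seq_len, vocab).
    """
    spectrogram = spectrogram.clone().requires_grad_(True)
    logits = model(spectrogram, decoder_input_ids)            # (1, T, V)
    contrast = logits[0, step, target_id] - logits[0, step, foil_id]
    contrast.backward()
    # Positive cells: spectrogram regions pushing the model towards the
    # target rather than the foil; negative cells favour the foil.
    return (spectrogram.grad * spectrogram).detach().squeeze(0)


# Toy stand-in for an S2T encoder-decoder, just to make the sketch runnable.
class ToyS2T(torch.nn.Module):
    def __init__(self, n_mels=80, frames=100, vocab=1000):
        super().__init__()
        self.proj = torch.nn.Linear(n_mels * frames, vocab)

    def forward(self, spec, decoder_input_ids):
        scores = self.proj(spec.flatten(1))                   # (1, V)
        return scores.unsqueeze(1).expand(-1, decoder_input_ids.size(1), -1)


model = ToyS2T()
spec = torch.randn(1, 80, 100)
dec = torch.tensor([[1, 2, 3]])
attr = contrastive_grad_x_input(model, spec, dec, target_id=42, foil_id=7, step=2)
print(attr.shape)  # (80, 100): per mel-bin, per-frame contrastive relevance
```

In practice, the toy model would be replaced by a real S2T system, with the target and foil set to the competing gendered forms whose choice is being explained.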

2023

Using Artificial French Data to Understand the Emergence of Gender Bias in Transformer Language Models
Lina Conti | Guillaume Wisniewski
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Numerous studies have demonstrated the ability of neural language models to learn various linguistic properties without direct supervision. This work takes an initial step towards the less researched question of how neural models discover linguistic properties of words, such as gender, as well as the rules governing their usage. We propose to use an artificial corpus generated by a probabilistic context-free grammar (PCFG) modeled on French, which lets us precisely control the gender distribution in the training data and determine under which conditions a model correctly captures gender information or, conversely, exhibits gender bias.
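The paper's actual grammar is not reproduced here; the toy PCFG below is a hypothetical sketch of the general idea, using NLTK, where the production weights on the NP rules directly set the masculine/feminine ratio in the generated corpus.

```python
import random
from nltk import PCFG
from nltk.grammar import Nonterminal

# Hypothetical toy grammar inspired by French; the 0.7 / 0.3 weights on the
# NP rules control how often masculine vs. feminine nouns are generated.
grammar = PCFG.fromstring("""
    S  -> NP VP      [1.0]
    NP -> Dm Nm      [0.7]
    NP -> Df Nf      [0.3]
    Dm -> 'le'       [1.0]
    Df -> 'la'       [1.0]
    Nm -> 'chat'     [0.5]
    Nm -> 'chien'    [0.5]
    Nf -> 'souris'   [0.5]
    Nf -> 'tortue'   [0.5]
    VP -> 'dort'     [0.5]
    VP -> 'mange'    [0.5]
""")

def sample(symbol=grammar.start()):
    """Sample one derivation from the PCFG, top-down."""
    if not isinstance(symbol, Nonterminal):
        return [symbol]                      # terminal: emit the word
    prods = grammar.productions(lhs=symbol)
    prod = random.choices(prods, weights=[p.prob() for p in prods])[0]
    return [word for sym in prod.rhs() for word in sample(sym)]

corpus = [" ".join(sample()) for _ in range(5)]
print(corpus)   # e.g. ['le chat dort', 'la tortue mange', ...]
```

Varying the NP weights (or the lexicon attached to each gendered nonterminal) yields training corpora with controlled gender skew, against which a language model's behaviour can then be probed.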