Lina Conti


2025

Contrastive explanations, which indicate why an AI system produced one output (the target) instead of another (the foil), are widely recognized in explainable AI as more informative and interpretable than standard explanations. However, obtaining such explanations for speech-to-text (S2T) generative models remains an open challenge. Adopting a feature attribution framework, we propose the first method to obtain contrastive explanations in S2T by analyzing how specific regions of the input spectrogram influence the choice between alternative outputs. Through a case study on gender translation in speech translation, we show that our method accurately identifies the audio features that drive the selection of one gender over another.
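The contrastive attribution idea can be illustrated with a gradient × input saliency map computed on the logit difference between the target and the foil token. The sketch below is only an illustration under assumed shapes, with a toy stand-in model (ToyS2T) and the function contrastive_attribution being hypothetical names, not the paper's actual method or code.

```python
# Illustrative sketch (not the paper's code): contrastive gradient attribution
# on a spectrogram. The toy model and tensor shapes are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyS2T(nn.Module):
    """Toy stand-in for an S2T model: maps a (frames, mel_bins) spectrogram
    to pooled vocabulary logits. A real system would be an encoder-decoder."""
    def __init__(self, mel_bins=80, vocab_size=100):
        super().__init__()
        self.proj = nn.Linear(mel_bins, vocab_size)

    def forward(self, spectrogram):                # (frames, mel_bins)
        return self.proj(spectrogram).mean(dim=0)  # (vocab_size,)

def contrastive_attribution(model, spectrogram, target_id, foil_id):
    """Gradient x input of the logit difference (target - foil): positive
    regions push the model toward the target token, negative ones toward
    the foil."""
    spec = spectrogram.clone().requires_grad_(True)
    logits = model(spec)
    (logits[target_id] - logits[foil_id]).backward()
    return (spec.grad * spec).detach()             # (frames, mel_bins) saliency

model = ToyS2T()
spec = torch.randn(200, 80)                        # 200 frames, 80 mel bins
saliency = contrastive_attribution(model, spec, target_id=3, foil_id=7)
print(saliency.shape)
```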

2023

Numerous studies have demonstrated the ability of neural language models to learn various linguistic properties without direct supervision. This work takes an initial step towards exploring the less-studied question of how neural models discover linguistic properties of words, such as gender, as well as the rules governing their usage. We propose to use an artificial corpus generated by a probabilistic context-free grammar (PCFG) based on French to precisely control the gender distribution in the training data and determine under which conditions a model correctly captures gender information or, on the contrary, appears gender-biased.
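To make the setup concrete, the snippet below sketches how a tiny French-like PCFG could generate a corpus in which the masculine/feminine split is an explicit parameter. The grammar, the vocabulary, and the P_FEM knob are all hypothetical illustrations, not the grammar used in the study.

```python
# Illustrative sketch (not the paper's grammar): sampling sentences from a
# tiny French-like PCFG with a controllable gender distribution.
import random

random.seed(0)

P_FEM = 0.5  # hypothetical parameter controlling the gender distribution

# Each non-terminal maps to a list of (expansion, probability) pairs.
GRAMMAR = {
    "S":    [(["NP_m", "VP"], 1 - P_FEM), (["NP_f", "VP"], P_FEM)],
    "NP_m": [(["le", "N_m"], 1.0)],
    "NP_f": [(["la", "N_f"], 1.0)],
    "N_m":  [(["voisin"], 0.5), (["chat"], 0.5)],
    "N_f":  [(["voisine"], 0.5), (["chatte"], 0.5)],
    "VP":   [(["dort"], 0.5), (["mange"], 0.5)],
}

def expand(symbol):
    """Recursively expand a symbol; terminals are returned as-is."""
    if symbol not in GRAMMAR:
        return [symbol]
    expansions, probs = zip(*GRAMMAR[symbol])
    choice = random.choices(expansions, weights=probs, k=1)[0]
    return [word for sym in choice for word in expand(sym)]

corpus = [" ".join(expand("S")) for _ in range(5)]
print("\n".join(corpus))
```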