Francesca Padovani


2025

Dialogue Is Not Enough to Make a Communicative BabyLM (But Neither Is Developmentally Inspired Reinforcement Learning)
Francesca Padovani | Bastian Bunzeck | Manar Ali | Omar Momen | Arianna Bisazza | Hendrik Buschmeier | Sina Zarrieß
Proceedings of the First BabyLM Workshop

We investigate whether pre-training exclusively on dialogue data results in formally and functionally apt small language models. Building on the resulting pre-trained model, llamalogue, we employ a variety of fine-tuning strategies to encourage “more communicative” text generations from our models. Although our models underperform on most standard BabyLM benchmarks, they excel at dialogue continuation prediction in a minimal pair setting. While PPO fine-tuning has mixed to adverse effects on our models, DPO fine-tuning further improves their performance on our custom dialogue benchmark.
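
The dialogue continuation evaluation mentioned above can be understood as scoring two candidate continuations against the same dialogue history and checking whether the model prefers the attested one. Below is a minimal sketch of that setup, assuming a Hugging Face causal LM; the model name, dialogue history, and distractor are illustrative placeholders, not the paper's llamalogue checkpoint or its test items.

```python
# Minimal-pair dialogue continuation scoring (illustrative sketch, not the authors' code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder model; the paper's llamalogue checkpoint is not used here

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def continuation_logprob(history: str, continuation: str) -> float:
    """Sum of token log-probabilities of `continuation` conditioned on `history`."""
    # Assumes the tokenizer splits cleanly at the history/continuation seam
    # (true for a space-initial continuation with a byte-level BPE tokenizer).
    hist_ids = tokenizer(history, return_tensors="pt").input_ids
    full_ids = tokenizer(history + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    start = hist_ids.shape[1]
    cont_ids = full_ids[0, start:]
    # Each continuation token is predicted from the previous position's logits.
    scores = log_probs[0, start - 1 : full_ids.shape[1] - 1, :]
    return scores.gather(1, cont_ids.unsqueeze(1)).sum().item()

history = "A: Are you coming to the party tonight?\nB:"
attested = " Yes, I wouldn't miss it for anything."
distractor = " The train to Berlin leaves from platform nine."
# The model "passes" the item if it prefers the attested continuation.
print(continuation_logprob(history, attested) > continuation_logprob(history, distractor))
```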

TurBLiMP: A Turkish Benchmark of Linguistic Minimal Pairs
Ezgi Başar | Francesca Padovani | Jaap Jumelet | Arianna Bisazza
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

We introduce TurBLiMP, the first Turkish benchmark of linguistic minimal pairs, designed to evaluate the linguistic abilities of monolingual and multilingual language models (LMs). Covering 16 linguistic phenomena with 1000 minimal pairs each, TurBLiMP fills an important gap in linguistic evaluation resources for Turkish. In designing the benchmark, we give extra attention to two properties of Turkish that remain understudied in current syntactic evaluations of LMs, namely word order flexibility and subordination through morphological processes. Our experiments on a wide range of LMs and a newly collected set of human acceptability judgments reveal that even cutting-edge Large LMs still struggle with grammatical phenomena that are not challenging for humans, and may also exhibit different sensitivities to word order and morphological complexity compared to humans.
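
Minimal-pair benchmarks of this kind are conventionally scored by checking whether the LM assigns a higher probability to the grammatical sentence of each pair than to its ungrammatical counterpart. The sketch below illustrates that scoring scheme under that assumption; the model name and the Turkish example pair are hypothetical placeholders, not items from TurBLiMP.

```python
# Minimal-pair accuracy via whole-sentence log-probabilities (illustrative sketch).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; any monolingual or multilingual causal LM could be substituted
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Sum of token log-probabilities of the full sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    # Token at position i is predicted from the logits at position i-1.
    targets = ids[0, 1:]
    preds = log_probs[0, :-1, :]
    return preds.gather(1, targets.unsqueeze(1)).sum().item()

# Hypothetical (grammatical, ungrammatical) pairs, not benchmark items.
pairs = [
    ("Çocuk kitabı okudu.", "Çocuk kitabı okudular."),
]
accuracy = sum(
    sentence_logprob(good) > sentence_logprob(bad) for good, bad in pairs
) / len(pairs)
print(f"minimal-pair accuracy: {accuracy:.2f}")
```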

Child-Directed Language Does Not Consistently Boost Syntax Learning in Language Models
Francesca Padovani | Jaap Jumelet | Yevgen Matusevych | Arianna Bisazza
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Seminal work by Huebner et al. (2021) showed that language models (LMs) trained on English Child-Directed Language (CDL) can outperform LMs trained on an equal amount of adult-directed text like Wikipedia. However, it remains unclear whether these results generalize across languages, architectures, and evaluation settings. We test this by comparing models trained on CDL vs. Wikipedia across two LM objectives (masked and causal), three languages (English, French, German), and three syntactic minimal pair benchmarks. Our results on these benchmarks show inconsistent benefits of CDL, which in most cases is outperformed by Wikipedia models. We then identify various shortcomings in these benchmarks, and introduce a novel testing methodology, FIT-CLAMS, which uses a frequency-controlled design to enable balanced comparisons across training corpora. Through minimal pair evaluations and regression analysis we show that training on CDL does not yield stronger generalizations for acquiring syntax and highlight the importance of controlling for frequency effects when evaluating syntactic ability.
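
One common way to control for frequency effects when comparing training corpora, in the spirit of the frequency-controlled FIT-CLAMS design, is to regress per-item correctness on target-word frequency alongside a corpus indicator, so the corpus effect is estimated with frequency held constant. The sketch below shows such a regression on synthetic data; it is an assumed illustration of the general idea, not the paper's actual analysis pipeline, and all column names are hypothetical.

```python
# Frequency-controlled logistic regression on synthetic data (illustrative sketch only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "correct": rng.integers(0, 2, size=n),               # 1 if the model preferred the grammatical sentence
    "log_freq": rng.normal(loc=8.0, scale=2.0, size=n),  # log target-word frequency in the training corpus
    "corpus": rng.choice(["CDL", "Wikipedia"], size=n),  # training corpus of the evaluated model
})

# Does the training corpus still predict accuracy once lexical frequency is controlled for?
fit = smf.logit("correct ~ log_freq + C(corpus)", data=df).fit(disp=False)
print(fit.summary())
```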

2024

Automatic Text Simplification: A Comparative Study in Italian for Children with Language Disorders
Francesca Padovani | Caterina Marchesi | Eleonora Pasqua | Martina Galletti | Daniele Nardi
Proceedings of the 13th Workshop on Natural Language Processing for Computer Assisted Language Learning