Mattia Proietti
2026
Mechanistic Interpretability Meets Cognitive Linguistics: Modelling Locative Image Schemas in the Circuit Framework
Mattia Proietti | Afra Alishahi | Grzegorz Chrupała | Alessandro Lenci
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Large Language Models are often considered the best computational testbeds for linguistic theorisation at our disposal. However, their inner workings remain largely opaque, and the mechanisms behind their behaviour cannot always be easily connected with theoretical linguistic assumptions. Mechanistic Interpretability (MI) is emerging as a specialised field that reverse-engineers models’ internals and sheds light on the causal relationships happening under the hood. Nevertheless, MI is predominantly focused on AI safety problems, and attempts to understand linguistically motivated behaviours with these tools are still limited. In this work, we investigate whether an LLM, namely Llama-3.2-1B, has developed specialised mechanisms governing the selection of the locative preposition in simple copular clauses. To frame the problem as a next-token prediction objective, we introduce the Stranded Locative Preposition Selection task, along with a small dataset curated specifically to test it. We make use of several MI tools to scan the model’s internals and relate its mechanisms to classic theory in Cognitive Linguistics, which assumes that the two basic locative prepositions in and on are the respective linguistic encodings of two different Image Schemas: Containment and Surface.
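The next-token framing described in the abstract can be sketched as follows. This is an illustrative assumption, not the paper's actual code: given a causal LM's logits for the candidate prepositions at the stranded position, the selected preposition is simply the higher-probability one. The prompt format and logit values below are hypothetical stand-ins.

```python
import math

def select_preposition(logits: dict[str, float]) -> str:
    """Return the candidate preposition with the highest softmax probability.

    `logits` maps each candidate next token (e.g. "in", "on") to the score
    a causal LM would assign it at the stranded position.
    """
    # Numerically stable softmax over the candidate set
    z = max(logits.values())
    exp_scores = {w: math.exp(v - z) for w, v in logits.items()}
    total = sum(exp_scores.values())
    probs = {w: s / total for w, s in exp_scores.items()}
    return max(probs, key=probs.get)

# Toy scores standing in for an LLM's next-token logits at the blank in
# a stranded-preposition prompt such as "...the bowl the apple is ___"
print(select_preposition({"in": 3.2, "on": 1.1}))  # → in
```

In practice the logits would come from a forward pass of Llama-3.2-1B, restricted to the token ids of the two candidate prepositions; the accuracy of this selection over the curated dataset is what makes the behaviour testable as a next-token objective.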
2025
Leveraging LLMs to Build a Semi-synthetic Dataset for Legal Information Retrieval: A Case Study on the Italian Civil Code and GPT4-O
Mattia Proietti | Lucia C. Passaro | Alessandro Lenci
Proceedings of the Eleventh Italian Conference on Computational Linguistics (CLiC-it 2025)
2022
Does BERT Recognize an Agent? Modeling Dowty’s Proto-Roles with Contextual Embeddings
Mattia Proietti | Gianluca Lebani | Alessandro Lenci
Proceedings of the 29th International Conference on Computational Linguistics
Contextual embeddings build multidimensional representations of word tokens based on their context of occurrence. Such models have been shown to achieve state-of-the-art performance on a wide variety of tasks. Yet, the community struggles to understand what kind of semantic knowledge these representations encode. We report a series of experiments aimed at investigating to what extent one such model, BERT, is able to infer the semantic relations that, according to Dowty’s Proto-Roles theory, a verbal argument receives by virtue of its role in the event described by the verb. This hypothesis was put to the test by learning a linear mapping from BERT’s verb embeddings to an interpretable space of semantic properties built from the linguistic dataset by White et al. (2016). In a first experiment, we tested whether the semantic properties inferred from a typed version of the BERT embeddings would be more linguistically plausible than those produced by relying on static embeddings. We then moved on to evaluate the semantic properties inferred from the contextual embeddings both against those available in the original dataset and by assessing their ability to model the semantic properties possessed by the agents of verbs participating in the so-called causative alternation.
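The probing setup in the abstract, a linear map from contextual embeddings to an interpretable space of semantic properties, can be sketched with synthetic data. All sizes, the noise level, and the use of plain least squares are illustrative assumptions; the paper's actual embeddings, property inventory, and regression details are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_verbs, emb_dim, n_props = 200, 64, 5  # toy sizes, not the paper's

# Stand-ins for contextual verb embeddings X and Proto-Role-style
# property scores Y generated from a hidden linear map plus noise.
X = rng.normal(size=(n_verbs, emb_dim))
W_true = rng.normal(size=(emb_dim, n_props))
Y = X @ W_true + 0.01 * rng.normal(size=(n_verbs, n_props))

# Learn the linear mapping W_hat from embeddings to property scores
# by ordinary least squares.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Check how well the learned map predicts held-out property scores.
X_test = rng.normal(size=(50, emb_dim))
r = np.corrcoef((X_test @ W_hat).ravel(), (X_test @ W_true).ravel())[0, 1]
print(f"held-out correlation: {r:.2f}")
```

The same recipe applies when X holds real BERT token embeddings and Y holds annotated property ratings; the quality of the recovered map is then evaluated against the gold annotations rather than a known generator.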