Danilo Croce
2026
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 3: System Demonstrations)
Danilo Croce | Jochen Leidner | Nafise Sadat Moosavi
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 3: System Demonstrations)
2025
Modeling Background Knowledge with Frame Semantics for Fine-grained Sentiment Classification
Muhammad Okky Ibrohim | Valerio Basile | Danilo Croce | Cristina Bosco | Roberto Basili
Proceedings of the 2nd Workshop on Analogical Abstraction in Cognition, Perception, and Language (Analogy-Angle II)
Few-shot learning via in-context learning (ICL) is widely used in NLP, but its effectiveness is highly sensitive to example selection, often leading to unstable performance. To address this, we introduce BacKGen, a framework for generating structured Background Knowledge (BK) as an alternative to instance-based prompting. Our approach leverages Frame Semantics to uncover recurring conceptual patterns across data instances, clustering examples based on shared event structures and semantic roles. These patterns are then synthesized into generalized knowledge statements using a large language model (LLM) and injected into prompts to support contextual reasoning beyond surface-level cues. We apply BacKGen to Sentiment Phrase Classification (SPC), a task where polarity judgments frequently depend on implicit commonsense knowledge. In this setting, BK serves as an abstract representation of prototypical scenarios, enabling schematic generalization to help the model perform analogical reasoning by mapping new inputs onto generalized event structures. Experimental results with Mistral-7B and Llama3-8B demonstrate that BK-based prompting consistently outperforms standard few-shot approaches, achieving up to 29.94% error reduction.
Evaluating Large Language Models on Wikipedia Graph Navigation: Insights from the WikiGame
Daniele Margiotta | Danilo Croce | Roberto Basili
Proceedings of the Eleventh Italian Conference on Computational Linguistics (CLiC-it 2025)
Automatic GRI-SDG Annotation and LLM-Based Filtering for Sustainability Reports
Seyed Alireza Mousavian Anaraki | Danilo Croce | Roberto Basili
Proceedings of the Eleventh Italian Conference on Computational Linguistics (CLiC-it 2025)
Grounded Semantic Role Labelling from Synthetic Multimodal Data for Situated Robot Commands
Claudiu Daniel Hromei | Antonio Scaiella | Danilo Croce | Roberto Basili
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Understanding natural language commands in situated Human-Robot Interaction (HRI) requires linking linguistic input to perceptual context. Traditional symbolic parsers lack the flexibility to operate in complex, dynamic environments. We introduce a novel Multimodal Grounded Semantic Role Labelling (G-SRL) framework that combines frame semantics with perceptual grounding, enabling robots to interpret commands via multimodal logical forms. Our approach leverages modern Visual Language Models (VLLMs), which jointly process text and images, and is supported by an automated pipeline that generates high-quality training data. Structured command annotations are converted into photorealistic scenes via LLM-guided prompt engineering and diffusion models, then rigorously validated through object detection and visual question answering. The pipeline produces over 11,000 image-command pairs (3,500+ manually validated), approaching the quality of manually curated datasets at significantly lower cost.
Sanskrit Voyager: Unified Web Platform for Interactive Reading and Linguistic Analysis of Sanskrit Texts
Giacomo De Luca | Danilo Croce | Roberto Basili
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Sanskrit Voyager is a web application for searching, reading, and analyzing texts in the Sanskrit literary corpus. Unlike previous tools that require expert linguistic knowledge or manual normalization, Sanskrit Voyager enables users to search for words and phrases as they actually appear in texts, handling inflection, sandhi, and compound forms automatically while supporting any transliteration. The system integrates four core functionalities: (1) multi-dictionary lookup with morphological analysis and inflection tables; (2) real-time text parsing and annotation; (3) an interactive reader for over 900 digitized texts; and (4) advanced corpus search with fuzzy matching and filtering. Evaluation shows over 92% parsing accuracy on complex compounds and substantially higher search recall than BuddhaNexus on challenging queries. The source code is publicly available under a CC-BY-NC license; the system is resource-efficient and designed for both learners and researchers, offering the first fully integrated, user-friendly platform for computational Sanskrit studies.
Training Multi-Modal LLMs through Dialogue Planning for HRI
Claudiu Daniel Hromei | Federico Borazio | Andrea Sensi | Elisa Passone | Danilo Croce | Roberto Basili
Findings of the Association for Computational Linguistics: ACL 2025
Grounded natural language understanding in Human-Robot Interaction (HRI) requires integrating linguistic, visual, and world knowledge to ensure effective task execution. We propose an approach that enhances Multi-Modal Large Language Models (MLLMs) with a novel explicit dialogue planning phase, allowing robotic agents to systematically refine their understanding of ambiguous commands through structured clarification steps. This reduces hallucinations and improves task feasibility. To evaluate this approach, we introduce a novel dataset of over 1,100 annotated dialogues in English and Italian, designed for fine-tuning and assessing Multi-Modal models in HRI scenarios. Experimental results show that dialogue planning improves response accuracy and quality, and contributes to cross-lingual generalisation, enabling models trained in one language to transfer effectively to another. To the best of our knowledge, this is the first application of structured, goal-driven, and explicit dialogue planning in Multi-Modal LLMs for grounded interaction.
Unsupervised Sustainability Report Labeling based on the integration of the GRI and SDG standards
Seyed Alireza Mousavian Anaraki | Danilo Croce | Roberto Basili
Proceedings of the Fourth Workshop on NLP for Positive Impact (NLP4PI)
Sustainability reports are key instruments for communicating corporate impact, but their unstructured format and varied content pose challenges for large-scale analysis. This paper presents an unsupervised method to annotate paragraphs from sustainability reports against both the Global Reporting Initiative (GRI) and Sustainable Development Goals (SDG) standards. The approach combines structured metadata from GRI content indexes, official GRI–SDG mappings, and text semantic similarity models to produce weakly supervised annotations at scale. To evaluate the quality of these annotations, we train a multi-label classifier on the automatically labeled data and evaluate it on the trusted OSDG Community Dataset. The results show that our method yields meaningful labels and improves classification performance when combined with human-annotated data. Although preliminary, this work offers a foundation for scalable sustainability analysis and opens future directions toward assessing the credibility and depth of corporate sustainability claims.
Injecting Frame Semantics into Large Language Models via Prompt-Based Fine-Tuning
Shahid Iqbal Rai | Danilo Croce | Roberto Basili
Proceedings of the 14th Joint Conference on Lexical and Computational Semantics (*SEM 2025)
Large Language Models (LLMs) have demonstrated remarkable generalization across diverse NLP tasks, yet they often produce outputs lacking semantic coherence due to insufficient grounding in structured linguistic knowledge. This paper proposes a novel method for injecting Frame Semantics into a pretrained LLaMA model using Low-Rank Adaptation (LoRA). Leveraging FrameNet (a rich resource of over 1,000 semantic frames), we construct a training corpus comprising structured triples of frame definitions, frame elements, and lexical units. Our method encodes these examples into the model via LoRA adapters and evaluates performance using zero-shot prompting for textual entailment and semantic role labeling (SRL) over FrameNet. Experimental results show that our adapted frame-aware LLM substantially outperforms the baseline across closed, open-ended, and multiple-choice prompts. Moreover, we observe significant improvements in SRL accuracy, demonstrating the efficacy of combining frame-semantic theory with parameter-efficient fine-tuning.
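The LoRA mechanism mentioned in the abstract can be sketched in a few lines of NumPy: the frozen pretrained weight W is augmented with a trainable low-rank update scaled by alpha/r, so only the small factors A and B receive gradients. All dimensions and values below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, alpha = 64, 4, 8           # hidden size, LoRA rank, scaling (illustrative)
W = rng.standard_normal((d, d))  # frozen pretrained weight

# Trainable low-rank factors: B starts at zero, so the adapted
# layer initially computes exactly what the pretrained one does.
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))

def lora_forward(x, W, A, B, alpha, r):
    """y = x @ (W + (alpha/r) * B @ A).T, without materializing the sum."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.standard_normal((2, d))
y = lora_forward(x, W, A, B, alpha, r)

full_params = W.size           # 4096 weights in the frozen matrix
lora_params = A.size + B.size  # only 512 trainable weights in the adapter
print(y.shape, lora_params, full_params)
```

Because B is initialized to zero, the adapted model starts out identical to the base model and only diverges as the adapter is trained.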
2024
La Non Canonica L’hai Studiata? Exploring LLMs and Sentence Canonicity in Italian
Claudiu Daniel Hromei | Danilo Croce | Rodolfo Delmonte | Roberto Basili
Proceedings of the Tenth Italian Conference on Computational Linguistics (CLiC-it 2024)
This paper investigates the ability of Large Language Models (LLMs) to differentiate between canonical and non-canonical sentences in Italian, employing advanced neural architectures like LLaMA and its adaptations. Canonical sentences adhere to the standard Subject-Verb-Object (SVO) structure. We hypothesize that recent generative LLMs are influenced heavily by the English language, where non-canonical structures are very rare. Using the in-context learning technique, we probe these models and further fine-tune them for this specific task. Initial results indicate that these models continue to struggle with this task even after fine-tuning. Additionally, we introduce a new dataset comprising several hundred sentences from the poetry domain, which presents significant challenges for the canonical structure task.
Leveraging Large Language Models for Fact Verification in Italian
Antonio Scaiella | Stefano Costanzo | Elisa Passone | Danilo Croce | Giorgio Gambosi
Proceedings of the Tenth Italian Conference on Computational Linguistics (CLiC-it 2024)
In recent years, Automatic Fact Checking has become a crucial tool in combating fake news, leveraging AI to verify the accuracy of information. Despite significant advancements, most datasets and models are predominantly available in English, posing challenges for other languages. This paper presents an Italian resource based on the dataset made available in the FEVER evaluation campaign, created to train and evaluate fact-checking models in Italian. The dataset comprises approximately 240k examples, with over 2k test examples manually validated. Additionally, we fine-tuned a state-of-the-art LLM, namely LLaMA3, on both the original English and translated Italian datasets, demonstrating that fine-tuning significantly improves model performance. Our results suggest that the fine-tuned models achieve comparable accuracy in both languages, highlighting the value of the proposed resource.
CALAMITA: Challenge the Abilities of LAnguage Models in ITAlian
Giuseppe Attanasio | Pierpaolo Basile | Federico Borazio | Danilo Croce | Maria Francis | Jacopo Gili | Elio Musacchio | Malvina Nissim | Viviana Patti | Matteo Rinaldi | Daniel Scalena
Proceedings of the Tenth Italian Conference on Computational Linguistics (CLiC-it 2024)
The rapid development of Large Language Models (LLMs) has called for robust benchmarks to assess their abilities, track progress, and compare iterations. While existing benchmarks provide extensive evaluations across diverse tasks, they predominantly focus on English, leaving other languages underserved. For Italian, the EVALITA campaigns have provided a long-standing tradition of classification-focused shared tasks. However, their scope does not fully align with the nuanced evaluation required for modern LLMs. To address this gap, we introduce “Challenge the Abilities of LAnguage Models in ITAlian” (CALAMITA), a collaborative effort to create a dynamic and growing benchmark tailored to Italian. CALAMITA emphasizes diversity in task design to test a wide range of LLM capabilities through resources natively developed in Italian by the community. This initiative includes a shared platform, live leaderboard, and centralized evaluation framework. This paper outlines the collaborative process, initial challenges, and evaluation framework of CALAMITA.
MM-IGLU: Multi-Modal Interactive Grounded Language Understanding
Claudiu Daniel Hromei | Daniele Margiotta | Danilo Croce | Roberto Basili
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
This paper explores Interactive Grounded Language Understanding (IGLU) challenges within Human-Robot Interaction (HRI). In this setting, a robot interprets user commands related to its environment, aiming to discern whether a specific command can be executed. If faced with ambiguities or incomplete data, the robot poses relevant clarification questions. Drawing from the NeurIPS 2022 IGLU competition, we enrich the dataset by introducing our multi-modal data and natural language descriptions in MM-IGLU: Multi-Modal Interactive Grounded Language Understanding. Utilizing a BART-based model that integrates the user’s statement with the environment’s description, and a cutting-edge Multi-Modal Large Language Model that merges both visual and textual data, we offer a valuable resource for ongoing research in the domain. Additionally, we discuss the evaluation methods for such tasks, highlighting potential limitations imposed by traditional string-match-based evaluations on this intricate multi-modal challenge. Moreover, we provide an evaluation benchmark based on human judgment to address the limits and capabilities of such baseline models. This resource is released on a dedicated GitHub repository at https://github.com/crux82/MM-IGLU.
Semi-Automatic Topic Discovery and Classification for Epidemic Intelligence via Large Language Models
Federico Borazio | Danilo Croce | Giorgio Gambosi | Roberto Basili | Daniele Margiotta | Antonio Scaiella | Martina Del Manso | Daniele Petrone | Andrea Cannone | Alberto M. Urdiales | Chiara Sacco | Patrizio Pezzotti | Flavia Riccardo | Daniele Mipatrini | Federica Ferraro | Sobha Pilati
Proceedings of the Second Workshop on Natural Language Processing for Political Sciences @ LREC-COLING 2024
This paper introduces a novel framework to harness Large Language Models (LLMs) for Epidemic Intelligence, focusing on identifying and categorizing emergent socio-political phenomena within health crises, with a spotlight on the COVID-19 pandemic. Our approach diverges from traditional methods, such as Topic Models, by providing explicit support to analysts through the identification of distinct thematic areas and the generation of clear, actionable statements for each topic. This supports a Zero-shot Classification mechanism, enabling effective matching of news articles to fine-grained topics without the need for model fine-tuning. The framework is designed to be as transparent as possible, producing linguistically informed insights to make the analysis more accessible to analysts who may not be familiar with the subject matter of such inherently emergent phenomena. This process not only enhances the precision and relevance of the extracted Epidemic Intelligence but also fosters a collaborative environment where the system’s linguistic abilities and the analyst’s domain expertise are integrated.
2023
End-to-end Dependency Parsing via Auto-regressive Large Language Model
Claudiu Daniel Hromei | Danilo Croce | Roberto Basili
Proceedings of the Ninth Italian Conference on Computational Linguistics (CLiC-it 2023)
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations
Danilo Croce | Luca Soldaini
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations
2022
Learning to Generate Examples for Semantic Processing Tasks
Danilo Croce | Simone Filice | Giuseppe Castellucci | Roberto Basili
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Although recent Transformer-based architectures, such as BERT, have achieved impressive results in semantic processing tasks, their fine-tuning stage still requires large-scale training resources. Usually, Data Augmentation (DA) techniques can help deal with low-resource settings. In Text Classification tasks, the objective of DA is the generation of well-formed sentences that i) represent the desired task category and ii) are novel with respect to existing sentences. In this paper, we propose a neural approach to automatically learn to generate new examples using a pre-trained sequence-to-sequence model. We first learn a task-oriented similarity function that we use to pair similar examples. Then, we use these example pairs to train a model to generate examples. Experiments in low-resource settings show that augmenting the training material with the proposed strategy systematically improves the results on text classification and natural language inference tasks by up to 10% accuracy, outperforming existing DA approaches.
2021
Learning to Solve NLP Tasks in an Incremental Number of Languages
Giuseppe Castellucci | Simone Filice | Danilo Croce | Roberto Basili
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
In real scenarios, a multilingual model trained to solve NLP tasks on a set of languages can be required to support new languages over time. Unfortunately, the straightforward retraining on a dataset containing annotated examples for all the languages is both expensive and time-consuming, especially when the number of target languages grows. Moreover, the original annotated material may no longer be available due to storage or business constraints. Re-training only with the new language data will inevitably result in Catastrophic Forgetting of previously acquired knowledge. We propose a Continual Learning strategy that updates a model to support new languages over time, while maintaining consistent results on previously learned languages. We define a Teacher-Student framework where the existing model “teaches” a student model its knowledge about the languages it supports, while the student is also trained on a new language. We report an experimental evaluation in several tasks including Sentence Classification, Relational Learning and Sequence Labeling.
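At its core, a Teacher-Student objective of the kind described above combines cross-entropy on the new language's labels with a distillation term pulling the student toward the teacher's output distribution on previously learned languages. A minimal NumPy sketch of such a loss; the weighting `alpha` and temperature `T` are hypothetical, not the paper's actual hyperparameters:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    """Cross-entropy on new-language labels + temperature-scaled KL toward the teacher."""
    p_s = softmax(student_logits)
    ce = -np.log(p_s[np.arange(len(labels)), labels]).mean()
    p_t = softmax(teacher_logits, T)
    p_sT = softmax(student_logits, T)
    kl = (p_t * (np.log(p_t) - np.log(p_sT))).sum(axis=-1).mean()
    return alpha * ce + (1 - alpha) * (T ** 2) * kl

rng = np.random.default_rng(0)
logits_s = rng.standard_normal((4, 3))  # 4 examples, 3 classes (illustrative)
labels = np.array([0, 1, 2, 0])
# When the student matches the teacher, the KL term vanishes
# and only the supervised cross-entropy term remains.
loss = distill_loss(logits_s, logits_s.copy(), labels)
print(loss)
```

A perfectly distilled student therefore pays only the supervised cost on the new language, which is the mechanism that preserves performance on the old languages.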
GQA-it: Italian Question Answering on Image Scene Graphs
Danilo Croce | Lucia C. Passaro | Alessandro Lenci | Roberto Basili
Proceedings of the Eighth Italian Conference on Computational Linguistics (CLiC-it 2021)
2020
GAN-BERT: Generative Adversarial Learning for Robust Text Classification with a Bunch of Labeled Examples
Danilo Croce | Giuseppe Castellucci | Roberto Basili
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Recent Transformer-based architectures, e.g., BERT, provide impressive results in many Natural Language Processing tasks. However, most of the adopted benchmarks are made of (sometimes hundreds of) thousands of examples. In many real scenarios, obtaining high-quality annotated data is expensive and time-consuming; in contrast, unlabeled examples characterizing the target task can be, in general, easily collected. One promising method to enable semi-supervised learning has been proposed in image processing, based on Semi-Supervised Generative Adversarial Networks. In this paper, we propose GAN-BERT that extends the fine-tuning of BERT-like architectures with unlabeled data in a generative adversarial setting. Experimental results show that the requirement for annotated examples can be drastically reduced (up to only 50-100 annotated examples), still obtaining good performances in several sentence classification tasks.
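The semi-supervised adversarial setting behind GAN-BERT can be illustrated with a (k+1)-class softmax discriminator: the first k classes are the real task labels and the extra class marks generated ("fake") examples. Below is a minimal NumPy sketch of the discriminator's three loss terms, with illustrative dimensions and random logits rather than the paper's actual implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

k = 3  # number of real task classes; class index k is "fake"
rng = np.random.default_rng(0)

logits_lab = rng.standard_normal((2, k + 1))   # labeled real examples
logits_unl = rng.standard_normal((5, k + 1))   # unlabeled real examples
logits_fake = rng.standard_normal((4, k + 1))  # generator outputs
labels = np.array([0, 2])

p_lab, p_unl, p_fake = map(softmax, (logits_lab, logits_unl, logits_fake))

# Supervised term: ordinary cross-entropy on the labeled examples.
l_sup = -np.log(p_lab[np.arange(len(labels)), labels]).mean()
# Unsupervised terms: real examples should not land in the "fake"
# class, while generated examples should.
l_unl = -np.log(1.0 - p_unl[:, k]).mean()
l_gen_detect = -np.log(p_fake[:, k]).mean()

d_loss = l_sup + l_unl + l_gen_detect
print(d_loss)
```

The unlabeled examples contribute only through the real-vs-fake terms, which is what lets the approach exploit large pools of unannotated data alongside as few as 50-100 labeled examples.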
Automatic Induction of FrameNet lexical units in Italian
Silvia Brambilla | Danilo Croce | Fabio Tamburini | Roberto Basili
Proceedings of the Seventh Italian Conference on Computational Linguistics (CLiC-it 2020)
2019
Deep Bidirectional Transformers for Italian Question Answering
Danilo Croce | Giorgio Brandi | Roberto Basili
Proceedings of the Sixth Italian Conference on Computational Linguistics (CLiC-it 2019)
2018
On the Readability of Deep Learning Models: the role of Kernel-based Deep Architectures
Danilo Croce | Daniele Rossini | Roberto Basili
Proceedings of the Fifth Italian Conference on Computational Linguistics (CLiC-it 2018)
2017
Monitoring Adolescents’ Distress using Social Web data as a Source: the InsideOut Project
Roberto Basili | Valentina Bellomaria | Niels Jonas Bugge | Danilo Croce | Francesco De Michele | Federico Fiori Nastro | Paolo Fiori Nastro | Chantal Michel | Stefanie Schmidt | Frauke Schultze-Lutter
Proceedings of the Fourth Italian Conference on Computational Linguistics (CLiC-it 2017)
Co-authors
- Roberto Basili 44
- Giuseppe Castellucci 10
- Simone Filice 7
- Claudiu Daniel Hromei 5
- Paolo Annesi 3
- Emanuele Bastianelli 3
- Federico Borazio 3
- Diego De Cao 3
- Daniele Margiotta 3
- Alessandro Moschitti 3
- Daniele Nardi 3
- Daniele Rossini 3
- Antonio Scaiella 3
- Silvia Brambilla 2
- Giorgio Gambosi 2
- Seyed Alireza Mousavian Anaraki 2
- Elisa Passone 2
- Marco Pennacchiotti 2
- Valerio Storch 2
- Fabio Tamburini 2
- Andrea Vanzo 2
- Giuseppe Attanasio 1
- Pierpaolo Basile 1
- Valerio Basile 1
- Valentina Bellomaria 1
- Cristina Bosco 1
- Giorgio Brandi 1
- Niels Jonas Bugge 1
- Andrea Cannone 1
- Stefano Costanzo 1
- Giacomo De Luca 1
- Francesco De Michele 1
- Martina Del Manso 1
- Rodolfo Delmonte 1
- Federica Ferraro 1
- Federico Fiori Nastro 1
- Paolo Fiori Nastro 1
- Maria Francis 1
- Cristina Giannone 1
- Jacopo Gili 1
- Muhammad Okky Ibrohim 1
- Luca Iocchi 1
- Jochen L. Leidner 1
- Alessandro Lenci 1
- Caterina Masotti 1
- Chantal Michel 1
- Daniele Mipatrini 1
- Nafise Sadat Moosavi 1
- Elio Musacchio 1
- Malvina Nissim 1
- Martha Palmer 1
- Lucia C. Passaro 1
- Viviana Patti 1
- Daniele Petrone 1
- Patrizio Pezzotti 1
- Sobha Pilati 1
- Daniele Previtali 1
- Shahid Iqbal Rai 1
- Flavia Riccardo 1
- Matteo Rinaldi 1
- Michael Roth 1
- Chiara Sacco 1
- Daniel Scalena 1
- Stefanie Schmidt 1
- Frauke Schultze-Lutter 1
- Andrea Sensi 1
- Luca Soldaini 1
- Alberto M. Urdiales 1