Martina Galletti
2026
NLP for Social Good: A Survey and Outlook of Challenges, Opportunities and Responsible Deployment
Antonia Karamolegkou | Angana Borah | Eunjung Cho | Sagnik Ray Choudhury | Martina Galletti | Pranav Gupta | Oana Ignat | Priyanka Kargupta | Neema Kotonya | Hemank Lamba | Sun-Joo Lee | Arushi Mangla | Ishani Mondal | Fatima Zahra Moudakir | Deniz Nazar | Poli Nemkova | Dina Pisarevskaya | Naquee Rizwan | Nazanin Sabri | Keenan Samway | Dominik Stammbach | Anna Steinberg Schulten | David Tomás | Steven R Wilson | Bowen Yi | Jessica H Zhu | Arkaitz Zubiaga | Anders Søgaard | Alexander Fraser | Zhijing Jin | Rada Mihalcea | Joel R. Tetreault | Daryna Dementieva
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Natural language processing (NLP) now shapes many aspects of our world, yet its potential for positive social impact is underexplored. This paper surveys work in “NLP for Social Good” (NLP4SG) across nine domains relevant to global development and risk agendas, summarizing principal tasks and challenges. We analyze ACL Anthology trends, finding that inclusion and AI harms attract the most research, while domains such as poverty, peacebuilding, and environmental protection remain underexplored. Guided by our review, we outline opportunities for responsible and equitable NLP and conclude with a call for cross-disciplinary partnerships and human-centered approaches to ensure that future NLP technologies advance the public good.
2025
From End-Users to Co-Designers: Lessons from Teachers
Martina Galletti | Valeria Cesaroni
Proceedings of the 20th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2025)
This study presents a teacher-centered evaluation of an AI-powered reading comprehension tool, developed to support learners with language-based difficulties. Drawing on the Social Acceptance of Technology (SAT) framework, we investigate not only technical usability but also the pedagogical, ethical, and contextual dimensions of AI integration in classrooms. Through a mixed-methods approach, including questionnaires and focus groups with educators, we explore how teachers perceive the platform’s alignment with inclusive pedagogies, instructional workflows, and professional values. Findings reveal a shift from initial curiosity to critical, practice-informed reflection, with trust, transparency, and adaptability emerging as central concerns. The study contributes a replicable evaluation framework and highlights the importance of engaging teachers as co-designers in the development of educational technologies.
Are Your Keywords Like My Queries? A Corpus-Wide Evaluation of Keyword Extractors with Real Searches
Martina Galletti | Giulio Prevedello | Emanuele Brugnoli | Donald Ruggiero Lo Sardo | Pietro Gravino
Proceedings of the 31st International Conference on Computational Linguistics
Keyword Extraction (KE) is essential in Natural Language Processing (NLP) for identifying key terms that represent the main themes of a text, and it is vital for applications such as information retrieval, text summarisation, and document classification. Despite the development of various KE methods, including statistical approaches and advanced deep learning models, evaluating their effectiveness remains challenging. Current evaluation metrics focus on keyword quality, balance, and overlap with annotations from authors and professional indexers, but neglect real-world information retrieval needs. This paper introduces a novel evaluation method that overcomes this limitation by using real query data from Google Trends, and that can be applied to both supervised and unsupervised KE approaches. We applied this method to three popular KE approaches (YAKE, RAKE and KeyBERT) and found that KeyBERT was the most effective in capturing users’ top queries, with RAKE also showing surprisingly good performance. The code is open-access and publicly available.
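Of the three extractors compared in the abstract, RAKE is the simplest to illustrate: it splits text into candidate phrases at stopwords and punctuation, then scores each word by its degree-to-frequency ratio. A minimal standard-library sketch of that scoring idea (with a toy stopword list, not the implementation evaluated in the paper) might look like:

```python
import re
from collections import defaultdict

# Toy stopword list for illustration; real RAKE setups use a much fuller set.
STOPWORDS = {"a", "an", "the", "of", "for", "and", "in", "is", "to", "with", "on"}

def rake_keywords(text):
    """Return candidate phrases ranked by RAKE-style degree/frequency scores."""
    # Candidate phrases are maximal runs of non-stopword tokens.
    tokens = re.findall(r"[a-zA-Z]+", text.lower())
    phrases, current = [], []
    for tok in tokens:
        if tok in STOPWORDS:
            if current:
                phrases.append(current)
                current = []
        else:
            current.append(tok)
    if current:
        phrases.append(current)

    # Word score = degree (co-occurrence weight within phrases) / frequency.
    freq, degree = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for word in phrase:
            freq[word] += 1
            degree[word] += len(phrase)  # phrase length, including the word itself
    word_score = {w: degree[w] / freq[w] for w in freq}

    # Phrase score = sum of its word scores; return candidates best-first.
    scored = {" ".join(p): sum(word_score[w] for w in p) for p in phrases}
    return sorted(scored, key=scored.get, reverse=True)
```

Longer phrases accumulate higher word degrees, which is why RAKE tends to surface multiword key phrases rather than single frequent words.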
Automated Concept Map Extraction from Text
Martina Galletti | Inès Blin | Eleni Ilkou
Proceedings of the 5th Conference on Language, Data and Knowledge
Concept Maps are semantic graph summary representations of relations between concepts in text. They are particularly beneficial for students with difficulty in reading comprehension, such as those with special educational needs and disabilities. Currently, the field of concept map extraction from text is outdated, relying on old baselines, limited datasets, and limited performance, with F1 scores below 20%. We propose a novel neuro-symbolic pipeline and a GPT3.5-based method for automated concept map extraction from text, evaluated on the WIKI dataset. The pipeline is a robust, modularized, and open-source architecture, the first to use semantic and neural techniques for automatic concept map extraction while also using a preliminary summarization component to reduce processing time and optimize computational resources. Furthermore, we investigate the large language model in zero-shot, one-shot, and decomposed prompting settings for concept map generation. Our approaches achieve state-of-the-art results in METEOR metrics, with F1 scores of 25.7 and 28.5, respectively, and in ROUGE-2 recall, where both score 24.3. This contribution advances the task of automated concept map extraction from text, opening doors to wider applications such as education and speech-language therapy. The code is openly available.
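A concept map, as described above, is a graph of (concept, relation, concept) triples. As a toy illustration of that output format only (the pattern rules and helper below are hypothetical, not the neuro-symbolic pipeline or GPT3.5 method from the paper), a naive pattern-based extractor could be sketched as:

```python
import re

# Hypothetical toy patterns covering two copula-style relations;
# the paper's pipeline uses semantic and neural components instead.
PATTERNS = [
    (re.compile(r"(\w[\w ]*?) is a (\w[\w ]*)"), "is_a"),
    (re.compile(r"(\w[\w ]*?) has (\w[\w ]*)"), "has"),
]

def _clean(concept):
    """Strip a leading article so concepts act as reusable graph nodes."""
    return re.sub(r"^(a|an|the) ", "", concept.strip())

def extract_concept_map(sentences):
    """Collect (concept, relation, concept) triples from simple surface patterns."""
    triples = set()
    for sent in sentences:
        text = sent.lower().rstrip(".")
        for pattern, relation in PATTERNS:
            for subj, obj in pattern.findall(text):
                triples.add((_clean(subj), relation, _clean(obj)))
    return triples
```

Because the triples share concept nodes (e.g. "graph" appearing as both object and subject), they assemble into the connected graph that a concept map visualizes.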