Angelo Mario Del Grosso


2026

In the context of evolving European and national policies for research infrastructure governance, this paper presents the contribution of a national consortium for language resources and technology to the construction of a national infrastructure for FAIR and interoperable language and cultural data within a broader Humanities and Heritage Open Science initiative. As the national node of a European research infrastructure for language resources, the consortium contributes to translating FAIR and Open Science principles into practice by integrating technical, methodological, and training dimensions. Its activities combine several coordinated components: FAIRification workflows and ontology-based metadata mediation to enhance semantic interoperability across infrastructures; the refactoring and exposure of services through a federated API gateway; and the implementation of a Linguistic Linked Open Data (LLOD) pilot for the validation, transformation, and publication of interoperable RDF datasets. A national training ecosystem — comprising a training platform and a FAIR learning library — supports capacity building and the creation of FAIR-by-design learning materials. Finally, a permanent research observatory monitors community practices and needs, providing evidence-based insights for the continuous improvement of services and training provision. Together, these components demonstrate a coherent strategy for implementing FAIR and Open Science at the national level, while ensuring alignment with major European and national initiatives in the SSH data ecosystem.
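The ontology-based metadata mediation mentioned above can be illustrated with a toy crosswalk that maps local, language-specific metadata fields onto shared vocabulary terms (here Dublin Core–style prefixes). This is only a minimal sketch of the mediation idea; the field names, the `CROSSWALK` table, and the `local:` fallback namespace are hypothetical and not taken from the consortium's actual implementation.

```python
# Hypothetical crosswalk from a local (Italian-language) metadata schema
# to Dublin Core terms; unmapped fields fall back to a local namespace.
CROSSWALK = {
    "titolo": "dc:title",
    "autore": "dc:creator",
    "lingua": "dc:language",
}

def mediate(record, crosswalk=CROSSWALK):
    """Rewrite a metadata record's field names using the crosswalk.

    Fields without a mapping are preserved under a 'local:' prefix so
    that no information is lost during mediation.
    """
    return {crosswalk.get(field, f"local:{field}"): value
            for field, value in record.items()}

mediated = mediate({"titolo": "Lettere", "anno": "1611"})
```

In a real pipeline the crosswalk would be derived from ontology alignments rather than hand-written, but the principle — a lossless, rule-based rewriting of field names toward a shared vocabulary — is the same.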
This paper addresses a computational philology task focused on the automatic restoration of textual gaps (i.e., lacunae) in the Herculaneum Papyri, whose Ancient Greek texts are inherently fragmentary due to damage caused by carbonization. The objective of this work is to present preliminary results on the development of a web-based suggestion service that proposes plausible supplements to fill lacunae, thereby supporting the philological process of producing new critical editions within a web-based digital scholarly editing environment. To automatically provide such suggestions, we have developed systems that generate textual supplements in Ancient Greek, employing both neural (BERT-like) and statistical (n-gram) language modeling approaches.
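The statistical side of such a suggestion service can be sketched with a minimal bigram model: count token co-occurrences in a training corpus, then rank candidate supplements by how often they follow the token immediately preceding the lacuna. This is a deliberately simplified illustration of the n-gram approach, not the paper's actual system; the function names and the toy corpus are hypothetical.

```python
from collections import defaultdict

def train_bigrams(tokens):
    # Count how often each token follows another in the training corpus.
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def suggest_supplements(counts, left_context, candidates):
    # Rank candidate supplements by bigram frequency after the token
    # immediately preceding the lacuna; unseen candidates are dropped.
    scored = sorted(
        ((counts[left_context].get(c, 0), c) for c in candidates),
        reverse=True,
    )
    return [cand for score, cand in scored if score > 0]

# Toy Ancient Greek corpus (illustrative only).
corpus = "εν αρχη ην ο λογος και ο λογος ην προς τον θεον".split()
counts = train_bigrams(corpus)
suggestions = suggest_supplements(counts, "ο", ["λογος", "θεον"])
```

A production system would smooth the counts and back off to lower-order models (and, as the abstract notes, combine this with a BERT-like neural model), but the ranking principle is the same.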

2025

2024

In Nazi concentration camps, approximately 20 million people perished. This included young and old, men and women, Jews, dissidents, and homosexuals. Only 10% of those deported survived. This paper introduces the “Voci dall’Inferno” project, which aims to achieve two key objectives: a) Create a comprehensive digital archive: by encoding a corpus of non-literary testimonies including both written and oral sources. b) Analyze the use of Dante’s language: by identifying the presence of Dante’s lexicon and allusions. Currently, the project holds 47 testimonies, with 29 transcribed in full text and 18 encoded using the XML-TEI format. The project is carried out in a multidisciplinary and educational context involving experts in the humanities and computer science. Its findings will be disseminated through a user-friendly web application built on an XML foundation. Though currently in its prototyping phase, the application already offers several features, including a search engine for testimonies, terms, or phrases within the corpus. Additionally, a browsing interface allows users to read and listen to the original testimonies, while a visualization tool enables deeper exploration of the corpus’s content. Adhering to the Text Encoding Initiative (TEI) guidelines, the project ensures a structured digital archive, aligned with the FAIR principles for data accessibility and reusability.

2014

In the last few years the amount of manuscripts digitized and made available on the Web has been constantly increasing. However, there is still a considerable lack of results concerning both the explicit representation of their content and the tools developed to make it available. The objective of the Clavius on the Web project is to develop a Web platform exposing a selection of Christophorus Clavius’s letters along with three different levels of analysis: linguistic, lexical, and semantic. The multilayered annotation of the corpus involves an XML-TEI encoding followed by a tokenization step in which each token is uniquely identified through a CTS URN notation and then associated with a part-of-speech tag and a lemma. The text is lexically and semantically annotated on the basis of a lexicon and a domain ontology, the former structuring the most relevant terms occurring in the text and the latter representing the domain entities of interest (e.g. people, places, etc.). Moreover, each entity is connected to linked and non-linked resources, including DBpedia and VIAF. Finally, the results of the three layers of analysis are gathered and shown through interactive visualization and storytelling techniques. A demo version of the integrated architecture was developed.
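The tokenization step above — each token uniquely identified through a CTS URN — can be sketched as follows. CTS URNs address passages of a text within a work hierarchy; here a token index is appended to a passage-level URN. The base URN and function name are illustrative assumptions, not the project's actual identifiers.

```python
def tokenize_with_urns(passage_text, passage_urn):
    """Split a passage into tokens, assigning each a CTS URN.

    Each token receives the passage URN extended with its 1-based
    position, so every token is individually addressable.
    """
    tokens = passage_text.split()
    return [(f"{passage_urn}:{i}", tok)
            for i, tok in enumerate(tokens, start=1)]

# Hypothetical URN for a passage of a Clavius letter.
tokens = tokenize_with_urns(
    "Reverendo Patri in Christo",
    "urn:cts:claviusLit:clavius.letters.1.1",
)
```

Each (URN, token) pair can then serve as a stable anchor for the part-of-speech, lemma, and semantic annotations layered on top, which is what makes the multilayered annotation interoperable.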