Lorenza Romano

Also published as: L. Romano


2010

SemEval-2010 Task 8: Multi-Way Classification of Semantic Relations between Pairs of Nominals
Iris Hendrickx | Su Nam Kim | Zornitsa Kozareva | Preslav Nakov | Diarmuid Ó Séaghdha | Sebastian Padó | Marco Pennacchiotti | Lorenza Romano | Stan Szpakowicz
Proceedings of the 5th International Workshop on Semantic Evaluation

BART: A Multilingual Anaphora Resolution System
Samuel Broscheit | Massimo Poesio | Simone Paolo Ponzetto | Kepa Joseba Rodriguez | Lorenza Romano | Olga Uryupina | Yannick Versley | Roberto Zanoli
Proceedings of the 5th International Workshop on Semantic Evaluation

2009

SemEval-2010 Task 8: Multi-Way Classification of Semantic Relations Between Pairs of Nominals
Iris Hendrickx | Su Nam Kim | Zornitsa Kozareva | Preslav Nakov | Diarmuid Ó Séaghdha | Sebastian Padó | Marco Pennacchiotti | Lorenza Romano | Stan Szpakowicz
Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions (SEW-2009)

2007

FBK-IRST: Kernel Methods for Semantic Relation Extraction
Claudio Giuliano | Alberto Lavelli | Daniele Pighin | Lorenza Romano
Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)

2006

Exploiting Shallow Linguistic Information for Relation Extraction from Biomedical Literature
Claudio Giuliano | Alberto Lavelli | Lorenza Romano
11th Conference of the European Chapter of the Association for Computational Linguistics

Investigating a Generic Paraphrase-Based Approach for Relation Extraction
Lorenza Romano | Milen Kouylekov | Idan Szpektor | Ido Dagan | Alberto Lavelli
11th Conference of the European Chapter of the Association for Computational Linguistics

Simple Information Extraction (SIE): A Portable and Effective IE System
Claudio Giuliano | Alberto Lavelli | Lorenza Romano
Proceedings of the Workshop on Adaptive Text Extraction and Mining (ATEM 2006)

I-CAB: the Italian Content Annotation Bank
B. Magnini | E. Pianta | C. Girardi | M. Negri | L. Romano | M. Speranza | V. Bartalesi Lenzi | R. Sprugnoli
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

In this paper we present work in progress on the creation of the Italian Content Annotation Bank (I-CAB), a corpus of Italian news annotated with semantic information at different levels. The first level covers temporal expressions, the second covers different types of entities (i.e., persons, organizations, locations and geo-political entities), and the third covers relations between entities (e.g., the affiliation relation connecting a person to an organization). So far I-CAB has been manually annotated with temporal expressions, person entities and organization entities. As we intend I-CAB to become a benchmark for various automatic Information Extraction tasks, we followed a policy of reusing already available markup languages. In particular, we adopted the annotation schemes developed for the ACE Entity Detection and Time Expressions Recognition and Normalization tasks. As the ACE guidelines were originally developed for English, part of the effort consisted in adapting them to the specific morpho-syntactic features of Italian. Finally, we have extended them to cover a wider range of phenomena, such as conjunctions of entities.
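
The abstract above describes three annotation levels (temporal expressions, entities, relations). The following is a minimal, hypothetical Python sketch of how such a layered annotation could be represented in memory; all class and field names are illustrative assumptions and do not come from the I-CAB distribution itself.

```python
# Illustrative sketch only: a hypothetical in-memory representation of the
# three annotation levels described in the I-CAB abstract. Names are invented.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Mention:
    """A text span identified by character offsets into a news article."""
    start: int
    end: int
    text: str


@dataclass
class TemporalExpression:
    """Level 1: a temporal expression, optionally with a normalized value."""
    mention: Mention
    value: str = ""  # e.g. an ISO-8601 normalization such as "2006-05-24"


@dataclass
class Entity:
    """Level 2: an entity of one of the ACE-style types (assumed labels)."""
    mention: Mention
    etype: str  # e.g. "PER", "ORG", "LOC", "GPE"


@dataclass
class Relation:
    """Level 3: a relation between two entities, e.g. affiliation."""
    rtype: str
    arg1: Entity
    arg2: Entity


@dataclass
class AnnotatedDocument:
    doc_id: str
    text: str
    timexes: List[TemporalExpression] = field(default_factory=list)
    entities: List[Entity] = field(default_factory=list)
    relations: List[Relation] = field(default_factory=list)
```

Keeping mentions as character offsets (rather than token indices) is one common design choice for stand-off annotation of the kind the abstract describes; the actual I-CAB markup follows the ACE annotation schemes mentioned above.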

2004

A Critical Survey of the Methodology for IE Evaluation
A. Lavelli | M. E. Califf | F. Ciravegna | D. Freitag | C. Giuliano | N. Kushmerick | L. Romano
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

We survey the evaluation methodology adopted in Information Extraction (IE), as defined in the MUC conferences and in later independent efforts applying machine learning to IE. We point out a number of problematic issues that may hamper the comparison between results obtained by different researchers. Some of them are common to other NLP tasks: e.g., the difficulty of exactly identifying the effects on performance of the data (sample selection and sample size), of the domain theory (features selected), and of algorithm parameter settings. Issues specific to IE evaluation include: how leniently to assess inexact identification of filler boundaries, the possibility of multiple fillers for a slot, and how the counting is performed. We argue that, when specifying an information extraction task, a number of characteristics should be clearly defined. In practice, however, only a few of them are usually explicitly specified in published papers. Our aim is to elaborate a clear and detailed experimental methodology and propose it to the IE community. The goal is to reach widespread agreement on this proposal so that future IE evaluations will adopt the proposed methodology, making comparisons between algorithms fair and reliable. In order to achieve this goal, we will develop and make available to the community a set of tools and resources that incorporate a standardized IE methodology.
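
To illustrate one of the evaluation issues raised in this abstract (how leniently to assess inexact identification of filler boundaries, and how the counting is performed), here is a small hedged sketch contrasting strict and lenient span matching. It is not the standardized methodology proposed in the paper; all function names and the specific overlap criterion are assumptions made for illustration.

```python
# Illustrative sketch only: strict vs. lenient matching of predicted IE slot
# fillers against gold fillers, showing how the choice changes the counts.
# This is NOT the evaluation methodology proposed in the paper.
from typing import List, Optional, Tuple

Span = Tuple[int, int]  # (start, end) character offsets of a slot filler


def matches(pred: Span, gold: Span, lenient: bool) -> bool:
    if lenient:
        # Lenient: any overlap between predicted and gold boundaries counts.
        return pred[0] < gold[1] and gold[0] < pred[1]
    # Strict: boundaries must be identified exactly.
    return pred == gold


def precision_recall(preds: List[Span], golds: List[Span],
                     lenient: bool = False) -> Tuple[float, float]:
    """Count each gold filler at most once (one way of performing the counting)."""
    unmatched_golds = list(golds)
    tp = 0
    for p in preds:
        hit: Optional[Span] = next(
            (g for g in unmatched_golds if matches(p, g, lenient)), None)
        if hit is not None:
            tp += 1
            unmatched_golds.remove(hit)
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(golds) if golds else 0.0
    return precision, recall


# A prediction with slightly wrong boundaries is rejected under strict
# matching but accepted under lenient (overlap-based) matching.
print(precision_recall([(10, 18)], [(10, 20)], lenient=False))  # (0.0, 0.0)
print(precision_recall([(10, 18)], [(10, 20)], lenient=True))   # (1.0, 1.0)
```

The point of the sketch is that the same system output yields different scores depending on the matching criterion and the counting policy, which is exactly why the abstract argues these choices must be specified explicitly when reporting IE results.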