Olga Babko-Malaya

2012

Identifying Nuggets of Information in GALE Distillation Evaluation
Olga Babko-Malaya | Greg Milette | Michael Schneider | Sarah Scogin
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

This paper describes an approach to automatic nuggetization and an implemented system employed in the GALE Distillation evaluation to measure the information content of text returned in response to an open-ended question. The system identifies nuggets, or atomic units of information, categorizes them according to their semantic type, and selects different types of nuggets depending on the type of the question. We further show how this approach addresses the main challenges of using automatic nuggetization for QA evaluation: the variability of relevant nuggets and their dependence on the question. Specifically, we propose a template-based approach to nuggetization, where different semantic categories of nuggets are extracted depending on the template of the question. During evaluation, human annotators judge each snippet returned in response to a query as relevant or irrelevant, and automatic template-based nuggetization is then used to identify the semantic units of information that people would have selected as 'relevant' or 'irrelevant' nuggets for a given query. Finally, the paper presents performance results for the nuggetization system, which compare the number of automatically generated nuggets with the number of human-generated nuggets and show that our automatic nuggetization is consistent with human judgments.

2010

Evaluation of Document Citations in Phase 2 Gale Distillation
Olga Babko-Malaya | Dan Hunter | Connie Fournelle | Jim White
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

The focus of information retrieval evaluations, such as NIST's TREC evaluations (e.g., Voorhees 2003), is on evaluating the information content of system responses. Retrieval tasks, however, usually involve two different dimensions: reporting relevant information and providing sources of information, including corroborating evidence and alternative documents. Under the DARPA Global Autonomous Language Exploitation (GALE) program, Distillation provides succinct, direct responses to formatted queries using the outputs of automated transcription and translation technologies. These responses are evaluated along two dimensions: information content, which measures the amount of relevant and non-redundant information, and document support, which measures the number of alternative sources provided in support of the reported information. The final metric in the overall GALE Distillation evaluation combines the results of scoring both query responses and document citations. In this paper, we describe our evaluation framework, with emphasis on the scoring of document citations and an analysis of how systems perform at providing sources of information.

2008

Annotation of Nuggets and Relevance in GALE Distillation Evaluation
Olga Babko-Malaya
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

This paper presents an approach to annotation that BAE Systems has employed in the DARPA GALE Phase 2 Distillation evaluation. The purpose of the GALE Distillation evaluation is to quantify the amount of relevant and non-redundant information a distillation engine is able to produce in response to a specific, formatted query, and to compare that amount of information to the amount gathered by a bilingual human using commonly available state-of-the-art tools. As part of the evaluation, following the NIST evaluation methodology for complex question answering (Voorhees, 2003), human annotators were asked to establish the relevance of responses as well as the presence of atomic facts or information units, called nuggets of information. This paper discusses various challenges in the annotation of nuggets, called nuggetization, including the interaction between the granularity of nuggets and their relevance to the query in question. The approach proposed in the paper views nuggetization as a procedural task and allows annotators to revisit nuggetization based on the requirements imposed by relevance guidelines defined with a specific end-user in mind. This approach is shown in the paper to produce consistent annotations with high inter-annotator agreement scores.

A Pilot Arabic Propbank
Martha Palmer | Olga Babko-Malaya | Ann Bies | Mona Diab | Mohamed Maamouri | Aous Mansouri | Wajdi Zaghouani
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

In this paper, we present the details of creating a pilot Arabic proposition bank (Propbank). Propbanks exist for both English and Chinese. However, the morphological and syntactic expression of linguistic phenomena in Arabic makes the process of creating an Arabic Propbank very different. Hence, we highlight those characteristics of Arabic that make creating a Propbank for the language a different challenge from the creation of an English Propbank. We believe that many of the lessons learned in dealing with Arabic could generalise to other languages that exhibit equally rich morphology and relatively free word order.

2006

Issues in Synchronizing the English Treebank and PropBank
Olga Babko-Malaya | Ann Bies | Ann Taylor | Szuting Yi | Martha Palmer | Mitch Marcus | Seth Kulick | Libin Shen
Proceedings of the Workshop on Frontiers in Linguistically Annotated Corpora 2006

Semantic Interpretation of Unrealized Syntactic Material in LTAG
Olga Babko-Malaya
Proceedings of the Eighth International Workshop on Tree Adjoining Grammar and Related Formalisms

2005

A Parallel Proposition Bank II for Chinese and English
Martha Palmer | Nianwen Xue | Olga Babko-Malaya | Jinying Chen | Benjamin Snyder
Proceedings of the Workshop on Frontiers in Corpus Annotations II: Pie in the Sky

2004

Proposition Bank II: Delving Deeper
Olga Babko-Malaya | Martha Palmer | Nianwen Xue | Aravind Joshi | Seth Kulick
Proceedings of the Workshop Frontiers in Corpus Annotation at HLT-NAACL 2004

Different Sense Granularities for Different Applications
Martha Palmer | Olga Babko-Malaya | Hoa Trang Dang
Proceedings of the 2nd International Workshop on Scalable Natural Language Understanding (ScaNaLU 2004) at HLT-NAACL 2004

LTAG Semantics of Focus
Olga Babko-Malaya
Proceedings of the 7th International Workshop on Tree Adjoining Grammar and Related Formalisms

LTAG Semantics of NP-Coordination
Olga Babko-Malaya
Proceedings of the 7th International Workshop on Tree Adjoining Grammar and Related Formalisms

LTAG Semantics for Questions
Maribel Romero | Laura Kallmeyer | Olga Babko-Malaya
Proceedings of the 7th International Workshop on Tree Adjoining Grammar and Related Formalisms