Chris Wolf
2008
Adjudicator Agreement and System Rankings for Person Name Search
Mark Arehart | Chris Wolf | Keith J. Miller
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
We have analyzed system rankings for person name search algorithms using a data set for which several versions of ground truth were developed by employing different means of resolving adjudicator conflicts. Thirteen algorithms were ranked by F-score, using bootstrap resampling for significance testing, on a dataset containing 70,000 romanized names from various cultures. We found some disagreement among the four adjudicators, with kappa ranging from 0.57 to 0.78. Truth sets based on a single adjudicator, and those based on the intersection or union of positive adjudications, produced sizeable variability in scoring sensitivity, and to a lesser degree in rank order, compared to the consensus truth set. However, results on truth sets constructed by randomly choosing an adjudicator for each item were highly consistent with the consensus. The implication is that an evaluation in which one adjudicator has judged each item is nearly as good as a more expensive and labor-intensive one in which multiple adjudicators have judged each item and conflicts are resolved through voting.
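As a concrete illustration of the methodology described in the abstract, the sketch below shows how the alternative truth sets (union, intersection, or a randomly chosen adjudicator per item) can be derived from per-item adjudications, and how bootstrap resampling can test whether one system's F-score advantage over another is significant. The data layout and all function names here are hypothetical, not taken from the paper.

```python
import random

def truth_union(judgments):
    """Positive if any adjudicator judged the name pair a match."""
    return [any(js) for js in judgments]

def truth_intersection(judgments):
    """Positive only if every adjudicator judged the pair a match."""
    return [all(js) for js in judgments]

def truth_random(judgments, seed=0):
    """Take one randomly chosen adjudicator's judgment per item."""
    rng = random.Random(seed)
    return [rng.choice(js) for js in judgments]

def f_score(gold, pred):
    """Micro-averaged F-score over parallel lists of boolean labels."""
    tp = sum(g and p for g, p in zip(gold, pred))
    fp = sum((not g) and p for g, p in zip(gold, pred))
    fn = sum(g and (not p) for g, p in zip(gold, pred))
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

def bootstrap_p(gold, pred_a, pred_b, resamples=10_000, seed=0):
    """Fraction of bootstrap resamples in which system A fails to beat
    system B on F-score; a small value suggests a significant gap."""
    rng = random.Random(seed)
    n = len(gold)
    losses = 0
    for _ in range(resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        g = [gold[i] for i in idx]
        a = [pred_a[i] for i in idx]
        b = [pred_b[i] for i in idx]
        if f_score(g, a) <= f_score(g, b):
            losses += 1
    return losses / resamples
```

Here `judgments` is a list with one entry per candidate name pair, each entry being the four adjudicators' boolean match judgments; `pred_a` and `pred_b` are two systems' boolean match decisions over the same pairs.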
An Infrastructure, Tools and Methodology for Evaluation of Multicultural Name Matching Systems
Keith J. Miller | Mark Arehart | Catherine Ball | John Polk | Alan Rubenstein | Kenneth Samuel | Elizabeth Schroeder | Eva Vecchi | Chris Wolf
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
This paper describes a Name Matching Evaluation Laboratory that is a joint effort across multiple projects. The lab houses our evaluation infrastructure as well as multiple name matching engines and customized analytical tools. Included is an explanation of the methodology used by the lab to carry out evaluations. This methodology is based on standard information retrieval evaluation, which requires a carefully constructed test data set. The paper describes how we created that test data set, including the ground truth used to score the systems' performance. Descriptions and snapshots of the lab's various tools are provided, as well as information on how the different tools are used throughout the evaluation process. By using this evaluation process, the lab has been able to identify strengths and weaknesses of different name matching engines. These findings have led the lab to an ongoing investigation into various techniques for combining results from multiple name matching engines to achieve optimal results, as well as into research on the more general problem of identity management and resolution.
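The abstract's closing point, combining results from multiple name matching engines, can be illustrated with a standard score-fusion baseline. The sketch below uses CombSUM over min-max normalized scores; this is a generic technique chosen for illustration, not necessarily the combination method the lab investigated.

```python
def minmax(scores):
    """Normalize a {candidate: score} dict to the [0, 1] range."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {c: 1.0 for c in scores}
    return {c: (s - lo) / (hi - lo) for c, s in scores.items()}

def combsum(engine_results):
    """Fuse per-engine {candidate_name: score} dicts by summing
    normalized scores; returns candidates ranked best-first."""
    fused = {}
    for results in engine_results:
        for cand, s in minmax(results).items():
            fused[cand] = fused.get(cand, 0.0) + s
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Example: two hypothetical engines scoring candidates for one query name.
ranked = combsum([
    {"Mohammed al-Rashid": 0.92, "Muhamad Rashid": 0.80},
    {"Mohammed al-Rashid": 0.70, "Mohamed Rasheed": 0.65},
])
```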