Linnea Micciulla




2013

teragram: Rule-based detection of sentiment phrases using SAS Sentiment Analysis
Hilke Reckman | Cheyanne Baird | Jean Crawford | Richard Crowell | Linnea Micciulla | Saratendu Sethi | Fruzsina Veress
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)

2006

A Study of Translation Edit Rate with Targeted Human Annotation
Matthew Snover | Bonnie Dorr | Rich Schwartz | Linnea Micciulla | John Makhoul
Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers

We examine a new, intuitive measure for evaluating machine-translation output that avoids the knowledge intensiveness of more meaning-based approaches, and the labor-intensiveness of human judgments. Translation Edit Rate (TER) measures the amount of editing that a human would have to perform to change a system output so it exactly matches a reference translation. We show that the single-reference variant of TER correlates as well with human judgments of MT quality as the four-reference variant of BLEU. We also define a human-targeted TER (or HTER) and show that it yields higher correlations with human judgments than BLEU—even when BLEU is given human-targeted references. Our results indicate that HTER correlates with human judgments better than HMETEOR and that the four-reference variants of TER and HTER correlate with human judgments as well as—or better than—a second human judgment does.
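The core idea of TER — the number of edits needed to turn a system output into a reference, normalized by reference length — can be sketched as follows. This is a simplified illustration, not the paper's implementation: full TER also counts block shifts of word sequences as single edits and, with multiple references, takes the minimum edit count normalized by the average reference length. The sketch below uses only plain word-level insertions, deletions, and substitutions.

```python
def word_edit_distance(hyp, ref):
    """Levenshtein distance over word tokens (no shifts, unlike full TER)."""
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete all hypothesis words
    for j in range(n + 1):
        d[0][j] = j  # insert all reference words
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[m][n]

def simple_ter(hypothesis, reference):
    """Edit count divided by reference length (single-reference case)."""
    hyp, ref = hypothesis.split(), reference.split()
    return word_edit_distance(hyp, ref) / len(ref)
```

For example, `simple_ter("a cat sat", "the cat sat")` yields 1/3: one substitution over a three-word reference.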

2005

A Methodology for Extrinsically Evaluating Information Extraction Performance
Michael Crystal | Alex Baron | Katherine Godfrey | Linnea Micciulla | Yvette Tenney | Ralph Weischedel
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing