Anna Krasnyanskaya
2006
SParseval: Evaluation Metrics for Parsing Speech
Brian Roark | Mary Harper | Eugene Charniak | Bonnie Dorr | Mark Johnson | Jeremy Kahn | Yang Liu | Mari Ostendorf | John Hale | Anna Krasnyanskaya | Matthew Lease | Izhak Shafran | Matthew Snover | Robin Stewart | Lisa Yung
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)
While both spoken and written language processing stand to benefit from parsing, the standard Parseval metrics (Black et al., 1991) and their canonical implementation (Sekine and Collins, 1997) are only useful for text. The Parseval metrics are undefined when the words input to the parser do not match the words in the gold standard parse tree exactly, and word errors are unavoidable with automatic speech recognition (ASR) systems. To fill this gap, we have developed a publicly available tool for scoring parses that implements a variety of metrics which can handle mismatches in words and segmentations, including: alignment-based bracket evaluation, alignment-based dependency evaluation, and a dependency evaluation that does not require alignment. We describe the different metrics, how to use the tool, and the outcome of an extensive set of experiments on the sensitivity of these metrics.
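For context, the standard Parseval bracket scoring that the abstract contrasts against can be sketched in a few lines of Python. This is an illustrative sketch only (the tuple tree encoding and function names are assumptions for exposition, not the SParseval tool's API); it also shows why the metric is undefined under word mismatch: bracket spans are indexed by word position, which presumes both trees cover the same token sequence.

    from collections import Counter

    def brackets(tree, start=0):
        # Collect (label, start, end) spans from a tree encoded as
        # (label, [children]), where leaves are plain word strings.
        label, children = tree
        spans, end = [], start
        for child in children:
            if isinstance(child, str):      # leaf word: advance position
                end += 1
            else:                           # subtree: recurse
                child_spans, end = brackets(child, end)
                spans.extend(child_spans)
        spans.append((label, start, end))
        return spans, end

    def parseval(gold_tree, hyp_tree):
        # Labeled bracket precision/recall/F1 over multisets of spans.
        # Span indices presume identical word sequences in both trees;
        # with ASR word errors this assumption fails, which is the gap
        # SParseval's alignment-based metrics are designed to fill.
        gold, _ = brackets(gold_tree)
        hyp, _ = brackets(hyp_tree)
        matched = sum((Counter(gold) & Counter(hyp)).values())
        precision = matched / len(hyp)
        recall = matched / len(gold)
        f1 = 2 * precision * recall / (precision + recall) if matched else 0.0
        return precision, recall, f1

    # Example: gold and hypothesis parses over the same three words.
    gold = ("S", [("NP", ["the", "dog"]), ("VP", ["barks"])])
    hyp = ("S", [("NP", ["the"]), ("VP", ["dog", "barks"])])
    print(parseval(gold, hyp))  # (0.333..., 0.333..., 0.333...)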
PCFGs with Syntactic and Prosodic Indicators of Speech Repairs
John Hale | Izhak Shafran | Lisa Yung | Bonnie J. Dorr | Mary Harper | Anna Krasnyanskaya | Matthew Lease | Yang Liu | Brian Roark | Matthew Snover | Robin Stewart
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics
Co-authors
- Brian Roark 2
- Mary Harper 2
- Bonnie Dorr 2
- Yang Liu 2
- John Hale 2