Abstract
Computer-based aids for writing assistance have been around since at least the early 1980s, focussing primarily on aspects such as spelling, grammar and style. The potential audience for such tools is very large indeed, and this is a clear case where we might expect to see language processing applications having a significant real-world impact. However, existing comparative evaluations of applications in this space are often no more than impressionistic and anecdotal reviews of commercial offerings as found in software magazines, making it hard to determine which approaches are superior. More rigorous evaluation in the scholarly literature has been held back in particular by the absence of shared datasets of texts marked up with errors, and the lack of an agreed evaluation framework. Significant collections of publicly available data are now appearing; this paper describes a complementary evaluation framework, which has been piloted in the Helping Our Own shared task. The approach, which uses stand-off annotations for representing edits to text, can be used in a wide variety of text-correction tasks, and easily accommodates different error tagsets.
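To make the stand-off idea concrete, here is a minimal sketch of how edits might be represented as offset-anchored annotations over an unmodified source text, and how a system's edits could be scored against a gold standard. The `Edit` fields, the error tag, and the `score` function are illustrative assumptions, not the annotation format actually specified by Dale and Narroway.

```python
# A minimal sketch of stand-off edit annotation, assuming a simple
# character-offset scheme; this is illustrative, not the paper's format.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen => hashable, so edits can live in sets
class Edit:
    start: int        # character offset where the edit begins
    end: int          # character offset where the edit ends (exclusive)
    correction: str   # replacement string; "" encodes a deletion
    error_type: str   # tag drawn from whichever error tagset is in use

def score(system: set[Edit], gold: set[Edit]) -> tuple[float, float]:
    """Precision and recall of system edits against gold-standard edits."""
    matched = len(system & gold)
    precision = matched / len(system) if system else 0.0
    recall = matched / len(gold) if gold else 0.0
    return precision, recall

# The source text is never modified; annotations point into it by offset.
text = "We discussed about the results."
gold = {Edit(12, 18, "", "RedundantPreposition")}        # delete " about"
hypothesis = {Edit(12, 18, "", "RedundantPreposition")}
print(score(hypothesis, gold))  # -> (1.0, 1.0)
```

Because the error tag is just a field on the annotation, swapping in a different tagset requires no change to the scoring machinery, which is one reason stand-off schemes of this kind adapt easily across text-correction tasks.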
- Anthology ID:
- L12-1267
- Volume:
- Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
- Month:
- May
- Year:
- 2012
- Address:
- Istanbul, Turkey
- Venue:
- LREC
- Publisher:
- European Language Resources Association (ELRA)
- Pages:
- 3015–3018
- URL:
- http://www.lrec-conf.org/proceedings/lrec2012/pdf/490_Paper.pdf
- Cite (ACL):
- Robert Dale and George Narroway. 2012. A Framework for Evaluating Text Correction. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 3015–3018, Istanbul, Turkey. European Language Resources Association (ELRA).
- Cite (Informal):
- A Framework for Evaluating Text Correction (Dale & Narroway, LREC 2012)