David Hardcastle
Patients require access to Electronic Patient Records; however, medical language is often too difficult for patients to understand. Explaining records to patients is a time-consuming task, which we aim to simplify by automating the translation procedure. This paper introduces a research project on the automatic rewriting of medical narratives for the benefit of patients. We examine various ways in which technical language can be transposed into patient-friendly language by means of a comparison with patient information materials. The text rewriting procedure we describe could potentially improve the quality of information delivered to patients. We report on preliminary experiments concerning rewriting at the lexical and paragraph levels. This is an ongoing project which currently addresses a restricted number of issues, including target text modelling and text rewriting at the lexical level.
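The abstract mentions rewriting at the lexical level by mapping technical terms to patient-friendly language. A minimal sketch of one such approach is dictionary-based lexical substitution; the term list and the `simplify` function below are illustrative assumptions, not taken from the paper itself:

```python
import re

# Illustrative medical-to-lay term dictionary (hypothetical entries,
# not from the paper's actual resources).
LAY_TERMS = {
    "myocardial infarction": "heart attack",
    "hypertension": "high blood pressure",
    "renal": "kidney",
    "oedema": "swelling",
}

def simplify(text: str) -> str:
    """Replace each known technical term with its lay equivalent."""
    # Match longer terms first so multi-word phrases are replaced
    # before any of their single-word components.
    for term in sorted(LAY_TERMS, key=len, reverse=True):
        pattern = re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)
        text = pattern.sub(LAY_TERMS[term], text)
    return text

print(simplify("The patient presented with hypertension and renal oedema."))
# → The patient presented with high blood pressure and kidney swelling.
```

A real system would need to handle inflection, word-sense ambiguity, and context, which is presumably where the comparison with patient information materials comes in.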
Evaluating the output of NLG systems is notoriously difficult, and performing assessments of text quality even more so. A range of automated and subject-based approaches to the evaluation of text quality has been taken, including comparison with a putative gold-standard text, analysis of specific linguistic features of the output, expert review, and task-based evaluation. In this paper we present the results of a variety of such approaches in the context of a case study application. We discuss the problems encountered in the implementation of each approach in the context of the literature, and propose that a test based on the Turing test for machine intelligence offers a way forward in the evaluation of the subjective notion of text quality.