Abstract
Enabling users of intelligent systems to improve system performance by providing feedback on the system's errors is an important need. However, the ability of systems to learn from user feedback is difficult to evaluate in an objective and comparative way, because involving real users in the adaptation process is an impediment to objective evaluation. This issue can be solved with an oracle approach, in which users are simulated by oracles that have access to the reference test data. Another difficulty is finding a meaningful metric, since system improvements depend both on the feedback provided and on the system itself. A solution is to measure the minimal amount of information needed to correct all system errors. It can be shown that for any well-defined non-interactive task, the interactively supervised version of the task can be evaluated by combining such an oracle-based approach with a minimum supervision rate metric. This new evaluation protocol for adaptive systems is expected not only to drive progress for such systems, but also to pave the way for a specialisation of actors along the value chain of their technological development.
- Anthology ID:
- L16-1039
- Volume:
- Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
- Month:
- May
- Year:
- 2016
- Address:
- Portorož, Slovenia
- Venue:
- LREC
- Publisher:
- European Language Resources Association (ELRA)
- Pages:
- 256–260
- URL:
- https://aclanthology.org/L16-1039
- Cite (ACL):
- Edouard Geoffrois. 2016. Evaluating Interactive System Adaptation. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 256–260, Portorož, Slovenia. European Language Resources Association (ELRA).
- Cite (Informal):
- Evaluating Interactive System Adaptation (Geoffrois, LREC 2016)
- PDF:
- https://preview.aclanthology.org/paclic-22-ingestion/L16-1039.pdf
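The oracle-based evaluation protocol described in the abstract can be illustrated with a minimal sketch. The toy system, the memorization strategy, and all function names below are illustrative assumptions, not the paper's actual implementation: an oracle with access to the reference test data supplies a correction only when the system errs, and the supervision rate is the fraction of test items that required a correction.

```python
# Hedged sketch of oracle-based evaluation of interactive system adaptation.
# The classes and the per-item correction loop are assumptions for
# illustration; the paper defines the protocol abstractly.

def oracle_evaluate(system, test_inputs, references):
    """Simulate a user with an oracle that knows the reference outputs.

    The oracle reveals the reference only when the system's prediction
    is wrong; the returned supervision rate is the fraction of items
    that needed such a correction.
    """
    corrections = 0
    for x, ref in zip(test_inputs, references):
        if system.predict(x) != ref:
            system.learn(x, ref)  # feedback: the oracle reveals the reference
            corrections += 1
    return corrections / len(test_inputs)


class MemorizingSystem:
    """A toy adaptive system that simply memorizes oracle corrections."""

    def __init__(self):
        self.memory = {}

    def predict(self, x):
        return self.memory.get(x)

    def learn(self, x, ref):
        self.memory[x] = ref


system = MemorizingSystem()
inputs = ["a", "b", "a", "b", "c"]
refs = [1, 2, 1, 2, 3]
rate = oracle_evaluate(system, inputs, refs)
print(rate)  # 3 corrections out of 5 items -> 0.6
```

Because the oracle is driven entirely by the reference test data, the evaluation is reproducible and comparable across systems, which is the point of replacing real users in the loop; a better adaptive system needs fewer corrections and therefore achieves a lower supervision rate on the same test set.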