Evaluation of Lifelong Learning Systems

Yevhenii Prokopalo, Sylvain Meignier, Olivier Galibert, Loic Barrault, Anthony Larcher


Abstract
Current intelligent systems require the expensive support of machine learning experts to sustain their performance level when used on a daily basis. To reduce this cost, i.e. to remain free from any machine learning expert, it is reasonable to implement lifelong (or continuous) learning intelligent systems that continuously adapt their model when facing changing execution conditions. In this work, the systems are allowed to refer to human domain experts who can provide relevant knowledge about the task. Nowadays, the fast growth of lifelong learning system development raises the question of their evaluation. In this article we propose a generic evaluation methodology for the specific case of lifelong learning systems. Two steps are considered: first, the evaluation of human-assisted learning (including active and/or interactive learning) outside the context of lifelong learning; second, the evaluation of the system across time, with propositions of how a lifelong learning intelligent system should be evaluated both with and without human-assisted learning.
Anthology ID:
2020.lrec-1.226
Volume:
Proceedings of the Twelfth Language Resources and Evaluation Conference
Month:
May
Year:
2020
Address:
Marseille, France
Venue:
LREC
Publisher:
European Language Resources Association
Note:
Pages:
1833–1841
Language:
English
URL:
https://aclanthology.org/2020.lrec-1.226
Cite (ACL):
Yevhenii Prokopalo, Sylvain Meignier, Olivier Galibert, Loic Barrault, and Anthony Larcher. 2020. Evaluation of Lifelong Learning Systems. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 1833–1841, Marseille, France. European Language Resources Association.
Cite (Informal):
Evaluation of Lifelong Learning Systems (Prokopalo et al., LREC 2020)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2020.lrec-1.226.pdf