Edouard Geoffrois

Also published as: E. Geoffrois


2016

Evaluating Interactive System Adaptation
Edouard Geoffrois
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

Enabling users of intelligent systems to improve system performance by providing feedback on its errors is an important need. However, the ability of systems to learn from user feedback is difficult to evaluate in an objective and comparative way. Indeed, involving real users in the adaptation process is an impediment to objective evaluation. This issue can be solved with an oracle approach, where users are simulated by oracles with access to the reference test data. Another difficulty is finding a meaningful metric, since system improvements depend both on the feedback provided and on the system itself. A solution is to measure the minimal amount of information needed to correct all system errors. It can be shown that for any well-defined non-interactive task, the interactively supervised version of the task can be evaluated by combining such an oracle-based approach with a minimum supervision rate metric. This new evaluation protocol for adaptive systems is expected not only to drive progress for such systems, but also to pave the way for a specialisation of actors along the value chain of their technological development.
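For illustration, a minimal sketch of such an oracle-based evaluation (hypothetical Python; the toy labeling task, the MemorizingSystem, and the per-item feedback accounting are assumptions made for this example, not the paper's actual protocol):

import random

class MemorizingSystem:
    """Toy adaptive system: guesses 0 by default and memorizes
    every correction it receives from the oracle."""
    def __init__(self):
        self.memory = {}

    def predict(self, x):
        return self.memory.get(x, 0)

    def learn(self, x, y):
        self.memory[x] = y

def minimum_supervision_rate(system, inputs, reference):
    """Oracle-based evaluation: the oracle holds the reference labels and
    gives feedback only on items the system gets wrong. The metric is the
    fraction of items that needed feedback before all outputs are correct."""
    supervised = 0
    for x, y_true in zip(inputs, reference):
        if system.predict(x) != y_true:
            system.learn(x, y_true)  # oracle supplies the correction
            supervised += 1
    return supervised / len(inputs)

inputs = list(range(100))
reference = [random.randint(0, 1) for _ in inputs]
rate = minimum_supervision_rate(MemorizingSystem(), inputs, reference)
print(f"minimum supervision rate: {rate:.2f}")  # about 0.5 in this toy setup

A system that adapts better needs fewer corrections, so under this protocol a lower supervision rate indicates a better interactive learner, independently of which particular feedback the simulated user happened to provide.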

2008

An Economic View on Human Language Technology Evaluation
Edouard Geoffrois
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

This paper analyses some general issues in human language technology evaluation, focusing on economic aspects. It first provides a scientific rationale for the need to organize evaluation in the form of campaigns, relating this need to some basic characteristics of human language technologies, namely that they involve learning to process information in a way that reproduces human capabilities. It then reviews the benefits and constraints of these evaluation campaigns. Borrowing concepts from the field of economics, it also provides an analysis of the economic incentives to organize evaluation campaigns. It follows from this analysis that fitting evaluation campaigns to the needs of scientific research requires a strong commitment in terms of research policy and public funding.

2006

Corpus description of the ESTER Evaluation Campaign for the Rich Transcription of French Broadcast News
S. Galliano | E. Geoffrois | G. Gravier | J.-F. Bonastre | D. Mostefa | K. Choukri
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06)

This paper presents the audio corpus developed in the framework of the ESTER evaluation campaign of French broadcast news transcription systems. The corpus includes 100 hours of manually annotated recordings and 1,677 hours of untranscribed data. The manual annotations include detailed verbatim orthographic transcriptions, speaker turns and identities, information about acoustic conditions, and named entities. Additional resources generated by automatic speech processing systems, such as phonetic alignments and word graphs, are also described.

2004

The ESTER Evaluation Campaign for the Rich Transcription of French Broadcast News
G. Gravier | J.-F. Bonastre | E. Geoffrois | S. Galliano | K. McTait | K. Choukri
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04)

2000

Transcribing with Annotation Graphs
Edouard Geoffrois | Claude Barras | Steven Bird | Zhibiao Wu
Proceedings of the Second International Conference on Language Resources and Evaluation (LREC'00)