Luis Fernando D’Haro

Also published as: Luis F. d’Haro


2024

Unveiling the Achilles’ Heel of NLG Evaluators: A Unified Adversarial Framework Driven by Large Language Models
Yiming Chen | Chen Zhang | Danqing Luo | Luis Fernando D’Haro | Robby Tan | Haizhou Li
Findings of the Association for Computational Linguistics: ACL 2024

The automatic evaluation of natural language generation (NLG) systems presents a long-standing challenge. Recent studies have highlighted various neural metrics that align well with human evaluations. Yet, the robustness of these evaluators against adversarial perturbations remains largely under-explored due to the unique challenges of obtaining adversarial data for different NLG evaluation tasks. To address the problem, we introduce AdvEval, a novel black-box adversarial framework against NLG evaluators. AdvEval is specially tailored to generate data that yield strong disagreements between human and victim evaluators. Specifically, inspired by the recent success of large language models (LLMs) in text generation and evaluation, we adopt strong LLMs as both the data generator and the gold evaluator. Adversarial data are automatically optimized with feedback from the gold and victim evaluators. We conduct experiments on 12 victim evaluators and 11 NLG datasets, spanning tasks including dialogue, summarization, and question evaluation. The results show that AdvEval can lead to significant performance degradation of various victim metrics, thereby validating its efficacy.
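A minimal sketch of the black-box search loop described in the abstract, assuming hypothetical generate/score interfaces for the generator LLM, the gold LLM evaluator, and the victim metric (all names below are illustrative placeholders, not the paper's actual code):

# Illustrative AdvEval-style loop: maximize disagreement between a gold LLM
# evaluator (proxy for human judgment) and the victim metric under attack.
def adversarial_search(seed_sample, generator_llm, gold_llm, victim_metric,
                       n_rounds=5, n_candidates=8):
    best, best_gap = seed_sample, 0.0
    for _ in range(n_rounds):
        # The generator LLM proposes rewrites of the current best candidate.
        candidates = [generator_llm.rewrite(best) for _ in range(n_candidates)]
        for cand in candidates:
            gold_score = gold_llm.score(cand)         # stands in for human judgment
            victim_score = victim_metric.score(cand)  # evaluator being attacked
            gap = gold_score - victim_score           # disagreement to maximize
            if gap > best_gap:
                best, best_gap = cand, gap
        # Scores of the best candidate can be fed back to condition the next round.
    return best, best_gap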

2023

Overview of Robust and Multilingual Automatic Evaluation Metrics for Open-Domain Dialogue Systems at DSTC 11 Track 4
Mario Rodríguez-Cantelar | Chen Zhang | Chengguang Tang | Ke Shi | Sarik Ghazarian | João Sedoc | Luis Fernando D’Haro | Alexander I. Rudnicky
Proceedings of The Eleventh Dialog System Technology Challenge

The advent and rapid development of neural networks have revolutionized research on dialogue systems and have subsequently triggered various challenges regarding their automatic evaluation. Automatic evaluation of open-domain dialogue systems remains an open challenge and has been the center of attention for many researchers. Despite consistent efforts to improve the correlation of automatic metrics with human evaluation, there have been very few attempts to assess their robustness over multiple domains and dimensions. Moreover, existing metrics focus mainly on the English language. All of these challenges prompt the development of automatic evaluation metrics that are reliable across various domains, dimensions, and languages. This track in the 11th Dialogue System Technology Challenge (DSTC11) is part of the ongoing effort to promote robust and multilingual automatic evaluation metrics. This article describes the datasets and baselines provided to participants and discusses the submissions and results of the two proposed subtasks.

2022

FineD-Eval: Fine-grained Automatic Dialogue-Level Evaluation
Chen Zhang | Luis Fernando D’Haro | Qiquan Zhang | Thomas Friedrichs | Haizhou Li
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Recent model-based, reference-free metrics for open-domain dialogue evaluation exhibit promising correlations with human judgment. However, they either perform turn-level evaluation or look at a single dialogue quality dimension. One would expect a good evaluation metric to assess multiple quality dimensions at the dialogue level. To this end, we propose a multi-dimensional dialogue-level metric that consists of three sub-metrics, each targeting a specific dimension. The sub-metrics are trained with novel self-supervised objectives and exhibit strong correlations with human judgment for their respective dimensions. Moreover, we explore two approaches to combining the sub-metrics: metric ensemble and multitask learning. Both approaches yield a holistic metric that significantly outperforms the individual sub-metrics. Compared to the existing state-of-the-art metric, the combined metrics achieve around a 16% relative improvement on average across three high-quality dialogue-level evaluation benchmarks.
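As a rough illustration of the metric-ensemble option mentioned above, the sketch below takes a weighted average of dimension-specific sub-metric scores to obtain one dialogue-level score; the sub-metric callables and uniform weights are assumptions for illustration, not the released FineD-Eval code:

# Toy ensemble of dimension-specific sub-metrics (one score per quality dimension).
def dialogue_level_score(dialogue, sub_metrics, weights=None):
    scores = [metric(dialogue) for metric in sub_metrics]  # one score per dimension
    if weights is None:
        weights = [1.0 / len(scores)] * len(scores)        # uniform ensemble weights
    return sum(w * s for w, s in zip(weights, scores))     # holistic dialogue score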

2021

DynaEval: Unifying Turn and Dialogue Level Evaluation
Chen Zhang | Yiming Chen | Luis Fernando D’Haro | Yan Zhang | Thomas Friedrichs | Grandee Lee | Haizhou Li
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

A dialogue is essentially a multi-turn interaction among interlocutors. Effective evaluation metrics should reflect the dynamics of such interaction. Existing automatic metrics focus heavily on turn-level quality while ignoring such dynamics. To this end, we propose DynaEval, a unified automatic evaluation framework that is not only capable of performing turn-level evaluation, but also holistically considers the quality of the entire dialogue. In DynaEval, a graph convolutional network (GCN) is adopted to model a dialogue in its totality, where the graph nodes denote individual utterances and the edges represent the dependencies between pairs of utterances. A contrastive loss is then applied to distinguish well-formed dialogues from carefully constructed negative samples. Experiments show that DynaEval significantly outperforms the state-of-the-art dialogue coherence model and correlates strongly with human judgements across multiple dialogue evaluation aspects at both the turn and dialogue level.
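A simplified, hypothetical sketch of the kind of model the abstract describes: utterance embeddings as graph nodes, a normalized adjacency matrix for utterance dependencies, a single GCN layer for context propagation, and a margin-based contrastive loss separating well-formed dialogues from constructed negatives (dimensions and graph construction are placeholders, not the paper's implementation):

import torch
import torch.nn as nn

class DialogueGCNScorer(nn.Module):
    def __init__(self, utt_dim, hidden_dim):
        super().__init__()
        self.gcn = nn.Linear(utt_dim, hidden_dim)   # weight of a single GCN layer
        self.readout = nn.Linear(hidden_dim, 1)     # dialogue-level score head

    def forward(self, utt_embeds, adj):
        # utt_embeds: (num_utts, utt_dim); adj: (num_utts, num_utts) normalized adjacency
        h = torch.relu(self.gcn(adj @ utt_embeds))  # aggregate dependent utterances
        return self.readout(h.mean(dim=0))          # pool nodes, score whole dialogue

def contrastive_loss(pos_score, neg_score, margin=1.0):
    # Push well-formed dialogues above perturbed negatives by at least the margin.
    return torch.relu(margin - pos_score + neg_score).mean()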

2018

Attention-based Semantic Priming for Slot-filling
Jiewen Wu | Rafael E. Banchs | Luis Fernando D’Haro | Pavitra Krishnaswamy | Nancy Chen
Proceedings of the Seventh Named Entities Workshop

The problem of sequence labelling in language understanding would benefit from approaches inspired by semantic priming phenomena. We propose that an attention-based RNN architecture can be used to simulate semantic priming for sequence labelling. Specifically, we employ pre-trained word embeddings to characterize the semantic relationship between utterances and labels. We validate the approach using varying sizes of the ATIS and MEDIA datasets, and show up to 1.4-1.9% improvement in F1 score. The developed framework can enable more explainable and generalizable spoken language understanding systems.
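As a loose illustration of the idea (not the paper's architecture), the snippet below turns cosine similarities between a token's pre-trained embedding and the slot-label embeddings into a softmax distribution that can act as a semantic "priming" prior for the tagger:

import numpy as np

def priming_attention(token_vec, label_vecs):
    # Cosine similarity between the token embedding and each label embedding.
    sims = np.array([
        np.dot(token_vec, lv) / (np.linalg.norm(token_vec) * np.linalg.norm(lv) + 1e-8)
        for lv in label_vecs
    ])
    exp = np.exp(sims - sims.max())   # numerically stable softmax
    return exp / exp.sum()            # attention weights over slot labels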

2015

RevUP: Automatic Gap-Fill Question Generation from Educational Texts
Girish Kumar | Rafael Banchs | Luis Fernando D’Haro
Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications

2009

Speeding Up the Design of Dialogue Applications by Using Database Contents and Structure Information
Luis Fernando D’Haro | Ricardo de Cordoba | Juan Manuel Lucas | Roberto Barra-Chicote | Ruben San-Segundo
Proceedings of the SIGDIAL 2009 Conference

2007

A Multimodal Interface for Access to Content in the Home
Michael Johnston | Luis Fernando D’Haro | Michelle Levine | Bernard Renger
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

2006

Error Analysis of Statistical Machine Translation Output
David Vilar | Jia Xu | Luis Fernando D’Haro | Hermann Ney
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

Evaluation of automatic translation output is a difficult task. Several performance measures, such as Word Error Rate, Position-Independent Word Error Rate, and the BLEU and NIST scores, are widely used and provide a useful tool for comparing different systems and for evaluating improvements within a system. However, the interpretation of all of these measures is not at all clear, and identifying the most prominent source of errors in a given system using these measures alone is not possible. Therefore, some analysis of the generated translations is needed in order to identify the main problems and to focus the research efforts. This area is, however, mostly unexplored, and few works have dealt with it until now. In this paper we present a framework for classifying the errors of a machine translation system and carry out an error analysis of the system used by RWTH in the first TC-STAR evaluation.
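For concreteness, a small implementation of one of the measures mentioned above, Word Error Rate: the Levenshtein distance between the hypothesis and reference word sequences, normalized by the reference length (a standard textbook formulation, not the paper's code):

def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j]: edit distance between first i reference words and first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)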

2004

Semi-Automatic Generation of Dialogue Applications in the GEMINI Project
Stefan Hamerich | Volker Schubert | Volker Schless | Ricardo de Córdoba | José M. Pardo | Luis F. d’Haro | Basilis Kladis | Otilia Kocsis | Stefan Igel
Proceedings of the 5th SIGdial Workshop on Discourse and Dialogue at HLT-NAACL 2004