2024
Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models
Miaoran Li | Baolin Peng | Michel Galley | Jianfeng Gao | Zhu Zhang
Findings of the Association for Computational Linguistics: NAACL 2024
Fact-checking is an essential NLP task commonly used to validate the factual accuracy of a piece of text. Previous approaches mainly involve the resource-intensive process of fine-tuning pre-trained language models on specific datasets. In addition, there is a notable gap in datasets that focus on fact-checking texts generated by large language models (LLMs). In this paper, we introduce Self-Checker, a plug-and-play framework that harnesses LLMs for efficient and rapid fact-checking in a few-shot manner. We also present the BingCheck dataset, specifically designed for fact-checking texts generated by LLMs. Empirical results demonstrate the potential of Self-Checker for LLM-based fact-checking. Although there is still significant room for improvement compared to state-of-the-art fine-tuned models, the results indicate that adopting LLMs is a promising direction for future fact-checking research.
2021
RADDLE: An Evaluation Benchmark and Analysis Platform for Robust Task-oriented Dialog Systems
Baolin Peng | Chunyuan Li | Zhu Zhang | Chenguang Zhu | Jinchao Li | Jianfeng Gao
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
For task-oriented dialog systems to be maximally useful, they must be able to process conversations in a way that is (1) generalizable with a small number of training examples for new task domains, and (2) robust to user input in various styles, modalities, or domains. In pursuit of these goals, we introduce the RADDLE benchmark, a collection of corpora and tools for evaluating the performance of models across a diverse set of domains. By including tasks with limited training data, RADDLE is designed to favor and encourage models with a strong generalization ability. RADDLE also includes a diagnostic checklist that facilitates detailed robustness analysis in aspects such as language variations, speech errors, unseen entities, and out-of-domain utterances. We evaluate recent state-of-the-art systems based on pre-training and fine-tuning, and find that grounded pre-training on heterogeneous dialog corpora performs better than training a separate model per domain. We also propose adversarial training to improve model robustness against noisy inputs. Overall, existing models are less than satisfactory in robustness evaluation, which suggests opportunities for future improvement.
2019
Proceedings of the Second Workshop on Economics and Natural Language Processing
Udo Hahn | Véronique Hoste | Zhu Zhang
2018
To Attend or not to Attend: A Case Study on Syntactic Structures for Semantic Relatedness
Amulya Gupta | Zhu Zhang
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
With the recent success of Recurrent Neural Networks (RNNs) in Machine Translation (MT), attention mechanisms have become increasingly popular. The purpose of this paper is twofold: first, we propose a novel attention model on Tree Long Short-Term Memory Networks (Tree-LSTMs), a tree-structured generalization of the standard LSTM. Second, we study the interaction between attention and syntactic structures by experimenting with three LSTM variants: bidirectional LSTMs, Constituency Tree-LSTMs, and Dependency Tree-LSTMs. Our models are evaluated on two semantic relatedness tasks: semantic relatedness scoring for sentence pairs (SemEval 2012, Task 6 and SemEval 2014, Task 1) and paraphrase detection for question pairs (Quora, 2017).
2005
Mining Inter-Entity Semantic Relations Using Improved Transductive Learning
Zhu Zhang
Second International Joint Conference on Natural Language Processing: Full Papers
Tense Tagging for Verbs in Cross-Lingual Context: A Case Study
Yang Ye | Zhu Zhang
Second International Joint Conference on Natural Language Processing: Full Papers
2004
CST Bank: A Corpus for the Study of Cross-document Structural Relationships
Dragomir Radev | Jahna Otterbacher | Zhu Zhang
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)
Clusters of multiple news stories related to the same topic exhibit a number of interesting properties. For example, when documents have been published at various points in time or by different authors or news agencies, one finds many instances of paraphrasing, information overlap and even contradiction. The current paper presents the Cross-document Structure Theory (CST) Bank, a collection of multi-document clusters in which pairs of sentences from different documents have been annotated for cross-document structure theory relationships. We will describe how we built the corpus, including our method for reducing the number of sentence pairs to be annotated by our hired judges, using lexical similarity measures. Finally, we will describe how CST and the CST Bank can be applied to different research areas such as multi-document summarization.
MEAD - A Platform for Multidocument Multilingual Text Summarization
Dragomir Radev | Timothy Allison | Sasha Blair-Goldensohn | John Blitzer | Arda Çelebi | Stanko Dimitrov | Elliott Drabek | Ali Hakim | Wai Lam | Danyu Liu | Jahna Otterbacher | Hong Qi | Horacio Saggion | Simone Teufel | Michael Topper | Adam Winkel | Zhu Zhang
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)
2002
Extraposition: A Case Study in German Sentence Realization
Michael Gamon | Eric Ringger | Zhu Zhang | Robert Moore | Simon Corston-Oliver
COLING 2002: The 19th International Conference on Computational Linguistics
2001
NewsInEssence: A System For Domain-Independent, Real-Time News Clustering and Multi-Document Summarization
Dragomir R. Radev | Sasha Blair-Goldensohn | Zhu Zhang | Revathi Sundara Raghavan
Proceedings of the First International Conference on Human Language Technology Research