2024
Bootstrapped Pre-training with Dynamic Identifier Prediction for Generative Retrieval
Yubao Tang | Ruqing Zhang | Jiafeng Guo | Maarten de Rijke | Yixing Fan | Xueqi Cheng
Findings of the Association for Computational Linguistics ACL 2024
Generative retrieval uses differentiable search indexes to directly generate relevant document identifiers in response to a query. Recent studies have highlighted the potential of a strong generative retrieval model, trained with carefully crafted pre-training tasks, to enhance downstream retrieval tasks via fine-tuning. However, the full power of pre-training for generative retrieval remains underexploited due to its reliance on pre-defined static document identifiers, which may not align with evolving model parameters. In this work, we introduce BootRet, a bootstrapped pre-training method for generative retrieval that dynamically adjusts document identifiers during pre-training to accommodate the continuing memorization of the corpus. BootRet involves three key training phases: (i) initial identifier generation, (ii) pre-training via corpus indexing and relevance prediction tasks, and (iii) bootstrapping for identifier updates. To facilitate the pre-training phase, we further introduce noisy documents and pseudo-queries, generated by large language models, to resemble semantic connections in both indexing and retrieval tasks. Experimental results demonstrate that BootRet significantly outperforms existing pre-training generative retrieval baselines and performs well even in zero-shot settings.
2023
Challenges and Advances in Generative Information Retrieval (生成式信息检索前沿进展与挑战)
Yixing Fan (范意兴) | Yubao Tang (唐钰葆) | Jiangui Chen (陈建贵) | Ruqing Zhang (张儒清) | Jiafeng Guo (郭嘉丰)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 2: Frontier Forum)
Information Retrieval (IR) aims to find information relevant to a user query in large-scale corpora, and has become one of the most important tools people use to solve problems in their daily work and life. Existing IR systems mainly rely on an "index-retrieve-rerank" framework, which models the complex retrieval task as a multi-stage, coupled search process. This decoupled design improves retrieval efficiency, allowing systems to easily handle corpora of billions of documents, but it also increases the complexity of the system architecture and prevents end-to-end joint optimization. To address this problem, researchers have recently begun exploring a single unified model for the entire search process, proposing a new generative information retrieval paradigm. This new paradigm encodes the entire corpus into the retrieval model, enabling end-to-end optimization and eliminating the system's dependence on an external index. Generative retrieval has become one of the most active research directions in IR, and researchers have proposed various approaches to improve retrieval effectiveness. Given the rapid progress in this direction, this paper presents a systematic survey of generative information retrieval, covering basic concepts, document identifiers, and model capacity. In addition, we discuss several open challenges and promising research directions, in the hope of inspiring and facilitating more future research on these topics.
From Relevance to Utility: Evidence Retrieval with Feedback for Fact Verification
Hengran Zhang | Ruqing Zhang | Jiafeng Guo | Maarten de Rijke | Yixing Fan | Xueqi Cheng
Findings of the Association for Computational Linguistics: EMNLP 2023
Retrieval-enhanced methods have become a primary approach in fact verification (FV), which requires reasoning over multiple retrieved pieces of evidence to verify the integrity of a claim. To retrieve evidence, existing work often employs off-the-shelf retrieval models whose design is based on the probability ranking principle. We argue that, rather than relevance, for FV we need to focus on the utility that a claim verifier derives from the retrieved evidence. We introduce the feedback-based evidence retriever (FER), which optimizes the evidence retrieval process by incorporating feedback from the claim verifier. As a feedback signal we use the divergence in utility between how effectively the verifier utilizes the retrieved evidence and the ground-truth evidence to produce the final claim label. Empirical studies demonstrate the superiority of FER over prevailing baselines.
Prompt Tuning with Contradictory Intentions for Sarcasm Recognition
Yiyi Liu | Ruqing Zhang | Yixing Fan | Jiafeng Guo | Xueqi Cheng
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Recently, prompt tuning has achieved promising results in a variety of natural language processing (NLP) tasks. The typical approach is to insert text pieces (i.e., templates) into the input and transform downstream tasks into the same form as pre-training. In essence, a high-quality template is the foundation of prompt tuning, supporting the performance of the converted cloze-style task. However, for sarcasm recognition, determining appropriate templates and label words is time-consuming and requires increasingly sophisticated domain knowledge due to the task's highly figurative nature. In this work, we propose SarcPrompt, which incorporates prior knowledge about contradictory intentions into prompt tuning for sarcasm recognition. SarcPrompt is inspired by the observation that speakers usually say the opposite of what they actually mean in sarcastic text. Based on this idea, we explicitly mimic the actual intention via prompt construction and indicate whether the actual intention contradicts the literal content via verbalizer engineering. Experiments on three public datasets with standard and low-resource settings demonstrate the effectiveness of SarcPrompt for sarcasm recognition.
2022
Euphemism Detection by Transformers and Relational Graph Attention Network
Yuting Wang | Yiyi Liu | Ruqing Zhang | Yixing Fan | Jiafeng Guo
Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)
Euphemism is a type of figurative language broadly adopted in social media and daily conversations. People use euphemism for politeness or to conceal what they are discussing. Euphemism detection is a challenging task because of its obscure and figurative nature; even humans may not agree on whether a word expresses euphemism. In this paper, we propose to employ Bidirectional Encoder Representations from Transformers (BERT) and a relational graph attention network to model the semantic and syntactic relations between the target words and the input sentence. Our best-performing method reaches a Macro-F1 score of 84.0 on the euphemism detection dataset of the third workshop on figurative language processing shared task 2022.
Visual Named Entity Linking: A New Dataset and A Baseline
Wen Sun | Yixing Fan | Jiafeng Guo | Ruqing Zhang | Xueqi Cheng
Findings of the Association for Computational Linguistics: EMNLP 2022
Visual Entity Linking (VEL) is the task of linking regions of images with their corresponding entities in Knowledge Bases (KBs), which benefits many computer vision tasks such as image retrieval, image captioning, and visual question answering. However, existing VEL tasks either rely on textual data to complement multi-modal linking or only link objects to general entities, and thus fail to perform named entity linking on large amounts of image data. In this paper, we consider a purely Visual-based Named Entity Linking (VNEL) task, where the input consists only of an image. The task is to identify objects of interest (i.e., visual entity mentions) in images and link them to corresponding named entities in KBs. Since each entity often contains rich visual and textual information in KBs, we propose three different sub-tasks, i.e., visual to visual entity linking (V2VEL), visual to textual entity linking (V2TEL), and visual to visual-textual entity linking (V2VTEL). In addition, we present a high-quality human-annotated visual person linking dataset, named WIKIPerson. Based on WIKIPerson, we establish a series of baseline algorithms for each sub-task, and conduct experiments to verify the quality of the proposed dataset and the effectiveness of the baseline methods. We envision this work to be helpful for soliciting more work on VNEL in the future. The code and datasets are publicly available at https://github.com/ict-bigdatalab/VNEL.
2018
Learning to Control the Specificity in Neural Response Generation
Ruqing Zhang | Jiafeng Guo | Yixing Fan | Yanyan Lan | Jun Xu | Xueqi Cheng
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
In conversation, a general response (e.g., "I don't know") can correspond to a large variety of input utterances. Previous generative conversational models usually employ a single model to learn the relationship between different utterance-response pairs, and thus tend to favor general and trivial responses that appear frequently. To address this problem, we propose a novel controlled response generation mechanism that handles different utterance-response relationships in terms of specificity. Specifically, we introduce an explicit specificity control variable into a sequence-to-sequence model, which interacts with the usage representation of words through a Gaussian kernel layer, to guide the model to generate responses at different specificity levels. We describe two ways to acquire distant labels for the specificity control variable during learning. Empirical studies show that our model significantly outperforms state-of-the-art response generation models under both automatic and human evaluations.