Yasuo Ariki
A proactive dialogue system is a conversational system designed to steer the direction of a conversation toward pre-defined targets or specific goals. Recent studies have shown that Proactive Chain-of-Thought (Proactive-CoT) prompting, which guides the system to explicitly reason through intermediate steps and plan actions toward a conversational goal before generating a response, can significantly enhance the performance of proactive dialogue systems. However, these improvements rely primarily on prompt-based control; the potential of fine-tuning with Proactive-CoT remains largely unexplored. Moreover, such fine-tuning requires manual annotation of reasoning processes and action plans, which incurs significant time and cost. In this study, we propose a novel approach that annotates reasoning processes and action plans automatically through self-learning. This method enables fully automated annotation, substantially reducing the time and cost associated with manual annotation. Experimental results show that models trained with our proposed method outperform those trained with other fine-tuning approaches. These findings highlight the potential of self-learning to advance the development of more robust and efficient proactive dialogue systems.
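The self-learning annotation loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `generate` and `accepts` callables are hypothetical stand-ins for sampling from the current model and for checking that a sampled response is consistent with the reference, which is what licenses keeping its reasoning trace and action plan as a silver annotation.

```python
def self_annotate(dialogues, generate, accepts):
    """Collect silver Proactive-CoT annotations by self-learning (sketch).

    generate(context) -> (chain_of_thought, action_plan, response):
        hypothetical sampler standing in for the current model.
    accepts(response, reference) -> bool:
        keeps a sample only when its final response agrees with the
        reference, so the CoT and plan can be trusted as annotations.
    """
    annotated = []
    for context, reference in dialogues:
        cot, plan, response = generate(context)
        if accepts(response, reference):
            annotated.append({"context": context, "cot": cot,
                              "plan": plan, "response": response})
    return annotated

# Deterministic stub generator, for illustration only.
def fake_generate(context):
    return ("reason about the goal", "ask a follow-up", "ok: " + context)

data = [("hello", "ok: hello"), ("bye", "nope")]
silver = self_annotate(data, fake_generate,
                       accepts=lambda resp, ref: resp == ref)
```

Only the first dialogue survives the filter here, so `silver` contains one annotated example; in practice the accepted set would be used to fine-tune the model, and the loop repeated.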
Fact-checking involves searching for relevant evidence and determining whether a given claim contains misinformation. In this paper, we propose a fact verification system based on RAG-Fusion. We use GPT-4o to generate questions from the claim, which improves the accuracy of evidence retrieval. Additionally, we adopt GPT-4o for the final judgment module and refine its prompts to enhance detection accuracy, particularly when the claim contains misinformation. Experiments showed that our system achieved an AVeriTeC score of 0.3865 on the AVeriTeC test data, significantly surpassing the baseline score of 0.11.
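The fusion step of RAG-Fusion is typically reciprocal rank fusion (RRF): each generated question retrieves its own ranked evidence list, and the lists are merged by summing 1/(k + rank) per document. The sketch below shows that merge under assumed inputs; the document ids and the rankings are illustrative, not from the paper.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse ranked evidence lists (best first) via RRF.

    rankings: one ranked list of document ids per generated question.
    k: smoothing constant in the RRF score 1 / (k + rank).
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rankings retrieved for three questions from one claim.
fused = reciprocal_rank_fusion([
    ["d1", "d2", "d3"],
    ["d2", "d1", "d4"],
    ["d2", "d3", "d1"],
])
```

Documents that rank well across several question-specific retrievals rise to the top (here `d2`), which is what makes multi-question retrieval more robust than querying with the raw claim alone.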
In recent years, generation-based dialogue systems using state-of-the-art (SoTA) transformer-based models have demonstrated impressive performance in simulating human-like conversations. To improve the coherence and knowledge-utilization capabilities of dialogue systems, knowledge-based dialogue systems integrate retrieved graph knowledge into transformer-based models. However, knowledge-based dialogue systems sometimes generate responses without using the retrieved knowledge. In this work, we propose a method by which a knowledge-based dialogue system can consistently utilize the retrieved knowledge via text infilling. Text infilling is the task of predicting missing spans of a sentence or paragraph. We use text infilling to enable dialogue systems to complete incomplete responses with the retrieved knowledge. In our experiments, the proposed dialogue system generated significantly more correct responses than baseline dialogue systems.
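The idea of completing an incomplete response with retrieved knowledge can be sketched as below. In the paper a trained infilling model predicts the missing span; here a simple string substitution stands in for that model, and the blank marker, template, and knowledge triple are all illustrative assumptions.

```python
def infill_response(template, knowledge, blank="[BLANK]"):
    """Fill the blank span of an incomplete response with knowledge text.

    Stand-in for a learned text-infilling model: the system first drafts
    a response with a blank span, then the span is filled so the retrieved
    knowledge is guaranteed to appear in the output.
    """
    return template.replace(blank, knowledge)

# A knowledge triple retrieved from the graph, flattened to text.
subj, rel, obj = ("Inception", "directed_by", "Christopher Nolan")
response = infill_response("Inception was directed by [BLANK].", obj)
```

Because the knowledge is inserted into the drafted span rather than merely conditioned on, the final response cannot silently ignore the retrieval result, which is the failure mode the abstract describes.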