Shayan Ray
2024
GrounDial: Human-norm Grounded Safe Dialog Response Generation
Siwon Kim | Shuyang Dai | Mohammad Kachuee | Shayan Ray | Tara Taghavi | Sungroh Yoon
Findings of the Association for Computational Linguistics: EACL 2024
Current conversational AI systems based on large language models (LLMs) are known to generate unsafe responses that agree with offensive user input or contain toxic content. Previous research aimed to alleviate this toxicity by fine-tuning LLMs on manually annotated safe dialogue histories. However, the dependency on additional tuning incurs substantial cost. To remove this dependency, we propose GrounDial, which achieves response safety by grounding responses in commonsense social rules without requiring fine-tuning. GrounDial's hybrid approach of in-context learning and human-norm-guided decoding makes responses quantitatively and qualitatively safer even without additional data or tuning.
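To make the in-context-learning half of this idea concrete, here is a minimal sketch of norm-grounded prompting. The norm catalogue, word-overlap retrieval heuristic, and prompt template are illustrative assumptions for this listing, not the paper's actual GrounDial implementation (which additionally steers decoding toward the norm).

```python
# Hypothetical sketch: ground a dialog response in a retrieved social norm.
# The norms, retrieval heuristic, and template below are assumptions, not
# the GrounDial paper's implementation.

NORMS = [
    "It is wrong to insult or demean other people.",
    "It is good to de-escalate hostile conversations.",
    "It is wrong to encourage harmful or illegal behavior.",
]

def retrieve_norm(user_utterance: str) -> str:
    """Toy retrieval: pick the norm with the largest word overlap with the input."""
    words = set(user_utterance.lower().split())
    return max(NORMS, key=lambda norm: len(set(norm.lower().split()) & words))

def build_grounded_prompt(dialogue_history: list[str], user_utterance: str) -> str:
    """Prepend the retrieved norm so the response is generated in its context."""
    norm = retrieve_norm(user_utterance)
    history = "\n".join(dialogue_history + [f"User: {user_utterance}"])
    return (
        f"Social norm to respect: {norm}\n"
        f"{history}\n"
        "Assistant (respond safely, consistent with the norm):"
    )

if __name__ == "__main__":
    print(build_grounded_prompt(
        ["User: hi", "Assistant: Hello!"],
        "People like you are idiots.",
    ))
```

The resulting prompt would then be passed to an LLM; GrounDial pairs this kind of in-context grounding with norm-guided decoding at generation time.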
LLM-based Frameworks for API Argument Filling in Task-Oriented Conversational Systems
Jisoo Mok | Mohammad Kachuee | Shuyang Dai | Shayan Ray | Tara Taghavi | Sungroh Yoon
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)
Task-oriented conversational agents interact with users and assist them by leveraging external APIs. A typical task-oriented conversational system can be broken down into three phases: external API selection, argument filling, and response generation. The focus of our work is argument filling, which is responsible for accurately providing the arguments required by the selected API. Given the dialogue history and the pre-defined API schema, the argument filling task is expected to supply the external API with the information needed to generate a desirable agent action. In this paper, we study the application of Large Language Models (LLMs) to the API argument filling task. Our initial investigation reveals that LLMs require an additional grounding process to successfully perform argument filling, inspiring us to design training and prompting frameworks that ground their responses. Our experimental results demonstrate that, when paired with the proposed techniques, the argument filling performance of LLMs improves noticeably, paving the way toward an automated argument filling framework.
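As an illustration of what schema-grounded argument filling can look like, here is a minimal sketch that builds a prompt restricted to the schema's argument names and validates the model's JSON output against that schema. The example API (`book_restaurant`), prompt wording, and validation step are assumptions for illustration, not the training or prompting frameworks proposed in the paper.

```python
import json

# Hypothetical sketch: schema-grounded prompt construction and output
# validation for API argument filling. The schema and prompt are assumptions,
# not the paper's framework.

API_SCHEMA = {
    "name": "book_restaurant",
    "arguments": {"restaurant_name": "string", "date": "string", "party_size": "integer"},
}

def build_argument_filling_prompt(dialogue_history: list[str], schema: dict) -> str:
    """Ground the LLM by listing only the argument names and types defined in the schema."""
    arg_lines = "\n".join(f"- {name} ({typ})" for name, typ in schema["arguments"].items())
    return (
        "Dialogue history:\n" + "\n".join(dialogue_history) + "\n\n"
        f"Fill the arguments for API `{schema['name']}`. "
        "Answer with a JSON object using exactly these keys:\n" + arg_lines
    )

def validate_arguments(raw_output: str, schema: dict) -> dict:
    """Parse the model output and keep only keys defined in the schema."""
    parsed = json.loads(raw_output)
    return {k: v for k, v in parsed.items() if k in schema["arguments"]}

if __name__ == "__main__":
    history = ["User: Book a table for 4 at Luigi's on Friday."]
    print(build_argument_filling_prompt(history, API_SCHEMA))
    fake_llm_output = '{"restaurant_name": "Luigi\'s", "date": "Friday", "party_size": 4, "extra": "x"}'
    print(validate_arguments(fake_llm_output, API_SCHEMA))
```

In this sketch, grounding amounts to constraining the prompt to the schema and filtering the output back to it; the paper's contribution is the training and prompting frameworks that make LLMs reliable at this step.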