Zijie Wang
2024
Interpreting Answers to Yes-No Questions in Dialogues from Multiple Domains
Zijie Wang
|
Farzana Rashid
|
Eduardo Blanco
Findings of the Association for Computational Linguistics: NAACL 2024
People often answer yes-no questions without explicitly saying yes, no, or similar polar keywords. Figuring out the meaning of indirect answers is challenging, even for large language models. In this paper, we investigate this problem working with dialogues from multiple domains. We present new benchmarks in three diverse domains: movie scripts, tennis interviews, and airline customer service. We present an approach grounded on distant supervision and blended training to quickly adapt to a new dialogue domain. Experimental results show that our approach is never detrimental and yields F1 improvements as high as 11-34%.
Wordflow: Social Prompt Engineering for Large Language Models
Zijie Wang
|
Aishwarya Chakravarthy
|
David Munechika
|
Duen Horng Chau
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)
Large language models (LLMs) require well-crafted prompts for effective use. Prompt engineering, the process of designing prompts, is challenging, particularly for non-experts who are less familiar with AI technologies. While researchers have proposed techniques and tools to assist LLM users in prompt design, these works primarily target AI application developers rather than non-experts. To address this research gap, we propose social prompt engineering, a novel paradigm that leverages social computing techniques to facilitate collaborative prompt design. To investigate social prompt engineering, we introduce Wordflow, an open-source and social text editor that enables everyday users to easily create, run, share, and discover LLM prompts. Additionally, by leveraging modern web technologies, Wordflow allows users to run LLMs locally and privately in their browsers. Two usage scenarios highlight how social prompt engineering and our tool can enhance laypeople’s interaction with LLMs. Wordflow is publicly accessible at https://poloclub.github.io/wordflow.
2023
Interpreting Indirect Answers to Yes-No Questions in Multiple Languages
Zijie Wang
|
Md Hossain
|
Shivam Mathur
|
Terry Melo
|
Kadir Ozler
|
Keun Park
|
Jacob Quintero
|
MohammadHossein Rezaei
|
Shreya Shakya
|
Md Uddin
|
Eduardo Blanco
Findings of the Association for Computational Linguistics: EMNLP 2023
Yes-no questions expect a yes or no for an answer, but people often skip polar keywords. Instead, they answer with long explanations that must be interpreted. In this paper, we focus on this challenging problem and release new benchmarks in eight languages. We present a distant supervision approach to collect training data, and demonstrate that direct answers (i.e., with polar keywords) are useful to train models to interpret indirect answers (i.e., without polar keywords). We show that monolingual fine-tuning is beneficial if training data can be obtained via distant supervision for the language of interest (5 languages). Additionally, we show that cross-lingual fine-tuning is always beneficial (8 languages).
Co-authors
- Eduardo Blanco 2
- Farzana Rashid 1
- Md. Hossain 1
- Shivam Mathur 1
- Terry Melo 1