Feifan Yan
2022
Seamlessly Integrating Factual Information and Social Content with Persuasive Dialogue
Maximillian Chen | Weiyan Shi | Feifan Yan | Ryan Hou | Jingwen Zhang | Saurav Sahay | Zhou Yu
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Complex conversation settings such as persuasion involve communicating changes in attitude or behavior, so users’ perspectives need to be addressed, even when not directly related to the topic. In this work, we contribute a novel modular dialogue system framework that seamlessly integrates factual information and social content into persuasive dialogue. Our framework is generalizable to any dialogue task that has mixed social and task content. We conducted a study that compared user evaluations of our framework versus a baseline end-to-end generation model. We found our model was evaluated more favorably in all dimensions, including competence and friendliness, compared to the baseline model, which does not explicitly handle social content or factual questions.
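To illustrate the modular idea the abstract describes, here is a minimal, hypothetical sketch (not the paper's actual code): each user turn is routed to a factual-QA module or a social-response module, and the selected output is folded into a persuasive reply. All names, cues, and canned responses below are assumptions for illustration.

```python
# Hypothetical sketch of a modular persuasive dialogue loop.
# Routing rules, module outputs, and the final appeal are illustrative only.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Turn:
    user_utterance: str


def classify_turn(turn: Turn) -> str:
    """Toy intent check: does this turn need facts or social handling?"""
    factual_cues = ("how", "what", "when", "where", "percent", "%")
    if any(cue in turn.user_utterance.lower() for cue in factual_cues):
        return "factual"
    return "social"


def factual_module(turn: Turn) -> str:
    # A real system would query a knowledge base or retriever here.
    return "About 80% of every donation goes directly to program services."


def social_module(turn: Turn) -> str:
    # A real system would generate a response conditioned on dialogue history.
    return "I hear you -- it's completely reasonable to ask where your money goes."


MODULES: Dict[str, Callable[[Turn], str]] = {
    "factual": factual_module,
    "social": social_module,
}


def respond(turn: Turn) -> str:
    """Route the turn to the right module, then append the persuasive appeal."""
    content = MODULES[classify_turn(turn)](turn)
    return f"{content} Would you consider making a small donation today?"


if __name__ == "__main__":
    print(respond(Turn("What percent of my donation actually reaches children?")))
```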
2021
LEGOEval: An Open-Source Toolkit for Dialogue System Evaluation via Crowdsourcing
Yu Li | Josh Arnold | Feifan Yan | Weiyan Shi | Zhou Yu
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations
We present LEGOEval, an open-source toolkit that enables researchers to easily evaluate dialogue systems in a few lines of code using the online crowdsource platform, Amazon Mechanical Turk. Compared to existing toolkits, LEGOEval features a flexible task design by providing a Python API that maps to commonly used React.js interface components. Researchers can personalize their evaluation procedures easily with our built-in pages as if playing with LEGO blocks. Thus, LEGOEval provides a fast, consistent method for reproducing human evaluation results. Besides the flexible task design, LEGOEval also offers an easy API to review collected data.
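The "LEGO blocks" design can be pictured with a short, hypothetical sketch (not LEGOEval's actual API): an evaluation task is assembled from reusable page components that each map to a front-end widget, and worker responses are collected per page. The class and parameter names below are assumptions for illustration.

```python
# Hypothetical sketch of composing a crowdsourced evaluation task from page "blocks".
# Component names, props, and the collection step are illustrative only.
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class Page:
    """One screen of the evaluation task, rendered as a front-end component."""
    name: str
    component: str                      # e.g. "instructions", "chat", "likert"
    props: Dict[str, Any] = field(default_factory=dict)


@dataclass
class Task:
    """An evaluation procedure assembled from pages, like stacking blocks."""
    pages: List[Page] = field(default_factory=list)

    def add(self, page: Page) -> "Task":
        self.pages.append(page)
        return self

    def collect(self, worker_inputs: Dict[str, Any]) -> Dict[str, Any]:
        # A real toolkit would return these records from the crowdsourcing platform.
        return {page.name: worker_inputs.get(page.name) for page in self.pages}


if __name__ == "__main__":
    task = (
        Task()
        .add(Page("intro", "instructions", {"text": "Chat with the bot, then rate it."}))
        .add(Page("dialogue", "chat", {"bot_endpoint": "http://localhost:5000/chat"}))
        .add(Page("rating", "likert", {"question": "How coherent was the bot?", "scale": 5}))
    )
    print(task.collect({"rating": 4}))
```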
Co-authors
- Weiyan Shi 2
- Zhou Yu 2
- Maximillian Chen 1
- Ryan Hou 1
- Jingwen Zhang 1
- Saurav Sahay 1
- Yu Li 1
- Josh Arnold 1