Yuanjie Wang
2023
Automatic Evaluate Dialogue Appropriateness by Using Dialogue Act
Bao Chen | Yuanjie Wang | Zeming Liu | Yuhang Guo
Findings of the Association for Computational Linguistics: EMNLP 2023
Evaluation of dialogue systems requires assessing various aspects, among which appropriateness holds significance as a core element of communicative language competence. However, current evaluations rely heavily on human judgments, which are time-consuming, labor-intensive, prone to bias, and lacking in objectivity. In this paper, we introduce Dialogue Act Appropriateness (DAA), a novel method that utilizes the underlying patterns of dialogue act transitions to evaluate the appropriateness of chatbot responses. We learn transition patterns from human-human dialogue corpora and evaluate chatbot appropriateness by measuring the similarity of their transition patterns to those observed in human-human dialogues. To validate DAA, we annotate a test dataset by manually evaluating the appropriateness of dialogues from multiple chatbot systems. The experimental results demonstrate a strong correlation between our evaluation metric and human ratings, establishing the reliability of DAA as a measure of dialogue appropriateness.
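The abstract sketches the core idea: learn dialogue-act transition statistics from human-human corpora and score a chatbot response by how well its act transition matches those statistics. The following is a minimal illustration of that idea as a bigram transition model, not the paper's actual implementation; the act labels, the `floor` smoothing value, and both function names are assumptions made for the sketch.

```python
from collections import Counter, defaultdict

def learn_transitions(dialogues):
    """Estimate P(next_act | prev_act) from human-human dialogues,
    where each dialogue is a list of dialogue-act labels."""
    counts = defaultdict(Counter)
    for acts in dialogues:
        for prev, nxt in zip(acts, acts[1:]):
            counts[prev][nxt] += 1
    return {
        prev: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
        for prev, nxts in counts.items()
    }

def appropriateness(context_act, response_act, trans, floor=1e-6):
    """Score a chatbot response act by how likely the act transition
    is under the human-human transition model (floor for unseen pairs)."""
    return trans.get(context_act, {}).get(response_act, floor)

# Toy human-human act sequences (hypothetical labels)
human = [["greet", "greet", "question", "answer", "bye", "bye"],
         ["greet", "question", "answer", "question", "answer"]]
model = learn_transitions(human)
print(appropriateness("question", "answer", model))  # high: common transition
print(appropriateness("question", "bye", model))     # low: unseen transition
```

In this toy setting, "answer" after "question" scores 1.0 while "bye" after "question" falls back to the floor; a system-level score could then aggregate such transition scores over a whole conversation.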
MAP: Low-data Regime Multimodal Learning with Adapter-based Pre-training and Prompting
Wenyan Li | Dong Li | Wanjing Li | Yuanjie Wang | Hai Jie | Yiran Zhong
Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD)
Pretrained vision-language (VL) models have recently shown impressive results on various multimodal downstream tasks. Many benchmark models build on pretrained causal language models (LMs), leveraging the few-shot learning and generalization ability the LMs acquire from training on large text corpora. However, these models are often gigantic and require large-scale image and text data and high computational cost to train. This paper introduces a moderate-size model called MAP for efficient VL transfer learning through adapter-based pretraining and prompting. We aim to answer how much can be achieved through VL pretraining in the low-data regime while maximizing efficiency in transferring the knowledge of a moderate-size frozen LM. Our experiments demonstrate that MAP achieves substantially better zero-shot and few-shot performance on downstream VL tasks with only 10% of the pretraining data and a 30x lighter pretrained LM backbone compared to Frozen. MAP also outperforms fully trained models of comparable size at retaining transfer learning ability as the amount of training data shrinks.
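To make the adapter-based idea concrete, here is a minimal PyTorch sketch of a bottleneck adapter attached to a frozen backbone layer. The bottleneck width, the GELU activation, and the stand-in `nn.TransformerEncoderLayer` backbone are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project,
    with a residual connection around the whole block."""
    def __init__(self, hidden_dim, bottleneck_dim=64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

# Freeze the backbone; only the adapter parameters remain trainable.
# `lm` is a hypothetical stand-in for one layer of a pretrained LM.
lm = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
for p in lm.parameters():
    p.requires_grad = False

adapter = Adapter(hidden_dim=512)
x = torch.randn(2, 16, 512)   # (batch, sequence, hidden)
out = adapter(lm(x))          # frozen layer output passes through the adapter
print(out.shape)              # torch.Size([2, 16, 512])
```

The design point is parameter efficiency: only the small down/up projections are updated, so transfer learning touches a tiny fraction of the model while the pretrained backbone stays intact.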
2020
BIT’s system for the AutoSimTrans 2020
Minqin Li | Haodong Cheng | Yuanjie Wang | Sijia Zhang | Liting Wu | Yuhang Guo
Proceedings of the First Workshop on Automatic Simultaneous Translation
This paper describes our machine translation systems for the streaming Chinese-to-English translation task of AutoSimTrans 2020. We present two methods for segmenting the streaming input: one based on sentence length and one based on a sentence boundary detection model. Experimental results on the transcription and ASR-output translation development sets show that the system using the detection-model-based segmentation outperforms the length-based one by 1.19 and 0.99 BLEU, respectively, at similar or better latency.
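As a rough illustration of the two segmentation strategies the abstract contrasts, the sketch below cuts a token stream either at a fixed length or wherever a boundary detector fires. The threshold, the punctuation-based toy detector, and the function names are hypothetical, not the system's actual components.

```python
def segment_stream(tokens, max_len=20, boundary_model=None):
    """Yield translation-ready segments from a token stream.
    Length-based: cut every `max_len` tokens.
    Detection-based: cut where `boundary_model` predicts a boundary."""
    buf = []
    for tok in tokens:
        buf.append(tok)
        is_boundary = (boundary_model(buf) if boundary_model
                       else len(buf) >= max_len)
        if is_boundary:
            yield buf
            buf = []
    if buf:
        yield buf  # flush whatever remains at stream end

# Toy boundary detector: cut on sentence-final punctuation.
detector = lambda buf: buf[-1] in "。？！"
stream = list("我今天去了学校。你呢？")  # "I went to school today. And you?"
for seg in segment_stream(stream, boundary_model=detector):
    print("".join(seg))
```

The trade-off the paper measures falls out of this structure: shorter segments lower latency but give the translator less context, so a learned boundary detector can cut at more natural points than a fixed length can.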
Co-authors
- Yuhang Guo 2
- Bao Chen 1
- Zeming Liu 1
- Wenyan Li 1
- Dong Li 1