2023
Collective Human Opinions in Semantic Textual Similarity
Yuxia Wang | Shimin Tao | Ning Xie | Hao Yang | Timothy Baldwin | Karin Verspoor
Transactions of the Association for Computational Linguistics, Volume 11
Despite the subjective nature of semantic textual similarity (STS) and pervasive disagreements in STS annotation, existing benchmarks have used averaged human ratings as the gold standard. Averaging masks the true distribution of human opinions on examples with low agreement, and prevents models from capturing the semantic vagueness that the individual ratings represent. In this work, we introduce USTS, the first Uncertainty-aware STS dataset, with ∼15,000 Chinese sentence pairs and 150,000 labels, to study collective human opinions in STS. Analysis reveals that neither a scalar nor a single Gaussian fits a set of observed judgments adequately. We further show that current STS models cannot capture the variance caused by human disagreement on individual instances, but rather reflect the predictive confidence over the aggregate dataset.
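The abstract's core claim, that averaging masks the distribution of human opinions, can be illustrated with a toy example. The ratings below are invented for illustration and are not drawn from USTS:

```python
import statistics

# Hypothetical annotator ratings on a 0-5 STS scale for two sentence pairs.
high_agreement = [3.0, 3.0, 3.0, 3.0, 3.0]   # annotators agree
low_agreement = [1.0, 1.0, 3.0, 5.0, 5.0]    # annotators split into camps

# Averaging collapses both to the same scalar "gold" label...
assert statistics.mean(high_agreement) == statistics.mean(low_agreement) == 3.0

# ...but the spread, which a scalar label discards, differs sharply.
# The split ratings are also bimodal, so even a single Gaussian centered
# at the mean would misrepresent them.
assert statistics.stdev(high_agreement) == 0.0
assert statistics.stdev(low_agreement) == 2.0
```

A model trained only on the averaged labels sees identical targets for both pairs, which is exactly the information loss the dataset is designed to expose.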
Multifaceted Challenge Set for Evaluating Machine Translation Performance
Xiaoyu Chen | Daimeng Wei | Zhanglin Wu | Ting Zhu | Hengchao Shang | Zongyao Li | Jiaxin Guo | Ning Xie | Lizhi Lei | Hao Yang | Yanfei Jiang
Proceedings of the Eighth Conference on Machine Translation
Machine translation evaluation is critical to machine translation research, as evaluation results reflect the effectiveness of training strategies; a fair and efficient evaluation method is therefore necessary. Many researchers have raised questions about currently available evaluation metrics from various perspectives and proposed suggestions accordingly. However, to our knowledge, few have analyzed the difficulty level of source sentences and its influence on evaluation results. This paper presents HW-TSC’s submission to the WMT23 MT Test Suites shared task. We propose a systematic approach for constructing challenge sets along four dimensions: word difficulty, length difficulty, grammar difficulty, and model learning difficulty. We open-source two Multifaceted Challenge Sets for Zh→En and En→Zh, and present the results of participants in this year’s General MT shared task on our test sets.
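One of the four dimensions, word difficulty, can be approximated by word rarity in a reference corpus. The heuristic below is an illustrative sketch, not the paper's actual scoring formula, and the corpus is a toy stand-in:

```python
from collections import Counter

# Toy reference corpus; in practice this would be a large monolingual corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()
freq = Counter(corpus)
total = sum(freq.values())

def word_difficulty(sentence: str) -> float:
    """Score a sentence by the rarity of its words: rarer words -> harder.
    Unseen words get the maximum per-word score of 1.0.
    (Illustrative heuristic, not the paper's exact method.)"""
    words = sentence.split()
    return sum(1.0 - freq[w] / total for w in words) / len(words)

# A sentence of common words scores lower than one with a rare word.
easy = word_difficulty("the cat sat")
hard = word_difficulty("the zebu grazed")
assert hard > easy
```

Ranking candidate source sentences by such a score, then sampling from the top, is one simple way a word-difficulty challenge subset could be assembled.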
2022
Exploring Robustness of Machine Translation Metrics: A Study of Twenty-Two Automatic Metrics in the WMT22 Metric Task
Xiaoyu Chen | Daimeng Wei | Hengchao Shang | Zongyao Li | Zhanglin Wu | Zhengzhe Yu | Ting Zhu | Mengli Zhu | Ning Xie | Lizhi Lei | Shimin Tao | Hao Yang | Ying Qin
Proceedings of the Seventh Conference on Machine Translation (WMT)
Contextual word embeddings extracted from pre-trained models have become the basis for many downstream NLP tasks, including automatic evaluation for machine translation. Metrics that leverage embeddings claim to better capture synonyms and changes in word order, and thus to correlate better with human ratings than surface-form matching metrics (e.g., BLEU). However, few studies have examined the robustness of these metrics. This report uses a challenge set to uncover the brittleness of reference-based and reference-free metrics. Our challenge set aims to examine metrics’ capability to relate synonyms across different domains and to discern catastrophic errors at both the word and sentence level. The results show that although embedding-based metrics perform relatively well at discerning sentence-level negation/affirmation errors, their performance at relating synonyms is poor. In addition, we find that some metrics are susceptible to text style, which compromises their generalizability.
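The synonym weakness of surface-form metrics that embedding-based metrics claim to fix is easy to demonstrate. The function below is a deliberately simplified stand-in for BLEU (unigram precision only, no clipping or brevity penalty), and the sentence pair is invented:

```python
def unigram_precision(hypothesis: str, reference: str) -> float:
    """Surface-form overlap, the core idea behind BLEU-style metrics,
    radically simplified: fraction of hypothesis words found in the reference."""
    hyp_words = hypothesis.split()
    ref_words = set(reference.split())
    return sum(w in ref_words for w in hyp_words) / len(hyp_words)

reference = "the movie was great"
paraphrase = "the film was great"   # 'film' is a synonym of 'movie'

# Surface matching penalizes the synonym even though meaning is preserved;
# this is the gap embedding-based metrics claim to close, and the challenge
# set tests whether they actually do.
assert unigram_precision(paraphrase, reference) == 0.75
assert unigram_precision(reference, reference) == 1.0
```

A challenge set for synonym robustness pairs many such meaning-preserving substitutions with meaning-destroying ones, and checks whether a metric scores them differently.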
2020
Efficient Transfer Learning for Quality Estimation with Bottleneck Adapter Layer
Hao Yang | Minghan Wang | Ning Xie | Ying Qin | Yao Deng
Proceedings of the 22nd Annual Conference of the European Association for Machine Translation
The Predictor-Estimator framework for quality estimation (QE) is commonly used for its strong performance: the predictor handles feature extraction and the estimator handles quality evaluation. However, training the predictor from scratch is computationally expensive. In this paper, we propose an efficient transfer learning framework that transfers knowledge from NMT data into QE models. We also propose a Predictor-Estimator-like model named BAL-QE, which extracts high-quality features with a pre-trained NMT model and performs classification with a fine-tuned Bottleneck Adapter Layer (BAL). Experiments show that BAL-QE achieves 97% of the SOTA performance in the WMT19 En-De and En-Ru QE tasks by training only 3% of the parameters within 4 hours on 4 Titan XP GPUs. Compared with the commonly used NuQE baseline, BAL-QE achieves performance improvements of 47% (En-Ru) and 75% (En-De).
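The parameter economy comes from the bottleneck adapter pattern: a small down-project/up-project block with a residual connection, inserted into a frozen backbone. The sketch below shows the generic pattern in NumPy; the dimensions, initialization, and exact placement are assumptions for illustration, not BAL-QE's published configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def bottleneck_adapter(h, W_down, W_up):
    """Generic bottleneck adapter: down-project, ReLU, up-project, residual.
    Only W_down and W_up would be trained; the backbone producing h stays frozen."""
    z = np.maximum(h @ W_down, 0.0)   # down-projection + non-linearity
    return h + z @ W_up               # up-projection + residual connection

d_model, d_bottleneck = 512, 64       # illustrative sizes
W_down = rng.normal(0.0, 0.02, (d_model, d_bottleneck))
W_up = rng.normal(0.0, 0.02, (d_bottleneck, d_model))

h = rng.normal(size=(1, d_model))     # a feature from the frozen NMT predictor
out = bottleneck_adapter(h, W_down, W_up)
assert out.shape == (1, d_model)      # adapter preserves dimensionality

# Trainable cost: 2*d*k adapter params vs d*d for one full dense layer,
# which is how training only a few percent of parameters becomes possible.
assert W_down.size + W_up.size < d_model * d_model
```

Because the output dimension matches the input, adapters can be dropped between existing layers without changing any downstream shapes.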
The HW-TSC Video Speech Translation System at IWSLT 2020
Minghan Wang | Hao Yang | Yao Deng | Ying Qin | Lizhi Lei | Daimeng Wei | Hengchao Shang | Ning Xie | Xiaochun Li | Jiaxian Guo
Proceedings of the 17th International Conference on Spoken Language Translation
This paper presents the details of our system for the IWSLT Video Speech Translation evaluation. The system works as a cascade of three modules: 1) a proprietary ASR system; 2) a disfluency correction system that removes interregnums and other disfluent expressions with a fine-tuned BERT and a series of rule-based algorithms; and 3) an NMT system based on the Transformer, trained on a massive publicly available corpus.
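The rule-based half of module 2 can be sketched with a couple of regular expressions. This is a minimal illustration of the idea, removing filler words and collapsing word repetitions; the actual system pairs such rules with a fine-tuned BERT, and the filler list here is invented:

```python
import re

# Illustrative filler inventory; a real system would use a curated list.
FILLERS = r"\b(?:uh|um|you know|i mean)\b"

def remove_disfluencies(text: str) -> str:
    """Toy rule-based disfluency filter for ASR output."""
    text = re.sub(FILLERS, "", text, flags=re.IGNORECASE)   # drop filler words
    text = re.sub(r"\b(\w+)(?: \1\b)+", r"\1", text)        # collapse repeats
    return re.sub(r"\s+", " ", text).strip()                # tidy whitespace

cleaned = remove_disfluencies("um I I think you know the the plan works")
assert cleaned == "I think the plan works"
```

In the cascade, the ASR transcript passes through this cleanup before reaching the NMT module, so the translator sees fluent text closer to its training distribution.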