Yang An
2024
Refining Corpora from a Model Calibration Perspective for Chinese Spelling Correction
Dingyao Yu | Yang An | Wei Ye | Xiongfeng Xiao | Shaoguang Mao | Tao Ge | Shikun Zhang
Findings of the Association for Computational Linguistics: ACL 2024
Chinese Spelling Correction (CSC) commonly lacks large-scale high-quality corpora, due to the labor-intensive labeling of spelling errors in real-life human writing or typing scenarios. Two data augmentation methods are widely adopted: (1) *Random Replacement* with the guidance of confusion sets and (2) *OCR/ASR-based Generation* that simulates character misuse. However, both methods inevitably introduce noisy data (e.g., false spelling errors), potentially leading to over-correction. By carefully analyzing the two types of corpora, we find that though the latter achieves more robust generalization performance, the former yields better-calibrated CSC models. We then provide a theoretical analysis of this empirical observation, based on which a corpus refining strategy is proposed. Specifically, OCR/ASR-based data samples are fed into a well-calibrated CSC model trained on a random replacement-based corpus and then filtered based on prediction confidence. By training a simple BERT-based model on the refined OCR/ASR-based corpus, we achieve state-of-the-art performance on three widely-used benchmarks, while significantly alleviating over-correction (e.g., lowering false positive predictions).
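The refining strategy described in the abstract can be sketched as a simple confidence filter. This is a minimal illustration, not code from the paper; `predict_confidence` is a hypothetical stand-in for the well-calibrated CSC model trained on random replacement-based data, and the threshold value is an assumption.

```python
def filter_by_confidence(samples, predict_confidence, threshold=0.9):
    """Refine an OCR/ASR-generated corpus: keep only (source, target)
    pairs whose labeled correction the calibrated model assigns high
    confidence, dropping likely noise such as false spelling errors.

    `predict_confidence(source, target)` is a hypothetical callable
    returning the calibrated model's confidence in [0, 1] that `target`
    is the correct rewrite of `source`.
    """
    refined = []
    for source, target in samples:
        if predict_confidence(source, target) >= threshold:
            refined.append((source, target))
    return refined
```

A BERT-based corrector trained on the output of such a filter would then see fewer spurious "errors", which is the mechanism the abstract credits for reduced over-correction.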
2020
Evaluation Metrics for Headline Generation Using Deep Pre-Trained Embeddings
Abdul Moeed | Yang An | Gerhard Hagerer | Georg Groh
Proceedings of the Twelfth Language Resources and Evaluation Conference
With the explosive growth in textual data, it is becoming increasingly important to summarize text automatically. Recently, generative language models have shown promise in abstractive text summarization tasks. Since these models rephrase text and thus use similar but different words from those in the summarized text, existing metrics such as ROUGE that rely on n-gram overlap may not be optimal. Therefore, we evaluate two embedding-based evaluation metrics that are applicable to abstractive summarization: Fréchet embedding distance, which has been introduced recently, and angular embedding similarity, which is our proposed metric. To demonstrate the utility of both metrics, we analyze the headline generation capacity of two state-of-the-art language models: GPT-2 and ULMFiT. In particular, our proposed metric shows a close relation with human judgments in our experiments and correlates better with them overall. For reproducibility, the source code and human assessments from our experiments are available on GitHub.
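The two metrics named in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: angular similarity is the standard angle-based rescaling of cosine similarity, and the Fréchet distance here is simplified to a diagonal-covariance (per-dimension variance) assumption to keep the sketch self-contained.

```python
import math

def angular_similarity(u, v):
    """Angular embedding similarity between two embedding vectors:
    map cosine similarity through arccos so the score reflects the
    angle itself, normalized to [0, 1] (1 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    cos = max(-1.0, min(1.0, dot / (norm_u * norm_v)))
    return 1.0 - math.acos(cos) / math.pi

def frechet_embedding_distance(ref_embs, gen_embs):
    """Fréchet distance between two sets of embeddings, simplified to
    diagonal covariances: squared distance between the means plus
    squared distance between per-dimension standard deviations."""
    def stats(xs):
        n, d = len(xs), len(xs[0])
        mu = [sum(x[i] for x in xs) / n for i in range(d)]
        sd = [math.sqrt(sum((x[i] - mu[i]) ** 2 for x in xs) / n)
              for i in range(d)]
        return mu, sd
    mu1, sd1 = stats(ref_embs)
    mu2, sd2 = stats(gen_embs)
    return (sum((a - b) ** 2 for a, b in zip(mu1, mu2)) +
            sum((a - b) ** 2 for a, b in zip(sd1, sd2)))
```

In practice both metrics would be computed over sentence embeddings of reference and generated headlines; the full Fréchet distance uses complete covariance matrices and a matrix square root rather than the diagonal shortcut shown here.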