2025
TEaR: Improving LLM-based Machine Translation with Systematic Self-Refinement
Zhaopeng Feng | Yan Zhang | Hao Li | Bei Wu | Jiayu Liao | Wenqiang Liu | Jun Lang | Yang Feng | Jian Wu | Zuozhu Liu
Findings of the Association for Computational Linguistics: NAACL 2025
Large Language Models (LLMs) have achieved impressive results in Machine Translation (MT). However, human evaluations reveal that LLM-generated translations still contain various errors. Notably, feeding the error information back into the LLMs can facilitate self-refinement, leading to enhanced translation quality. Motivated by these findings, we introduce TEaR (Translate, Estimate, and Refine), a systematic LLM-based self-refinement framework aimed at bootstrapping translation performance. Our key results show that: 1) the TEaR framework enables LLMs to improve their translation quality relying solely on self-feedback, as measured by both automatic metrics and Multidimensional Quality Metrics (MQM) scores; 2) TEaR autonomously selects improvements, ensuring a robust translation quality baseline while outperforming both internal refinement and external feedback methods. Error analysis and iterative refinement experiments show that TEaR continuously reduces translation errors and enhances overall translation quality. Our code and data are publicly available at https://github.com/fzp0424/self_correct_mt.
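A minimal sketch of the Translate-Estimate-Refine loop the abstract describes, in Python. The `chat` helper, the prompt wording, the "No errors" stopping check, and the round limit are illustrative assumptions rather than the paper's actual prompts or implementation (see the linked repository for those).

```python
# Hypothetical sketch of a Translate-Estimate-Refine (TEaR) style loop.
# `chat` stands in for any LLM completion call; the prompts are illustrative.

def chat(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def tear_translate(src: str, src_lang: str, tgt_lang: str, max_rounds: int = 2) -> str:
    # 1) Translate: draft translation from the model itself.
    draft = chat(f"Translate the following {src_lang} text into {tgt_lang}:\n{src}")
    for _ in range(max_rounds):
        # 2) Estimate: ask the same model to list translation errors (MQM-style self-feedback).
        feedback = chat(
            "List the errors in this translation (accuracy, fluency, terminology). "
            f"Reply 'No errors' if it is already good.\nSource: {src}\nTranslation: {draft}"
        )
        if "no errors" in feedback.lower():
            break
        # 3) Refine: rewrite the translation using only the model's own feedback.
        draft = chat(
            f"Improve the translation using this feedback.\nSource: {src}\n"
            f"Translation: {draft}\nFeedback: {feedback}"
        )
    return draft
```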
Test-Time Code-Switching for Cross-lingual Aspect Sentiment Triplet Extraction
Dongming Sheng | Kexin Han | Hao Li | Yan Zhang | Yucheng Huang | Jun Lang | Wenqiang Liu
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Aspect Sentiment Triplet Extraction (ASTE) is a thriving research area with impressive outcomes being achieved on high-resource languages. However, the application of cross-lingual transfer to the ASTE task has been relatively unexplored, and current code-switching methods still suffer from term boundary detection issues and out-of-dictionary problems. In this study, we introduce a novel Test-Time Code-SWitching (TT-CSW) framework, which bridges the gap between the bilingual training phase and the monolingual test-time prediction. During training, a generative model is developed based on bilingual code-switched training data and can produce bilingual ASTE triplets for bilingual inputs. In the testing stage, we employ an alignment-based code-switching technique for test-time augmentation. Extensive experiments on cross-lingual ASTE datasets validate the effectiveness of our proposed method. We achieve an average improvement of 3.7% in weighted-averaged F1 across four datasets in different languages. Additionally, we set a benchmark using ChatGPT and GPT-4, and demonstrate that even smaller generative models fine-tuned with our proposed TT-CSW framework surpass ChatGPT and GPT-4 by 14.2% and 5.0%, respectively.
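A rough illustration of test-time code-switching augmentation as sketched in the abstract: the monolingual test input is paired with a code-switched variant before prediction. The toy term dictionary, the `model.generate_triplets` call, and the de-duplication step are hypothetical stand-ins for the paper's alignment-based component and generative ASTE model.

```python
# Rough illustration of test-time code-switching (TT-CSW) augmentation.
# A toy term dictionary stands in for the paper's alignment-based component.

from typing import Dict, List

def code_switch(sentence: str, term_dict: Dict[str, str]) -> str:
    # Replace dictionary terms with their counterparts in the other language,
    # yielding a code-switched variant of the monolingual test input.
    tokens = sentence.split()
    return " ".join(term_dict.get(tok.lower(), tok) for tok in tokens)

def predict_with_tt_csw(sentence: str, term_dict: Dict[str, str], model) -> List[str]:
    # Run the fine-tuned generative ASTE model on both the original input and
    # its code-switched variant, then merge the predicted triplets.
    variants = [sentence, code_switch(sentence, term_dict)]
    triplets: List[str] = []
    for v in variants:
        triplets.extend(model.generate_triplets(v))   # hypothetical model API
    return list(dict.fromkeys(triplets))              # de-duplicate, keep order
```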
2023
PRAM: An End-to-end Prototype-based Representation Alignment Model for Zero-resource Cross-lingual Named Entity Recognition
Yucheng Huang | Wenqiang Liu | Xianli Zhang | Jun Lang | Tieliang Gong | Chen Li
Findings of the Association for Computational Linguistics: ACL 2023
Zero-resource cross-lingual named entity recognition (ZRCL-NER) aims to leverage rich labeled source language data to address the NER problem in the zero-resource target language. Existing methods are built either on data transfer or on representation transfer. However, the former usually incurs additional computation costs, and the latter lacks explicit optimization specific to the NER task. To overcome these limitations, we propose a novel prototype-based representation alignment model (PRAM) for the challenging ZRCL-NER task. PRAM models the cross-lingual (CL) NER task and transfers knowledge from source languages to target languages in a unified neural network with end-to-end training, avoiding additional computation costs. Moreover, PRAM borrows the CL inference ability of multilingual language models and enhances it with a novel training objective, attribution-prediction consistency (APC), which explicitly enforces entity-level alignment between entity representations and predictions, as well as alignment across languages using prototypes as bridges. The experimental results show that PRAM significantly outperforms existing state-of-the-art methods, especially in some challenging scenarios.
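A toy sketch of the prototype idea behind PRAM: class prototypes are computed as mean entity representations, and each entity representation is pulled toward the prototype of its label. The cosine-distance loss and tensor shapes are illustrative assumptions, not the paper's exact APC objective.

```python
# Toy sketch of prototype-based alignment for cross-lingual NER.
# The loss form below is illustrative, not the paper's exact APC objective.

import torch
import torch.nn.functional as F

def prototype_alignment_loss(entity_reprs: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """entity_reprs: (N, d) entity span representations; labels: (N,) entity-type ids."""
    loss = entity_reprs.new_zeros(())
    for lab in labels.unique():
        mask = labels == lab
        prototype = entity_reprs[mask].mean(dim=0)          # class prototype (mean representation)
        # Encourage each entity representation to stay close to its class prototype.
        loss = loss + (1 - F.cosine_similarity(entity_reprs[mask], prototype.unsqueeze(0))).mean()
    return loss / labels.unique().numel()
```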
2017
Dependency Parsing with Partial Annotations: An Empirical Comparison
Yue Zhang | Zhenghua Li | Jun Lang | Qingrong Xia | Min Zhang
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
This paper describes and compares two straightforward approaches for dependency parsing with partial annotations (PA). The first approach is based on a forest-based training objective for two CRF parsers, i.e., a biaffine neural network graph-based parser (Biaffine) and a traditional log-linear graph-based parser (LLGPar). The second approach is based on the idea of constrained decoding for three parsers, i.e., a traditional linear graph-based parser (LGPar), a globally normalized neural network transition-based parser (GN3Par) and a traditional linear transition-based parser (LTPar). In the test phase, constrained decoding is also used for completing partial trees. We conduct experiments on Penn Treebank under three different settings for simulating PA, i.e., random, most uncertain, and divergent outputs from the five parsers. The results show that LLGPar is most effective in directly learning from PA, while the other parsers achieve their best performance when PAs are first completed into full trees by LLGPar.
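A toy sketch of constrained decoding with partial annotations: arcs that contradict an annotated head are masked out before decoding. The greedy per-token head selection stands in for a real tree decoder (MST or Eisner) purely for brevity, and the score-matrix layout is an assumed convention.

```python
# Toy sketch of constrained decoding with a partial annotation.
# Greedy head selection stands in for a proper MST/Eisner decoder.

import numpy as np

def constrained_decode(arc_scores: np.ndarray, partial_heads: dict) -> list:
    """arc_scores[h, d]: score of head h (0 = ROOT) for dependent d (1..n).
    partial_heads: {dependent_index: annotated_head_index} from the partial annotation."""
    scores = arc_scores.copy()
    n = scores.shape[0] - 1
    for dep, head in partial_heads.items():
        # Forbid every head except the annotated one for this dependent.
        scores[:, dep] = -np.inf
        scores[head, dep] = arc_scores[head, dep]
    # Pick the best remaining head for each dependent; a real parser would
    # decode a well-formed tree over the constrained score matrix instead.
    return [int(scores[:, d].argmax()) for d in range(1, n + 1)]
```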
2014
An Iterative Link-based Method for Parallel Web Page Mining
Le Liu | Yu Hong | Jun Lu | Jun Lang | Heng Ji | Jianmin Yao
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)
2010
I2R’s machine translation system for IWSLT 2010
Xiangyu Duan | Rafael Banchs | Jun Lang | Deyi Xiong | Aiti Aw | Min Zhang | Haizhou Li
Proceedings of the 7th International Workshop on Spoken Language Translation: Evaluation Campaign
2008
An Entity-Mention Model for Coreference Resolution with Inductive Logic Programming
Xiaofeng Yang | Jian Su | Jun Lang | Chew Lim Tan | Ting Liu | Sheng Li
Proceedings of ACL-08: HLT