Zhihao Zhang
Soochow University
Other people with similar names: Zhihao Zhang (may refer to several people)
2025
Zero-shot Cross-lingual NER via Mitigating Language Difference: An Entity-aligned Translation Perspective
Zhihao Zhang | Sophia Yat Mei Lee | Dong Zhang | Shoushan Li | Guodong Zhou
Findings of the Association for Computational Linguistics: EMNLP 2025
Cross-lingual Named Entity Recognition (CL-NER) aims to transfer knowledge from high-resource languages to low-resource languages. However, existing zero-shot CL-NER (ZCL-NER) approaches primarily focus on Latin script languages (LSLs), where shared linguistic features facilitate effective knowledge transfer. In contrast, for non-Latin script languages (NSLs), such as Chinese and Japanese, performance often degrades due to deep structural differences. To address these challenges, we propose an entity-aligned translation (EAT) approach. Leveraging large language models (LLMs), EAT employs a dual-translation strategy to align entities between NSLs and English. In addition, we fine-tune LLMs on multilingual Wikipedia data to enhance entity alignment from source to target languages.
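A minimal sketch may make the dual-translation idea concrete. The bracket-marker prompts, helper names, and `llm` placeholder below are assumptions made for exposition, not the paper's actual implementation:

```python
# Illustrative sketch of EAT's dual-translation strategy as described in the
# abstract. Prompts, helper names, and the `llm` placeholder are assumptions.
import re
from typing import List, Tuple

def llm(prompt: str) -> str:
    """Stand-in for any instruction-tuned LLM call (API client, local model, ...)."""
    raise NotImplementedError("plug in an LLM client here")

def forward_translate(nsl_sentence: str) -> str:
    # Pass 1: NSL -> English, asking the model to wrap named entities in
    # [[...]] so the spans remain identifiable after translation.
    return llm("Translate to English, wrapping every named entity in [[...]]:\n"
               + nsl_sentence)

def back_translate(english: str, target_lang: str) -> str:
    # Pass 2: English -> target language, preserving the entity markers.
    # Together the two passes align each entity across scripts.
    return llm(f"Translate to {target_lang}, keeping all [[...]] markers:\n"
               + english)

def aligned_entities(marked_text: str) -> List[str]:
    # Recover entity spans from the [[...]] markers.
    return re.findall(r"\[\[(.+?)\]\]", marked_text)

def eat_ner(nsl_sentence: str, target_lang: str) -> List[Tuple[str, str]]:
    english = forward_translate(nsl_sentence)
    back = back_translate(english, target_lang)
    # Zip English-side entities with their target-language counterparts.
    return list(zip(aligned_entities(english), aligned_entities(back)))
```

The Wikipedia fine-tuning step mentioned in the abstract would presumably supply supervised source-target entity pairs so that the model preserves markers and entity boundaries more reliably across the two passes.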
2024
Cross-domain NER with Generated Task-Oriented Knowledge: An Empirical Study from Information Density Perspective
Zhihao Zhang | Sophia Yat Mei Lee | Junshuang Wu | Dong Zhang | Shoushan Li | Erik Cambria | Guodong Zhou
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Cross-domain Named Entity Recognition (CDNER) is crucial for Knowledge Graph (KG) construction and natural language processing (NLP), enabling learning from source to target domains with limited data. Previous studies often rely on manually collected entity-relevant sentences from the web or attempt to bridge the gap between tokens and entity labels across domains. These approaches are time-consuming and inefficient, as the collected data are often weakly correlated with the target task and require extensive pre-training. To address these issues, we propose automatically generating task-oriented knowledge (GTOK) using large language models (LLMs), focusing on the reasoning process of entity extraction. We then employ task-oriented pre-training (TOPT) to facilitate domain adaptation. Additionally, current cross-domain NER methods often lack explicit explanations for their effectiveness, so we introduce the concept of information density to better evaluate a model's effectiveness before performing entity recognition. We conduct systematic experiments and analyses to demonstrate the effectiveness of our proposed approach and the validity of using information density for model evaluation.
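As a rough illustration, the GTOK-then-TOPT pipeline might look like the following. The prompt template, the masked-LM recipe, and the information-density proxy are all assumptions for this sketch, since the abstract does not specify them:

```python
# Illustrative sketch of the GTOK -> TOPT pipeline and an information-density
# proxy, based only on the abstract. Prompts, models, and the density formula
# are assumptions for exposition, not the paper's actual definitions.
from typing import List, Set

def llm(prompt: str) -> str:
    """Stand-in for any instruction-tuned LLM call."""
    raise NotImplementedError("plug in an LLM client here")

def generate_task_oriented_knowledge(sentences: List[str], domain: str) -> List[str]:
    # GTOK: have the LLM spell out the reasoning process of entity extraction
    # for each target-domain sentence, yielding task-focused pre-training text.
    template = ("Domain: {d}\nSentence: {s}\n"
                "Explain step by step which spans are named entities and why.")
    return [llm(template.format(d=domain, s=s)) for s in sentences]

def information_density(text: str, task_terms: Set[str]) -> float:
    # One plausible proxy: the fraction of tokens relevant to the entity task.
    # Higher density suggests the generated corpus is worth pre-training on.
    tokens = text.split()
    return sum(t in task_terms for t in tokens) / max(len(tokens), 1)

def task_oriented_pretraining(corpus: List[str], base: str = "bert-base-cased"):
    # TOPT: continue masked-LM pre-training on the generated corpus so the
    # encoder adapts to the domain before NER fine-tuning (standard HF recipe).
    from datasets import Dataset
    from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)
    tok = AutoTokenizer.from_pretrained(base)
    ds = Dataset.from_dict({"text": corpus}).map(
        lambda b: tok(b["text"], truncation=True), batched=True,
        remove_columns=["text"])
    Trainer(model=AutoModelForMaskedLM.from_pretrained(base),
            args=TrainingArguments(output_dir="topt", num_train_epochs=1),
            train_dataset=ds,
            data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
            ).train()
```

Filtering generated texts by a density score before the TOPT step, as hinted at by the abstract's claim of evaluating the model "before performing entity recognition", is one natural way the two pieces could fit together.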
Co-authors
- Sophia Yat Mei Lee 2
- Shoushan Li (李寿山) 2
- Dong Zhang 2
- Guodong Zhou (周国栋) 2
- Erik Cambria 1