Yiyang Kang
2025
DUTIR at SemEval-2025 Task 10: A Large Language Model-based Approach for Entity Framing in Online News
Tengxiao Lv | Juntao Li | Chao Liu | Yiyang Kang | Ling Luo | Yuanyuan Sun | Hongfei Lin
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
We propose a multilingual text processing framework that combines multilingual translation with data augmentation, QLoRA-based multi-model fine-tuning, and GLM-4-Plus-based ensemble classification. By using GLM-4-Plus to translate multilingual texts into English, we enhance data diversity and quantity. Data augmentation effectively improves the model’s performance on imbalanced datasets. QLoRA fine-tuning optimizes the model and reduces classification loss. GLM-4-Plus, as a meta-classifier, further enhances system performance. Our system achieved first place in three languages (English, Portuguese and Russian).
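To illustrate the QLoRA fine-tuning step described in the abstract, here is a minimal sketch of a 4-bit QLoRA setup using the Hugging Face transformers, bitsandbytes, and peft libraries. The base model name, LoRA rank, and target modules are illustrative assumptions; the abstract does not specify them.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "Qwen/Qwen2.5-7B-Instruct"  # hypothetical base model, not stated in the abstract

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters are the only trainable parameters; rank/alpha values are illustrative.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only adapter weights are updated during fine-tuning

A standard supervised fine-tuning loop (for example, trl's SFTTrainer) could then be run on the translated and augmented data; the exact training setup used by the authors is not given in the abstract.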
111DUT at SemEval-2025 Task 8: Hierarchical Chain-of-Thought Reasoning and Multi-Model Deliberation for Robust TableQA
Jiaqi Yao | Erchen Yu | Yicen Tian | Yiyang Kang | Jiayi Zhang | Hongfei Lin | Linlin Zong | Bo Xu
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
The proliferation of structured tabular data in domains like healthcare and finance has intensified the demand for precise table question answering, particularly for complex numerical reasoning and cross-domain generalization. Existing approaches struggle with implicit semantics and multi-step arithmetic operations. This paper presents our solution for SemEval-2025 Task 8, which includes three synergistic components: (1) a Schema Profiler that extracts structural metadata via LLM-driven analysis and statistical validation, (2) a Hierarchical Chain-of-Thought module that decomposes questions into four stages (semantic anchoring, schema mapping, query synthesis, and self-correction) to ensure SQL validity, and (3) a Confidence-Accuracy Voting mechanism that resolves discrepancies across LLMs through weighted ensemble decisions. Our framework achieves scores of 81.23 on Databench and 81.99 on Databench_lite, ranking 6th and 5th respectively, demonstrating the effectiveness of structured metadata guidance and cross-model deliberation in complex TableQA scenarios.
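The Confidence-Accuracy Voting step can be pictured with the hedged sketch below: each model's answer is weighted by its self-reported confidence multiplied by its held-out accuracy, and the answer with the largest total weight wins. The weighting formula and field names are assumptions for illustration; the abstract does not give the exact scheme.

from collections import defaultdict

def confidence_accuracy_vote(candidates):
    # candidates: one entry per LLM, e.g.
    # {"model": "model_a", "answer": "42", "confidence": 0.9, "dev_accuracy": 0.80}
    # Weight each vote by confidence * held-out accuracy and return the answer
    # with the largest accumulated weight (an assumed instantiation of weighted voting).
    scores = defaultdict(float)
    for c in candidates:
        scores[c["answer"]] += c["confidence"] * c["dev_accuracy"]
    return max(scores, key=scores.get)

print(confidence_accuracy_vote([
    {"model": "model_a", "answer": "42", "confidence": 0.9, "dev_accuracy": 0.80},
    {"model": "model_b", "answer": "41", "confidence": 0.7, "dev_accuracy": 0.78},
    {"model": "model_c", "answer": "42", "confidence": 0.6, "dev_accuracy": 0.75},
]))  # prints "42": two moderately confident models outweigh one confident one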