Fei Teng

2025

Exposing Numeracy Gaps: A Benchmark to Evaluate Fundamental Numerical Abilities in Large Language Models
Haoyang Li | Xuejia Chen | Zhanchao Xu | Darian Li | Nicole Hu | Fei Teng | Yiming Li | Luyu Qiu | Chen Jason Zhang | Li Qing | Lei Chen
Findings of the Association for Computational Linguistics: ACL 2025

Large Language Models (LLMs) have demonstrated impressive capabilities in natural language processing tasks such as text generation and semantic understanding. However, their performance on numerical reasoning tasks, such as basic arithmetic, numerical retrieval, and magnitude comparison, remains surprisingly poor. This gap arises from their reliance on surface-level statistical patterns rather than an understanding of numbers as continuous magnitudes. Existing benchmarks focus primarily on either linguistic competence or structured mathematical problem-solving, neglecting the fundamental numerical reasoning required in real-world scenarios. To bridge this gap, we propose NumericBench, a comprehensive benchmark that evaluates six fundamental numerical capabilities: number recognition, arithmetic operations, contextual retrieval, comparison, summary, and multi-step reasoning. NumericBench includes datasets ranging from synthetic number lists to crawled real-world data, addressing challenges such as long contexts, noise, and multi-step reasoning. Extensive experiments on state-of-the-art LLMs, including GPT-4 and DeepSeek, reveal persistent weaknesses in numerical reasoning, highlighting the urgent need for numerically aware language modeling. The benchmark is released at https://github.com/TreeAI-Lab/NumericBench.

2024

Can LLMs Learn from Previous Mistakes? Investigating LLMs’ Errors to Boost for Reasoning
Yongqi Tong | Dawei Li | Sizhe Wang | Yujia Wang | Fei Teng | Jingbo Shang
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Large language models (LLMs) have demonstrated striking reasoning capability. Recent works have shown the benefits of fine-tuning LLMs on gold-standard Chain-of-Thought (CoT) rationales or using them as correct examples in few-shot prompting. While humans can indeed imitate correct examples, learning from our mistakes is another vital aspect of human cognition. Hence, a question naturally arises: can LLMs learn and benefit from their mistakes, especially in their reasoning? This study investigates the problem from both the prompting and model-tuning perspectives. We begin by introducing CoTErrorSet, a new benchmark with 609,432 questions, each designed with both correct and erroneous references that demonstrate the types of mistakes made and the reasons for making them. To explore how these mistakes can be leveraged, we design two methods: (1) Self-rethinking prompting guides LLMs to reconsider whether they have made similar mistakes before; and (2) Mistake tuning fine-tunes models on both correct and incorrect reasoning, rather than only on ground-truth rationales as in traditional methodology. We conduct a series of experiments showing that LLMs benefit from their mistakes in both settings. Our two methods offer potentially cost-effective strategies for leveraging errors to enhance reasoning capabilities, costing significantly less than creating meticulously hand-crafted gold references. We conclude with a thorough analysis of the reasons behind LLMs' errors, which points to issues that future research needs to address. CoTErrorSet will be published soon at https://github.com/YookiTong/Learn-from-Mistakes-CotErrorSet.

The vivo Sign Language Digital Human Translation System
Junyuan He (何俊远) | Xin Liu (刘鑫) | Murong Yang (杨牧融) | Xiaolong Li (李小龙) | Xuming Huang (黄旭铭) | Fei Teng (滕飞) | Xiaoxin Chen (陈晓昕) | Fan Fu (付凡)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)

This paper describes the system we submitted to the sign language digital human translation quality evaluation at the 23rd China National Conference on Computational Linguistics. The task evaluates sign language digital humans on the naturalness and accuracy of their translation from Chinese into Chinese Sign Language. Our system first translates Chinese text into sign language text with a sign language translation algorithm. A motion fusion algorithm then synthesizes the sign action units corresponding to the sign language text into natural, complete digital human motions, while a facial driving algorithm naturally integrates non-manual elements such as mouth shapes and facial expressions into the synthesized signing, producing a sign language digital human with micro-expressions and synchronized lip movements. Our system achieved an overall score of 3.513 on the official human evaluation set for sign language digital human translation quality, ranking first in the task.