2025
V-Oracle: Making Progressive Reasoning in Deciphering Oracle Bones for You and Me
Runqi Qiao
|
Qiuna Tan
|
Guanting Dong
|
Minhui Wu
|
Jiapeng Wang
|
YiFan Zhang
|
Zhuoma GongQue
|
Chong Sun
|
Yida Xu
|
Yadong Xue
|
Ye Tian
|
Zhimin Bao
|
Lan Yang
|
Chen Li
|
Honggang Zhang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Oracle Bone Script (OBS) is a vital treasure of human civilization, rich in insights from ancient societies. However, the evolution of written language over millennia complicates its decipherment. In this paper, we propose V-Oracle, an innovative framework that uses Large Multi-modal Models (LMMs) to interpret OBS. V-Oracle applies principles of pictographic character formation and frames the task as a visual question-answering (VQA) problem, establishing a multi-step reasoning chain. It introduces a multi-dimensional data augmentation method for synthesizing high-quality OBS samples and implements multi-phase oracle alignment tuning to improve LMMs’ visual reasoning capabilities. Moreover, to bridge the evaluation gap in the OBS field, we further introduce Oracle-Bench, a comprehensive benchmark that emphasizes process-oriented assessment and incorporates both standard and out-of-distribution setups for realistic evaluation. Extensive experimental results demonstrate the effectiveness of our method in providing quantitative analyses and superior deciphering capability.
Don’t Get Lost in the Trees: Streamlining LLM Reasoning by Overcoming Tree Search Exploration Pitfalls
Ante Wang
|
Linfeng Song
|
Ye Tian
|
Dian Yu
|
Haitao Mi
|
Xiangyu Duan
|
Zhaopeng Tu
|
Jinsong Su
|
Dong Yu
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Recent advancements in tree search algorithms guided by verifiers have significantly enhanced the reasoning capabilities of large language models (LLMs), but at the cost of increased computational resources. In this work, we identify two key challenges contributing to this inefficiency: over-exploration due to redundant states with semantically equivalent content, and under-exploration caused by high variance in verifier scoring that leads to frequent trajectory switching. To address these issues, we propose FETCH, an efficient tree search framework that is a flexible, plug-and-play system compatible with various tree search algorithms. Our framework mitigates over-exploration by merging semantically similar states using agglomerative clustering of text embeddings obtained from a fine-tuned SimCSE model. To tackle under-exploration, we enhance verifiers by incorporating temporal difference learning with adjusted λ-returns during training to reduce variance, and by employing a verifier ensemble to aggregate scores during inference. Experiments on the GSM8K, GSM-Plus, and MATH datasets demonstrate that our methods significantly improve reasoning accuracy and computational efficiency across four different tree search algorithms, paving the way for more practical applications of LLM-based reasoning. The code is available at https://github.com/DeepLearnXMU/Fetch.
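The state-merging step described in the abstract can be pictured with a minimal sketch. This is not the authors' implementation (see the linked repository for that): it substitutes a generic sentence encoder for their fine-tuned SimCSE model, and the distance threshold is an illustrative value only.

```python
# Hedged sketch: merge semantically similar tree-search states by clustering
# their text embeddings. Encoder and threshold are placeholders, not the
# fine-tuned SimCSE checkpoint or settings used in the paper.
from sklearn.cluster import AgglomerativeClustering
from sentence_transformers import SentenceTransformer

def merge_states(states, threshold=0.2):
    """Collapse near-duplicate partial solutions at one tree depth.

    states: list of strings (partial reasoning traces).
    Returns one representative state per cluster.
    """
    if len(states) < 2:
        return states
    model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in encoder
    embeddings = model.encode(states, normalize_embeddings=True)
    clustering = AgglomerativeClustering(
        n_clusters=None,
        distance_threshold=threshold,
        metric="cosine",
        linkage="average",
    ).fit(embeddings)
    # Keep the first state encountered in each cluster as its representative.
    representatives = {}
    for state, label in zip(states, clustering.labels_):
        representatives.setdefault(label, state)
    return list(representatives.values())
```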
Self-Tuning: Instructing LLMs to Effectively Acquire New Knowledge through Self-Teaching
Xiaoying Zhang
|
Baolin Peng
|
Ye Tian
|
Jingyan Zhou
|
Yipeng Zhang
|
Haitao Mi
|
Helen M. Meng
Findings of the Association for Computational Linguistics: ACL 2025
Large language models (LLMs) often struggle to provide up-to-date information due to their one-time training and the constantly evolving nature of the world. To keep LLMs current, existing approaches typically involve continued pre-training on new documents. However, models trained this way frequently face difficulties in extracting the stored knowledge. Motivated by the remarkable success of the Feynman Technique in efficient human learning, we introduce Self-Tuning, a learning framework aimed at improving an LLM’s ability to effectively acquire new knowledge from unseen raw documents through self-teaching. Specifically, we develop a Self-Teaching strategy that augments the documents with a set of knowledge-intensive tasks created in a self-supervised manner, focusing on three crucial aspects: memorization, comprehension, and self-reflection. Additionally, we introduce three Wiki-Newpages-2023-QA datasets to facilitate an in-depth analysis of an LLM’s knowledge acquisition ability concerning memorization, extraction, and reasoning. Extensive experimental results on various models (e.g., Llama2-7B) reveal that Self-Tuning consistently exhibits superior performance across all knowledge acquisition tasks and excels in preserving previous knowledge.
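As a rough illustration of the Self-Teaching idea, the sketch below expands one raw document into the three kinds of training examples the abstract names. The prompt wording and the `generate` callable (any instruction-following LLM wrapper) are assumptions for illustration, not the paper's actual templates.

```python
# Hedged sketch of Self-Teaching-style data augmentation: a raw document is
# turned into memorization, comprehension, and self-reflection examples.
# The prompts below are illustrative stand-ins, not the paper's templates.
from typing import Callable, Dict, List

PROMPTS = {
    "comprehension": (
        "Read the passage and write three question-answer pairs that can be "
        "answered only from the passage.\n\nPassage:\n{doc}"
    ),
    "self_reflection": (
        "Summarize the key facts in the passage, then list anything that "
        "remains uncertain or unstated.\n\nPassage:\n{doc}"
    ),
}

def build_self_teaching_examples(
    doc: str, generate: Callable[[str], str]
) -> List[Dict[str, str]]:
    """Create training examples for one document.

    generate: any text-generation function, e.g. a wrapper around a local LLM.
    """
    examples = [
        # Memorization: the document itself is the training target.
        {"task": "memorization", "input": "", "target": doc}
    ]
    for task, template in PROMPTS.items():
        prompt = template.format(doc=doc)
        examples.append({"task": task, "input": prompt, "target": generate(prompt)})
    return examples
```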
Multi-Agent Collaboration via Cross-Team Orchestration
Zhuoyun Du
|
Chen Qian
|
Wei Liu
|
Zihao Xie
|
YiFei Wang
|
Rennai Qiu
|
Yufan Dang
|
Weize Chen
|
Cheng Yang
|
Ye Tian
|
Xuantang Xiong
|
Lei Han
Findings of the Association for Computational Linguistics: ACL 2025
Large Language Models (LLMs) have significantly impacted various domains, especially through organized LLM-driven autonomous agents. A representative scenario is software development, where agents can collaborate in a team like humans, following predefined phases to complete sub-tasks sequentially. However, for a single agent team, each phase yields only one possible outcome, so only one development chain is completed, forfeiting the opportunity to explore multiple potential decision paths within the solution space and consequently leading to suboptimal results or extensive trial and error. To address this, we introduce Cross-Team Orchestration (Croto), a scalable multi-team framework that enables orchestrated teams to jointly propose various task-oriented solutions and exchange their insights in an environment that combines within-team independence with cross-team collaboration, yielding superior solutions. Experiments reveal a notable increase in software quality compared to state-of-the-art baselines. We further tested our framework on story generation tasks, which demonstrated a promising generalization ability to other domains. The code and data are available at https://github.com/OpenBMB/ChatDev/tree/macnet.
GLTW: Joint Improved Graph Transformer and LLM via Three-Word Language for Knowledge Graph Completion
Kangyang Luo
|
Yuzhuo Bai
|
Cheng Gao
|
Shuzheng Si
|
Zhu Liu
|
Yingli Shen
|
Zhitong Wang
|
Cunliang Kong
|
Wenhao Li
|
Yufei Huang
|
Ye Tian
|
Xuantang Xiong
|
Lei Han
|
Maosong Sun
Findings of the Association for Computational Linguistics: ACL 2025
Knowledge Graph Completion (KGC), which aims to infer missing or incomplete facts, is a crucial task for KGs. However, integrating the vital structural information of KGs into Large Language Models (LLMs) and outputting predictions deterministically remains challenging. To address this, we propose a new method called GLTW, which encodes the structural information of KGs and merges it with LLMs to enhance KGC performance. Specifically, we introduce an improved Graph Transformer (iGT) that effectively encodes subgraphs with both local and global structural information and inherits the characteristics of the language model, avoiding training from scratch. We also develop a subgraph-based multi-classification training objective, using all entities within the KG as classification objects, to boost learning efficiency. Importantly, we combine the iGT with an LLM that takes KG language prompts as input. Our extensive experiments on various KG datasets show that GLTW achieves significant performance gains compared to SOTA baselines.
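The multi-classification objective over all entities can be sketched in a few lines. This is only a schematic: the encoder producing the query representation, the hidden size, and the entity count (WN18RR-sized here) are stand-ins, not the paper's iGT/LLM stack.

```python
# Hedged sketch of a subgraph-based multi-classification objective: a query
# representation is scored against every entity in the KG and trained with
# cross-entropy over the full entity set. Dimensions are placeholders.
import torch
import torch.nn as nn

class EntityClassifier(nn.Module):
    def __init__(self, hidden_dim: int, num_entities: int):
        super().__init__()
        self.head = nn.Linear(hidden_dim, num_entities)  # one logit per entity

    def forward(self, query_repr: torch.Tensor) -> torch.Tensor:
        return self.head(query_repr)                      # (batch, num_entities)

model = EntityClassifier(hidden_dim=768, num_entities=40943)  # e.g. WN18RR size
criterion = nn.CrossEntropyLoss()

query_repr = torch.randn(4, 768)                # stand-in for encoder outputs
gold_entity_ids = torch.tensor([7, 123, 0, 40000])
loss = criterion(model(query_repr), gold_entity_ids)
loss.backward()
```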
2024
Self-Alignment for Factuality: Mitigating Hallucinations in LLMs via Self-Evaluation
Xiaoying Zhang
|
Baolin Peng
|
Ye Tian
|
Jingyan Zhou
|
Lifeng Jin
|
Linfeng Song
|
Haitao Mi
|
Helen Meng
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Despite showing impressive abilities, large language models (LLMs) often struggle with factual inaccuracies, i.e., “hallucinations”, even when they hold relevant knowledge. To mitigate these hallucinations, current approaches typically necessitate high-quality human factuality annotations. In this work, we explore Self-Alignment for Factuality, where we leverage the self-evaluation capability of an LLM to provide training signals that steer the model towards factuality. Specifically, we incorporate Self-Eval, a self-evaluation component, to prompt an LLM to validate the factuality of its own generated responses solely based on its internal knowledge. Additionally, we design Self-Knowledge Tuning (SK-Tuning) to augment the LLM’s self-evaluation ability by improving the model’s confidence estimation and calibration. We then utilize these self-annotated responses to fine-tune the model via the Direct Preference Optimization algorithm. We show that the proposed self-alignment approach substantially enhances the factual accuracy of Llama-family models across three key knowledge-intensive tasks on TruthfulQA and BioGEN.
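A minimal sketch of the overall loop, sampling responses, letting the model judge its own factuality, and turning the judgments into preference pairs, is shown below. The prompt wording and the `sample`/`self_eval` callables are assumptions; the paper's actual Self-Eval prompting and SK-Tuning are more involved.

```python
# Hedged sketch of self-annotation for preference tuning: sample several
# answers, score each with the model's own factuality judgment, and build
# (prompt, chosen, rejected) triples for DPO-style fine-tuning.
from typing import Callable, List, Tuple

SELF_EVAL_PROMPT = (
    "Question: {q}\nProposed answer: {a}\n"
    "Is the proposed answer factually correct? Reply with a probability "
    "between 0 and 1."
)

def build_preference_pairs(
    questions: List[str],
    sample: Callable[[str, int], List[str]],    # returns k sampled answers
    self_eval: Callable[[str], float],          # returns the model's probability
) -> List[Tuple[str, str, str]]:
    """Return (prompt, chosen, rejected) triples for preference optimization."""
    pairs = []
    for q in questions:
        answers = sample(q, 8)
        scored = sorted(
            answers, key=lambda a: self_eval(SELF_EVAL_PROMPT.format(q=q, a=a))
        )
        if len(scored) >= 2 and scored[0] != scored[-1]:
            # Highest self-rated answer is "chosen", lowest is "rejected".
            pairs.append((q, scored[-1], scored[0]))
    return pairs
```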
Improving LLM Generations via Fine-Grained Self-Endorsement
Ante Wang
|
Linfeng Song
|
Baolin Peng
|
Lifeng Jin
|
Ye Tian
|
Haitao Mi
|
Jinsong Su
|
Dong Yu
Findings of the Association for Computational Linguistics: ACL 2024
This work studies mitigating fact-conflicting hallucinations for large language model (LLM) at inference time.Particularly, we propose a self-endorsement framework that leverages the fine-grained fact-level comparisons across multiple sampled responses.Compared with prior ensemble methods (e.g., self-consistency) that perform response-level selection, our approach can better alleviate hallucinations for knowledge-intensive tasks.Our approach can broadly benefit smaller and open-source LLMs as it mainly conducts simple content-based comparisons.Experiments on Biographies show that our method can effectively improve the factuality of generations with simple and intuitive prompts across different scales of LLMs.Besides, comprehensive analyses on TriviaQA and GSM8K demonstrate the potential of self-endorsement for broader application.
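The fact-level idea can be pictured with a small sketch: each sentence of a candidate response is kept only if enough of the other sampled responses appear to support it. The lexical-overlap support check below is purely a stand-in to keep the sketch self-contained; the paper's comparisons are content-based checks performed by the model.

```python
# Hedged sketch of fact-level self-endorsement across sampled responses.
import re

def _support(fact: str, response: str, min_overlap: float = 0.6) -> bool:
    """Crude stand-in for a model-based support check."""
    fact_tokens = set(re.findall(r"\w+", fact.lower()))
    resp_tokens = set(re.findall(r"\w+", response.lower()))
    return bool(fact_tokens) and len(fact_tokens & resp_tokens) / len(fact_tokens) >= min_overlap

def endorse(main_response: str, other_responses: list, min_votes: int = 2) -> str:
    """Return the main response with poorly endorsed facts removed."""
    facts = re.split(r"(?<=[.!?])\s+", main_response.strip())
    kept = [
        f for f in facts
        if sum(_support(f, r) for r in other_responses) >= min_votes
    ]
    return " ".join(kept)
```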
Self-Consistency Boosts Calibration for Math Reasoning
Ante Wang
|
Linfeng Song
|
Ye Tian
|
Baolin Peng
|
Lifeng Jin
|
Haitao Mi
|
Jinsong Su
|
Dong Yu
Findings of the Association for Computational Linguistics: EMNLP 2024
2023
Learning From Free-Text Human Feedback – Collect New Datasets Or Extend Existing Ones?
Dominic Petrak
|
Nafise Moosavi
|
Ye Tian
|
Nikolai Rozanov
|
Iryna Gurevych
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Learning from free-text human feedback is essential for dialog systems, but annotated data is scarce and usually covers only a small fraction of error types known in conversational AI. Instead of collecting and annotating new datasets from scratch, recent advances in synthetic dialog generation could be used to augment existing dialog datasets with the necessary annotations. However, to assess the feasibility of such an effort, it is important to know the types and frequency of free-text human feedback included in these datasets. In this work, we investigate this question for a variety of commonly used dialog datasets, including MultiWoZ, SGD, BABI, PersonaChat, Wizard-of-Wikipedia, and the human-bot split of the Self-Feeding Chatbot. Using our observations, we derive new taxonomies for the annotation of free-text human feedback in dialogs and investigate the impact of including such data in response generation for three SOTA language generation models, including GPT-2, LLAMA, and Flan-T5. Our findings provide new insights into the composition of the datasets examined, including error types, user response types, and the relations between them.
How Are Idioms Processed Inside Transformer Language Models?
Ye Tian
|
Isobel James
|
Hye Son
Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)
Idioms such as “call it a day” and “piece of cake” are prevalent in natural language. How do Transformer language models process idioms? This study examines this question by analysing three models: BERT, Multilingual BERT, and DistilBERT. We compare the embeddings of idiomatic and literal expressions across all layers of the networks at both the sentence and word levels. Additionally, we investigate the attention directed from other sentence tokens towards a word within an idiom as opposed to the same word in a literal context. Results indicate that while the three models exhibit slightly different internal mechanisms, they all represent idioms distinctively compared to literal language, with attention playing a critical role. These findings suggest that idioms are semantically and syntactically idiosyncratic, not only for humans but also for language models.
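The word-level, layer-by-layer comparison described above can be reproduced in miniature with the standard transformers API. The model choice, example sentences, and first-subtoken heuristic are mine, not the paper's exact setup.

```python
# Hedged sketch: compare a target word's BERT representation in an idiomatic
# vs. a literal sentence at every layer via cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

def layerwise_word_similarity(sent_a: str, sent_b: str, word: str):
    """Per-layer cosine similarity of `word`'s embedding in the two sentences."""
    with torch.no_grad():
        per_sentence = []
        for sent in (sent_a, sent_b):
            enc = tokenizer(sent, return_tensors="pt")
            states = model(**enc).hidden_states         # embeddings + 12 layers
            word_id = tokenizer.encode(word, add_special_tokens=False)[0]
            pos = (enc["input_ids"][0] == word_id).nonzero()[0].item()
            per_sentence.append([h[0, pos] for h in states])
        return [
            (layer, torch.cosine_similarity(a, b, dim=0).item())
            for layer, (a, b) in enumerate(zip(*per_sentence))
        ]

# e.g. compare "piece" in an idiomatic vs. a literal context:
print(layerwise_word_similarity(
    "The exam was a piece of cake.", "She ate a piece of cake.", "piece"))
```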
2022
Huawei BabelTar NMT at WMT22 Biomedical Translation Task: How We Further Improve Domain-specific NMT
Weixuan Wang
|
Xupeng Meng
|
Suqing Yan
|
Ye Tian
|
Wei Peng
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper describes Huawei Artificial Intelligence Application Research Center’s neural machine translation system (“BabelTar”). Our submission to the WMT22 biomedical translation shared task covers translation directions between English and seven other languages (French, German, Italian, Spanish, Portuguese, Russian, and Chinese). During the past four years, our participation in this domain-specific track has witnessed a paradigm shift of methodology from a purely data-driven focus to embracing diversified techniques, including pre-trained multilingual NMT models, homograph disambiguation, ensemble learning, and preprocessing methods. We illustrate practical insights and measured performance improvements relating to how we further improve our domain-specific NMT system.
2021
Cross-Lingual Transfer with MAML on Trees
Jezabel Garcia
|
Federica Freddi
|
Jamie McGowan
|
Tim Nieradzik
|
Feng-Ting Liao
|
Ye Tian
|
Da-shan Shiu
|
Alberto Bernacchia
Proceedings of the Second Workshop on Domain Adaptation for NLP
In meta-learning, the knowledge learned from previous tasks is transferred to new ones, but this transfer only works if tasks are related. Sharing information between unrelated tasks might hurt performance, and it is unclear how to transfer knowledge across tasks that have a hierarchical structure. Our research extends a meta-learning model, MAML, by exploiting hierarchical task relationships. Our algorithm, TreeMAML, adapts the model to each task with a few gradient steps, but the adaptation follows the hierarchical tree structure: in each step, gradients are pooled across task clusters, and subsequent steps follow down the tree. We also implement a clustering algorithm that generates the task tree without prior knowledge of the task structure, allowing us to make use of implicit relationships between the tasks. We show that TreeMAML successfully trains natural language processing models for cross-lingual Natural Language Inference by taking advantage of the language phylogenetic tree. This result is useful since most languages in the world are under-resourced, and the improvement in cross-lingual transfer enables the internationalization of NLP models.
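The tree-structured adaptation step can be sketched as follows. This is a simplified, first-order illustration under assumed inputs (a task dictionary and a hand-specified list of tree levels); the model, loss, learning rate, and clustering are placeholders, not the paper's setup.

```python
# Hedged sketch of TreeMAML-style adaptation: gradients are averaged over
# clusters of tasks at each level of a task tree, from coarse to fine.
import torch

def tree_adapt(model, tasks, tree_levels, loss_fn, lr=1e-2):
    """Adapt copies of the model parameters down a task tree.

    tasks: dict task_id -> (inputs, targets)
    tree_levels: list of partitions, coarse to fine, e.g.
        [{"root": ["en", "de", "zh"]}, {"germanic": ["en", "de"], "sinitic": ["zh"]}]
    Returns per-task adapted parameter lists (the original model is untouched).
    """
    names = [n for n, _ in model.named_parameters()]
    params = {t: [p.clone() for p in model.parameters()] for t in tasks}
    for partition in tree_levels:
        for _, members in partition.items():
            grads = []
            for t in members:
                x, y = tasks[t]
                out = torch.func.functional_call(model, dict(zip(names, params[t])), (x,))
                grads.append(torch.autograd.grad(loss_fn(out, y), params[t]))
            # Pool gradients across the cluster, then take one shared step.
            pooled = [torch.stack(gs).mean(dim=0) for gs in zip(*grads)]
            for t in members:
                params[t] = [p - lr * g for p, g in zip(params[t], pooled)]
    return params
```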
How does BERT process disfluency?
Ye Tian
|
Tim Nieradzik
|
Sepehr Jalali
|
Da-shan Shiu
Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue
Natural conversations are filled with disfluencies. This study investigates if and how BERT understands disfluency with three experiments: (1) a behavioural study using a downstream task, (2) an analysis of sentence embeddings, and (3) an analysis of the attention mechanism on disfluency. The behavioural study shows that without fine-tuning on disfluent data, BERT does not suffer significant performance loss when presented with disfluent rather than fluent inputs (exp1). Analysis of the sentence embeddings of disfluent and fluent sentence pairs reveals that the deeper the layer, the more similar their representations (exp2). This indicates that deep layers of BERT become relatively invariant to disfluency. We pinpoint attention as a potential mechanism that could explain this phenomenon (exp3). Overall, the study suggests that BERT has knowledge of disfluency structure. We emphasise the potential of using BERT to understand natural utterances without disfluency removal.
2020
Learning a Multi-Domain Curriculum for Neural Machine Translation
Wei Wang
|
Ye Tian
|
Jiquan Ngiam
|
Yinfei Yang
|
Isaac Caswell
|
Zarana Parekh
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Most data selection research in machine translation focuses on improving a single domain. We perform data selection for multiple domains at once. This is achieved by carefully introducing instance-level domain-relevance features and automatically constructing a training curriculum to gradually concentrate on multi-domain relevant and noise-reduced data batches. Both the choice of features and the use of curriculum are crucial for balancing and improving all domains, including out-of-domain. In large-scale experiments, the multi-domain curriculum simultaneously reaches or outperforms the individual performance and brings solid gains over no-curriculum training.
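A toy version of the curriculum idea is sketched below: each example carries per-domain relevance scores, and training batches are drawn from a candidate pool that gradually narrows toward the examples most relevant across domains. The scoring combination and schedule are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch of a multi-domain curriculum over instance-level relevance scores.
import random

def curriculum_batches(examples, scores, n_steps, batch_size=32, final_keep=0.2):
    """Yield batches whose candidate pool shrinks from all data to the top fraction.

    examples: list of sentence pairs.
    scores:   list of dicts with per-example domain-relevance features,
              e.g. {"news": 0.7, "medical": 0.1, "clean": 0.9}.
    """
    # Combine features into one relevance value (here: simple mean over domains).
    combined = [sum(s.values()) / len(s) for s in scores]
    order = sorted(range(len(examples)), key=lambda i: combined[i], reverse=True)
    for step in range(n_steps):
        frac = 1.0 - (1.0 - final_keep) * step / max(1, n_steps - 1)
        pool = order[: max(batch_size, int(frac * len(order)))]
        yield [examples[i] for i in random.sample(pool, min(batch_size, len(pool)))]
```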
2018
Aggression Identification and Multi Lingual Word Embeddings
Thiago Galery
|
Efstathios Charitos
|
Ye Tian
Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)
The system presented here took part in the 2018 Trolling, Aggression and Cyberbullying shared task (Forest and Trees team) and uses a Gated Recurrent Neural Network architecture (Cho et al., 2014) to assess whether combining pre-trained English and Hindi fastText (Mikolov et al., 2018) word embeddings as the representation of the input sequence would improve classification performance. The motivation for this comes from the fact that the shared task data for English contained many Hindi tokens, and therefore some users might be code-switching: the alternation between two or more languages in communication. To test this hypothesis, we also aligned Hindi and English vectors using pre-computed SVD matrices that pull representations from different languages into a common space (Smith et al., 2017). Two conditions were tested: (i) one with standard pre-trained fastText word embeddings where each Hindi word is treated as an OOV token, and (ii) another where word embeddings for Hindi and English are loaded in a common vector space, so Hindi tokens can be assigned a meaningful representation. We submitted the second (i.e., multilingual) system and obtained scores of 0.531 weighted F1 for the EN-FB dataset and 0.438 weighted F1 for the EN-TW dataset.
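Condition (ii), mapping Hindi vectors into the English embedding space before classification, can be sketched as below. The file names, the .vec loading routine, and the saved alignment matrix are assumptions for illustration.

```python
# Hedged sketch: align Hindi fastText vectors into the English space with a
# pre-computed alignment (rotation) matrix, so code-switched Hindi tokens get
# meaningful representations for the downstream classifier.
import numpy as np

def load_vectors(path):
    """Load a fastText .vec file into a {word: np.array} dict."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        next(f)  # skip the "count dim" header line
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

en = load_vectors("wiki.en.vec")
hi = load_vectors("wiki.hi.vec")
alignment = np.load("hi_to_en_svd.npy")          # assumed pre-computed matrix

def embed(token):
    """Look up a token: English directly, Hindi via the aligned space."""
    if token in en:
        return en[token]
    if token in hi:
        return hi[token] @ alignment             # map Hindi vector into English space
    return np.zeros(300, dtype=np.float32)       # OOV fallback
```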
Treat the system like a human student: Automatic naturalness evaluation of generated text without reference texts
Isabel Groves
|
Ye Tian
|
Ioannis Douratsos
Proceedings of the 11th International Conference on Natural Language Generation
The current most popular method for automatic Natural Language Generation (NLG) evaluation is comparing generated text with human-written reference sentences using a metrics system, which has drawbacks around reliability and scalability. We draw inspiration from second language (L2) assessment and extract a set of linguistic features to predict human judgments of sentence naturalness. Our experiment using a small dataset showed that the feature-based approach yields promising results, with the added potential of providing interpretability into the source of the problems.
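The feature-based approach can be illustrated with a minimal sketch: a handful of shallow linguistic features, loosely inspired by L2 assessment, feed a simple regressor trained on human naturalness ratings. The specific features and model below are stand-ins, not the paper's feature set.

```python
# Hedged sketch of a reference-free, feature-based naturalness scorer.
import re
from sklearn.linear_model import Ridge

def features(sentence: str):
    tokens = re.findall(r"\w+", sentence.lower())
    n = max(1, len(tokens))
    return [
        len(tokens),                              # sentence length
        sum(len(t) for t in tokens) / n,          # mean word length
        len(set(tokens)) / n,                     # type-token ratio
        sentence.count(","),                      # rough clause density
    ]

def train_scorer(sentences, human_ratings):
    """Fit a regressor mapping features to human naturalness judgments."""
    model = Ridge(alpha=1.0)
    model.fit([features(s) for s in sentences], human_ratings)
    return model

# scorer = train_scorer(train_sents, train_ratings)
# scorer.predict([features("The generated sentence sounds natural.")])
```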
2017
Facebook sentiment: Reactions and Emojis
Ye Tian
|
Thiago Galery
|
Giulio Dulcinati
|
Emilia Molimpakis
|
Chao Sun
Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media
Emojis are used frequently in social media. A widely assumed view is that emojis express the emotional state of the user, which has led to research focusing on the expressiveness of emojis independent from the linguistic context. We argue that emojis and the linguistic text can modify the meaning of each other. The overall communicated meaning is not a simple sum of the two channels. In order to study this meaning interplay, we need data indicating the overall sentiment of the entire message as well as the sentiment of the emojis stand-alone. We propose that Facebook Reactions are a good data source for such a purpose. FB reactions (e.g. “Love” and “Angry”) indicate the readers’ overall sentiment, against which we can investigate the types of emojis used in the comments under different reaction profiles. We present a data set of 21,000 FB posts (57 million reactions and 8 million comments) from public media pages across four countries.
2016
DUEL: A Multi-lingual Multimodal Dialogue Corpus for Disfluency, Exclamations and Laughter
Julian Hough
|
Ye Tian
|
Laura de Ruiter
|
Simon Betz
|
Spyros Kousidis
|
David Schlangen
|
Jonathan Ginzburg
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
We present the DUEL corpus, consisting of 24 hours of natural, face-to-face, loosely task-directed dialogue in German, French and Mandarin Chinese. The corpus is uniquely positioned as a cross-linguistic, multimodal dialogue resource controlled for domain. DUEL includes audio, video and body tracking data and is transcribed and annotated for disfluency, laughter and exclamations.
When do we laugh?
Ye Tian
|
Chiara Mazzocconi
|
Jonathan Ginzburg
Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue
2012
Update Summarization using a Multi-level Hierarchical Dirichlet Process Model
Jiwei Li
|
Sujian Li
|
Xun Wang
|
Ye Tian
|
Baobao Chang
Proceedings of COLING 2012
Fine-Grained Classification of Named Entities by Fusing Multi-Features
Wenjie Li
|
Jiwei Li
|
Ye Tian
|
Zhifang Sui
Proceedings of COLING 2012: Posters
The CIPS-SIGHAN CLP 2012 Chinese Word Segmentation on MicroBlog Corpora Bakeoff
Huiming Duan
|
Zhifang Sui
|
Ye Tian
|
Wenjie Li
Proceedings of the Second CIPS-SIGHAN Joint Conference on Chinese Language Processing
2009
A Novel Method of Sentence Ordering Based on Support Vector Machine
Gongfu Peng
|
Yanxiang He
|
Ye Tian
|
Yingsheng Tian
|
Weidong Wen
Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation, Volume 2