Yao Fu
Papers on this page may belong to multiple people named Yao Fu.
2026
Beyond Blind Following: Evaluating Robustness of LLM Agents under Imperfect Guidance
Yao Fu | Ran Qiu | Xinhe Wang | Jacob Sansom | Sathvika Ayyappa Prabhu | Huijie Tang | Jaekyeom Kim | Sungryull Sohn | Honglak Lee
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) have shown strong capabilities as task-solving agents across interactive domains. In complex environments, however, these agents may need to rely on auxiliary guidance to reduce the search space or compensate for limited domain-specific knowledge. Such guidance includes human-provided manuals and demonstrations, retrieved examples from memory or external tools, high-level heuristics, and agent-acquired knowledge from prior interactions. This guidance may be imperfect: due to changes in the environment, ambiguous or simplified language, or retrieval errors from external sources, it can be incomplete, outdated, or contextually mismatched, potentially causing errors or failures during task execution. To address this, we introduce MIRAGE, a benchmark for MeasurIng Robustness of LLM Agents under Imperfect GuidancE. MIRAGE includes procedurally generated environments in navigation, cooking, and gaming, where both the environment and the auxiliary guidance vary in fidelity and relevance. We further extend MIRAGE to realistic web tasks via WebArena, using noisy or underspecified instructions extracted from demonstrations. Our findings reveal critical failure modes in current LLM agents and motivate future work on improving their robustness under imperfect guidance.
2025
When Truthful Representations Flip Under Deceptive Instructions?
Xianxuan Long | Yao Fu | Runchao Li | Mu Sheng | Haotian Yu | Xiaotian Han | Pan Li
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) tend to follow maliciously crafted instructions to generate deceptive responses, posing safety challenges. How deceptive instructions alter the internal representations of LLMs compared to truthful ones remains poorly understood beyond output analysis. To bridge this gap, we investigate when and how these representations “flip”, e.g., from truthful to deceptive, under deceptive versus truthful/neutral instructions. Analyzing the internal representations of Llama-3.1-8B-Instruct and Gemma-2-9B-Instruct on a factual verification task, we find that the model’s instructed True/False output is predictable from its internal representations via linear probes across all conditions. Further, we use Sparse Autoencoders (SAEs) to show that deceptive instructions induce significant representational shifts compared to truthful/neutral instructions (which yield similar representations), concentrated in early-to-mid layers and detectable even on complex datasets. We also identify specific SAE features highly sensitive to deceptive instructions and use targeted visualizations to confirm distinct truthful/deceptive representational subspaces.
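As a rough illustration of the linear-probing setup described above, the sketch below fits a probe on synthetic vectors standing in for per-layer hidden states; the feature dimension, the data, and the single "flip" direction are illustrative assumptions, not the paper's actual representations.

```python
# Toy linear probe in the spirit of the paper's analysis; synthetic
# vectors stand in for real per-layer LLM hidden states.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n = 64, 500                       # stand-ins for hidden size / sample count

flip_dir = rng.normal(size=d)        # hypothetical truthful->deceptive direction
labels = rng.integers(0, 2, size=n)  # 0 = truthful, 1 = deceptive condition
feats = rng.normal(size=(n, d)) + np.outer(labels, flip_dir)

probe = LogisticRegression(max_iter=1000).fit(feats, labels)
print("probe accuracy:", probe.score(feats, labels))
```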
Quantized but Deceptive? A Multi-Dimensional Truthfulness Evaluation of Quantized LLMs
Yao Fu | Xianxuan Long | Runchao Li | Haotian Yu | Mu Sheng | Xiaotian Han | Yu Yin | Pan Li
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Quantization enables efficient deployment of large language models (LLMs) in resource-constrained environments by significantly reducing memory and computation costs. While quantized LLMs often maintain performance on perplexity and zero-shot tasks, their impact on truthfulness—whether they generate truthful or deceptive responses—remains largely unexplored. In this work, we introduce TruthfulnessEval, a comprehensive evaluation framework for assessing the truthfulness of quantized LLMs across three dimensions: (1) Truthfulness on Logical Reasoning; (2) Truthfulness on Common Sense; and (3) Truthfulness on Imitative Falsehoods. Using this framework, we examine mainstream quantization techniques (ranging from 4-bit to extreme 2-bit) across several open-source LLMs. Surprisingly, we find that while quantized models retain internally truthful representations, they are more susceptible to producing false outputs under misleading prompts. To probe this vulnerability, we test 15 rephrased variants of “honest”, “neutral” and “deceptive” prompts and observe that “deceptive” prompts can override truth-consistent behavior, whereas “honest” and “neutral” prompts maintain stable outputs. Further, layer-wise probing and PCA visualizations reveal that quantized models “know” the truth internally yet still produce false outputs when guided by “deceptive” prompts. Our findings provide insights into future designs of quantization-aware alignment and truthfulness interventions.
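The following is a minimal sketch of the kind of PCA visualization mentioned above, using synthetic activations in place of real quantized-model hidden states; the dimensions and the fixed shift between prompt conditions are assumptions for illustration only.

```python
# Project synthetic "honest" vs. "deceptive" activations to 2D with PCA;
# in the paper's plots the two conditions form separable clusters.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
d, n = 64, 300

shift = rng.normal(size=d)                 # assumed condition offset
honest = rng.normal(size=(n, d))
deceptive = rng.normal(size=(n, d)) + shift

pts = PCA(n_components=2).fit_transform(np.vstack([honest, deceptive]))
print("honest centroid:   ", pts[:n].mean(axis=0).round(2))
print("deceptive centroid:", pts[n:].mean(axis=0).round(2))
```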
FAEDKV: Infinite-Window Fourier Transform for Unbiased KV Cache Compression
Runchao Li | Yao Fu | Mu Sheng | Xianxuan Long | Haotian Yu | Pan Li
Findings of the Association for Computational Linguistics: EMNLP 2025
The efficacy of Large Language Models (LLMs) in long-context tasks is often hampered by the substantial memory footprint and computational demands of the Key-Value (KV) cache. Current compression strategies, including token eviction and learned projections, frequently lead to biased representations—either by overemphasizing recent/high-attention tokens or by repeatedly degrading information from earlier context—and may require costly model retraining. We present FAEDKV (Frequency-Adaptive Infinite-Window for KV cache), a novel, training-free KV cache compression framework that ensures unbiased information retention. FAEDKV operates by transforming the KV cache into the frequency domain using a proposed Infinite-Window Fourier Transform (IWDFT). This approach allows all tokens to contribute equally to the compressed representation, effectively preserving both early and recent contextual information. A preliminary frequency ablation study identifies critical spectral components for layer-wise, targeted compression. Experiments on the LongBench benchmark demonstrate FAEDKV’s superiority over existing methods by up to 22%. In addition, our method shows superior, position-agnostic retrieval accuracy on the Needle-In-A-Haystack task compared to compression-based approaches.
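As a hedged sketch of frequency-domain KV compression in the spirit of FAEDKV (the actual IWDFT and per-layer band selection are not reproduced here), the snippet below applies an ordinary DFT along the token axis of a stand-in cache, keeps a fixed set of low-frequency bins, and reconstructs.

```python
# Generic frequency-domain truncation of a stand-in KV cache; a fixed
# low-pass cut stands in for the paper's per-layer frequency ablation.
import numpy as np

rng = np.random.default_rng(0)
seq_len, head_dim = 256, 64
kv = rng.normal(size=(seq_len, head_dim))   # one head's K (or V) cache

# DFT along the token axis: every token contributes to every bin equally,
# rather than early tokens being evicted outright.
spec = np.fft.rfft(kv, axis=0)              # (seq_len//2 + 1, head_dim)

k = 32                                      # bins kept (illustrative)
spec_c = np.zeros_like(spec)
spec_c[:k] = spec[:k]

kv_hat = np.fft.irfft(spec_c, n=seq_len, axis=0)
err = np.linalg.norm(kv - kv_hat) / np.linalg.norm(kv)
print(f"kept {k / spec.shape[0]:.0%} of bins, relative error {err:.3f}")
```

Note that random Gaussian data has a flat spectrum, so the reconstruction error here is pessimistic; real caches have more structure for a low-pass cut to exploit.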
Pruning Weights but Not Truth: Safeguarding Truthfulness While Pruning LLMs
Yao Fu | Runchao Li | Xianxuan Long | Haotian Yu | Xiaotian Han | Yu Yin | Pan Li
Findings of the Association for Computational Linguistics: EMNLP 2025
Neural network pruning has emerged as a promising approach for deploying LLMs in low-resource scenarios while preserving downstream task performance. However, for the first time, we reveal that such pruning disrupts LLMs’ internal activation features crucial for lie detection, where probing classifiers (typically small logistic regression models) trained on these features assess the truthfulness of LLM-generated statements. This discovery raises a crucial open question: how can we prune LLMs without sacrificing these critical lie detection capabilities? Our investigation further reveals that naively adjusting layer-wise pruning sparsity based on importance inadvertently removes crucial weights, failing to improve lie detection performance despite its reliance on the most crucial LLM layer. To address this issue, we propose Truthful Pruning aligned by Layer-wise Outliers (TPLO), which places greater emphasis on layers with more activation outliers and stronger discriminative features simultaneously. This preserves LLMs’ original performance while retaining critical features of inner states needed for robust lie detection. Moreover, we introduce a prompting rule to enrich the TruthfulQA benchmark for better calibrating LLM pruning. Empirical results show that our approach improves the hallucination detection for pruned LLMs (achieving 88% accuracy at 50% sparsity) and enhances their performance on TruthfulQA.
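A minimal sketch of the layer-wise sparsity allocation idea described above: given per-layer outlier scores, shift each layer's sparsity away from a global budget while keeping the average fixed. The score definition, the 0.3 spread factor, and the clipping range are illustrative assumptions; TPLO's exact weighting of outliers and discriminative features is not reproduced.

```python
# Allocate per-layer pruning sparsity inversely to an outlier score,
# under a fixed global budget (illustrative stand-in for TPLO).
import numpy as np

rng = np.random.default_rng(0)
n_layers, target = 32, 0.5                    # layers / global sparsity

outlier_score = rng.uniform(0.1, 1.0, size=n_layers)  # stand-in scores

# Prune less where scores are high, then renormalize so the mean
# sparsity still meets the global budget.
centered = outlier_score - outlier_score.mean()
sparsity = target - 0.3 * centered / np.abs(centered).max()
sparsity = np.clip(sparsity, 0.0, 0.95)
sparsity *= target / sparsity.mean()

print("mean sparsity:", sparsity.mean().round(3))
print("per-layer range:", sparsity.min().round(3), "-", sparsity.max().round(3))
```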
2023
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)
Vishakh Padmakumar | Gisela Vallejo | Yao Fu
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)
2022
Just-DREAM-about-it: Figurative Language Understanding with DREAM-FLUTE
Yuling Gu | Yao Fu | Valentina Pyatkin | Ian Magnusson | Bhavana Dalvi Mishra | Peter Clark
Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)
Figurative language (e.g., “he flew like the wind”) is challenging to understand, as it is hard to tell what implicit information is being conveyed from the surface form alone. We hypothesize that to perform this task well, the reader needs to mentally elaborate the scene being described to identify a sensible meaning of the language. We present DREAM-FLUTE, a figurative language understanding system that does this, first forming a “mental model” of situations described in a premise and hypothesis before making an entailment/contradiction decision and generating an explanation. DREAM-FLUTE uses an existing scene elaboration model, DREAM, for constructing its “mental model.” In the FigLang2022 Shared Task evaluation, DREAM-FLUTE achieved (joint) first place (Acc@60=63.3%), and can perform even better with ensemble techniques, demonstrating the effectiveness of this approach. More generally, this work suggests that adding a reflective component to pretrained language models can improve their performance beyond standard fine-tuning (3.3% improvement in Acc@60).
Data-to-text Generation with Variational Sequential Planning
Ratish Puduppully | Yao Fu | Mirella Lapata
Transactions of the Association for Computational Linguistics, Volume 10
We consider the task of data-to-text generation, which aims to create textual output from non-linguistic input. We focus on generating long-form text, that is, documents with multiple paragraphs, and propose a neural model enhanced with a planning component responsible for organizing high-level information in a coherent and meaningful way. We infer latent plans sequentially with a structured variational model, while interleaving the steps of planning and generation. Text is generated by conditioning on previous variational decisions and previously generated text. Experiments on two data-to-text benchmarks (RotoWire and MLB) show that our model outperforms strong baselines and is sample-efficient in the face of limited training data (e.g., a few hundred instances).
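A toy sketch of interleaving latent planning with generation, under heavy assumptions: discrete plan states with a random transition matrix stand in for the paper's structured variational posterior, and the text decoder is a placeholder function.

```python
# Interleave plan sampling and paragraph generation, each plan step
# conditioned on the previous one (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_plans, steps = 5, 3
trans = rng.dirichlet(np.ones(n_plans), size=n_plans)  # plan transitions

def generate_paragraph(plan: int, t: int) -> str:
    # Placeholder for a decoder conditioned on the current latent plan.
    return f"[paragraph {t} under plan {plan}]"

plan = rng.integers(n_plans)                 # initial latent plan
doc = []
for t in range(steps):
    doc.append(generate_paragraph(plan, t))
    plan = rng.choice(n_plans, p=trans[plan])  # next plan given previous
print(" ".join(doc))
```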
2021
Noisy-Labeled NER with Confidence Estimation
Kun Liu | Yao Fu | Chuanqi Tan | Mosha Chen | Ningyu Zhang | Songfang Huang | Sheng Gao
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Recent studies in deep learning have shown significant progress in named entity recognition (NER). However, most existing works assume clean data annotation, while real-world scenarios typically involve a large amount of noise from a variety of sources (e.g., pseudo, weak, or distant annotations). This work studies NER under a noisy labeled setting with calibrated confidence estimation. Based on empirical observations of the different training dynamics of noisy and clean labels, we propose strategies for estimating confidence scores based on local and global independence assumptions. We partially marginalize out labels of low confidence with a CRF model. We further propose a calibration method for confidence scores based on the structure of entity labels. We integrate our approach into a self-training framework to boost performance. Experiments in general noisy settings with four languages and distantly labeled settings demonstrate the effectiveness of our method.
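A minimal sketch of the confidence-then-marginalize idea, with synthetic per-token loss trajectories standing in for real training dynamics; the confidence proxy and threshold below are illustrative, not the paper's estimators.

```python
# Turn per-token training dynamics into confidence scores and decide
# which labels stay supervised vs. get marginalized out in the CRF.
import numpy as np

rng = np.random.default_rng(0)
n_tokens, n_epochs = 200, 5

# Stand-in dynamics: noisy labels tend to keep higher loss across epochs.
losses = rng.gamma(shape=2.0, scale=0.5, size=(n_epochs, n_tokens))

conf = np.exp(-losses.mean(axis=0))          # simple confidence proxy
keep = conf >= np.quantile(conf, 0.3)        # tokens that stay supervised

# Low-confidence tokens would be marginalized out in the CRF objective
# (summed over all candidate labels) rather than trained on as given.
print(f"supervise {keep.sum()} / {n_tokens} tokens; "
      f"marginalize {np.sum(~keep)} low-confidence tokens")
```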
2019
Rethinking Text Attribute Transfer: A Lexical Analysis
Yao Fu | Hao Zhou | Jiaze Chen | Lei Li
Proceedings of the 12th International Conference on Natural Language Generation
Text attribute transfer is the task of modifying certain linguistic attributes of a sentence (e.g., sentiment, style, or authorship) and transforming them from one type to another. In this paper, we aim to analyze and interpret what is changed during the transfer process. We start from the observation that in many existing models and datasets, certain words within a sentence play important roles in determining the sentence’s attribute class. These words are referred to as pivot words. Based on these pivot words, we propose a lexical analysis framework, the Pivot Analysis, to quantitatively analyze the effects of these words in text attribute classification and transfer. We apply this framework to existing datasets and models and show that: (1) pivot words are strong features for the classification of sentence attributes; (2) for many datasets, changing the attribute of a sentence only requires changing certain pivot words; (3) consequently, many transfer models only perform lexical-level modifications while leaving higher-level sentence structures unchanged. Our work provides an in-depth understanding of linguistic attribute transfer and further identifies the future requirements and challenges of this task.
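A small self-contained example of extracting pivot-word candidates with a bag-of-words classifier, on a toy corpus; the paper's Pivot Analysis operates on real attribute-transfer datasets, so treat this purely as an illustration of the idea.

```python
# Words whose classifier weights most sway an attribute prediction
# behave like pivot words; toy sentiment corpus for illustration.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["the food was great", "great service and friendly staff",
         "the food was terrible", "terrible service and rude staff",
         "i loved this place", "i hated this place"]
labels = [1, 1, 0, 0, 1, 0]          # 1 = positive, 0 = negative

vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

words = np.array(vec.get_feature_names_out())
order = np.argsort(clf.coef_[0])     # sort words by learned weight
print("negative pivots:", words[order[:3]])
print("positive pivots:", words[order[-3:]])
```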
2018
Natural Answer Generation with Heterogeneous Memory
Yao Fu | Yansong Feng
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
Memory-augmented encoder-decoder frameworks have achieved promising progress on natural language generation tasks. Such frameworks enable a decoder to retrieve from a memory during generation. However, less research has been done on handling memory contents from different sources, which are often of heterogeneous formats. In this work, we propose a novel attention mechanism that encourages the decoder to actively interact with the memory by taking its heterogeneity into account. Our solution attends across the generated history and the memory to explicitly avoid repetition and to introduce related knowledge that enriches the generated sentences. Experiments on the answer sentence generation task show that our method can effectively explore heterogeneous memory to produce readable and meaningful answer sentences while maintaining high coverage of the given answer information.
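A rough sketch of attending jointly over memory and generation history while penalizing memory slots that resemble already-generated content; the scoring function and the 0.5 penalty weight are assumptions, not the paper's exact mechanism.

```python
# Attend over memory slots with a repetition penalty derived from
# similarity to the generation history (illustrative stand-in).
import numpy as np

rng = np.random.default_rng(0)
d = 32
memory = rng.normal(size=(10, d))      # heterogeneous memory slots
history = rng.normal(size=(4, d))      # embeddings of generated tokens
query = rng.normal(size=d)             # decoder state at this step

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

sim_to_history = memory @ history.T            # (slots, history)
repeat_penalty = sim_to_history.max(axis=1)    # how "used" each slot looks
weights = softmax(memory @ query - 0.5 * repeat_penalty)
context = weights @ memory                     # context vector for decoding
print("attention weights:", np.round(weights, 3))
```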
Co-authors
- Runchao Li 4
- Pan Li 4
- Xianxuan Long 4
- Haotian Yu 4
- Xiaotian Han 3
- Mu Sheng 3
- Yu Yin 2
- Mosha Chen 1
- Jiaze Chen 1
- Peter Clark 1
- Bhavana Dalvi 1
- Yansong Feng 1
- Sheng Gao 1
- Yuling Gu 1
- Songfang Huang 1
- Jaekyeom Kim 1
- Mirella Lapata 1
- Honglak Lee 1
- Lei Li 1
- Kun Liu 1
- Ian Magnusson 1
- Vishakh Padmakumar 1
- Sathvika Ayyappa Prabhu 1
- Ratish Puduppully 1
- Valentina Pyatkin 1
- Ran Qiu 1
- Jacob Sansom 1
- Sungryull Sohn 1
- Chuanqi Tan 1
- Huijie Tang 1
- Gisela Vallejo 1
- Xinhe Wang 1
- Ningyu Zhang 1
- Hao Zhou (昊 周) 1