2025
Time-MQA: Time Series Multi-Task Question Answering with Context Enhancement
Yaxuan Kong | Yiyuan Yang | Yoontae Hwang | Wenjie Du | Stefan Zohren | Zhangyang Wang | Ming Jin | Qingsong Wen
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Time series data are foundational in finance, healthcare, and energy domains. However, most existing methods and datasets remain focused on a narrow spectrum of tasks, such as forecasting or anomaly detection. To bridge this gap, we introduce Time Series Multi-Task Question Answering (Time-MQA), a unified framework that enables natural language queries across multiple time series tasks: numerical analytical tasks and open-ended question answering with reasoning. Central to Time-MQA is the TSQA dataset, a large-scale dataset containing ~200k question-answer pairs derived from diverse time series spanning domains such as environment and traffic. This comprehensive resource covers various time series lengths and promotes robust model development. We further demonstrate how continually pre-training large language models (Mistral 7B, Llama-3 8B, and Qwen-2.5 7B) on the TSQA dataset enhances time series reasoning capabilities, moving beyond mere numeric tasks and enabling more advanced and intuitive interactions with temporal data. The complete TSQA dataset, models, user study questionnaires for evaluation, and other related materials have been open-sourced.
DiPT: Enhancing LLM Reasoning through Diversified Perspective-Taking
Hoang Anh Just | Mahavir Dabas | Lifu Huang | Ming Jin | Ruoxi Jia
Findings of the Association for Computational Linguistics: NAACL 2025
Existing work on improving language model reasoning typically explores a single solution path, which can be prone to errors. Inspired by perspective-taking in social studies, this paper introduces DiPT, a novel approach that complements current reasoning methods by explicitly incorporating diversified viewpoints. This approach allows the model to gain a deeper understanding of the problem’s context and identify the most effective solution path during the inference stage. Additionally, it provides a general data-centric AI recipe for augmenting existing data to improve their quality for fine-tuning. Our empirical results demonstrate that DiPT can be flexibly integrated into existing methods that focus on a single reasoning approach, enhancing their reasoning performance and stability when presented with paraphrased problems. Furthermore, we illustrate improved context understanding by maintaining the model’s safe outputs against “jailbreaking” prompts intentionally designed to bypass safeguards built into deployed models. Lastly, we show that fine-tuning with data enriched with diverse perspectives can boost the reasoning capabilities of the model compared to fine-tuning with raw data alone.
2024
Skin-in-the-Game: Decision Making via Multi-Stakeholder Alignment in LLMs
Bilgehan Sel | Priya Shanmugasundaram | Mohammad Kachuee | Kun Zhou | Ruoxi Jia | Ming Jin
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large Language Models (LLMs) have shown remarkable capabilities in tasks such as summarization, arithmetic reasoning, and question answering. However, they encounter significant challenges in the domain of moral reasoning and ethical decision-making, especially in complex scenarios with multiple stakeholders. This paper introduces the Skin-in-the-Game (SKIG) framework, aimed at enhancing moral reasoning in LLMs by exploring decisions’ consequences from multiple stakeholder perspectives. The core components of the framework consist of simulating accountability for decisions, conducting empathy exercises on different stakeholders, and evaluating the risks associated with the impacts of potential actions. We study SKIG’s performance across various moral reasoning benchmarks with proprietary and open-source LLMs, and investigate its crucial components through extensive ablation analyses. Our framework exhibits marked improvements in performance compared to baselines across different language models and benchmarks.
Can We Trust the Performance Evaluation of Uncertainty Estimation Methods in Text Summarization?
Jianfeng He | Runing Yang | Linlin Yu | Changbin Li | Ruoxi Jia | Feng Chen | Ming Jin | Chang-Tien Lu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Text summarization, a key natural language generation (NLG) task, is vital in various domains. However, the high cost of inaccurate summaries in risk-critical applications, particularly those involving human-in-the-loop decision-making, raises concerns about the reliability of uncertainty estimation on text summarization (UE-TS) evaluation methods. This concern stems from the dependency of uncertainty model metrics on diverse and potentially conflicting NLG metrics. To address this issue, we introduce a comprehensive UE-TS benchmark incorporating 31 NLG metrics across four dimensions. The benchmark evaluates the uncertainty estimation capabilities of two large language models and one pre-trained language model on three datasets, with human-annotation analysis incorporated where applicable. We also assess the performance of 14 common uncertainty estimation methods within this benchmark. Our findings emphasize the importance of considering multiple uncorrelated NLG metrics and diverse uncertainty estimation methods to ensure reliable and efficient evaluation of UE-TS techniques. Our code and data are available: https://github.com/he159ok/Benchmark-of-Uncertainty-Estimation-Methods-in-Text-Summarization.
InternalInspector I²: Robust Confidence Estimation in LLMs through Internal States
Mohammad Beigi | Ying Shen | Runing Yang | Zihao Lin | Qifan Wang | Ankith Mohan | Jianfeng He | Ming Jin | Chang-Tien Lu | Lifu Huang
Findings of the Association for Computational Linguistics: EMNLP 2024
Despite their vast capabilities, Large Language Models (LLMs) often struggle with generating reliable outputs, frequently producing high-confidence inaccuracies known as hallucinations. Addressing this challenge, our research introduces InternalInspector, a novel framework designed to enhance confidence estimation in LLMs by leveraging contrastive learning on internal states including attention states, feed-forward states, and activation states of all layers. Unlike existing methods that primarily focus on the final activation state, InternalInspector conducts a comprehensive analysis across all internal states of every layer to accurately identify both correct and incorrect prediction processes. By benchmarking InternalInspector against existing confidence estimation methods across various natural language understanding and generation tasks, including factual question answering, commonsense reasoning, and reading comprehension, InternalInspector achieves significantly higher accuracy in aligning the estimated confidence scores with the correctness of the LLM’s predictions and lower calibration error. Furthermore, InternalInspector excels at HaluEval, a hallucination detection benchmark, outperforming other internal-based confidence estimation methods in this task.