Kevin Han
2026
TDFlow: Agentic Workflows for Test Driven Development
Kevin Han | Siddharth Maddikayala | Tim Knappe | Om Patel | Austen Liao | Amir Barati Farimani
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
We introduce TDFlow, a novel test-driven agentic workflow that frames repository-scale software engineering as a test-resolution task, specifically designed to solve human-written tests. Given a set of tests, TDFlow repeatedly proposes, revises, and debugs repository-scale patches using precisely engineered sub-agents and tightly constrained tools. The workflow decomposes repository-scale program repair into four components, each governed by its own sub-agent. This simple, forced decoupling of patch proposing, debugging, patch revision, and optional test generation (1) reduces the long-context burden on any individual sub-agent, (2) focuses each sub-agent on a specific, pre-defined sub-task, and (3) allows for specialized performance improvement on specific sub-tasks. When provided human-written tests, TDFlow attains an 88.8% pass rate on SWE-Bench Lite (an absolute improvement of 27.8% over the next best baseline) and 94.3% on SWE-Bench Verified. In this work, we further show that the primary obstacle to human-level software engineering performance lies in writing successful reproduction tests. Manual inspection of the 800 TDFlow runs across SWE-Bench Lite and Verified uncovers only 7 instances of test hacking, which were subsequently counted as failures. We envision a human-LLM interactive system powered by TDFlow where human developers write tests solved by LLM systems. Together, these results show that modern LLMs, when embedded in a narrowly engineered, test-driven workflow, already achieve human-level test resolution, with the final frontier for fully autonomous repository repair being accurate reproduction test generation.
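The abstract describes the workflow only at a high level. As a rough illustration, a propose-test-debug-revise loop of this shape could be organized as in the minimal Python sketch below; all sub-agent and tool names (propose, diagnose, revise, run_tests) are hypothetical placeholders for illustration, not TDFlow's actual interfaces.

# Minimal sketch of a test-driven repair loop in the spirit of TDFlow.
# Sub-agent and tool names are illustrative assumptions, not the paper's API.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TestReport:
    passed: bool
    failure_log: str  # output of the failing tests, shown only to the debugger

def test_driven_repair(
    repo_path: str,
    run_tests: Callable[[str, str], TestReport],   # constrained test-runner tool
    propose: Callable[[str], str],                 # patch-proposing sub-agent
    diagnose: Callable[[str], str],                # debugging sub-agent
    revise: Callable[[str, str], str],             # patch-revision sub-agent
    max_rounds: int = 10,
) -> Optional[str]:
    """Repeatedly propose, test, debug, and revise a patch until the
    human-written tests pass. Each sub-agent sees only its own inputs,
    so no single call carries the full repository context."""
    patch = propose(repo_path)
    for _ in range(max_rounds):
        report = run_tests(repo_path, patch)
        if report.passed:
            return patch                           # all human-written tests resolved
        diagnosis = diagnose(report.failure_log)   # debugger sees only the failure log
        patch = revise(patch, diagnosis)           # reviser sees only patch + diagnosis
    return None                                    # unresolved within the round budget

Keeping the debugger's input limited to the failure log mirrors the paper's first stated benefit: the forced decoupling bounds the context any individual sub-agent must handle.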
2025
Pragmatic Metacognitive Prompting Improves LLM Performance on Sarcasm Detection
Joshua Lee | Wyatt Fong | Alexander Le | Sur Shah | Kevin Han | Kevin Zhu
Proceedings of the 1st Workshop on Computational Humor (CHum)
Sarcasm detection is a significant challenge in sentiment analysis due to the nuanced and context-dependent nature of language. We introduce Pragmatic Metacognitive Prompting (PMP) to improve the performance of Large Language Models (LLMs) in sarcasm detection; PMP leverages principles from pragmatics and reflection to help LLMs interpret implied meanings, consider contextual cues, and reflect on discrepancies to identify sarcasm. Evaluated with LLMs such as LLaMA-3-8B, GPT-4o, and Claude 3.5 Sonnet, PMP achieves state-of-the-art performance with GPT-4o on the MUStARD and SemEval2018 benchmarks. This study demonstrates that integrating pragmatic reasoning and metacognitive strategies into prompting significantly enhances LLMs' ability to detect sarcasm, offering a promising direction for future research in sentiment analysis.
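The abstract does not reproduce the prompts themselves. As a minimal sketch, a staged pragmatic-then-reflective pipeline of the kind described could look like the Python below, assuming a generic chat(prompt) -> str helper; the prompt wording and three-stage split are assumptions for illustration, not the paper's exact PMP prompts.

# Minimal sketch of a pragmatic metacognitive prompting pipeline.
# The `chat` helper and all prompt text are assumptions, not the paper's prompts.
from typing import Callable

def detect_sarcasm(utterance: str, context: str, chat: Callable[[str], str]) -> bool:
    # Stage 1: pragmatic analysis of implied meaning and contextual cues.
    analysis = chat(
        "Analyze the pragmatics of the utterance below: what is literally said, "
        "what is plausibly implied, and which contextual cues matter?\n"
        f"Context: {context}\nUtterance: {utterance}"
    )
    # Stage 2: metacognitive reflection on discrepancies between the
    # literal and implied readings identified above.
    reflection = chat(
        "Reflect on your analysis. Are there discrepancies between the literal "
        "and implied meanings that suggest irony or sarcasm?\n"
        f"Analysis: {analysis}"
    )
    # Stage 3: final judgment conditioned on the reflection.
    verdict = chat(
        "Given this reflection, answer with exactly 'sarcastic' or 'not sarcastic'.\n"
        f"Reflection: {reflection}"
    )
    return verdict.strip().lower().startswith("sarcastic")

Separating the pragmatic analysis from the reflective check is one plausible way to realize the paper's pairing of pragmatics with metacognition: the model first commits to an interpretation, then audits it for literal-versus-implied mismatches before answering.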