Sunghwan Mac Kim

Also published as: Sunghwan Kim


2025

Rethinking Reward Model Evaluation Through the Lens of Reward Overoptimization
Sunghwan Kim | Dongjin Kang | Taeyoon Kwon | Hyungjoo Chae | Dongha Lee | Jinyoung Yeo
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Reward models (RMs) play a crucial role in reinforcement learning from human feedback (RLHF), aligning model behavior with human preferences. However, existing benchmarks for reward models show a weak correlation with the performance of optimized policies, suggesting that they fail to accurately assess the true capabilities of RMs. To bridge this gap, we explore several evaluation designs through the lens of reward overoptimization, i.e., a phenomenon that captures both how well the reward model aligns with human preferences and the dynamics of the learning signal it provides to the policy. The results highlight three key findings on how to construct a reliable benchmark: (i) it is important to minimize differences between chosen and rejected responses beyond correctness, (ii) evaluating reward models requires multiple comparisons across a wide range of chosen and rejected responses, and (iii) given that reward models encounter responses with diverse representations, responses should be sourced from a variety of models. However, we also observe that an extremely high correlation with the degree of overoptimization leads to a comparatively lower correlation with certain downstream performance. Thus, when designing a benchmark, it is desirable to use the degree of overoptimization as a useful tool, rather than as the end goal.
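
Finding (ii), that a single chosen/rejected pair is a noisy probe, suggests an evaluation loop that averages over many comparisons per prompt. Below is a minimal Python sketch of that idea, assuming a hypothetical reward_model(prompt, response) function that returns a scalar score; it is an illustration of the design principle, not code from the paper.

from itertools import product

def pairwise_accuracy(reward_model, prompt, chosen, rejected):
    """Fraction of (chosen, rejected) pairs the reward model ranks correctly.

    chosen / rejected are lists of responses to the same prompt; drawing
    them from several different source models follows finding (iii).
    """
    wins = sum(
        reward_model(prompt, c) > reward_model(prompt, r)
        for c, r in product(chosen, rejected)
    )
    return wins / (len(chosen) * len(rejected))

Averaging this score over many prompts yields a benchmark accuracy that reflects ranking behavior across a whole response distribution rather than a single pair.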

LLM Meets Scene Graph: Can Large Language Models Understand and Generate Scene Graphs? A Benchmark and Empirical Study
Dongil Yang | Minjin Kim | Sunghwan Kim | Beong-woo Kwak | Minjun Park | Jinseok Hong | Woontack Woo | Jinyoung Yeo
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

The remarkable reasoning and generalization capabilities of Large Language Models (LLMs) have paved the way for their expanding applications in embodied AI, robotics, and other real-world tasks. To effectively support these applications, grounded spatial and temporal understanding in multimodal environments is essential. To this end, recent works have leveraged scene graphs, a structured representation that encodes entities, attributes, and the relationships between them in a scene. However, comprehensive evaluation of LLMs’ ability to utilize scene graphs remains limited. In this work, we introduce Text-Scene Graph (TSG) Bench, a benchmark designed to systematically assess LLMs’ ability to (1) understand scene graphs and (2) generate them from textual narratives. With TSG Bench, we evaluate 11 LLMs and reveal that, while models perform well on scene graph understanding, they struggle with scene graph generation, particularly for complex narratives. Our analysis indicates that these models fail to effectively decompose a complex narrative into discrete scenes, leading to a bottleneck when generating scene graphs. These findings underscore the need for improved methodologies in scene graph generation and provide valuable insights for future research. A demonstration of our benchmark is available at https://tsg-bench.netlify.app, and our code and evaluation data are publicly available at https://github.com/docworlds/tsg-bench.
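
To make the representation concrete: a scene graph is just a set of entities, their attributes, and relation triples. The following minimal sketch is illustrative only and is not the TSG Bench data format.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class SceneGraph:
    """Entities with attributes, plus (subject, predicate, object) relations."""
    attributes: Dict[str, List[str]] = field(default_factory=dict)
    relations: List[Tuple[str, str, str]] = field(default_factory=list)

    def add_entity(self, name: str, *attrs: str) -> None:
        self.attributes.setdefault(name, []).extend(attrs)

    def add_relation(self, subj: str, pred: str, obj: str) -> None:
        self.relations.append((subj, pred, obj))

# "A woman in a red coat holds an umbrella."
g = SceneGraph()
g.add_entity("woman")
g.add_entity("coat", "red")
g.add_entity("umbrella")
g.add_relation("woman", "wearing", "coat")
g.add_relation("woman", "holding", "umbrella")

Generation is the harder direction the paper identifies: a model must first segment the narrative into discrete scenes, then emit one such structure per scene.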

2024

Can Large Language Models be Good Emotional Supporter? Mitigating Preference Bias on Emotional Support Conversation
Dongjin Kang | Sunghwan Kim | Taeyoon Kwon | Seungjun Moon | Hyunsouk Cho | Youngjae Yu | Dongha Lee | Jinyoung Yeo
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Emotional Support Conversation (ESC) is a task aimed at alleviating individuals’ emotional distress through daily conversation. Given its inherent complexity and non-intuitive nature, the ESConv dataset incorporates support strategies to facilitate the generation of appropriate responses. Recently, despite the remarkable conversational ability of large language models (LLMs), previous studies have suggested that they often struggle with providing useful emotional support. Hence, this work first analyzes the results of LLMs on ESConv, revealing challenges in selecting the correct strategy and a notable preference for one specific strategy. Motivated by these observations, we explore the impact of this inherent preference on providing emotional support and observe that a high preference for specific strategies hinders effective emotional support, undermining a model’s robustness in predicting the appropriate strategy. Moreover, we conduct a methodological study to offer insights into the approaches LLMs need in order to serve as proficient emotional supporters. Our findings emphasize that (1) a low preference for specific strategies is essential for effective emotional support, (2) external assistance helps reduce preference bias, and (3) existing LLMs alone cannot become good emotional supporters. These insights suggest promising avenues for future research to enhance the emotional intelligence of LLMs.
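
The preference bias in question can be pictured as the concentration of a model's strategy predictions. A small illustrative sketch follows; the strategy labels are examples in the spirit of ESConv, and this is not the paper's evaluation code.

from collections import Counter

def strategy_preference(predictions):
    """Share of each support strategy among a model's predicted labels.

    A distribution with most of its mass on one strategy signals the
    kind of preference bias the paper analyzes.
    """
    counts = Counter(predictions)
    total = len(predictions)
    return {s: n / total for s, n in counts.most_common()}

# Hypothetical predictions over ESConv-style strategy labels
preds = ["Question", "Question", "Self-disclosure",
         "Question", "Reflection of feelings"]
print(strategy_preference(preds))  # {'Question': 0.6, ...}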

Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models
Hyungjoo Chae | Yeonghyeon Kim | Seungone Kim | Kai Tzu-iunn Ong | Beong-woo Kwak | Moohyeon Kim | Sunghwan Kim | Taeyoon Kwon | Jiwan Chung | Youngjae Yu | Jinyoung Yeo
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Algorithmic reasoning tasks that involve complex logical patterns, such as completing a Dyck language, pose challenges for large language models (LLMs), despite their recent success. Prior work has used LLMs to generate programming-language code and applied external compilers for such tasks. Yet it is hard to generate, on the fly, executable code with the correct logic for the solution. Moreover, code written for one instance cannot be reused for others, even when they require the same logic to solve. We present Think-and-Execute, a novel framework that improves LLMs’ algorithmic reasoning: (1) in Think, we discover the task-level logic shared across all instances and express it with pseudocode; (2) in Execute, we tailor the task-level pseudocode to each instance and simulate its execution. Think-and-Execute outperforms several strong baselines (including CoT and PoT) on diverse algorithmic reasoning tasks. We demonstrate the advantage of using task-level pseudocode over generating instance-specific solutions one by one, and we show that pseudocode can improve LLMs’ reasoning more effectively than natural language (NL) guidance, even though the models are trained with NL instructions.
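
The two phases can be sketched as a pair of prompting steps. The following outline is illustrative, assuming a hypothetical llm(prompt) completion function; it is not the authors' released implementation.

def think_and_execute(llm, task_description, instances):
    """Two-phase sketch of the Think-and-Execute idea."""
    # Think: derive ONE task-level pseudocode shared by all instances.
    pseudocode = llm(
        f"Task: {task_description}\n"
        "Write pseudocode that solves ANY instance of this task."
    )
    # Execute: simulate that same pseudocode on each instance in turn.
    return [
        llm(
            f"Pseudocode:\n{pseudocode}\n"
            f"Input: {inst}\n"
            "Trace the execution step by step, then state the final output."
        )
        for inst in instances
    ]

The key design choice is amortization: the pseudocode is written once per task, so its logic is shared across instances instead of being regenerated (and possibly broken) for each one.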

Cactus: Towards Psychological Counseling Conversations using Cognitive Behavioral Theory
Suyeon Lee | Sunghwan Kim | Minju Kim | Dongjin Kang | Dongil Yang | Harim Kim | Minseok Kang | Dayi Jung | Min Hee Kim | Seungbeen Lee | Kyong-Mee Chung | Youngjae Yu | Dongha Lee | Jinyoung Yeo
Findings of the Association for Computational Linguistics: EMNLP 2024

Recently, the demand for psychological counseling has significantly increased as more individuals express concerns about their mental health. This surge has accelerated efforts to improve the accessibility of counseling by using large language models (LLMs) as counselors. Training open-source LLMs, which is important for protecting client privacy, faces a key challenge: the absence of realistic counseling datasets. To address this, we introduce Cactus, a multi-turn dialogue dataset that emulates real-life interactions using the goal-oriented and structured approach of Cognitive Behavioral Therapy (CBT). We create a diverse and realistic dataset by designing clients with varied, specific personas and having counselors systematically apply CBT techniques in their interactions. To assess the quality of our data, we benchmark against established psychological criteria used to evaluate real counseling sessions, ensuring alignment with expert evaluations. Experimental results demonstrate that Camel, a model trained with Cactus, outperforms other models in counseling skills, highlighting its effectiveness and potential as a counseling agent. We make our data, model, and code publicly available.
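
The data-generation recipe, persona-driven clients paired with counselors who apply a chosen CBT technique, can be outlined roughly as below. This is a hypothetical sketch assuming an llm(prompt) completion function, not the released Cactus pipeline.

def simulate_session(llm, persona, cbt_technique, turns=4):
    """Hypothetical sketch of persona-driven counseling dialogue synthesis."""
    dialogue = []
    for _ in range(turns):
        # Client turn: conditioned on a specific, varied persona.
        client = llm(f"You are a client with this persona: {persona}\n"
                     "Dialogue so far:\n" + "\n".join(dialogue) +
                     "\nReply as the client.")
        dialogue.append(f"Client: {client}")
        # Counselor turn: systematically applies the assigned CBT technique.
        counselor = llm(f"You are a counselor applying the CBT technique "
                        f"'{cbt_technique}'.\nDialogue so far:\n" +
                        "\n".join(dialogue) + "\nReply as the counselor.")
        dialogue.append(f"Counselor: {counselor}")
    return dialogue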

2018

Proceedings of the Australasian Language Technology Association Workshop 2018
Sunghwan Mac Kim | Xiuzhen (Jenny) Zhang
Proceedings of the Australasian Language Technology Association Workshop 2018

2017

Demographic Inference on Twitter using Recursive Neural Networks
Sunghwan Mac Kim | Qiongkai Xu | Lizhen Qu | Stephen Wan | Cécile Paris
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

In social media, demographic inference is a critical task for gaining a better understanding of a cohort and for facilitating interaction with one’s audience. Most previous work has made independence assumptions over the topological, textual, and label information in social networks. In this work, we employ recursive neural networks that break these independence assumptions to infer demographic characteristics on Twitter. We show that our model performs better than existing models, including the state-of-the-art.
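
The core of a recursive neural network is a single shared composition function applied up a tree, so a node's representation depends on its neighbours' information rather than being modeled independently. A generic numerical sketch follows; it illustrates the mechanism only and is not the paper's exact architecture.

import numpy as np

def compose(node_vec, child_vecs, W):
    """Recursively merge a node's vector with its children's.

    The same weight matrix W is applied at every merge, which is what
    lets the representation condition on neighbouring textual and
    label features instead of treating them as independent.
    """
    h = node_vec
    for c in child_vecs:
        h = np.tanh(W @ np.concatenate([h, c]))
    return h

d = 8
rng = np.random.default_rng(0)
W = rng.standard_normal((d, 2 * d)) * 0.1  # maps a concatenated pair back to d dims
leaf = rng.standard_normal(d)
root = compose(rng.standard_normal(d), [leaf, leaf], W)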

2016

Data61-CSIRO systems at the CLPsych 2016 Shared Task
Sunghwan Mac Kim | Yufei Wang | Stephen Wan | Cécile Paris
Proceedings of the Third Workshop on Computational Linguistics and Clinical Psychology

The Effects of Data Collection Methods in Twitter
Sunghwan Mac Kim | Stephen Wan | Cécile Paris | Brian Jin | Bella Robinson
Proceedings of the First Workshop on NLP and Computational Social Science

Detecting Social Roles in Twitter
Sunghwan Mac Kim | Stephen Wan | Cécile Paris
Proceedings of the Fourth International Workshop on Natural Language Processing for Social Media

2015

Finding Names in Trove: Named Entity Recognition for Australian Historical Newspapers
Sunghwan Mac Kim | Steve Cassidy
Proceedings of the Australasian Language Technology Association Workshop 2015

2014

The Effect of Dependency Representation Scheme on Syntactic Language Modelling
Sunghwan Kim | John Pate | Mark Johnson
Proceedings of the Australasian Language Technology Association Workshop 2014

2012

Improving Combinatory Categorial Grammar Parse Reranking with Dependency Grammar Features
Sunghwan Mac Kim | Dominick Ng | Mark Johnson | James Curran
Proceedings of COLING 2012

2010

Evaluation of Unsupervised Emotion Models to Textual Affect Recognition
Sunghwan Mac Kim | Alessandro Valitutti | Rafael A. Calvo
Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text