2025
Does the Emotional Understanding of LVLMs Vary Under High-Stress Environments and Across Different Demographic Attributes?
Jaewook Lee | Yeajin Jang | Oh-Woog Kwon | Harksoo Kim
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
According to psychological and neuroscientific research, a high-stress environment can restrict attentional resources and intensify negative affect, thereby impairing the ability to understand emotions. Furthermore, demographic attributes such as race, gender, and age group have been repeatedly reported to cause significant differences in emotional expression and recognition. This study is the first to systematically verify whether these psychological findings observed in humans also apply to the latest Large Vision Language Models (LVLMs). We constructed low-stress versus high-stress environments and generated an image dataset (a total of 540 images) that combines race, gender, and age group. Based on this, we applied the Pretend prompt technique to induce LVLMs to interpret others’ emotions from the standpoint of the assigned environment and persona. An analysis of the models’ emotional understanding ability, using EQ-Bench-based metrics, revealed that (1) under high-stress environments, the accuracy of emotion understanding significantly declined in most LVLMs, and (2) performance disparities were confirmed across race, gender, and age group. These findings suggest that the effects of high-stress and demographic attributes identified in human research may also be reflected in LVLMs.
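The abstract above describes conditioning an LVLM on a stress environment and a demographic persona before asking it to read emotions. A minimal sketch of that kind of persona/environment prompt conditioning is shown below; the exact wording of the paper's Pretend prompt is not reproduced here, so this template is an assumption for illustration only.

```python
# Hypothetical sketch of a "Pretend"-style prompt: assign the model a
# persona (race, gender, age group) and a stress environment, then ask
# it to interpret an emotion. The wording is illustrative, not the
# prompt used in the paper.

def pretend_prompt(race, gender, age_group, stress="high"):
    """Build a prompt that assigns a persona and a stress environment."""
    env = (
        "You are under severe time pressure and constant loud noise."
        if stress == "high"
        else "You are in a calm, quiet environment with no time pressure."
    )
    persona = f"Pretend you are a {age_group} {race} {gender}."
    task = "Look at the attached image and name the emotion the person is expressing."
    return f"{persona} {env} {task}"

prompt = pretend_prompt("Asian", "woman", "young adult", stress="high")
```

Swapping `stress` between `"high"` and `"low"` while holding the persona fixed is the kind of controlled contrast the study's low-stress versus high-stress comparison relies on.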
Small Changes, Big Impact: How Manipulating a Few Neurons Can Drastically Alter LLM Aggression
Jaewook Lee | Junseo Jang | Oh-Woog Kwon | Harksoo Kim
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Recent remarkable advances in Large Language Models (LLMs) have led to innovations in domains such as education, healthcare, and finance, while also raising serious concerns that they can easily be misused for malicious purposes. Previous research has focused primarily on observing how jailbreak attack techniques bypass safety mechanisms such as Reinforcement Learning from Human Feedback (RLHF). However, whether there are neurons within LLMs that directly govern aggression has not been sufficiently investigated. To fill this gap, this study identifies specific neurons (“aggression neurons”) closely related to the expression of aggression and systematically analyzes how manipulating them affects the model’s overall aggression. Specifically, using a large-scale synthetic text corpus (aggressive and non-aggressive), we measure the activation frequency of each neuron, then apply masking and activation techniques to quantitatively evaluate changes in aggression by layer and by manipulation ratio. Experimental results show that, in all models, manipulating only a small number of neurons can increase aggression by up to 33%, and the effect is even more extreme when aggression neurons are concentrated in certain layers. Moreover, even models of the same scale exhibit nonlinear changes in aggression patterns, suggesting that simple external safety measures alone may not be sufficient for complete defense.
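The method in the abstract above (measure per-neuron activation frequency on aggressive versus non-aggressive text, select the neurons that differ most, then mask them) can be sketched in a few lines. The sketch below uses random arrays in place of real LLM activations, and the threshold and selection rule are assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated neuron activations (n_samples x n_neurons). In the paper,
# these would come from forward passes of an LLM over aggressive and
# non-aggressive corpora; here they are random stand-ins.
n_neurons = 64
act_aggr = rng.random((200, n_neurons))
act_neutral = rng.random((200, n_neurons))
# Plant a signal: neurons 0-4 fire more strongly on aggressive text.
act_aggr[:, :5] += 0.5

def activation_frequency(acts, threshold=0.5):
    """Fraction of inputs on which each neuron exceeds the threshold."""
    return (acts > threshold).mean(axis=0)

# Candidate "aggression neurons": largest gap in activation frequency
# between the aggressive and the non-aggressive corpus.
diff = activation_frequency(act_aggr) - activation_frequency(act_neutral)
top_k = 5
candidates = np.argsort(diff)[::-1][:top_k]

# Masking intervention: zero out the candidate neurons' activations,
# as one would before passing them on to the next layer.
mask = np.ones(n_neurons)
mask[candidates] = 0.0
masked_acts = act_aggr * mask
```

The planted neurons are recovered because their frequency gap (about 0.5) dwarfs the sampling noise on the remaining neurons; the paper's "manipulation ratio" corresponds to varying `top_k` relative to the layer width.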
Do Large Language Models Have “Emotion Neurons”? Investigating the Existence and Role
Jaewook Lee | Woojin Lee | Oh-Woog Kwon | Harksoo Kim
Findings of the Association for Computational Linguistics: ACL 2025
This study comprehensively explores whether “emotion neurons” that selectively process and express certain emotions actually exist within large language models (LLMs), and what functional role they play. Drawing on the representative theory of six basic emotions, we focus on these six core emotions. Using synthetic dialogue data labeled with emotions, we identified sets of neurons that exhibit consistent activation patterns for each emotion. As a result, we confirmed that principal neurons handling emotion information do exist within the model, forming distinct groups for each emotion, and that their distribution varies with model size and architectural depth. We then validated the functional significance of these emotion neurons by analyzing whether prediction accuracy for a specific emotion significantly decreases when those neurons are artificially removed. We observed that for some emotions accuracy drops sharply upon neuron removal, while for others the model’s performance largely remains intact or even improves, presumably due to overlapping and complementary mechanisms among neurons. Furthermore, by examining how prediction accuracy changes depending on which layer range, and at what proportion, the emotion neurons are masked, we revealed that emotion information is processed in a multilayered and complex manner within the model.
2021
Document-Grounded Goal-Oriented Dialogue Systems on Pre-Trained Language Model with Diverse Input Representation
Boeun Kim | Dohaeng Lee | Sihyung Kim | Yejin Lee | Jin-Xia Huang | Oh-Woog Kwon | Harksoo Kim
Proceedings of the 1st Workshop on Document-grounded Dialogue and Conversational Question Answering (DialDoc 2021)
A document-grounded goal-oriented dialogue system understands users’ utterances and generates appropriate responses using information obtained from documents. The DialDoc21 shared task consists of two subtasks: subtask 1, finding text spans associated with users’ utterances in documents, and subtask 2, generating responses based on information obtained from subtask 1. In this paper, we propose two models (a knowledge span prediction model and a response generation model), one for each subtask. For subtask 1, dialogue act losses are used with RoBERTa, and title embeddings are added to RoBERTa’s input representation. For subtask 2, various special tokens and embeddings are added to the input representation of BART’s encoder, and we propose a method that assigns different difficulty scores to training examples to leverage curriculum learning. In subtask 1, our span prediction model achieved F1-scores of 74.81 (ranked 7th) and 73.41 (ranked 5th) in the test-dev and test phases, respectively. In subtask 2, our response generation model achieved sacreBLEU scores of 37.50 (ranked 3rd) and 41.06 (ranked 1st) in the test-dev and test phases, respectively.
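The curriculum-learning step mentioned in the abstract above can be sketched minimally: score each training example for difficulty, then present examples easy-to-hard. The scoring function below (target response length) is a stand-in assumption; the abstract does not specify the difficulty scores the authors actually used.

```python
# Minimal curriculum-learning sketch: order training examples by a
# difficulty score. Response length as the score is a hypothetical
# stand-in for the paper's (unspecified) scoring method.

def difficulty(example):
    """Hypothetical difficulty: longer target responses are harder."""
    return len(example["response"].split())

def curriculum_order(dataset):
    """Return training examples sorted from easiest to hardest."""
    return sorted(dataset, key=difficulty)

dataset = [
    {"utterance": "How do I renew my license?",
     "response": "Visit the DMV website, fill out form DL-44, and pay the renewal fee."},
    {"utterance": "What is the fee?", "response": "It is 35 dollars."},
    {"utterance": "Thanks", "response": "You are welcome."},
]

ordered = curriculum_order(dataset)
```

In training, batches would then be drawn from `ordered` front-to-back (or from a gradually expanding prefix of it), so the generation model sees short, simple targets before long, information-dense ones.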
2013
Patent translation as technical document translation: customizing a Chinese-Korean MT system to patent domain
Yun Jin | Oh-Woog Kwon | Seung-Hoon Na | Young-Gil Kim
Proceedings of the 5th Workshop on Patent Translation
2009
Customizing an English-Korean Machine Translation System for Patent/Technical Documents Translation
Oh-Woog Kwon | Sung-Kwon Choi | Ki-Young Lee | Yoon-Hyung Roh | Young-Gil Kim
Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation, Volume 2
2008
How to Overcome the Domain Barriers in Pattern-Based Machine Translation System
Sung-Kwon Choi | Ki-Young Lee | Yoon-Hyung Roh | Oh-Woog Kwon | Young-Gil Kim
Proceedings of the 22nd Pacific Asia Conference on Language, Information and Computation
What is Needed the Most in MT-Supported Paper Writing
Chang Hyun Kim | Oh-Woog Kwon | Young Kil Kim
Proceedings of the 22nd Pacific Asia Conference on Language, Information and Computation
Recognizing Coordinate Structures for Machine Translation of English Patent Documents
Yoon-Hyung Roh | Ki-Young Lee | Sung-Kwon Choi | Oh-Woog Kwon | Young-Gil Kim
Proceedings of the 22nd Pacific Asia Conference on Language, Information and Computation
2007
English-Korean patent system: fromTo-EK/PAT
Oh-Woog Kwon | Sung-Kwon Choi | Ki-Young Lee | Yoon-Hyung Roh | Young-Gil Kim | Munpyo Hong
Proceedings of the Workshop on Patent translation