Hyopil Shin

2024

Enhancing Self-Attention via Knowledge Fusion: Deriving Sentiment Lexical Attention from Semantic-Polarity Scores
Dongjun Jang | Jinwoong Kim | Hyopil Shin
Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)

In recent years, pre-trained language models have demonstrated exceptional performance across various natural language processing (NLP) tasks. One fundamental component of these models is the self-attention mechanism, which has played a vital role in capturing meaningful relationships between tokens. However, it remains an open question whether injecting lexical features into the self-attention mechanism can further enhance the understanding and performance of language models. This paper presents a novel approach for injecting semantic-polarity knowledge, referred to as Sentiment Lexical Attention, directly into the self-attention mechanism of Transformer-based models. The primary goal is to improve performance on sentiment classification tasks. Our approach consistently injects Sentiment Lexical Attention, derived from a lexicon corpus, into the attention scores throughout the training process. We empirically evaluate our method on the NSMC sentiment classification benchmark, showing significant performance improvements and achieving state-of-the-art results. Furthermore, our approach demonstrates robustness and effectiveness on out-of-domain tasks, indicating its potential for broad applicability. Additionally, we analyze the impact of Sentiment Lexical Attention from the perspective of the [CLS] token's attention distribution. Our method offers a fresh perspective on synergizing lexical features and attention scores, thereby encouraging further investigation into knowledge injection utilizing lexical features.
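A minimal sketch of the general idea only, not the paper's implementation: the snippet below adds a lexicon-derived polarity bias to scaled dot-product attention scores before the softmax. The polarity values, the scaling factor alpha, and the choice to broadcast absolute polarity across all query positions are illustrative assumptions.

import math
import torch
import torch.nn.functional as F

def lexical_attention_bias(polarity: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    # polarity: (seq_len,) sentiment scores in [-1, 1] looked up from a lexicon,
    # 0.0 for tokens absent from it. Strongly polar tokens receive extra
    # attention from every query position (an illustrative design choice).
    return alpha * polarity.abs().unsqueeze(0).expand(polarity.size(0), -1)

def attention_with_lexical_bias(q, k, v, polarity, alpha=1.0):
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)   # (seq_len, seq_len)
    scores = scores + lexical_attention_bias(polarity, alpha)
    return F.softmax(scores, dim=-1) @ v

# Toy usage: 4 tokens, hidden size 8; token 2 is strongly negative in the lexicon.
q = k = v = torch.randn(4, 8)
polarity = torch.tensor([0.0, 0.0, -0.9, 0.3])
print(attention_with_lexical_bias(q, k, v, polarity).shape)  # torch.Size([4, 8])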

A Study on How Attention Scores in the BERT Model Are Aware of Lexical Categories in Syntactic and Semantic Tasks on the GLUE Benchmark
Dongjun Jang | Sungjoo Byun | Hyopil Shin
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

This study examines whether the attention scores between tokens in the BERT model vary significantly based on lexical categories during fine-tuning for downstream tasks. Drawing inspiration from the notion that syntactic and semantic information is parsed differently in human language processing, we categorize tokens in sentences according to their lexical categories and focus on changes in attention scores among these categories. Our hypothesis posits that in downstream tasks that prioritize semantic information, attention scores centered on content words are enhanced, while in cases emphasizing syntactic information, attention scores centered on function words are intensified. Through experiments on six tasks from the GLUE benchmark, we substantiate our hypothesis regarding the fine-tuning process. Furthermore, our additional investigations reveal BERT layers that consistently favor specific lexical categories irrespective of the task, highlighting the existence of task-agnostic lexical category preferences.
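As a minimal sketch of the kind of measurement this abstract describes (assumptions: a row-normalized attention matrix and a coarse POS-to-category mapping; not the paper's code), one can compare the attention mass sent to content words versus function words:

import torch

CONTENT_TAGS = {"NOUN", "VERB", "ADJ", "ADV"}                      # content words
FUNCTION_TAGS = {"DET", "ADP", "AUX", "CCONJ", "SCONJ", "PART", "PRON"}  # function words

def attention_mass_by_category(attn: torch.Tensor, pos_tags: list) -> dict:
    # attn: (seq_len, seq_len) row-normalized attention; pos_tags: one tag per token.
    content = torch.tensor([t in CONTENT_TAGS for t in pos_tags])
    function = torch.tensor([t in FUNCTION_TAGS for t in pos_tags])
    # Average, over query positions, of the total mass sent to each category.
    return {
        "content": attn[:, content].sum(dim=-1).mean().item(),
        "function": attn[:, function].sum(dim=-1).mean().item(),
    }

# Toy usage: 5 tokens with random (row-normalized) attention.
attn = torch.softmax(torch.randn(5, 5), dim=-1)
tags = ["DET", "NOUN", "VERB", "ADP", "NOUN"]
print(attention_mass_by_category(attn, tags))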

KIT-19: A Comprehensive Korean Instruction Toolkit on 19 Tasks for Fine-Tuning Korean Large Language Models
Dongjun Jang | Sungjoo Byun | Hyemi Jo | Hyopil Shin
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Instruction tuning is an essential process for large language models (LLMs) to function well and achieve high performance on specific tasks. Accordingly, in mainstream languages such as English, instruction-based datasets are being constructed and made publicly available. In the case of Korean, publicly available models and datasets all rely on ChatGPT outputs or on translations of datasets built in English. In this paper, we introduce KIT-19, an instruction dataset for the development of Korean LLMs. KIT-19 is created in an instruction format and comprises 19 existing open-source datasets for Korean NLP tasks. We train a Korean pretrained LLM on KIT-19 to demonstrate its effectiveness. The experimental results show that the model trained on KIT-19 significantly outperforms existing Korean LLMs. Based on its quality and our empirical results, this paper proposes that KIT-19 has the potential to make a substantial contribution to the future improvement of Korean LLMs' performance.
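For illustration only (the actual KIT-19 schema is not specified here), converting an existing labeled example into an instruction-format record might look like the following sketch; the field names are assumptions:

import json

def to_instruction_record(instruction: str, source_text: str, label: str) -> dict:
    # A generic instruction/input/output triple of the kind used for instruction tuning.
    return {"instruction": instruction, "input": source_text, "output": label}

record = to_instruction_record(
    instruction="Classify the sentiment of the following movie review as positive or negative.",
    source_text="The acting was superb and the story kept me hooked.",
    label="positive",
)
print(json.dumps(record, ensure_ascii=False, indent=2))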

Korean Bio-Medical Corpus (KBMC) for Medical Named Entity Recognition
Sungjoo Byun | Jiseung Hong | Sumin Park | Dongjun Jang | Jean Seo | Minseok Kim | Chaeyoung Oh | Hyopil Shin
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Named Entity Recognition (NER) plays a pivotal role in medical Natural Language Processing (NLP). Yet, there has been no open-source medical NER dataset specifically for the Korean language. To address this, we utilized ChatGPT to assist in constructing the KBMC (Korean Bio-Medical Corpus), which we now present to the public. With the KBMC dataset, we observed a 20% increase in medical NER performance compared to models trained on general Korean NER datasets. This research underscores the benefits and importance of using specialized tools and datasets, such as ChatGPT, to enhance language processing in specialized fields like healthcare.

2021

The Korean Morphologically Tight-Fitting Tokenizer for Noisy User-Generated Texts
Sangah Lee | Hyopil Shin
Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)

User-generated texts include various types of stylistic properties, or noise. Such texts are not properly handled by existing morpheme analyzers or by language models trained on formal texts such as encyclopedias or news articles. In this paper, we propose a simple morphologically tight-fitting tokenizer (K-MT) that better processes proper nouns, coinages, and internet slang, among other types of noise, in Korean user-generated texts. We tested our tokenizer on classification tasks over Korean user-generated movie-review and hate-speech datasets, as well as a Korean named entity recognition dataset. Through these tests, we found that K-MT is better suited to processing internet slang, proper nouns, and coinages than a morpheme analyzer or a character-level WordPiece tokenizer.

Generating Slogans with Linguistic Features using Sequence-to-Sequence Transformer
Yeoun Yi | Hyopil Shin
Proceedings of the 18th International Conference on Natural Language Processing (ICON)

Previous work on slogan generation depended on templates or summaries of company descriptions, making it difficult to produce slogans with linguistic features. We present LexPOS, a sequence-to-sequence Transformer model that generates slogans given phonetic and structural information. Our model searches for words phonetically similar to user keywords; both the sound-alike words and the keywords then serve as lexical constraints for generation. For structural repetition, we use POS constraints: users can specify any repeated phrase structure via POS tags. Slogans generated by our model are more relevant to the original slogans than those of baseline models, and they exhibit phonetic and structural repetition, representative features of memorable slogans.
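A minimal sketch of the sound-alike retrieval step, under stated assumptions (not the LexPOS implementation): the crude consonant-skeleton key below stands in for a real grapheme-to-phoneme conversion, and similarity is approximated with difflib's SequenceMatcher.

from difflib import SequenceMatcher

def phonetic_key(word: str) -> str:
    # Crude key: keep the first letter, then drop vowels (a rough stand-in
    # for an actual phonetic representation).
    w = word.lower()
    return w[:1] + "".join(c for c in w[1:] if c not in "aeiou")

def sound_alikes(keyword: str, vocab: list, top_k: int = 3) -> list:
    # Rank vocabulary words by similarity of their phonetic keys to the keyword's.
    key = phonetic_key(keyword)
    scored = [(SequenceMatcher(None, key, phonetic_key(v)).ratio(), v) for v in vocab]
    return [w for _, w in sorted(scored, reverse=True)[:top_k]]

# Toy usage: candidates that sound like "fresh" could become lexical constraints.
print(sound_alikes("fresh", ["flesh", "fish", "frost", "table", "brush"]))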

2018

Metaphor Identification with Paragraph and Word Vectorization: An Attention-Based Neural Approach
Timour Igamberdiev | Hyopil Shin
Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation

Grapheme-level Awareness in Word Embeddings for Morphologically Rich Languages
Suzi Park | Hyopil Shin
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2015

Do We Really Need Lexical Information? Towards a Top-down Approach to Sentiment Analysis of Product Reviews
Yulia Otmakhova | Hyopil Shin
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2014

Identification of Implicit Topics in Twitter Data Not Containing Explicit Search Queries
Suzi Park | Hyopil Shin
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

2013

KOSAC: A Full-Fledged Korean Sentiment Analysis Corpus
Hayeon Jang | Munhyong Kim | Hyopil Shin
Proceedings of the 27th Pacific Asia Conference on Language, Information, and Computation (PACLIC 27)

Romanization-based Approach to Morphological Analysis in Korean SMS Text Processing
Youngsam Kim | Hyopil Shin
Proceedings of the Sixth International Joint Conference on Natural Language Processing

Applying Graph-based Keyword Extraction to Document Retrieval
Youngsam Kim | Munhyong Kim | Andrew Cattle | Julia Otmakhova | Suzi Park | Hyopil Shin
Proceedings of the Sixth International Joint Conference on Natural Language Processing

2012

Annotation Scheme for Constructing Sentiment Corpus in Korean
Hyopil Shin | Munhyong Kim | Hayeon Jang | Andrew Cattle
Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation

2010

Language-Specific Sentiment Analysis in Morphologically Rich Languages
Hayeon Jang | Hyopil Shin
Coling 2010: Posters

Effective Use of Linguistic Features for Sentiment Analysis of Korean
Hayeon Jang | Hyopil Shin
Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation

The KOLON System: Tools for Ontological Natural Language Processing in Korean
Juliano Paiva Junho | Yumi Jo | Hyopil Shin
Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation

2009

KTimeML: Specification of Temporal and Event Expressions in Korean Text
Seohyun Im | Hyunjo You | Hayun Jang | Seungho Nam | Hyopil Shin
Proceedings of the 7th Workshop on Asian Language Resources (ALR7)

Hybrid N-gram Probability Estimation in Morphologically Rich Languages
Hyopil Shin | Hyunjo You
Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation, Volume 2

2000

Using Long Runs as Predictors of Semantic Coherence in a Partial Document Retrieval System
Hyopil Shin | Jerrold F. Stach
NAACL-ANLP 2000 Workshop: Syntactic and Semantic Complexity in Natural Language Processing Systems