Minho Lee


2024

Unveiling the Power of Integration: Block Diagram Summarization through Local-Global Fusion
Shreyanshu Bhushan | Eun-Soo Jung | Minho Lee
Findings of the Association for Computational Linguistics: ACL 2024

Block diagrams play an essential role in visualizing the relationships between components or systems. Generating summaries of block diagrams is important for document understanding and question answering (QA) tasks, as it provides concise overviews of complex systems. However, this is a challenging task, as it requires compressing complex relationships into informative descriptions. In this paper, we present “BlockNet”, a fusion framework that summarizes block diagrams by integrating local and global information, catering to both English and Korean. Additionally, we introduce a new multilingual method for producing block diagram data, resulting in a high-quality dataset called “BD-EnKo”. In BlockNet, we develop “BlockSplit”, an Optical Character Recognition (OCR) based algorithm that applies the divide-and-conquer principle for local information extraction. For global information extraction, we train an OCR-free transformer architecture on BD-EnKo and public data. To assess the effectiveness of our model, we conduct thorough experiments on different datasets. The evaluation shows that BlockNet surpasses all previous methods and models, including GPT-4V, for block diagram summarization.
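Implementation details are not part of the abstract; the following is a minimal, hypothetical sketch of the local-global fusion idea in Python. All names (Region, extract_local_info, encode_global, fuse_and_summarize) and the prompt-concatenation fusion are illustrative assumptions, not the authors' code: in the paper, BlockSplit operates on OCR output and the global branch is a trained OCR-free transformer.

```python
# Minimal sketch of a local-global fusion pipeline for diagram
# summarization, loosely following the BlockNet description.
# All names below are hypothetical stand-ins, not the authors' code.

from dataclasses import dataclass

@dataclass
class Region:
    text: str    # OCR'd label of one block
    bbox: tuple  # (x0, y0, x1, y1)

def extract_local_info(regions: list[Region]) -> str:
    """Divide-and-conquer over OCR regions: order blocks spatially
    and serialize them as '<label> -> <label>' style relations."""
    ordered = sorted(regions, key=lambda r: (r.bbox[1], r.bbox[0]))
    return " -> ".join(r.text for r in ordered)

def encode_global(image_bytes: bytes) -> str:
    """Placeholder for an OCR-free encoder (e.g. a vision
    transformer) that yields a holistic description."""
    return "diagram with sequentially connected components"

def fuse_and_summarize(local: str, global_ctx: str) -> str:
    """Fusion here is simple prompt concatenation; the paper's
    model fuses the two streams inside a transformer."""
    return f"Summary: {global_ctx}; flow: {local}."

regions = [Region("Input", (0, 0, 10, 5)),
           Region("Encoder", (0, 10, 10, 15)),
           Region("Decoder", (0, 20, 10, 25))]
print(fuse_and_summarize(extract_local_info(regions), encode_global(b"")))
```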

2023

Enhancing text comprehension for Question Answering with Contrastive Learning
Seungyeon Lee | Minho Lee
Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)

Although Question Answering (QA) models have advanced to human-level language skills in NLP tasks, one problem remains: QA models become confused when similar sentences or paragraphs are present. Existing studies enhance the text understanding of candidate answers to improve the overall performance of QA models. However, because these methods focus on re-ranking queries or candidate answers, they fail to resolve the confusion that arises when many generated answers are similar to the expected answer. To address this issue, we propose ContrastiveQA, a novel contrastive learning framework that alleviates the confusion problem in answer extraction. In our supervised method, we generate positive and negative samples from the candidate answers and the given answer, respectively, and use contrastive learning on the sampled data to reduce incorrect answers. Experimental results on four QA benchmarks show the effectiveness of the proposed method.
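As a rough illustration of the contrastive objective, here is a minimal InfoNCE-style sketch in PyTorch. The anchor/positive/negative roles, the cosine-similarity formulation, and the temperature are assumptions for illustration; the paper's exact sampling scheme may differ.

```python
# Sketch of a contrastive loss for answer extraction: the anchor
# embedding is pulled toward a positive sample and pushed away from
# confusing candidates. Variable names are illustrative only.

import torch
import torch.nn.functional as F

def contrastive_qa_loss(anchor, positive, negatives, temperature=0.1):
    """anchor, positive: (d,) embeddings; negatives: (n, d)."""
    pos_sim = F.cosine_similarity(anchor, positive, dim=0) / temperature
    neg_sim = F.cosine_similarity(
        anchor.unsqueeze(0), negatives, dim=1) / temperature
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim])  # (1 + n,)
    # Cross entropy with the positive sample at index 0.
    return F.cross_entropy(logits.unsqueeze(0),
                           torch.zeros(1, dtype=torch.long))

anchor = torch.randn(768)
positive = anchor + 0.05 * torch.randn(768)  # sample close to the anchor
negatives = torch.randn(4, 768)              # confusing candidate answers
print(contrastive_qa_loss(anchor, positive, negatives))
```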

2022

Block Diagram-to-Text: Understanding Block Diagram Images by Generating Natural Language Descriptors
Shreyanshu Bhushan | Minho Lee
Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022

Block diagrams are widely used to represent the workflow or process of a model. Understanding block diagrams by generating summaries can be extremely useful in document summarization, and can help people infer key insights from block diagrams without a lot of perceptual and cognitive effort. In this paper, we propose the novel task of converting block diagram images into text, presenting a framework called “BloSum”. The framework extracts the contextual meaning of an image in the form of triplets that help the language model generate the summary. We also introduce a new dataset of complex computerized block diagrams, explain the dataset preparation process, and analyze it. Additionally, to showcase the generalization of the model, we test our method on publicly available handwritten block diagram datasets. Our evaluation with different metrics demonstrates that our approach outperforms other methods.
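To make the triplet idea concrete, here is a small sketch of how extracted (head, relation, tail) triplets might be linearized to condition a summarization model. The triplet format, the separator, and the generic seq2seq setup are assumptions, not the paper's exact pipeline.

```python
# Sketch of the triplet-to-text idea: contextual triplets extracted
# from a block diagram are serialized into a string that conditions
# a summarization model.

triplets = [
    ("Input", "feeds", "Preprocessing"),
    ("Preprocessing", "feeds", "Classifier"),
    ("Classifier", "outputs", "Label"),
]

def triplets_to_prompt(triplets):
    """Linearize (head, relation, tail) triplets for a language model."""
    return " ; ".join(f"{h} {r} {t}" for h, r, t in triplets)

prompt = "summarize: " + triplets_to_prompt(triplets)
print(prompt)
# A pretrained seq2seq model (e.g. T5 via Hugging Face transformers)
# could then generate the summary from this prompt.
```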

Type-dependent Prompt CycleQAG: Cycle Consistency for Multi-hop Question Generation
Seungyeon Lee | Minho Lee
Proceedings of the 29th International Conference on Computational Linguistics

Multi-hop question generation (QG) is the task of generating answer-related questions, which requires aggregating multiple pieces of information and reasoning over different parts of a text. This contrasts with single-hop QG, which generates questions from sentences containing an answer in a given paragraph. Single-hop QG requires no reasoning or complexity, while multi-hop QG often requires logical reasoning to derive an answer-related question, making it a dual task. Little research has been conducted on multi-hop QG because of its complexity. Moreover, a question should be created using the question type and words related to the correct answer as a prompt, so that multi-hop questions can draw on more information. From this viewpoint, we propose a new type-dependent prompt cycleQAG (cyclic question-answer generation) with a cycle consistency loss, in which QG and Question Answering (QA) are learnt in a cyclic manner. The novelty is that the cycle consistency loss uses the negative cross entropy to generate syntactically diverse questions by enabling the selection of different word representations. In empirical evaluation on the multi-hop dataset with automatic and human evaluation metrics, our model outperforms the baseline by about 10.38% in ROUGE score.
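As a sketch of how such a cyclic objective with a negative cross-entropy term could be assembled, consider the following PyTorch fragment. The weighting, the source of the "previous question" logits, and the exact form of the diversity term are assumptions, not the paper's formulation.

```python
# Sketch of a cycle-consistency objective with a negative cross
# entropy term: QG and QA losses are combined, and a negative CE
# term discourages repeating the same surface form.

import torch
import torch.nn.functional as F

def cycle_qag_loss(qg_logits, q_tokens, qa_logits, a_tokens,
                   prev_q_logits, lam=0.1):
    """qg_logits: (T_q, V) question-generation logits;
    qa_logits: (T_a, V) QA logits from the generated question;
    prev_q_logits: logits of a previously generated question, used
    in a *negative* cross-entropy term to promote diversity."""
    qg_ce = F.cross_entropy(qg_logits, q_tokens)  # generate the question
    qa_ce = F.cross_entropy(qa_logits, a_tokens)  # recover the answer (cycle)
    diversity = -F.cross_entropy(prev_q_logits, q_tokens)  # negative CE
    return qg_ce + qa_ce + lam * diversity

V, Tq, Ta = 100, 12, 3
loss = cycle_qag_loss(torch.randn(Tq, V), torch.randint(V, (Tq,)),
                      torch.randn(Ta, V), torch.randint(V, (Ta,)),
                      torch.randn(Tq, V))
print(loss)
```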

2020

Attentively Embracing Noise for Robust Latent Representation in BERT
Gwenaelle Cunha Sergio | Dennis Singh Moirangthem | Minho Lee
Proceedings of the 28th International Conference on Computational Linguistics

Modern digital personal assistants interact with users through voice. Therefore, they rely heavily on automatic speech recognition (ASR) to convert speech to text and perform further tasks. We introduce EBERT, which stands for EmbraceBERT, with the goal of extracting more robust latent representations for the task of noisy ASR text classification. Conventionally, BERT is fine-tuned for downstream classification tasks using only the [CLS] starter token, with the remaining tokens being discarded. We propose using all encoded transformer tokens, further encoding them with a novel attentive embracement layer and a multi-head attention layer. This approach uses the otherwise discarded tokens as a source of additional information, and the multi-head attention, in conjunction with the attentive embracement layer, selects important features from clean data during training. This allows the extraction of a robust latent vector, which improves classification performance when the model is presented with noisy inputs at test time. We show the impact of our model on both the Chatbot and Snips corpora for intent classification with ASR errors. Results, in terms of F1-score averaged over 10 runs, show that our model significantly outperforms the baseline model.
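A minimal sketch of the central idea, pooling over all encoder tokens with multi-head attention instead of keeping only [CLS], is shown below. The learned-query attention pool stands in for the attentive embracement layer and is an approximation, not the paper's exact module; sizes and names are illustrative.

```python
# Sketch: classify from all BERT token states via a learned-query
# multi-head attention pool, rather than from [CLS] alone.

import torch
import torch.nn as nn

class AttentivePoolClassifier(nn.Module):
    def __init__(self, hidden=768, heads=8, num_classes=7):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, hidden))
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, token_states):              # (B, T, H) from BERT
        q = self.query.expand(token_states.size(0), -1, -1)
        pooled, _ = self.attn(q, token_states, token_states)
        return self.fc(pooled.squeeze(1))         # (B, num_classes)

tokens = torch.randn(2, 32, 768)  # e.g. BERT last_hidden_state
print(AttentivePoolClassifier()(tokens).shape)    # torch.Size([2, 7])
```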

Enhancing Quality of Corpus Annotation: Construction of the Multi-Layer Corpus Annotation and Simplified Validation of the Corpus Annotation
Youngbin Noh | Kuntae Kim | Minho Lee | Cheolhun Heo | Yongbin Jeong | Yoosung Jeong | Younggyun Hahm | Taehwan Oh | Hyonsu Choe | Seokwon Park | Jin-Dong Kim | Key-Sun Choi
Proceedings of the 34th Pacific Asia Conference on Language, Information and Computation

Effective Crowdsourcing of Multiple Tasks for Comprehensive Knowledge Extraction
Sangha Nam | Minho Lee | Donghwan Kim | Kijong Han | Kuntae Kim | Sooji Yoon | Eun-kyung Kim | Key-Sun Choi
Proceedings of the Twelfth Language Resources and Evaluation Conference

Information extraction from unstructured text plays a vital role in natural language processing. Although there has been extensive research into each individual information extraction task (i.e., entity linking, coreference resolution, and relation extraction), no data are available for a continuous and coherent evaluation of all information extraction tasks in a comprehensive framework. Because each task is performed and evaluated on a different dataset, it is impossible to analyze, within a single dataset, the effect of one task on the next throughout the information extraction process. This paper aims to provide a starting point for Korean information extraction and to promote research in this field by presenting crowdsourced data collected for four information extraction tasks from the same corpus, together with training and evaluation results of a state-of-the-art model for each task. These machine learning data for Korean information extraction are the first of their kind, and there are plans to continuously increase the data volume. The test results will serve as initial results for each Korean information extraction task and are expected to serve as comparison targets for future studies on Korean information extraction using the data collected in this study.

2018

Chat Discrimination for Intelligent Conversational Agents with a Hybrid CNN-LMTGRU Network
Dennis Singh Moirangthem | Minho Lee
Proceedings of the Third Workshop on Representation Learning for NLP

Recently, intelligent dialog systems and smart assistants have attracted much attention, and the development of novel dialogue agents has become a research challenge. Intelligent agents that can handle both domain-specific task-oriented dialogs and open-domain chit-chat are a major requirement of current systems. To address this issue and realize such smart hybrid dialogue systems, we develop a model that discriminates whether a user utterance belongs to a task-oriented or a chit-chat conversation. We introduce a hybrid of a convolutional neural network (CNN) and lateral multiple-timescale gated recurrent units (LMTGRU) that can represent dependencies at multiple temporal scales for the discrimination task. With the help of the combined slow and fast units of the LMTGRU, our model effectively determines whether a user will have a chit-chat or a task-specific conversation with the system. We also show that the LMTGRU structure helps the model perform well on longer text inputs. We address the lack of a suitable dataset by constructing one from Twitter and Maluuba Frames data. The experimental results demonstrate that the proposed hybrid network outperforms conventional models on the chat discrimination task and performs comparably to the baselines on various benchmark datasets.
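The timescale mechanism behind the slow and fast units can be sketched as follows: a standard GRU update is leaky-integrated with a timescale tau, so a large tau yields a slowly changing unit and tau = 1 recovers an ordinary GRU. This minimal cell is a sketch of that mechanism only; the lateral connections of the LMTGRU and the CNN branch are omitted.

```python
# Sketch of a multiple-timescale GRU unit: the GRU proposal is
# low-pass filtered by a timescale tau. Larger tau => slower unit.

import torch
import torch.nn as nn

class MTGRUCell(nn.Module):
    def __init__(self, input_size, hidden_size, tau=2.0):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        self.tau = tau  # timescale: larger => slower-changing state

    def forward(self, x, h):
        h_fast = self.cell(x, h)  # ordinary GRU state proposal
        return (1.0 - 1.0 / self.tau) * h + (1.0 / self.tau) * h_fast

fast, slow = MTGRUCell(16, 32, tau=1.0), MTGRUCell(16, 32, tau=8.0)
h_f = h_s = torch.zeros(1, 32)
for x in torch.randn(5, 1, 16):  # the slow state drifts less per step
    h_f, h_s = fast(x, h_f), slow(x, h_s)
print(h_f.norm(), h_s.norm())
```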

2017

Representing Compositionality based on Multiple Timescales Gated Recurrent Neural Networks with Adaptive Temporal Hierarchy for Character-Level Language Models
Dennis Singh Moirangthem | Jegyung Son | Minho Lee
Proceedings of the 2nd Workshop on Representation Learning for NLP

A novel character-level neural language model is proposed in this paper. The proposed model incorporates a biologically inspired temporal hierarchy in its architecture to represent multiple compositions of language, enabling the character-level language model to handle longer sequences. The temporal hierarchy is introduced by utilizing a Gated Recurrent Neural Network with multiple timescales, and the model incorporates a timescale adaptation mechanism to enhance performance. We evaluate our proposed model on the popular Penn Treebank and Text8 corpora. The experiments show that the use of multiple timescales in a Neural Language Model (NLM) improves performance despite having fewer parameters and no additional computation requirements. Our experiments also demonstrate the ability of the adaptive temporal hierarchies to represent multiple levels of compositionality without the help of complex hierarchical architectures, and show that better representation of longer sequences leads to enhanced performance of the probabilistic language model.
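For reference, the leaky-integrator update that multiple-timescale recurrent units build on can be written as follows; reading the adaptation mechanism as adjusting tau is our interpretation of the abstract, not a formula taken from the paper.

```latex
% Leaky-integrator update used by multiple-timescale recurrent units:
% \tilde{h}_t is the conventional GRU state update, \tau the timescale.
h_t = \left(1 - \frac{1}{\tau}\right) h_{t-1} + \frac{1}{\tau}\,\tilde{h}_t
% \tau = 1 recovers a conventional GRU; larger \tau yields slower units.
```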

2016

Towards Abstraction from Extraction: Multiple Timescale Gated Recurrent Unit for Summarization
Minsoo Kim | Dennis Singh Moirangthem | Minho Lee
Proceedings of the 1st Workshop on Representation Learning for NLP