Natthawut Kertkeidkachorn


2023

Text Generation Model Enhanced with Semantic Information in Aspect Category Sentiment Analysis
Tu Tran | Kiyoaki Shirai | Natthawut Kertkeidkachorn
Findings of the Association for Computational Linguistics: ACL 2023

Aspect Category Sentiment Analysis (ACSA) is one of the main subtasks of sentiment analysis, aiming to predict the polarity for a given aspect category. Recently, generative methods have emerged as an efficient way to use a pre-trained language model for ACSA. However, these methods fail to model the relations between target words and opinion words in sentences containing multiple aspects. To tackle this problem, this paper proposes a method to incorporate Abstract Meaning Representation (AMR), which describes the semantics of a sentence as a directed graph, into a text generation model. Furthermore, two regularizers are designed to guide the allocation of cross-attention weights over the AMR graph: an identical regularizer that constrains the attention weights of aligned nodes, and an entropy regularizer that encourages the decoder to generate each token by attending heavily to only a few related nodes in the AMR graph. Experimental results on three datasets show that the proposed method outperforms state-of-the-art methods, demonstrating the effectiveness of the model.
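
Illustration: a minimal PyTorch sketch of an entropy-style regularizer on cross-attention weights over AMR graph nodes, in the spirit of the abstract's description. The tensor shapes, the coefficient lambda_ent, and the way the term joins the loss are assumptions, not details taken from the paper.

```python
# A minimal sketch, assuming cross-attention weights of shape
# (batch, target_len, num_nodes) with each row summing to 1.
import torch

def entropy_regularizer(attn: torch.Tensor, eps: float = 1e-9) -> torch.Tensor:
    """Low entropy means each generated token attends to only a few
    related AMR nodes, which is what the regularizer encourages."""
    entropy = -(attn * (attn + eps).log()).sum(dim=-1)  # (batch, target_len)
    return entropy.mean()

# Hypothetical use: penalize diffuse attention during training.
# loss = generation_loss + lambda_ent * entropy_regularizer(attn)
```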

Sentiment Analysis using the Relationship between Users and Products
Natthawut Kertkeidkachorn | Kiyoaki Shirai
Findings of the Association for Computational Linguistics: ACL 2023

In product reviews, user and product information is useful for sentiment analysis. Nevertheless, previous studies mainly focus on modeling user and product aspects separately, without considering the relationship between users and products. This relationship is typically helpful for estimating a user's bias toward a product. In this paper, we therefore introduce a Graph Neural Network-based model with a pre-trained Language Model (GNNLM), which incorporates the relationship between users and products. We conducted experiments on three well-known benchmarks for sentiment classification with user and product information. The experimental results show that the relationship between users and products improves the performance of sentiment analysis. Furthermore, GNNLM achieves state-of-the-art results on the yelp-2013 and yelp-2014 datasets.
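
Illustration: a rough PyTorch sketch of one way to propagate information over a bipartite user-product graph (an edge meaning "user reviewed product") and combine it with a pre-trained LM's review embedding. The mean-aggregation update and all names below are assumptions; the actual GNNLM architecture may differ.

```python
import torch
import torch.nn as nn

class UserProductLayer(nn.Module):
    """One hypothetical message-passing step over the user-product graph."""
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        """x: (num_nodes, dim) user and product node embeddings.
        adj: (num_nodes, num_nodes) row-normalized adjacency matrix."""
        neighbor = adj @ x  # aggregate the products a user reviewed, and vice versa
        return torch.relu(self.lin(torch.cat([x, neighbor], dim=-1)))

# The review's text embedding from a pre-trained LM (e.g. BERT's [CLS]
# vector) could then be concatenated with the user and product node
# embeddings before the sentiment classification head.
```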

2022

KIQA: Knowledge-Infused Question Answering Model for Financial Table-Text Data
Rungsiman Nararatwong | Natthawut Kertkeidkachorn | Ryutaro Ichise
Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures

While entity retrieval models continue to advance, our understanding of their wide-ranging applications is limited, especially in domain-specific settings. We highlighted this issue by using recent general-domain entity-linking models, LUKE and GENRE, to inject external knowledge into a question-answering (QA) model for a financial QA task with a hybrid tabular-textual dataset. We found that both models improved the baseline by 1.57% overall and by 8.86% on textual data. Nonetheless, a challenge remains: they still struggle to handle tabular inputs. We subsequently conducted a comprehensive attention-weight analysis, revealing how LUKE utilizes the external knowledge supplied by GENRE. The analysis also elaborates on how the injection of symbolic knowledge can help and what needs further improvement, paving the way for future research on this challenging QA task and advancing our understanding of how a language model incorporates external knowledge.
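
Illustration: one common pattern for injecting entity knowledge into a QA input is to append linked-entity descriptions to the question text before encoding. Whether KIQA injects knowledge exactly this way is an assumption; the helper below is purely hypothetical.

```python
def inject_entity_knowledge(question: str, linked_entities: dict[str, str]) -> str:
    """linked_entities maps a mention found by an entity linker
    (e.g. LUKE or GENRE) to a short textual description."""
    facts = " ".join(f"{mention}: {desc}" for mention, desc in linked_entities.items())
    return f"{question} [KNOWLEDGE] {facts}" if facts else question

# inject_entity_knowledge("What was Apple's 2020 revenue?",
#                         {"Apple": "American technology company"})
```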

iLab at FinCausal 2022: Enhancing Causality Detection with an External Cause-Effect Knowledge Graph
Ziwei Xu | Rungsiman Nararatwong | Natthawut Kertkeidkachorn | Ryutaro Ichise
Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022

The application of span detection is growing rapidly along with the increasing need to understand the causes and effects of events, especially in the finance domain. However, when syntactic clues are absent from the text, models tend to reverse the cause and effect spans. To solve this problem, we introduce graph construction techniques that inject knowledge from an external cause-effect knowledge graph into graph embeddings. The graph features, combined with BERT embeddings, are then used to predict the cause and effect spans. The results show that our proposed graph-builder method outperforms the other methods, both with and without external knowledge.
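
Illustration: a hedged sketch of fusing token-aligned graph features with BERT embeddings for cause/effect span tagging. The feature dimensions and the tag set (B/I tags for cause and effect, plus O) are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class SpanTagger(nn.Module):
    """Concatenates contextual and graph features, then tags each token."""
    def __init__(self, bert_dim: int = 768, graph_dim: int = 128, num_tags: int = 5):
        super().__init__()
        self.classifier = nn.Linear(bert_dim + graph_dim, num_tags)

    def forward(self, bert_emb: torch.Tensor, graph_emb: torch.Tensor) -> torch.Tensor:
        """bert_emb: (batch, seq_len, bert_dim) contextual token embeddings.
        graph_emb: (batch, seq_len, graph_dim) token-aligned embeddings
        derived from the external cause-effect knowledge graph."""
        fused = torch.cat([bert_emb, graph_emb], dim=-1)
        return self.classifier(fused)  # per-token tag logits
```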

Enhancing Financial Table and Text Question Answering with Tabular Graph and Numerical Reasoning
Rungsiman Nararatwong | Natthawut Kertkeidkachorn | Ryutaro Ichise
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Typical financial documents consist of tables, text, and numbers. Given sufficient training data, large language models (LMs) can learn tabular structures and perform numerical reasoning well in question answering (QA). However, their performance falls significantly when data and computational resources are limited. This study mitigates this performance drop by infusing explicit tabular structure through a graph neural network (GNN). We propose a model developed from the baseline of a financial QA dataset named TAT-QA. The baseline model, TagOp, consists of answer-span (evidence) extraction and numerical reasoning modules. As our main contributions, we introduce two components: a GNN-based evidence extraction module for tables and an improved numerical reasoning module. The latter solves TagOp's arithmetic calculation problem for operations that require number ordering, such as subtraction and division, which account for a large portion of numerical reasoning. Our evaluation shows that the graph module has the advantage in low-resource settings, while the improved numerical reasoning module significantly outperforms the baseline.
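
Illustration: subtraction and division are not commutative, so a TagOp-style model must decide the operand order before computing. The ordering head below is a hypothetical sketch of this idea, not the paper's exact module.

```python
import torch
import torch.nn as nn

class OrderAwareOp(nn.Module):
    """Hypothetical ordering head: scores whether extracted operands
    should be applied as (a op b) or as (b op a)."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.order_head = nn.Linear(2 * dim, 2)

    def forward(self, h_a: torch.Tensor, h_b: torch.Tensor,
                a: float, b: float, op: str) -> float:
        """h_a, h_b: (dim,) contextual embeddings of the two numbers."""
        logits = self.order_head(torch.cat([h_a, h_b], dim=-1))
        if logits.argmax().item() == 1:  # model predicts reversed operand order
            a, b = b, a
        return a - b if op == "sub" else a / b

# e.g. "revenue fell from 120 to 95": the drop is 120 - 95 = 25,
# not 95 - 120 = -25; the ordering head decides which operand comes first.
```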

2020

Text-to-Text Pre-Training Model with Plan Selection for RDF-to-Text Generation
Natthawut Kertkeidkachorn | Hiroya Takamura
Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+)

We report our system description for the RDF-to-Text task in English at the WebNLG 2020 Challenge. Our approach consists of two parts: 1) an RDF-to-Text Generation Pipeline and 2) Plan Selection. The RDF-to-Text Generation Pipeline is built on a state-of-the-art text-to-text pre-training model, while Plan Selection chooses the proper plan to feed into the pipeline.
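
Illustration: a minimal sketch of plan selection, assuming a "plan" is an ordering of the input RDF triples and that some scoring function ranks the candidates; the system's actual selection criterion is not specified here.

```python
from itertools import permutations
from typing import Callable, Sequence, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)

def select_plan(triples: Sequence[Triple],
                score: Callable[[Sequence[Triple]], float]) -> Sequence[Triple]:
    """Enumerate orderings of the triples and keep the highest-scoring one,
    which is then passed to the text-generation pipeline."""
    return max(permutations(triples), key=score)
```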

2014

Using Tone Information in Thai Spelling Speech Recognition
Natthawut Kertkeidkachorn | Proadpran Punyabukkana | Atiwong Suchato
Proceedings of the 28th Pacific Asia Conference on Language, Information and Computing

CHULA TTS: A Modularized Text-To-Speech Framework
Natthawut Kertkeidkachorn | Supadaech Chanjaradwichai | Proadpran Punyabukkana | Atiwong Suchato
Proceedings of the 28th Pacific Asia Conference on Language, Information and Computing