Canasai Kruengkrai


2024

Bridging Textual and Tabular Worlds for Fact Verification: A Lightweight, Attention-Based Model
Shirin Dabbaghi Varnosfaderani | Canasai Kruengkrai | Ramin Yahyapour | Junichi Yamagishi
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

FEVEROUS is a benchmark and research initiative focused on fact extraction and verification tasks involving unstructured text and structured tabular data. Existing works on FEVEROUS often rely on extensive preprocessing and rule-based transformations of the data, leading to potential context loss or misleading encodings. This paper introduces a simple yet powerful model that eliminates the need for modality conversion, thereby preserving the original evidence’s context. By leveraging models pre-trained on diverse text and tabular datasets and incorporating a lightweight attention-based mechanism, our approach efficiently exploits latent connections between the different data types, yielding comprehensive and reliable verdict predictions. The model’s modular structure adeptly manages multi-modal information, ensuring that the integrity and authenticity of the original evidence remain uncompromised. Comparative analyses show that our approach achieves competitive performance, closely matching top-tier models on the FEVEROUS benchmark.
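As a rough illustration only, a lightweight cross-attention fusion over separately encoded text and table evidence might look like the following minimal PyTorch sketch; the class name, dimensions, and mean pooling are assumptions for exposition, not the authors’ implementation.

    import torch
    import torch.nn as nn

    class AttentionFusion(nn.Module):
        # Fuses encoded sentence evidence and encoded table evidence with
        # cross-attention, then predicts a 3-way FEVEROUS verdict
        # (SUPPORTS / REFUTES / NOT ENOUGH INFO).
        def __init__(self, dim=768, heads=8, num_labels=3):
            super().__init__()
            self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.classifier = nn.Linear(dim, num_labels)

        def forward(self, text_states, table_states):
            # text_states:  (batch, text_len, dim)  from a pre-trained text encoder
            # table_states: (batch, table_len, dim) from a pre-trained table encoder
            fused, _ = self.cross_attn(text_states, table_states, table_states)
            pooled = fused.mean(dim=1)  # simple mean pooling (an assumption)
            return self.classifier(pooled)

    # e.g. AttentionFusion()(torch.randn(2, 50, 768), torch.randn(2, 80, 768))

The point of such a design is that neither modality is converted into the other before fusion, so the table evidence keeps its original structure.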

2023

XFEVER: Exploring Fact Verification across Languages
Yi-Chen Chang | Canasai Kruengkrai | Junichi Yamagishi
Proceedings of the 35th Conference on Computational Linguistics and Speech Processing (ROCLING 2023)

Revisiting Pathologies of Neural Models under Input Reduction
Canasai Kruengkrai | Junichi Yamagishi
Findings of the Association for Computational Linguistics: ACL 2023

We revisit the question of why neural models tend to produce high-confidence predictions on inputs that appear nonsensical to humans. Previous work has suggested that the models fail to assign low probabilities to such inputs due to model overconfidence. We evaluate various regularization methods on fact verification benchmarks and find that this problem persists even with well-calibrated or underconfident models, suggesting that overconfidence is not the only underlying cause. We also find that regularizing the models with reduced examples helps improve interpretability but comes at the cost of miscalibration. We show that although these reduced examples are incomprehensible to humans, they can contain valid statistical patterns in the dataset utilized by the model.

2022

Outlier-Aware Training for Improving Group Accuracy Disparities
Li-Kuang Chen | Canasai Kruengkrai | Junichi Yamagishi
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: Student Research Workshop

Methods addressing spurious correlations, such as Just Train Twice (JTT; Liu et al., 2021), involve reweighting a subset of the training set to maximize the worst-group accuracy. However, the reweighted set of examples may contain unlearnable examples that hamper the model’s learning. We propose mitigating this by detecting outliers in the training set and removing them before reweighting. Our experiments show that our method achieves competitive or better accuracy than JTT and can detect and remove annotation errors in the subset that JTT reweights.
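A minimal sketch of the idea, assuming per-example predictions and losses from JTT’s initial identification model are available; using a loss percentile as the outlier criterion is a placeholder, not necessarily the detection method used in the paper.

    import numpy as np

    def build_upweight_set(losses, preds, labels, outlier_pct=95):
        # JTT upweights the examples misclassified by an identification
        # model trained for a few epochs. Before reweighting, drop the
        # highest-loss errors as presumed unlearnable outliers
        # (e.g., annotation errors).
        error_set = np.where(preds != labels)[0]
        cutoff = np.percentile(losses[error_set], outlier_pct)
        return error_set[losses[error_set] <= cutoff]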

Mitigating the Diminishing Effect of Elastic Weight Consolidation
Canasai Kruengkrai | Junichi Yamagishi
Proceedings of the 29th International Conference on Computational Linguistics

Elastic weight consolidation (EWC, Kirkpatrick et al. 2017) is a promising approach to addressing catastrophic forgetting in sequential training. We find that the effect of EWC can diminish when fine-tuning large-scale pre-trained language models on different datasets. We present two simple objective functions to mitigate this problem by rescaling the components of EWC. Experiments on natural language inference and fact-checking tasks indicate that our methods require much smaller values for the trade-off parameters to achieve results comparable to EWC.
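For reference, the standard EWC objective from Kirkpatrick et al. (2017), whose components the proposed variants rescale, is

    \mathcal{L}(\theta) = \mathcal{L}_B(\theta) + \sum_i \frac{\lambda}{2} F_i \left( \theta_i - \theta_{A,i}^{*} \right)^2

where \mathcal{L}_B is the loss on the current task, F_i is the Fisher information for parameter i, \theta_{A,i}^{*} are the parameters learned on the previous task A, and \lambda is the trade-off parameter; the exact rescalings are defined in the paper.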

2021

A Multi-Level Attention Model for Evidence-Based Fact Checking
Canasai Kruengkrai | Junichi Yamagishi | Xin Wang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

Improving Low-Resource Named Entity Recognition using Joint Sentence and Token Labeling
Canasai Kruengkrai | Thien Hai Nguyen | Sharifah Mahani Aljunied | Lidong Bing
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Exploiting sentence-level labels, which are easy to obtain, is a plausible way to improve low-resource named entity recognition (NER), where token-level labels are costly to annotate. Current models for jointly learning sentence and token labeling are limited to binary classification. We present a joint model that supports multi-class classification and introduce a simple variant of self-attention that allows the model to learn scaling factors. Our model yields F1 improvements of 3.78%, 4.20%, and 2.08% over the BiLSTM-CRF baseline on e-commerce product titles in three low-resource languages: Vietnamese, Thai, and Indonesian, respectively.
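One hypothetical way to realize such a joint model in PyTorch, with a shared encoder feeding a token head and an attention-pooled sentence head; all names and the pooling design here are assumptions, and the paper’s learned scaling factors are not reproduced.

    import torch
    import torch.nn as nn

    class JointTagger(nn.Module):
        def __init__(self, dim, n_token_labels, n_sent_labels):
            super().__init__()
            self.token_head = nn.Linear(dim, n_token_labels)   # per-token labels
            self.attn_score = nn.Linear(dim, 1)                # attention pooling
            self.sent_head = nn.Linear(dim, n_sent_labels)     # multi-class sentence label

        def forward(self, states):                    # states: (batch, seq, dim)
            token_logits = self.token_head(states)
            alpha = torch.softmax(self.attn_score(states), dim=1)
            pooled = (alpha * states).sum(dim=1)      # weighted sum over tokens
            return token_logits, self.sent_head(pooled)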

DAGA: Data Augmentation with a Generation Approach for Low-resource Tagging Tasks
Bosheng Ding | Linlin Liu | Lidong Bing | Canasai Kruengkrai | Thien Hai Nguyen | Shafiq Joty | Luo Si | Chunyan Miao
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Data augmentation techniques have been widely used to improve machine learning performance, as they facilitate generalization. In this work, we propose a novel augmentation method that generates high-quality synthetic data for low-resource tagging tasks with language models trained on linearized labeled sentences. Our method is applicable to both supervised and semi-supervised settings. For the supervised settings, we conduct extensive experiments on named entity recognition (NER), part-of-speech (POS) tagging, and end-to-end target-based sentiment analysis (E2E-TBSA). For the semi-supervised settings, we evaluate our method on NER given unlabeled data only, and given unlabeled data plus a knowledge base. The results show that our method consistently outperforms the baselines, particularly when the gold training data are scarce.
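A minimal sketch of the linearization step, assuming each non-O tag is inserted as a token before the word it labels; a language model trained on such sequences can then sample new linearized sentences, which are de-linearized back into labeled data. Details of DAGA’s actual scheme may differ.

    def linearize(tokens, tags):
        # Interleave tags and words so a plain language model can learn
        # the joint distribution over words and labels.
        out = []
        for tok, tag in zip(tokens, tags):
            if tag != "O":
                out.append(tag)
            out.append(tok)
        return " ".join(out)

    # linearize(["John", "lives", "in", "London"], ["B-PER", "O", "O", "B-LOC"])
    # -> "B-PER John lives in B-LOC London"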

2019

Learning to Flip the Sentiment of Reviews from Non-Parallel Corpora
Canasai Kruengkrai
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Flipping sentiment while preserving sentence meaning is challenging because parallel sentences with the same content but different sentiment polarities are not always available for model learning. We introduce a method for acquiring imperfectly aligned sentences from non-parallel corpora and propose a model that learns to minimize the sentiment and content losses in a fully end-to-end manner. Our model is simple and offers well-balanced results across two domains: Yelp restaurant and Amazon product reviews.

Better Exploiting Latent Variables in Text Modeling
Canasai Kruengkrai
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

We show that sampling latent variables multiple times at a gradient step helps in improving a variational autoencoder and propose a simple and effective method to better exploit these latent variables through hidden state averaging. Consistent gains in performance on two different datasets, Penn Treebank and Yahoo, indicate the generalizability of our method.
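A minimal sketch of the multi-sample step, assuming a reparameterized Gaussian posterior and a linear map from the latent variable to the decoder’s initial hidden state; how the paper averages hidden states may differ in detail.

    import torch

    def averaged_hidden(mu, logvar, z_to_hidden, k=5):
        # Draw k latent samples per gradient step and average the decoder
        # hidden states derived from them (z_to_hidden: e.g., nn.Linear).
        std = (0.5 * logvar).exp()
        hiddens = [z_to_hidden(mu + std * torch.randn_like(std)) for _ in range(k)]
        return torch.stack(hiddens).mean(dim=0)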

2016

Intra-Sentential Subject Zero Anaphora Resolution using Multi-Column Convolutional Neural Network
Ryu Iida | Kentaro Torisawa | Jong-Hoon Oh | Canasai Kruengkrai | Julien Kloetzer
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

2015

Recognizing Complex Negation on Twitter
Junta Mizuno | Canasai Kruengkrai | Kiyonori Ohtake | Chikara Hashimoto | Kentaro Torisawa | Julien Kloetzer | Kentaro Inui
Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation

2014

An Example-Based Approach to Difficult Pronoun Resolution
Canasai Kruengkrai | Naoya Inoue | Jun Sugiura | Kentaro Inui
Proceedings of the 28th Pacific Asia Conference on Language, Information and Computation

2012

Semantic Relation Extraction from a Cultural Database
Canasai Kruengkrai | Virach Sornlertlamvanich | Watchira Buranasing | Thatsanee Charoenporn
Proceedings of the 3rd Workshop on South and Southeast Asian Natural Language Processing

2011

Simple Discriminative Training for Machine Transliteration
Canasai Kruengkrai | Thatsanee Charoenporn | Virach Sornlertlamvanich
Proceedings of the 3rd Named Entities Workshop (NEWS 2011)

2010

Adapting Chinese Word Segmentation for Machine Translation Based on Short Units
Yiou Wang | Kiyotaka Uchimoto | Jun’ichi Kazama | Canasai Kruengkrai | Kentaro Torisawa
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

In Chinese texts, words composed of single or multiple characters are not separated by spaces, unlike in most Western languages. Chinese word segmentation is therefore considered an important first step in machine translation (MT), and its performance impacts MT results. Many factors affect Chinese word segmentation, including the segmentation standard and the segmentation strategy. The performance of a corpus-based word segmentation model depends heavily on the quality and the segmentation standard of the training corpora. However, we observed that existing manually annotated Chinese corpora tend to have low segmentation granularity and provide poor morphological information under the present segmentation standards. In this paper, we introduce a short-unit standard for Chinese word segmentation that is particularly suitable for machine translation, and we propose a semi-automatic method for transforming existing corpora into ones that satisfy our standard. We evaluate the usefulness of our approach on translation tasks in the technology newswire and scientific paper domains, and demonstrate that it significantly improves Chinese-Japanese machine translation (an increase of over 1.0 BLEU).

2009

An Error-Driven Word-Character Hybrid Model for Joint Chinese Word Segmentation and POS Tagging
Canasai Kruengkrai | Kiyotaka Uchimoto | Jun’ichi Kazama | Yiou Wang | Kentaro Torisawa | Hitoshi Isahara
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP

2006

A Conditional Random Field Framework for Thai Morphological Analysis
Canasai Kruengkrai | Virach Sornlertlamvanich | Hitoshi Isahara
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

This paper presents a framework for Thai morphological analysis based on the theory of conditional random fields. We formulate morphological analysis of an unsegmented language as a sequential supervised learning problem. Given a sequence of characters, all possible word/tag segmentations are generated, and the optimal path is then selected according to some criterion. We examine two techniques: the Viterbi score and confidence estimation. Preliminary results show the feasibility of our proposed framework.
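A minimal sketch of the path-selection step under the Viterbi criterion, assuming edge-factored scores over a character-indexed lattice (tag-transition terms and the CRF feature details are omitted for brevity):

    from collections import defaultdict

    def best_segmentation(n_chars, edges, score):
        # edges: iterable of (start, end, word, tag) tuples over character
        # positions; score(edge) is a stand-in for the CRF-derived score.
        starts = defaultdict(list)
        for e in edges:
            starts[e[0]].append(e)
        best = {0: (0.0, None)}  # position -> (best score, back-pointer edge)
        for i in range(n_chars):
            if i not in best:
                continue
            for e in starts[i]:
                s = best[i][0] + score(e)
                if e[1] not in best or s > best[e[1]][0]:
                    best[e[1]] = (s, e)
        path, pos = [], n_chars  # follow back-pointers to recover the path
        while pos > 0:
            e = best[pos][1]
            path.append((e[2], e[3]))
            pos = e[0]
        return path[::-1]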

Word Knowledge Acquisition for Computational Lexicon Construction
Thatsanee Charoenporn | Canasai Kruengkrai | Thanaruk Theeramunkong | Virach Sornlertlamvanich | Hitoshi Isahara
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

The growth of multilingual information processing technology has created a need for linguistic resources, especially lexical databases. Many attempts have been made to turn traditional dictionaries into computational dictionaries, widely known as computational lexicons. TCL’s Computational Lexicon (TCLLEX) is a recently developed large-scale Thai lexicon that aims to serve as a fundamental linguistic resource for natural language processing research. We design the terminology and ontology for structuring the lexicon based on the ideas of computability and reusability.

2005

Analysis of an Iterative Algorithm for Term-Based Ontology Alignment
Shisanu Tongchim | Canasai Kruengkrai | Virach Sornlertlamvanich | Prapass Srichaivattana | Hitoshi Isahara
Second International Joint Conference on Natural Language Processing: Full Papers

2004

Enriching a Thai Lexical Database with Selectional Preferences
Canasai Kruengkrai | Thatsanee Charoenporn | Virach Sornlertlamvanich | Hitoshi Isahara
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

2003

A Practical Text Summarizer by Paragraph Extraction for Thai
Chuleerat Jaruskulchai | Canasai Kruengkrai
Proceedings of the Sixth International Workshop on Information Retrieval with Asian Languages