Liang-Chih Yu

Also published as: Liang-chih Yu


2024

pdf
Instruction Tuning with Retrieval-based Examples Ranking for Aspect-based Sentiment Analysis
Guangmin Zheng | Jin Wang | Liang-Chih Yu | Xuejie Zhang
Findings of the Association for Computational Linguistics: ACL 2024

Aspect-based sentiment analysis (ABSA) identifies sentiment information related to specific aspects and provides deeper market insights to businesses and organizations. With the emergence of large language models (LMs), recent studies have proposed using fixed examples for instruction tuning to reformulate ABSA as a generation task. However, performance is sensitive to the selection of in-context examples, and existing retrieval methods are based on surface similarity, independent of the LM's generative objective. This study proposes an instruction learning method with retrieval-based example ranking for ABSA tasks. For each target sample, an LM is applied as a scorer to estimate the likelihood of the output given the input and a candidate example as the prompt, and training examples are labeled as positive or negative by ranking these scores. An alternating training schema is proposed to train both the retriever and the LM, so that instructional prompts can be constructed from high-quality examples. The LM is used for both scoring and inference, improving generation efficiency without incurring additional computational costs or training difficulties. Extensive experiments on three ABSA subtasks verify the effectiveness of the proposed method, demonstrating its superiority over various strong baseline models. Code and data are released at https://github.com/zgMin/IT-RER-ABSA.
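
A minimal sketch of the scoring step described above: a causal LM rates each candidate in-context example by the likelihood it assigns to the target output, and candidates are ranked by that score. The function name score_candidate, the prompt template, and the use of gpt2 via Hugging Face Transformers are illustrative assumptions, not the authors' released code.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    @torch.no_grad()
    def score_candidate(example: str, task_input: str, target_output: str) -> float:
        """Log-likelihood of the target output given a candidate example + input."""
        prompt_ids = tokenizer(f"{example}\n{task_input}\n", return_tensors="pt").input_ids
        target_ids = tokenizer(target_output, return_tensors="pt").input_ids
        input_ids = torch.cat([prompt_ids, target_ids], dim=-1)
        labels = input_ids.clone()
        labels[:, : prompt_ids.size(-1)] = -100  # score only the target tokens
        return -model(input_ids, labels=labels).loss.item()  # mean log-likelihood

    # Rank training examples; the top-ranked become positives for the retriever.
    candidates = ["Review: great pasta -> (pasta, positive)",
                  "Review: slow service -> (service, negative)"]
    scores = [score_candidate(c, "Review: the battery dies fast",
                              "(battery, negative)") for c in candidates]
    ranked = sorted(zip(candidates, scores), key=lambda x: -x[1])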

pdf
Improving Personalized Sentiment Representation with Knowledge-enhanced and Parameter-efficient Layer Normalization
You Zhang | Jin Wang | Liang-Chih Yu | Dan Xu | Xuejie Zhang
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Existing studies on personalized sentiment classification consider a document review as an overall text unit and incorporate background (i.e., user and product) information to learn sentiment representations. However, these methods are difficult to combine with current pretrained language models (PLMs), owing to quadratic costs that increase with text length and the heterogeneous mix of randomly initialized background information with textual information initialized from well-pretrained checkpoints. To address these problems, we propose a knowledge-enhanced and parameter-efficient layer normalization (E2LN) for efficient and effective review modeling, leveraging LN in transformer structures. Initially, a knowledge base is introduced that stores well-pretrained checkpoints, structured text information, and background information. Based on this knowledge base, the ability of LN, a crucial component of the transformer structure, can be magnified to improve the performance of PLMs in downstream tasks. Moreover, the proposed E2LN enables PLMs to model long document reviews and incorporate background information with parameter-efficient fine-tuning and knowledge injection. Extensive experimental results were obtained on three document-level sentiment classification benchmark datasets. The comparative results demonstrate the effectiveness and efficiency of the proposed model. Code and data are released at https://github.com/yoyo-yun/E2LN.
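
As a rough illustration of magnifying LN with background knowledge, the sketch below conditions the layer normalization gain and bias on a user/product embedding, keeping only two small projections trainable; the module name and design are my assumptions, not the released E2LN implementation.

    import torch
    import torch.nn as nn

    class BackgroundLayerNorm(nn.Module):
        def __init__(self, hidden: int, bg_dim: int, eps: float = 1e-5):
            super().__init__()
            self.eps = eps
            self.weight = nn.Parameter(torch.ones(hidden))  # pretrained LN gain
            self.bias = nn.Parameter(torch.zeros(hidden))   # pretrained LN bias
            # The only new parameters: zero-initialized so training starts
            # from the behavior of the original pretrained LN.
            self.gain_proj = nn.Linear(bg_dim, hidden)
            self.bias_proj = nn.Linear(bg_dim, hidden)
            for proj in (self.gain_proj, self.bias_proj):
                nn.init.zeros_(proj.weight)
                nn.init.zeros_(proj.bias)

        def forward(self, x: torch.Tensor, bg: torch.Tensor) -> torch.Tensor:
            # x: (batch, seq, hidden); bg: (batch, bg_dim) background embedding
            mu = x.mean(-1, keepdim=True)
            var = x.var(-1, keepdim=True, unbiased=False)
            x_hat = (x - mu) / torch.sqrt(var + self.eps)
            gain = self.weight + self.gain_proj(bg).unsqueeze(1)
            bias = self.bias + self.bias_proj(bg).unsqueeze(1)
            return gain * x_hat + bias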

pdf
SoftMCL: Soft Momentum Contrastive Learning for Fine-grained Sentiment-aware Pre-training
Jin Wang | Liang-Chih Yu | Xuejie Zhang
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Pre-training for language models captures general language understanding but fails to distinguish the affective impact of a particular context on a specific word. Recent works have sought to introduce contrastive learning (CL) into sentiment-aware pre-training to acquire affective information. Nevertheless, these methods present two significant limitations. First, GPU memory capacity often limits the number of negative samples, hindering the opportunity to learn good representations. In addition, using only a few sentiment polarities as hard labels, e.g., positive, neutral, and negative, to supervise CL forces all representations to converge to a few points, leading to latent space collapse. This study proposes soft momentum contrastive learning (SoftMCL) for fine-grained sentiment-aware pre-training. Instead of hard labels, we introduce valence ratings as soft-label supervision for CL to measure fine-grained sentiment similarities between samples. The proposed SoftMCL conducts CL at both the word and sentence levels to enhance the model's ability to learn affective information. A momentum queue is introduced to expand the pool of contrastive samples, allowing more negatives to be stored and used, overcoming the memory limitations of hardware platforms. Extensive experiments were conducted on four different sentiment-related tasks, demonstrating the effectiveness of the proposed SoftMCL method. The code and data of the proposed SoftMCL are available at: https://www.github.com/wangjin0818/SoftMCL/.
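
The sketch below illustrates, under simplifying assumptions of my own, the two ingredients named above: a contrastive loss supervised by soft targets derived from valence ratings, and a fixed-size momentum queue of negatives. It is not the authors' exact formulation.

    import torch
    import torch.nn.functional as F

    def soft_contrastive_loss(query, queue_feats, q_val, queue_val, tau=0.07, sigma=1.0):
        """query: (B, D) normalized features; queue_feats: (K, D);
        q_val: (B,) and queue_val: (K,) valence ratings, e.g. on a 1-9 scale."""
        sim = query @ queue_feats.t() / tau              # (B, K) scaled similarities
        # Soft targets: closer valence ratings receive higher target weight.
        val_dist = (q_val.unsqueeze(1) - queue_val.unsqueeze(0)).abs()
        targets = F.softmax(-val_dist / sigma, dim=1)
        return -(targets * F.log_softmax(sim, dim=1)).sum(1).mean()

    class MomentumQueue:
        """FIFO queue of past features and their valences, so the number of
        negatives is decoupled from the batch size."""
        def __init__(self, dim, size=4096):
            self.feats = F.normalize(torch.randn(size, dim), dim=1)
            self.vals = torch.full((size,), 5.0)
            self.ptr = 0

        @torch.no_grad()
        def enqueue(self, feats, vals):
            idx = torch.arange(self.ptr, self.ptr + feats.size(0)) % self.feats.size(0)
            self.feats[idx] = F.normalize(feats, dim=1)
            self.vals[idx] = vals
            self.ptr = (self.ptr + feats.size(0)) % self.feats.size(0)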

pdf
Overview of the SIGHAN 2024 shared task for Chinese dimensional aspect-based sentiment analysis
Lung-Hao Lee | Liang-Chih Yu | Suge Wang | Jian Liao
Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)

This paper describes the SIGHAN-2024 shared task for Chinese dimensional aspect-based sentiment analysis (ABSA), including the task description, data preparation, performance metrics, and evaluation results. Compared to representing affective states as several discrete classes (i.e., sentiment polarity), the dimensional approach represents affective states as continuous numerical values (called sentiment intensity) in the valence-arousal space, providing more fine-grained affective states. We therefore organized a dimensional ABSA (dimABSA for short) shared task comprising three subtasks: 1) intensity prediction, 2) triplet extraction, and 3) quadruple extraction, which received a total of 214 submissions from 61 registered participants during the evaluation phase. Eleven teams provided selected submissions for each subtask, and seven teams submitted technical reports. This shared task demonstrates current NLP techniques for dealing with Chinese dimensional ABSA. All data sets with gold standards and evaluation scripts used in this shared task are publicly available for future research.

2023

pdf
Domain Generalization via Switch Knowledge Distillation for Robust Review Representation
You Zhang | Jin Wang | Liang-Chih Yu | Dan Xu | Xuejie Zhang
Findings of the Association for Computational Linguistics: ACL 2023

Applying neural models injected with in-domain user and product information to learn review representations of unseen or anonymous users poses an obvious obstacle in content-based recommender systems. To generalize the in-domain classifier, most existing models train an extra plain-text model for the unseen domain. Without incorporating historical user and product information, such a schema dissociates unseen and anonymous users from the recommender system. To simultaneously learn review representations of both existing and unseen users, this study proposes switch knowledge distillation for domain generalization. A generalization-switch (GSwitch) model is first applied to inject user and product information by flexibly encoding both domain-invariant and domain-specific features. By turning its status ON or OFF, the model uses switch knowledge distillation to learn a robust review representation that performs well for either existing or anonymous unseen users. Empirical experiments were conducted on IMDB, Yelp-2013, and Yelp-2014 by masking out users in the test data as unseen and anonymous users. The comparative results indicate that the proposed method enhances the generalization capability of several existing baseline models. For reproducibility, the code for this paper is available at: https://github.com/yoyo-yun/DG_RRR.
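
A brief sketch of the distillation signal implied above, assuming the ON view (user and product information injected) teaches the OFF view (plain text) so that the latter stays robust for unseen users; the function and its temperature are illustrative, not the released GSwitch code.

    import torch.nn.functional as F

    def switch_distill_loss(logits_on, logits_off, T=2.0):
        """KL divergence from the ON view (teacher, detached) to the OFF view
        (student), with standard temperature scaling."""
        p_teacher = F.softmax(logits_on.detach() / T, dim=-1)
        log_p_student = F.log_softmax(logits_off / T, dim=-1)
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T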

2022

pdf
Dual-Encoder Transformers with Cross-modal Alignment for Multimodal Aspect-based Sentiment Analysis
Zhewen Yu | Jin Wang | Liang-Chih Yu | Xuejie Zhang
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Multimodal aspect-based sentiment analysis (MABSA) aims to extract the aspect terms from text and image pairs and then analyze their corresponding sentiment. Recent studies typically use either a pipeline method or a unified transformer based on a cross-attention mechanism. However, these methods fail to explicitly and effectively incorporate the alignment between text and image, and supervised fine-tuning of universal transformers for MABSA still requires a certain number of aligned image-text pairs. This study proposes a dual-encoder transformer with cross-modal alignment (DTCA). Two auxiliary tasks, text-only extraction and text-patch alignment, are introduced to enhance cross-attention performance. To align text and image, we propose an unsupervised approach that minimizes the Wasserstein distance between the two modalities, forcing both encoders to produce more appropriate representations for the final extraction. Experimental results on two benchmarks demonstrate that DTCA consistently outperforms existing methods.
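
For the unsupervised alignment step, a standard entropic-regularized Sinkhorn iteration can approximate the Wasserstein distance between text-token and image-patch features; the sketch below follows that generic recipe, and the paper's exact cost function and solver may differ.

    import torch

    def sinkhorn_wasserstein(text_feats, img_feats, eps=0.1, n_iter=50):
        """text_feats: (n, d); img_feats: (m, d). Returns an approximate OT cost
        between uniform distributions over tokens and patches."""
        cost = torch.cdist(text_feats, img_feats, p=2)   # (n, m) pairwise cost
        n, m = cost.shape
        mu = torch.full((n,), 1.0 / n)
        nu = torch.full((m,), 1.0 / m)
        K = torch.exp(-cost / eps)                       # Gibbs kernel
        u = torch.ones(n)
        for _ in range(n_iter):                          # Sinkhorn scaling
            v = nu / (K.t() @ u)
            u = mu / (K @ v)
        plan = torch.diag(u) @ K @ torch.diag(v)         # transport plan
        return (plan * cost).sum()                       # differentiable loss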

pdf
Accelerating Inference for Pretrained Language Models by Unified Multi-Perspective Early Exiting
Jun Kong | Jin Wang | Liang-Chih Yu | Xuejie Zhang
Proceedings of the 29th International Conference on Computational Linguistics

Conditional computation algorithms, such as early exiting (EE), can accelerate the inference of pretrained language models (PLMs) while maintaining competitive performance on resource-constrained devices. However, previous approaches apply EE only along the vertical architecture, deciding which layers should be used for inference; the horizontal perspective is ignored, and the models fail to determine which tokens in each layer should participate in the computation, leading to high redundancy in adaptive inference. To address this limitation, a unified horizontal and vertical multi-perspective early exiting (MPEE) framework is proposed in this study to accelerate the inference of transformer-based models. Specifically, the vertical architecture uses recycled EE classifier memory and weighted self-distillation to enhance the performance of the EE classifiers. The horizontal perspective then uses recycled class attention memory to emphasize informative tokens, while tokens carrying less information are truncated by weighted fusion and isolated from subsequent computation. On this basis, both horizontal and vertical EE are unified to obtain a better tradeoff between performance and efficiency. Extensive experimental results show that MPEE achieves greater inference acceleration with competitive performance compared with existing methods.
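
The sketch below shows only the vertical, entropy-based early-exit loop with hypothetical layer and classifier modules; MPEE's horizontal token truncation and memory recycling are omitted.

    import torch.nn.functional as F

    def early_exit_forward(layers, exit_heads, hidden, threshold=0.3):
        """layers / exit_heads: lists of nn.Modules; hidden: (batch, seq, dim)."""
        for layer, head in zip(layers, exit_heads):
            hidden = layer(hidden)
            logits = head(hidden[:, 0])          # classify from the [CLS] state
            probs = F.softmax(logits, dim=-1)
            entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1)
            if entropy.max() < threshold:        # confident enough: exit early
                return logits
        return logits                            # otherwise use all layers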

pdf
Knowledge Distillation with Reptile Meta-Learning for Pretrained Language Model Compression
Xinge Ma | Jin Wang | Liang-Chih Yu | Xuejie Zhang
Proceedings of the 29th International Conference on Computational Linguistics

The billions, and sometimes even trillions, of parameters involved in pre-trained language models significantly hamper their deployment in resource-constrained devices and real-time applications. Knowledge distillation (KD) can transfer knowledge from the original model (i.e., teacher) into a compact model (i.e., student) to achieve model compression. However, previous KD methods usually freeze the teacher and apply its immutable output feature maps as soft labels to guide the student's training. Moreover, the teacher's goal is to achieve the best performance on downstream tasks rather than to transfer knowledge. Such a fixed architecture may limit the teacher's teaching ability and the student's learning ability. Herein, a knowledge distillation method with Reptile meta-learning is proposed to facilitate the transfer of knowledge from the teacher to the student. The teacher continuously meta-learns the student's learning objective and adjusts its parameters to maximize the student's performance throughout the distillation process. In this way, the teacher learns to teach, produces more suitable soft labels, and transfers more appropriate knowledge to the student, resulting in improved performance. Unlike previous KD methods using meta-learning, the proposed method needs only first-order derivatives to update the teacher, leading to lower computational cost and better convergence. Extensive experiments on the GLUE benchmark show the competitive performance achieved by the proposed method. For reproducibility, the code for this paper is available at: https://github.com/maxinge8698/ReptileDistil.
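
A first-order (Reptile-style) teacher update might look like the sketch below: the teacher takes a few inner steps on a student-aware loss, then moves its own weights a fraction of the way toward the adapted copy. Names and hyperparameters are assumptions, not the released code.

    import copy
    import torch

    def reptile_teacher_update(teacher, inner_loss_fn, k=3, inner_lr=1e-5, meta_lr=0.1):
        """inner_loss_fn(model) should return a loss reflecting how well the
        student learns from this teacher (e.g., the student's KD loss)."""
        fast = copy.deepcopy(teacher)                  # temporary fast weights
        opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(k):                             # inner adaptation steps
            loss = inner_loss_fn(fast)
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():                          # Reptile meta step:
            for p, q in zip(teacher.parameters(), fast.parameters()):
                p += meta_lr * (q - p)                 # first-order only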

pdf
Overview of the ROCLING 2022 Shared Task for Chinese Healthcare Named Entity Recognition
Lung-Hao Lee | Chao-Yi Chen | Liang-Chih Yu | Yuen-Hsien Tseng
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)

This paper describes the ROCLING-2022 shared task for Chinese healthcare named entity recognition, including the task description, data preparation, performance metrics, and evaluation results. Among ten registered teams, seven participating teams submitted a total of 20 runs. This shared task reveals current NLP techniques for dealing with Chinese named entity recognition in the healthcare domain. All data sets with gold standards and evaluation scripts used in this shared task are publicly available for future research.

2021

pdf
ROCLING-2021 Shared Task: Dimensional Sentiment Analysis for Educational Texts
Liang-Chih Yu | Jin Wang | Bo Peng | Chu-Ren Huang
Proceedings of the 33rd Conference on Computational Linguistics and Speech Processing (ROCLING 2021)

This paper presents the ROCLING 2021 shared task on dimensional sentiment analysis for educational texts, which seeks to identify a real-valued sentiment score for self-evaluation comments written by Chinese students in both the valence and arousal dimensions. Valence represents the degree of pleasant and unpleasant (or positive and negative) feelings, and arousal represents the degree of excitement and calm. Of the 7 teams registered for this shared task on two-dimensional sentiment analysis, 6 submitted results. We expected that this evaluation campaign could produce more advanced dimensional sentiment analysis techniques for the educational domain. All data sets with gold standards and the scoring script are made publicly available to researchers.

pdf
MA-BERT: Learning Representation by Incorporating Multi-Attribute Knowledge in Transformers
You Zhang | Jin Wang | Liang-Chih Yu | Xuejie Zhang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

pdf
An Adaptive Method for Building a Chinese Dimensional Sentiment Lexicon
Ying-Lung Lin | Liang-Chih Yu
Proceedings of the 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020)

pdf
Sentiment Analysis for Investment Atmosphere Scoring
Chih-Hsiang Peng | Liang-Chih Yu
Proceedings of the 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020)

pdf
Scientific Writing Evaluation Using Ensemble Multi-channel Neural Networks
Yuh-Shyang Wang | Lung-Hao Lee | Bo-Lin Lin | Liang-Chih Yu
Proceedings of the 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020)

pdf
Graph Attention Network with Memory Fusion for Aspect-level Sentiment Analysis
Li Yuan | Jin Wang | Liang-Chih Yu | Xuejie Zhang
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing

Aspect-level sentiment analysis (ASC) predicts the sentiment polarity of each specific aspect term in a given text or review. Recent studies have used attention-based methods to effectively improve the performance of aspect-level sentiment analysis. However, these methods ignore the syntactic relationship between an aspect and its corresponding context words, leading the model to mistakenly focus on syntactically unrelated words. One proposed solution, the graph convolutional network (GCN), cannot completely avoid the problem: while it incorporates useful information about syntax, it assigns equal weight to all edges between connected words, so it may still incorrectly associate unrelated words with the target aspect through the iterations of graph convolutional propagation. In this study, a graph attention network with memory fusion is proposed to extend the GCN by assigning different weights to edges. Syntactic constraints can be imposed to block the graph convolutional propagation of unrelated words. A convolutional layer and memory fusion are applied to learn and exploit multiword relations and assign different weights to words to further improve performance. Experimental results on five datasets show that the proposed method yields better performance than existing methods.
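
As a minimal sketch of edge-weighted attention over a dependency graph: unlike a GCN's uniform edge weights, each connected word pair receives a learned weight and non-edges are masked out. The module is illustrative (memory fusion is not shown), and the adjacency matrix is assumed to include self-loops.

    import torch
    import torch.nn as nn

    class SyntaxGraphAttention(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.q = nn.Linear(dim, dim)
            self.k = nn.Linear(dim, dim)

        def forward(self, h, adj):
            """h: (batch, n, dim) word states; adj: (batch, n, n) dependency
            edges with self-loops (1 = connected, 0 = no edge)."""
            scores = self.q(h) @ self.k(h).transpose(1, 2) / h.size(-1) ** 0.5
            scores = scores.masked_fill(adj == 0, float("-inf"))  # block non-edges
            attn = torch.softmax(scores, dim=-1)   # learned per-edge weights
            return attn @ h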

2019

pdf
Investigating Dynamic Routing in Tree-Structured LSTM for Sentiment Analysis
Jin Wang | Liang-Chih Yu | K. Robert Lai | Xuejie Zhang
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Deep neural network models such as long short-term memory (LSTM) and tree-LSTM have proven effective for sentiment analysis. However, a sequential LSTM is a biased model in which the words at the tail of a sentence are more heavily emphasized than those at the head when building sentence representations. Even tree-LSTM, with its useful structural information, cannot avoid this bias problem, because the root node dominates while nodes at the bottom of the parse tree are less emphasized, even though they may contain salient information. To overcome the bias problem, this study proposes a capsule tree-LSTM model that introduces a dynamic routing algorithm as an aggregation layer to build the sentence representation, assigning different weights to nodes according to their contributions to the prediction. Experiments on the Stanford Sentiment Treebank (SST) for sentiment classification and EmoBank for regression show that the proposed method improves the performance of tree-LSTM and other neural network models. In addition, the deeper the tree structure, the greater the improvement.
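
A minimal sketch of dynamic routing as an aggregation layer, under my own simplifications: tree-LSTM node states are combined with iteratively refined coupling weights, so nodes that agree with the emerging sentence vector contribute more.

    import torch

    def squash(v, dim=-1, eps=1e-9):
        n2 = (v * v).sum(dim, keepdim=True)
        return (n2 / (1 + n2)) * v / torch.sqrt(n2 + eps)

    def dynamic_routing(nodes, n_iter=3):
        """nodes: (n, dim) tree-LSTM node states -> (dim,) sentence vector."""
        b = torch.zeros(nodes.size(0))                   # routing logits
        for _ in range(n_iter):
            c = torch.softmax(b, dim=0)                  # coupling coefficients
            s = squash((c.unsqueeze(1) * nodes).sum(0))  # weighted sum + squash
            b = b + nodes @ s                            # agreement update
        return s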

2017

pdf
YZU-NLP at EmoInt-2017: Determining Emotion Intensity Using a Bi-directional LSTM-CNN Model
Yuanye He | Liang-Chih Yu | K. Robert Lai | Weiyi Liu
Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis

The EmoInt-2017 task aims to determine a continuous numerical value representing the intensity with which an emotion is expressed in a tweet. Compared to classification tasks that identify one among n emotions for a tweet, this task provides more fine-grained (real-valued) sentiment analysis. This paper presents a system that uses a bi-directional LSTM-CNN model to complete the competition task. By combining a bi-directional LSTM and a CNN, the prediction process considers both global information in a tweet and locally important information. The proposed method ranked sixth among twenty-one teams in terms of Pearson correlation coefficient.
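
A compact BiLSTM-CNN regressor of the kind described might look like the sketch below: a bi-directional LSTM captures global context, a convolution captures local n-gram features, and a linear head outputs the intensity score. The hyperparameters are assumptions, not the system's actual configuration.

    import torch
    import torch.nn as nn

    class BiLSTMCNN(nn.Module):
        def __init__(self, vocab_size, emb=300, hid=128, kernel=3, channels=64):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb)
            self.lstm = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
            self.conv = nn.Conv1d(2 * hid, channels, kernel, padding=kernel // 2)
            self.out = nn.Linear(channels, 1)            # real-valued intensity

        def forward(self, ids):
            h, _ = self.lstm(self.emb(ids))              # (B, T, 2*hid): global
            c = torch.relu(self.conv(h.transpose(1, 2))) # (B, ch, T): local
            return self.out(c.max(dim=2).values)         # max-pool over time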

pdf bib
Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA 2017)
Yuen-Hsien Tseng | Hsin-Hsi Chen | Lung-Hao Lee | Liang-Chih Yu
Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA 2017)

pdf
Refining Word Embeddings for Sentiment Analysis
Liang-Chih Yu | Jin Wang | K. Robert Lai | Xuejie Zhang
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

Word embeddings that can capture semantic and syntactic information from contexts have been extensively used for various natural language processing tasks. However, existing methods for learning context-based word embeddings typically fail to capture sufficient sentiment information. This may result in words with similar vector representations having an opposite sentiment polarity (e.g., good and bad), thus degrading sentiment analysis performance. Therefore, this study proposes a word vector refinement model that can be applied to any pre-trained word vectors (e.g., Word2vec and GloVe). The refinement model is based on adjusting the vector representations of words such that they can be closer to both semantically and sentimentally similar words and further away from sentimentally dissimilar words. Experimental results show that the proposed method can improve conventional word embeddings and outperform previously proposed sentiment embeddings for both binary and fine-grained classification on Stanford Sentiment Treebank (SST).
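
As a loose sketch of the refinement idea, assuming each word comes with a ranked list of sentimentally similar neighbors (nearest semantic neighbors re-ranked by sentiment scores): each vector is nudged toward its neighbors while staying close to its pre-trained position. The rank weighting and the mixing parameter are my assumptions, not the paper's exact objective.

    import numpy as np

    def refine(vectors, neighbors, alpha=0.5, n_iter=10):
        """vectors: dict word -> np.ndarray; neighbors: dict word -> list of
        similar words, most sentimentally similar first."""
        for _ in range(n_iter):
            updated = {}
            for w, v in vectors.items():
                nbrs = neighbors.get(w, [])
                if not nbrs:
                    updated[w] = v
                    continue
                # Higher-ranked (more similar) neighbors pull more strongly.
                weights = np.array([1.0 / (r + 1) for r in range(len(nbrs))])
                target = sum(wt * vectors[n] for wt, n in zip(weights, nbrs))
                updated[w] = (1 - alpha) * v + alpha * target / weights.sum()
            vectors = updated
        return vectors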

pdf
應用詞向量於語言樣式探勘之研究 (Mining Language Patterns Using Word Embeddings) [In Chinese]
Xiang Xiao | Shao-Zhen Ye | Liang-Chih Yu | K. Robert Lai
Proceedings of the 29th Conference on Computational Linguistics and Speech Processing (ROCLING 2017)

pdf bib
IJCNLP-2017 Task 2: Dimensional Sentiment Analysis for Chinese Phrases
Liang-Chih Yu | Lung-Hao Lee | Jin Wang | Kam-Fai Wong
Proceedings of the IJCNLP 2017, Shared Tasks

This paper presents the IJCNLP 2017 shared task on Dimensional Sentiment Analysis for Chinese Phrases (DSAP), which seeks to identify a real-valued sentiment score for Chinese single words and multi-word phrases in both the valence and arousal dimensions. Valence represents the degree of pleasant and unpleasant (or positive and negative) feelings, and arousal represents the degree of excitement and calm. Of the 19 teams registered for this shared task on two-dimensional sentiment analysis, 13 submitted results. We expected that this evaluation campaign could produce more advanced dimensional sentiment analysis techniques, especially for Chinese affective computing. All data sets with gold standards and the scoring script are made publicly available to researchers.

pdf
SentiNLP at IJCNLP-2017 Task 4: Customer Feedback Analysis Using a Bi-LSTM-CNN Model
Shuying Lin | Huosheng Xie | Liang-Chih Yu | K. Robert Lai
Proceedings of the IJCNLP 2017, Shared Tasks

The analysis of customer feedback is useful for providing good customer service. A large volume of online customer feedback is produced, making manual classification impractical. Therefore, automatic classification of customer feedback is important for an analysis system to identify the meanings or intentions that customers express. The aim of Shared Task 4 of IJCNLP 2017 is to classify customer feedback into six tag categories. In this paper, we present a system that uses word embeddings to represent the features of sentences in the corpus and a neural network as the classifier to complete the shared task. An ensemble method is then used to obtain the final prediction. The proposed method ranked first among twelve teams in terms of micro-averaged F1 and second in terms of the accuracy metric.

2016

pdf
The NTNU-YZU System in the AESW Shared Task: Automated Evaluation of Scientific Writing Using a Convolutional Neural Network
Lung-Hao Lee | Bo-Lin Lin | Liang-Chih Yu | Yuen-Hsien Tseng
Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications

pdf
Overview of NLP-TEA 2016 Shared Task for Chinese Grammatical Error Diagnosis
Lung-Hao Lee | Gaoqi Rao | Liang-Chih Yu | Endong Xun | Baolin Zhang | Li-Ping Chang
Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA2016)

This paper presents the NLP-TEA 2016 shared task for Chinese grammatical error diagnosis, which seeks to identify grammatical error types and their ranges of occurrence within sentences written by learners of Chinese as a foreign language. We describe the task definition, data preparation, performance metrics, and evaluation results. Of the 15 teams registered for this shared task, 9 teams developed systems and submitted a total of 36 runs. We expected that this evaluation campaign could lead to the development of more advanced NLP techniques for educational applications, especially for Chinese error detection. All data sets with gold standards and scoring scripts are made publicly available to researchers.

pdf
Dimensional Sentiment Analysis Using a Regional CNN-LSTM Model
Jin Wang | Liang-Chih Yu | K. Robert Lai | Xuejie Zhang
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

pdf
YZU-NLP Team at SemEval-2016 Task 4: Ordinal Sentiment Classification Using a Recurrent Convolutional Network
Yunchao He | Liang-Chih Yu | Chin-Sheng Yang | K. Robert Lai | Weiyi Liu
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)

pdf
Building Chinese Affective Resources in Valence-Arousal Dimensions
Liang-Chih Yu | Lung-Hao Lee | Shuai Hao | Jin Wang | Yunchao He | Jun Hu | K. Robert Lai | Xuejie Zhang
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2015

pdf bib
Proceedings of the Eighth SIGHAN Workshop on Chinese Language Processing
Liang-Chih Yu | Zhifang Sui | Yue Zhang | Vincent Ng
Proceedings of the Eighth SIGHAN Workshop on Chinese Language Processing

pdf bib
Overview of the NLP-TEA 2015 Shared Task for Chinese Grammatical Error Diagnosis
Lung-Hao Lee | Liang-Chih Yu | Li-Ping Chang
Proceedings of the 2nd Workshop on Natural Language Processing Techniques for Educational Applications

pdf bib
International Journal of Computational Linguistics & Chinese Language Processing, Volume 20, Number 1, June 2015-Special Issue on Chinese as a Foreign Language
Lung-Hao Lee | Liang-Chih Yu | Li-Ping Chang
International Journal of Computational Linguistics & Chinese Language Processing, Volume 20, Number 1, June 2015-Special Issue on Chinese as a Foreign Language

pdf bib
Guest Editorial: Special Issue on Chinese as a Foreign Language
Lung-Hao Lee | Liang-Chih Yu | Li-Ping Chang
International Journal of Computational Linguistics & Chinese Language Processing, Volume 20, Number 1, June 2015-Special Issue on Chinese as a Foreign Language

pdf
Predicting Valence-Arousal Ratings of Words Using a Weighted Graph Method
Liang-Chih Yu | Jin Wang | K. Robert Lai | Xue-jie Zhang
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

2014

pdf
Identifying Emotion Labels from Psychiatric Social Texts Using Independent Component Analysis
Liang-Chih Yu | Chun-Yuan Ho
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

pdf
A Sentence Judgment System for Grammatical Error Detection
Lung-Hao Lee | Liang-Chih Yu | Kuei-Ching Lee | Yuen-Hsien Tseng | Li-Ping Chang | Hsin-Hsi Chen
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: System Demonstrations

pdf
Overview of SIGHAN 2014 Bake-off for Chinese Spelling Check
Liang-Chih Yu | Lung-Hao Lee | Yuen-Hsien Tseng | Hsin-Hsi Chen
Proceedings of the Third CIPS-SIGHAN Joint Conference on Chinese Language Processing

2013

pdf bib
Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing
Liang-Chih Yu | Yuen-Hsien Tseng | Jingbo Zhu | Fuji Ren
Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing

pdf
Candidate Scoring Using Web-Based Measure for Chinese Spelling Error Correction
Liang-Chih Yu | Chao-Hong Liu | Chung-Hsien Wu
Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing

2012

pdf
A Language Modeling Approach to Identifying Code-Switched Sentences and Words
Liang-Chih Yu | Wei-Cheng He | Wei-Nan Chien
Proceedings of the Second CIPS-SIGHAN Joint Conference on Chinese Language Processing

pdf
Traditional Chinese Parsing Evaluation at SIGHAN Bake-offs 2012
Yuen-Hsien Tseng | Lung-Hao Lee | Liang-Chih Yu
Proceedings of the Second CIPS-SIGHAN Joint Conference on Chinese Language Processing

pdf bib
Proceedings of the 24th Conference on Computational Linguistics and Speech Processing (ROCLING 2012)
Richard Tzong-Han Tsai | Liang-Chih Yu
Proceedings of the 24th Conference on Computational Linguistics and Speech Processing (ROCLING 2012)

pdf
應用跳脫語言模型於同義詞取代之研究 (Skip N-gram Modeling for Near-Synonym Choice) [In Chinese]
Shih-Ting Chen | Wei-Cheng He | Philips Kokoh Prasetyo | Liang-Chih Yu
Proceedings of the 24th Conference on Computational Linguistics and Speech Processing (ROCLING 2012)

bib
International Journal of Computational Linguistics & Chinese Language Processing, Volume 17, Number 2, June 2012—Special Issue on Selected Papers from ROCLING XXIII
Liang-Chih Yu | Wei-Ho Tsai
International Journal of Computational Linguistics & Chinese Language Processing, Volume 17, Number 2, June 2012—Special Issue on Selected Papers from ROCLING XXIII

bib
International Journal of Computational Linguistics & Chinese Language Processing, Volume 17, Number 4, December 2012-Special Issue on Selected Papers from ROCLING XXIV
Liang-Chih Yu | Richard Tzong-Han Tsai | Chia-Ping Chen | Cheng-Zen Yang | Shu-Kai Hsieh
International Journal of Computational Linguistics & Chinese Language Processing, Volume 17, Number 4, December 2012-Special Issue on Selected Papers from ROCLING XXIV

pdf
Developing and Evaluating a Computer-Assisted Near-Synonym Learning System
Liang-Chih Yu | Kai-Hsiang Hsu
Proceedings of COLING 2012: Demonstration Papers

2011

pdf
A Baseline System for Chinese Near-Synonym Choice
Liang-Chih Yu | Wei-Nan Chien | Shih-Ting Chen
Proceedings of 5th International Joint Conference on Natural Language Processing

pdf bib
Proceedings of the 23rd Conference on Computational Linguistics and Speech Processing (ROCLING 2011)
Wei-Ho Tsai | Liang-Chih Yu
Proceedings of the 23rd Conference on Computational Linguistics and Speech Processing (ROCLING 2011)

pdf
多語語碼轉換之未知詞擷取 (Unknown Word Extraction from Multilingual Code-Switching Sentences) [In Chinese]
Yi-Lun Wu | Chaio-Wen Hsieh | Wei-Hsuan Lin | Chun-Yi Liu | Liang-Chih Yu
ROCLING 2011 Poster Papers

2010

pdf bib
Word Sense Disambiguation Using Multiple Contextual Features
Liang-Chih Yu | Chung-Hsien Wu | Jui-Feng Yeh
International Journal of Computational Linguistics & Chinese Language Processing, Volume 15, Number 3-4, September/December 2010

pdf
Discriminative Training for Near-Synonym Substitution
Liang-Chih Yu | Hsiu-Min Shih | Yu-Ling Lai | Jui-Feng Yeh | Chung-Hsien Wu
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

2009

pdf
Mining Association Language Patterns for Negative Life Event Classification
Liang-Chih Yu | Chien-Lung Chan | Chung-Hsien Wu | Chao-Cheng Lin
Proceedings of the ACL-IJCNLP 2009 Conference Short Papers

2008

pdf bib
Corpus Cleanup of Mistaken Agreement Using Word Sense Disambiguation
Liang-Chih Yu | Chung-Hsien Wu | Jui-Feng Yeh | Eduard Hovy
International Journal of Computational Linguistics & Chinese Language Processing, Volume 13, Number 4, December 2008

pdf
OntoNotes: Corpus Cleanup of Mistaken Agreement Using Word Sense Disambiguation
Liang-Chih Yu | Chung-Hsien Wu | Eduard Hovy
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

2007

pdf
Topic Analysis for Psychiatric Document Retrieval
Liang-Chih Yu | Chung-Hsien Wu | Chin-Yew Lin | Eduard Hovy | Chia-Ling Lin
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

2006

pdf
HAL-Based Cascaded Model for Variable-Length Semantic Pattern Induction from Psychiatry Web Resources
Liang-Chih Yu | Chung-Hsien Wu | Fong-Lin Jang
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions

2005

pdf
Automated Alignment and Extraction of a Bilingual Ontology for Cross-Language Domain-Specific Applications
Jui-Feng Yeh | Chung-Hsien Wu | Ming-Jun Chen | Liang-Chih Yu
International Journal of Computational Linguistics & Chinese Language Processing, Volume 10, Number 1, March 2005

2004

pdf
Automated Alignment and Extraction of Bilingual Domain Ontology for Medical Domain Web Search
Jui-Feng Yeh | Chung-Hsien Wu | Ming-Jun Chen | Liang-chih Yu
Proceedings of the Third SIGHAN Workshop on Chinese Language Processing

pdf
Automated Alignment and Extraction of Bilingual Domain Ontology for Cross-Language Domain-Specific Applications
Jui-Feng Yeh | Chung-Hsien Wu | Ming-Jun Chen | Liang-Chih Yu
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics