Shih-Hung Liu


2020

基於端對端模型化技術之語音文件摘要 (Spoken Document Summarization Using End-to-End Modeling Techniques) [In Chinese]
Tzu-En Liu | Shih-Hung Liu | Kuo-Wei Chang | Berlin Chen
International Journal of Computational Linguistics & Chinese Language Processing, Volume 25, Number 1, June 2020

2017

當代非監督式方法之比較於節錄式語音摘要 (An Empirical Comparison of Contemporary Unsupervised Approaches for Extractive Speech Summarization) [In Chinese]
Shih-Hung Liu | Kuan-Yu Chen | Kai-Wun Shih | Berlin Chen | Hsin-Min Wang | Wen-Lian Hsu
International Journal of Computational Linguistics & Chinese Language Processing, Volume 22, Number 1, June 2017

2016

Learning to Distill: The Essence Vector Modeling Framework
Kuan-Yu Chen | Shih-Hung Liu | Berlin Chen | Hsin-Min Wang
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

In the context of natural language processing, representation learning has emerged as an active research subject because of its excellent performance in many applications. Learning representations of words is a pioneering study in this line of research. However, paragraph (or sentence and document) embedding learning is more suitable for some tasks, such as sentiment classification and document summarization. Nevertheless, as far as we are aware, there has been relatively little research on unsupervised paragraph embedding methods. Classic paragraph embedding methods infer the representation of a given paragraph by considering all of the words occurring in the paragraph. Consequently, stop or function words that occur frequently may mislead the embedding learning process and produce a blurred, uninformative paragraph representation. Motivated by these observations, our major contributions are twofold. First, we propose a novel unsupervised paragraph embedding method, named the essence vector (EV) model, which aims not only to distill the most representative information from a paragraph but also to exclude the general background information, so as to produce a more informative low-dimensional vector representation for the paragraph. We evaluate the proposed EV model on benchmark sentiment classification and multi-document summarization tasks. The experimental results demonstrate the effectiveness and applicability of the proposed embedding method. Second, in view of the increasing importance of spoken content processing, an extension of the EV model, named the denoising essence vector (D-EV) model, is proposed. The D-EV model not only inherits the advantages of the EV model but also infers a representation for a given spoken paragraph that is more robust to imperfect speech recognition. The utility of the D-EV model is evaluated on a spoken document summarization task, confirming the effectiveness of the proposed embedding method in comparison with several well-practiced and state-of-the-art summarization methods.
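The abstract describes the EV model only at a conceptual level. The sketch below is a minimal, hedged illustration of the "distill the essence, exclude the background" idea, not the authors' implementation: it assumes the idea can be approximated by decomposing each paragraph's bag-of-words vector into a paragraph-specific essence vector plus one shared background vector, so that frequent function words are absorbed by the background. The function name essence_background_decompose, the alternating-update objective, and the toy corpus are all hypothetical; the published EV model instead learns low-dimensional paragraph embeddings.

# Toy sketch of the essence/background decomposition idea discussed above.
# NOT the authors' EV model: each paragraph's bag-of-words vector x_i is split
# into a paragraph-specific "essence" vector e_i plus a single shared
# "background" vector b, so frequently occurring function words are absorbed
# by b and e_i keeps what is distinctive about the paragraph. This stays in
# word-count space, whereas the real EV model produces low-dimensional vectors.
import numpy as np


def essence_background_decompose(X, lam=0.5, n_iter=100):
    """Alternating updates for  min  sum_i ||x_i - (e_i + b)||^2 + lam * ||e_i||^2.

    X   : (n_paragraphs, vocab_size) row-normalised bag-of-words matrix.
    lam : shrinkage on the essence vectors; larger values push more mass
          into the shared background vector b.
    Returns (E, b), where E[i] is the essence vector of paragraph i.
    """
    b = X.mean(axis=0)                    # initialise background at the corpus mean
    E = np.zeros_like(X)
    for _ in range(n_iter):
        E = (X - b) / (1.0 + lam)         # closed-form update for every e_i
        b = (X - E).mean(axis=0)          # background absorbs the shared component
    return E, b


if __name__ == "__main__":
    # Tiny corpus: "the" and "of" behave like background; content words differ.
    vocab = ["the", "of", "speech", "summarization", "sentiment", "movie"]
    counts = np.array([
        [3, 2, 4, 3, 0, 0],   # a speech-summarization paragraph
        [3, 2, 0, 0, 4, 3],   # a sentiment / movie-review paragraph
    ], dtype=float)
    X = counts / counts.sum(axis=1, keepdims=True)   # normalise to word distributions

    E, b = essence_background_decompose(X)
    print("background weight per word:", dict(zip(vocab, b.round(3))))
    for i, e in enumerate(E):
        top = [vocab[j] for j in np.argsort(-e)[:2]]
        print(f"paragraph {i}: most 'essential' words -> {top}")

On this toy corpus the shared function words receive the largest background weights, while each paragraph's essence vector ranks its topical words highest, which is the behaviour the abstract motivates.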

使用字典學習法於強健性語音辨識 (The Use of Dictionary Learning Approach for Robustness Speech Recognition) [In Chinese]
Bi-Cheng Yan | Chin-Hong Shih | Shih-Hung Liu | Berlin Chen
Proceedings of the 28th Conference on Computational Linguistics and Speech Processing (ROCLING 2016)

運用序列到序列生成架構於重寫式自動摘要 (Exploiting Sequence-to-Sequence Generation Framework for Automatic Abstractive Summarization) [In Chinese]
Yu-Lun Hsieh | Shih-Hung Liu | Kuan-Yu Chen | Hsin-Min Wang | Wen-Lian Hsu | Berlin Chen
Proceedings of the 28th Conference on Computational Linguistics and Speech Processing (ROCLING 2016)

使用字典學習法於強健性語音辨識 (The Use of Dictionary Learning Approach for Robustness Speech Recognition) [In Chinese]
Bi-Cheng Yan | Chin-Hong Shih | Shih-Hung Liu | Berlin Chen
International Journal of Computational Linguistics & Chinese Language Processing, Volume 21, Number 2, December 2016

2015

表示法學習技術於節錄式語音文件摘要之研究 (A Study on Representation Learning Techniques for Extractive Spoken Document Summarization) [In Chinese]
Kai-Wun Shih | Berlin Chen | Kuan-Yu Chen | Shih-Hung Liu | Hsin-Min Wang
Proceedings of the 27th Conference on Computational Linguistics and Speech Processing (ROCLING 2015)

節錄式語音文件摘要使用表示法學習技術 (Extractive Spoken Document Summarization with Representation Learning Techniques) [In Chinese]
Kai-Wun Shih | Kuan-Yu Chen | Shih-Hung Liu | Hsin-Min Wang | Berlin Chen
International Journal of Computational Linguistics & Chinese Language Processing, Volume 20, Number 2, December 2015 - Special Issue on Selected Papers from ROCLING XXVII

2014

Leveraging Effective Query Modeling Techniques for Speech Recognition and Summarization
Kuan-Yu Chen | Shih-Hung Liu | Berlin Chen | Ea-Ee Jan | Hsin-Min Wang | Wen-Lian Hsu | Hsin-Hsi Chen
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

探究新穎語句模型化技術於節錄式語音摘要 (Investigating Novel Sentence Modeling Techniques for Extractive Speech Summarization) [In Chinese]
Shih-Hung Liu | Kuan-Yu Chen | Yu-Lun Hsieh | Berlin Chen | Hsin-Min Wang | Wen-Lian Hsu
Proceedings of the 26th Conference on Computational Linguistics and Speech Processing (ROCLING 2014)

2013

改良語句模型技術於節錄式語音摘要之研究 (Improved Sentence Modeling Techniques for Extractive Speech Summarization) [In Chinese]
Shih-Hung Liu | Kuan-Yu Chen | Hsin-Min Wang | Wen-Lian Hsu | Berlin Chen
Proceedings of the 25th Conference on Computational Linguistics and Speech Processing (ROCLING 2013)

2012

Cost-benefit Analysis of Two-Stage Conditional Random Fields based English-to-Chinese Machine Transliteration
Chan-Hung Kuo | Shih-Hung Liu | Mike Tian-Jian Jiang | Cheng-Wei Lee | Wen-Lian Hsu
Proceedings of the 4th Named Entity Workshop (NEWS) 2012

2010

Term Contributed Boundary Feature using Conditional Random Fields for Chinese Word Segmentation Task
Tian-Jian Jiang | Shih-Hung Liu | Cheng-Lung Sung | Wen-Lian Hsu
Proceedings of the 22nd Conference on Computational Linguistics and Speech Processing (ROCLING 2010)

Term Contributed Boundary Tagging by Conditional Random Fields for SIGHAN 2010 Chinese Word Segmentation Bakeoff
Tian-Jian Jiang | Shih-Hung Liu | Cheng-Lung Sung | Wen-Lian Hsu
CIPS-SIGHAN Joint Conference on Chinese Language Processing

2008

Improved Minimum Phone Error based Discriminative Training of Acoustic Models for Mandarin Large Vocabulary Continuous Speech Recognition
Shih-Hung Liu | Fang-Hui Chu | Yueng-Tien Lo | Berlin Chen
International Journal of Computational Linguistics & Chinese Language Processing, Volume 13, Number 3, September 2008: Special Issue on Selected Papers from ROCLING XIX

2007

改善以最小化音素錯誤為基礎的鑑別式聲學模型訓練於中文連續語音辨識之研究 (Improved Minimum Phone Error based Discriminative Training of Acoustic Models for Chinese Continuous Speech Recognition) [In Chinese]
Shih-Hung Liu | Fang-Hui Chu | Berlin Chen
Proceedings of the 19th Conference on Computational Linguistics and Speech Processing (ROCLING 2007)

2006

An Empirical Study of Word Error Minimization Approaches for Mandarin Large Vocabulary Continuous Speech Recognition
Jen-Wei Kuo | Shih-Hung Liu | Hsin-Min Wang | Berlin Chen
International Journal of Computational Linguistics & Chinese Language Processing, Volume 11, Number 3, September 2006: Special Issue on Selected Papers from ROCLING XVII

2005

風險最小化準則在中文大詞彙連續語音辨識之研究 (Risk Minimization Criterion for Mandarin Large Vocabulary Continuous Speech Recognition) [In Chinese]
Jen-Wei Kuo | Shih-Hung Liu | Berlin Chen
Proceedings of the 17th Conference on Computational Linguistics and Speech Processing (ROCLING 2005)