Johannes Huber
2020
ThaiLMCut: Unsupervised Pretraining for Thai Word Segmentation
Suteera Seeha | Ivan Bilan | Liliana Mamani Sanchez | Johannes Huber | Michael Matuschek | Hinrich Schütze
Proceedings of the Twelfth Language Resources and Evaluation Conference
We propose ThaiLMCut, a semi-supervised approach for Thai word segmentation which utilizes a bi-directional character language model (LM) as a way to leverage useful linguistic knowledge from unlabeled data. After the language model is trained on substantial unlabeled corpora, the weights of its embedding and recurrent layers are transferred to a supervised word segmentation model, which continues fine-tuning them on the word segmentation task. Our experimental results demonstrate that applying the LM always leads to a performance gain, especially when the amount of labeled data is small. In such cases, the F1 Score increased by up to 2.02%. Even on a big labeled dataset, a small improvement can still be obtained. The approach has also proven to be very beneficial for out-of-domain settings, with a gain in F1 Score of up to 3.13%. Finally, we show that ThaiLMCut can outperform other open source state-of-the-art models, achieving an F1 Score of 98.78% on the standard benchmark, InterBEST2009.
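The weight-transfer step described in the abstract can be illustrated with a short PyTorch sketch. This is not the authors' released code; the layer names, dimensions, LM stand-in, and two-tag boundary scheme are all assumptions made for illustration. The pretrained LM's embedding and recurrent layers are copied into the segmentation model, whose classifier is trained from scratch while the transferred weights are fine-tuned.

```python
# Hypothetical sketch of LM-to-segmenter weight transfer and fine-tuning.
import torch
import torch.nn as nn

VOCAB_SIZE, EMB_DIM, HID_DIM = 180, 64, 256  # assumed sizes, not from the paper

class CharLM(nn.Module):
    """Simplified stand-in for the pretrained bi-directional character LM."""
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.rnn = nn.LSTM(EMB_DIM, HID_DIM, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * HID_DIM, VOCAB_SIZE)  # character prediction head

    def forward(self, x):
        h, _ = self.rnn(self.embedding(x))
        return self.head(h)

class Segmenter(nn.Module):
    """Word segmentation model: predicts a boundary tag for each character."""
    def __init__(self, num_tags=2):
        super().__init__()
        self.embedding = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.rnn = nn.LSTM(EMB_DIM, HID_DIM, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * HID_DIM, num_tags)

    def forward(self, x):
        h, _ = self.rnn(self.embedding(x))
        return self.classifier(h)

lm = CharLM()        # assume this was already trained on unlabeled corpora
seg = Segmenter()

# Transfer the embedding and recurrent weights; the classifier stays freshly initialized.
seg.embedding.load_state_dict(lm.embedding.state_dict())
seg.rnn.load_state_dict(lm.rnn.state_dict())

# Fine-tune all weights, including the transferred ones, on labeled segmentation data.
optimizer = torch.optim.Adam(seg.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
chars = torch.randint(0, VOCAB_SIZE, (8, 40))  # dummy batch of character ids
tags = torch.randint(0, 2, (8, 40))            # dummy boundary tags
logits = seg(chars)
loss = criterion(logits.reshape(-1, 2), tags.reshape(-1))
loss.backward()
optimizer.step()
```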
2019
Learning multilingual topics through aspect extraction from monolingual texts
Johannes Huber | Myra Spiliopoulou
Proceedings of the Fifth International Workshop on Computational Linguistics for Uralic Languages