uniblock: Scoring and Filtering Corpus with Unicode Block Information

Yingbo Gao, Weiyue Wang, Hermann Ney


Abstract
The preprocessing pipelines in Natural Language Processing usually involve a step of removing sentences that consist of illegal characters. The definition of illegal characters and the specific removal strategy depend on the task, language, domain, etc., which often leads to tiresome and repetitive scripting of rules. In this paper, we introduce a simple statistical method, uniblock, to overcome this problem. For each sentence, uniblock generates a fixed-size feature vector using the Unicode block information of its characters. A Gaussian mixture model is then estimated on a clean corpus using variational inference. The learned model can then be used to score sentences and filter the corpus. We present experimental results on Sentiment Analysis, Language Modeling and Machine Translation, and show the simplicity and effectiveness of our method.
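
The abstract outlines the core procedure: Unicode block histograms as sentence features, a GMM estimated with variational inference on clean data, and log-likelihood scores used for filtering. The following is a minimal sketch of that idea in Python, not the authors' implementation (see ringoreality/uniblock for that); the block ranges, example sentences, and threshold are hypothetical choices for illustration, and scikit-learn's BayesianGaussianMixture stands in for the variational GMM.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Illustrative (hypothetical) subset of Unicode block ranges: (start, end).
BLOCKS = [
    (0x0000, 0x007F),   # Basic Latin
    (0x0080, 0x00FF),   # Latin-1 Supplement
    (0x0400, 0x04FF),   # Cyrillic
    (0x4E00, 0x9FFF),   # CJK Unified Ideographs
]

def block_features(sentence):
    """Fixed-size, length-normalized histogram of Unicode blocks."""
    counts = np.zeros(len(BLOCKS) + 1)          # last bin: any other block
    for ch in sentence:
        cp = ord(ch)
        for i, (lo, hi) in enumerate(BLOCKS):
            if lo <= cp <= hi:
                counts[i] += 1
                break
        else:
            counts[-1] += 1
    return counts / max(len(sentence), 1)

# Estimate a GMM with variational inference on (assumed) clean sentences.
clean = [
    "This is a clean English sentence.",
    "Another ordinary line of text.",
    "Ein sauberer Satz mit Umlauten: äöü.",
    "Numbers 123 and punctuation are fine, too.",
]
X = np.stack([block_features(s) for s in clean])
gmm = BayesianGaussianMixture(n_components=2, random_state=0).fit(X)

# Score new sentences; keep those scoring at least as well as the worst
# clean sentence (an arbitrary threshold chosen only for this sketch).
threshold = gmm.score_samples(X).min()
candidates = ["A normal sentence to keep.", "Ω≈ç√ garbled ☃☃☃ line"]
scores = gmm.score_samples(np.stack([block_features(s) for s in candidates]))
kept = [s for s, sc in zip(candidates, scores) if sc >= threshold]
print(kept)

In this sketch, clean in-domain text scores above the threshold and the mixed-block line falls below it, mirroring the scoring-and-filtering use described in the abstract.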
Anthology ID:
D19-1133
Volume:
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Month:
November
Year:
2019
Address:
Hong Kong, China
Venues:
EMNLP | IJCNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
1324–1329
URL:
https://aclanthology.org/D19-1133
DOI:
10.18653/v1/D19-1133
Cite (ACL):
Yingbo Gao, Weiyue Wang, and Hermann Ney. 2019. uniblock: Scoring and Filtering Corpus with Unicode Block Information. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1324–1329, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
uniblock: Scoring and Filtering Corpus with Unicode Block Information (Gao et al., EMNLP-IJCNLP 2019)
PDF:
https://aclanthology.org/D19-1133.pdf
Code:
ringoreality/uniblock