Eunkyul Leah Jo
2024
An Untold Story of Preprocessing Task Evaluation: An Alignment-based Joint Evaluation Approach
Eunkyul Leah Jo | Angela Yoonseo Park | Grace Tianjiao Zhang | Izia Xiaoxiao Wang | Junrui Wang | MingJia Mao | Jungyeul Park
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Preprocessing tasks such as tokenization and sentence boundary detection (SBD) have commonly been considered NLP challenges that have already been solved. This perception is due to their generally good performance and the availability of pre-tokenized data. However, the low error rates of current methods are largely specific to certain tasks, and rule-based tokenization can be difficult to reuse across different systems. Though subtle, these limitations are significant in the context of the NLP pipeline. In this paper, we introduce a novel evaluation algorithm for the preprocessing task, covering both tokenization and SBD results. This algorithm aims to enhance the reliability of evaluations by recounting true positive cases for F1 measures in both preprocessing tasks jointly. It achieves this through an alignment-based approach inspired by the sentence and word alignments used in machine translation. Our evaluation algorithm not only allows for precise counting of true positive tokens and sentence boundaries but also combines these two evaluation tasks into a single organized pipeline. To illustrate and clarify the intricacies of this calculation and integration, we provide detailed pseudo-code configurations for implementation. Additionally, we offer empirical evidence demonstrating how sentence and word alignment can improve evaluation reliability, and we present case studies to further support our approach.
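As a rough illustration of the alignment idea the abstract describes (not the paper's actual algorithm), one can align gold and system tokens by their character spans over the shared underlying text and count a system token as a true positive only when its span matches a gold span exactly. The function names below are hypothetical; the sketch assumes both tokenizations concatenate to the same character sequence.

```python
def spans(tokens):
    """Map each token to its character span in the concatenated text."""
    out, i = [], 0
    for t in tokens:
        out.append((i, i + len(t)))
        i += len(t)
    return out

def token_f1(gold_tokens, sys_tokens):
    """Alignment-based F1: a system token is a true positive
    when its character span matches a gold span exactly."""
    gold = set(spans(gold_tokens))
    sys_ = set(spans(sys_tokens))
    tp = len(gold & sys_)
    p = tp / len(sys_) if sys_ else 0.0
    r = tp / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0
```

For example, if the gold tokenization of "don't stop" is `["don", "'t", "stop"]` and a system outputs `["don't", "stop"]`, only the span of "stop" aligns, so precision is 1/2, recall is 1/3, and F1 is 0.4. The same span-matching idea extends to sentence boundaries by treating each boundary offset as the unit being aligned.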
2022
Yet Another Format of Universal Dependencies for Korean
Yige Chen | Eunkyul Leah Jo | Yundong Yao | KyungTae Lim | Miikka Silfverberg | Francis M. Tyers | Jungyeul Park
Proceedings of the 29th International Conference on Computational Linguistics
In this study, we propose a morpheme-based scheme for Korean dependency parsing and apply the proposed scheme to Universal Dependencies. We present the linguistic rationale that illustrates the motivation for and the necessity of adopting the morpheme-based format, and we develop scripts that automatically convert between the original format used by Universal Dependencies and the proposed morpheme-based format. The effectiveness of the proposed format for Korean dependency parsing is then verified with both statistical and neural models, including UDPipe and Stanza, using our carefully constructed morpheme-based word embeddings for Korean. morphUD outperforms parsing results for all Korean UD treebanks, and we also present a detailed error analysis.
Co-authors
- Jungyeul Park 2
- Yige Chen 1
- Yundong Yao 1
- KyungTae Lim 1
- Miikka Silfverberg 1