Youngki Moon
2021
VUS at IWSLT 2021: A Finetuned Pipeline for Offline Speech Translation
Yong Rae Jo | Youngki Moon | Minji Jung | Jungyoon Choi | Jihyung Moon | Won Ik Cho
Proceedings of the 18th International Conference on Spoken Language Translation (IWSLT 2021)
In this technical report, we describe the fine-tuned ASR-MT pipeline used for the IWSLT shared task. We remove less useful speech samples by checking their WER with an ASR model, and further train a wav2vec- and Transformer-based ASR module on the filtered data. In addition, we cleanse errata that can interfere with the machine translation process and use the cleaned data to train a Transformer-based MT module. Finally, in the actual inference phase, we use a sentence boundary detection model trained with constrained data to merge fragmentary ASR outputs into full sentences. The merged sentences are post-processed using part-of-speech information, and the final result is produced by the trained MT module. The model achieves BLEU 20.37 on the dev set and BLEU 20.9 on the test set.
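As a minimal sketch of the WER-based filtering step described above (not the authors' actual code), the idea is to transcribe each training sample with a baseline ASR model and drop pairs whose transcript diverges too far from the reference. The `asr_transcribe` callable and the 0.5 cutoff are illustrative assumptions; the report does not publish these details.

```python
# Illustrative sketch of WER-based data filtering, assuming a
# hypothetical baseline ASR callable and an assumed WER cutoff.
import jiwer

WER_THRESHOLD = 0.5  # assumed cutoff, not taken from the paper


def filter_samples(samples, asr_transcribe):
    """Keep (audio, reference) pairs whose baseline-ASR WER is low enough.

    samples: iterable of (audio, reference_transcript) pairs.
    asr_transcribe: hypothetical function mapping audio -> hypothesis text.
    """
    kept = []
    for audio, reference in samples:
        hypothesis = asr_transcribe(audio)
        if jiwer.wer(reference, hypothesis) <= WER_THRESHOLD:
            kept.append((audio, reference))
    return kept
```

The retained pairs would then feed the wav2vec- and Transformer-based ASR fine-tuning stage.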
2020
Machines Getting with the Program: Understanding Intent Arguments of Non-Canonical Directives
Won Ik Cho | Youngki Moon | Sangwhan Moon | Seok Min Kim | Nam Soo Kim
Findings of the Association for Computational Linguistics: EMNLP 2020
Modern dialog managers face the challenge of fulfilling human-level conversational skills as part of common user expectations, including but not limited to discourse with no clear objective. Along with these requirements, agents are expected to extrapolate intent from the user’s dialogue even when subjected to non-canonical forms of speech. This depends on the agent’s comprehension of paraphrased forms of such utterances. In low-resource languages especially, the lack of data is a bottleneck that hinders improvements in comprehension performance for these types of agents. In this regard, we demonstrate the necessity of extracting the intent argument of non-canonical directives in a natural language format, which may yield more accurate parsing, and suggest guidelines for building a parallel corpus for this purpose. Following the guidelines, we construct a Korean corpus of 50K instances of question/command-intent pairs, including labels for utterance-type classification. We also propose a method for mitigating class imbalance, demonstrating the potential applications of the corpus generation method and its multilingual extensibility.
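The abstract does not specify the imbalance-mitigation method; one common baseline it could be compared against is random oversampling of minority utterance types, sketched below under that assumption. The tuple layout and function name are hypothetical.

```python
# Hedged sketch: random oversampling of minority utterance types.
# This is a generic baseline for class-imbalance mitigation, not the
# paper's actual method; the data layout here is an assumption.
import random
from collections import defaultdict


def oversample(pairs):
    """pairs: list of (utterance, intent_argument, utterance_type) tuples."""
    by_type = defaultdict(list)
    for pair in pairs:
        by_type[pair[2]].append(pair)
    # Duplicate minority-class items until every class matches the largest.
    target = max(len(items) for items in by_type.values())
    balanced = []
    for items in by_type.values():
        balanced.extend(items)
        balanced.extend(random.choices(items, k=target - len(items)))
    return balanced
```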