Yuekun Yao


2022

Structural generalization is hard for sequence-to-sequence models
Yuekun Yao | Alexander Koller
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Sequence-to-sequence (seq2seq) models have been successful across many NLP tasks, including ones that require predicting linguistic structure. However, recent work on compositional generalization has shown that seq2seq models achieve very low accuracy in generalizing to linguistic structures that were not seen in training. We present new evidence that this is a general limitation of seq2seq models that is present not just in semantic parsing, but also in syntactic parsing and in text-to-text tasks, and that this limitation can often be overcome by neurosymbolic models that have linguistic knowledge built in. We further report on some experiments that give initial answers about the reasons for these limitations.

2020

Dynamic Masking for Improved Stability in Online Spoken Language Translation
Yuekun Yao | Barry Haddow
Proceedings of the 14th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track)

ELITR Non-Native Speech Translation at IWSLT 2020
Dominik Macháček | Jonáš Kratochvíl | Sangeet Sagar | Matúš Žilinec | Ondřej Bojar | Thai-Son Nguyen | Felix Schneider | Philip Williams | Yuekun Yao
Proceedings of the 17th International Conference on Spoken Language Translation

This paper presents the ELITR system submission for the non-native speech translation task at IWSLT 2020. We describe systems for offline ASR, real-time ASR, and our cascaded approach to offline SLT and real-time SLT. We select our primary candidates from a pool of pre-existing systems, develop a new end-to-end general ASR system, and train a hybrid ASR system on non-native speech. The small size of the provided validation set prevents us from carrying out a complex validation, so we submit all the unselected candidates for contrastive evaluation on the test set.