Feng Li
2022
Learn To Remember: Transformer with Recurrent Memory for Document-Level Machine Translation
Yukun Feng | Feng Li | Ziang Song | Boyuan Zheng | Philipp Koehn
Findings of the Association for Computational Linguistics: NAACL 2022
The Transformer architecture has led to significant gains in machine translation. However, most studies focus only on sentence-level translation without considering the context dependency within documents, leading to inadequate document-level coherence. Some recent work has tried to mitigate this issue by introducing an additional context encoder or by translating multiple sentences or even the entire document at once. Such methods may lose target-side information or incur increasing computational complexity as documents grow longer. To address these problems, we introduce a recurrent memory unit into the vanilla Transformer, which supports information exchange between the current sentence and the previous context. The memory unit is recurrently updated by acquiring information from sentences and passing the aggregated knowledge back to subsequent sentence states. We follow a two-stage training strategy, in which the model is first trained at the sentence level and then finetuned for document-level translation. We conduct experiments on three popular datasets for document-level machine translation, and our model achieves an average improvement of 0.91 s-BLEU over the sentence-level baseline. We also achieve state-of-the-art results on TED and News, outperforming previous work by 0.36 s-BLEU and 1.49 d-BLEU on average.
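The abstract describes the memory mechanism only at a high level, so the snippet below is a minimal PyTorch sketch of one plausible reading: a fixed-size memory that each sentence reads context from and writes information back to, carried forward across the document. The module name, slot count, and gated update are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class RecurrentMemoryUnit(nn.Module):
    """Illustrative sketch (not the paper's exact architecture): a fixed-size
    memory updated from each sentence's encoder states and fed back to the
    next sentence, so context flows across sentence boundaries."""

    def __init__(self, d_model: int = 512, n_slots: int = 16, n_heads: int = 8):
        super().__init__()
        self.memory_init = nn.Parameter(torch.zeros(n_slots, d_model))
        # Memory reads from the current sentence states (memory update step).
        self.read_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Sentence states read from the memory (context injection step).
        self.write_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)

    def init_memory(self, batch_size: int) -> torch.Tensor:
        return self.memory_init.unsqueeze(0).expand(batch_size, -1, -1)

    def forward(self, sent_states: torch.Tensor, memory: torch.Tensor):
        # 1) Inject previous-context memory into the sentence representation.
        ctx, _ = self.write_attn(sent_states, memory, memory)
        sent_states = sent_states + ctx
        # 2) Update the memory from the context-aware sentence states,
        #    gating old and new content so knowledge accumulates recurrently.
        new_info, _ = self.read_attn(memory, sent_states, sent_states)
        g = torch.sigmoid(self.gate(torch.cat([memory, new_info], dim=-1)))
        memory = g * memory + (1.0 - g) * new_info
        return sent_states, memory
```

Translating a document under this reading would amount to calling the unit sentence by sentence while threading the memory through, e.g. `sent, mem = unit(encoder_states, mem)`.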
A Token-pair Framework for Information Extraction from Dialog Transcripts in SereTOD Challenge
Chenyue Wang | Xiangxing Kong | Mengzuo Huang | Feng Li | Jian Xing | Weidong Zhang | Wuhe Zou
Proceedings of the Towards Semi-Supervised and Reinforced Task-Oriented Dialog Systems (SereTOD)
This paper describes our solution for SereTOD Challenge Track 1: information extraction from dialog transcripts. We propose a token-pair framework to simultaneously identify entity and value mentions and link them into the corresponding triples. As entity mentions are usually coreferent, we adopt a baseline model for coreference resolution. We exploit both annotated transcripts and unsupervised dialogs for training. With model ensemble and post-processing strategies, our system significantly outperforms the baseline solution and ranks first in triple F1 and third in entity F1.
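As a rough illustration of the general token-pair idea (in the spirit of GlobalPointer/TPLinker-style scorers, not necessarily the authors' exact model), the sketch below assigns every token pair a score per tag, so that mention-span extraction and entity-value linking share one formulation. All names and dimensions here are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TokenPairHead(nn.Module):
    """Illustrative token-pair scorer: every (i, j) token pair gets a score
    per tag, covering span extraction (i = start, j = end) and triple
    linking (i in the entity span, j in the value span)."""

    def __init__(self, hidden: int = 768, n_tags: int = 3, head_dim: int = 64):
        super().__init__()
        self.q = nn.Linear(hidden, n_tags * head_dim)
        self.k = nn.Linear(hidden, n_tags * head_dim)
        self.n_tags, self.head_dim = n_tags, head_dim

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, hidden) from a pretrained encoder, e.g. BERT.
        b, n, _ = h.shape
        q = self.q(h).view(b, n, self.n_tags, self.head_dim)
        k = self.k(h).view(b, n, self.n_tags, self.head_dim)
        # scores[b, t, i, j]: how likely positions (i, j) form a pair of
        # type t (entity span, value span, or entity-value link).
        scores = torch.einsum("bitd,bjtd->btij", q, k) / self.head_dim ** 0.5
        return scores
```

Decoding would then keep pairs whose score exceeds a threshold and assemble the surviving span and link pairs into (entity, slot, value) triples.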
Toward the Limitation of Code-Switching in Cross-Lingual Transfer
Yukun Feng | Feng Li | Philipp Koehn
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Multilingual pretrained models have shown strong cross-lingual transfer ability. Some works use code-switched sentences, which consist of tokens from multiple languages, to further enhance cross-lingual representations, and have shown success on many zero-shot cross-lingual tasks. However, code-switched tokens are likely to cause grammatical incoherence in the substituted sentences and to hurt performance on token-sensitive tasks such as Part-of-Speech (POS) tagging and Named-Entity Recognition (NER). This paper mitigates this limitation of code-switching by not only replacing tokens but also considering the similarity between the context and the switched tokens, so that the substituted sentences remain grammatically consistent during both training and inference. We conduct experiments on cross-lingual POS tagging and NER over 30+ languages, and demonstrate the effectiveness of our method by outperforming mBERT by 0.95 F1 and the original code-switching method by 1.67 F1.
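A minimal sketch of the similarity-constrained substitution idea as we read it from the abstract; the `embed` helper, the cosine-similarity scoring, and the threshold are assumptions for illustration, not the paper's actual procedure.

```python
import torch

def similarity_aware_switch(tokens, position, candidates, embed, threshold=0.4):
    """Illustrative sketch: replace tokens[position] with a dictionary
    translation only if the candidate's embedding is similar enough to the
    sentence context, so the code-switched sentence stays coherent.

    tokens:     list[str]  source sentence
    candidates: list[str]  translations of tokens[position]
    embed:      callable   maps a list of tokens to a (len, dim) tensor,
                           e.g. mBERT hidden states (assumed helper)
    """
    context = [t for i, t in enumerate(tokens) if i != position]
    ctx_vec = embed(context).mean(dim=0)                     # (dim,)
    best_tok, best_sim = None, -1.0
    for cand in candidates:
        cand_vec = embed([cand]).mean(dim=0)
        sim = torch.cosine_similarity(ctx_vec, cand_vec, dim=0).item()
        if sim > best_sim:
            best_tok, best_sim = cand, sim
    if best_sim < threshold:          # no candidate fits the context:
        return tokens                 # keep the original sentence
    switched = list(tokens)
    switched[position] = best_tok
    return switched
```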
Co-authors
- Yukun Feng 2
- Philipp Koehn 2
- Ziang Song 1
- Boyuan Zheng 1
- Chenyue Wang 1