Han Yang


2023

Baby’s CoThought: Leveraging Large Language Models for Enhanced Reasoning in Compact Models
Zheyu Zhang | Han Yang | Bolei Ma | David Rügamer | Ercong Nie
Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning

2021

TenTrans Multilingual Low-Resource Translation System for WMT21 Indo-European Languages Task
Han Yang | Bojie Hu | Wanying Xie | Ambyera Han | Pan Liu | Jinan Xu | Qi Ju
Proceedings of the Sixth Conference on Machine Translation

This paper describes TenTrans’ submission to the WMT21 Multilingual Low-Resource Translation shared task for the Romance language pairs. The task focuses on improving translation quality from Catalan to Occitan, Romanian, and Italian with the assistance of related high-resource languages. We mainly utilize back-translation, pivot-based methods, multilingual models, pre-trained model fine-tuning, and in-domain knowledge transfer to improve translation quality. On the test set, our best submitted system achieves an average case-sensitive BLEU score of 43.45 across all low-resource pairs. The data, code, and pre-trained models used in this work are available as TenTrans evaluation examples.
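
Two of the techniques named in the abstract, pivot-based translation and back-translation, can be outlined with a minimal Python sketch. The helper arguments (translate_src_to_pivot, translate_pivot_to_tgt, translate_tgt_to_src) are hypothetical stand-ins for trained NMT systems and are not part of the TenTrans code base.

# Illustrative sketch only; the translate_* callables are assumed, not real TenTrans APIs.

def pivot_translate(src_sentences, translate_src_to_pivot, translate_pivot_to_tgt):
    """Translate source -> pivot -> target when no direct system exists."""
    pivot = [translate_src_to_pivot(s) for s in src_sentences]
    return [translate_pivot_to_tgt(p) for p in pivot]

def back_translate(tgt_monolingual, translate_tgt_to_src):
    """Create synthetic parallel data: (synthetic source, real target) pairs."""
    synthetic_src = [translate_tgt_to_src(t) for t in tgt_monolingual]
    return list(zip(synthetic_src, tgt_monolingual))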

TenTrans Large-Scale Multilingual Machine Translation System for WMT21
Wanying Xie | Bojie Hu | Han Yang | Dong Yu | Qi Ju
Proceedings of the Sixth Conference on Machine Translation

This paper describes the TenTrans large-scale multilingual machine translation system for WMT 2021. We participate in Small Track 2, which covers five South East Asian languages plus English in thirty directions: Javanese, Indonesian, Malay, Tagalog, Tamil, and English. We mainly utilize forward/back-translation, in-domain data selection, knowledge distillation, and gradual fine-tuning from the pre-trained FLORES-101 model. We find that forward/back-translation significantly improves the translation results, data selection and gradual fine-tuning are particularly effective during domain adaptation, while knowledge distillation brings only a slight performance improvement. We also use model averaging to further improve translation performance on top of these systems. Our final system achieves an average BLEU score of 28.89 across the thirty directions on the test set.
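
Model averaging, mentioned above as a final step, amounts to averaging parameter tensors across several saved checkpoints. The sketch below assumes PyTorch-style state dicts and hypothetical checkpoint file names; it is not the actual TenTrans pipeline.

# Minimal checkpoint-averaging sketch; file names and state-dict layout are assumptions.
import torch

def average_checkpoints(paths):
    """Average parameter tensors across several saved state dicts."""
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    return {k: v / len(paths) for k, v in avg.items()}

# Usage with hypothetical checkpoint files:
# merged = average_checkpoints(["ckpt_1.pt", "ckpt_2.pt", "ckpt_3.pt"])
# model.load_state_dict(merged)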

2017

Character-level Intra Attention Network for Natural Language Inference
Han Yang | Marta R. Costa-jussà | José A. R. Fonollosa
Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP

Natural language inference (NLI) is a central problem in language understanding. End-to-end artificial neural networks have recently reached state-of-the-art performance on NLI. In this paper, we propose the Character-level Intra Attention Network (CIAN) for the NLI task. In our model, we use a character-level convolutional network to replace the standard word embedding layer, and we use intra-sentence attention to capture intra-sentence semantics. The proposed CIAN model provides improved results on the newly published MNLI corpus.
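
The two ingredients the abstract names, a character-level CNN that produces word representations and intra-sentence (self-)attention over those representations, can be sketched in PyTorch as below. Dimensions, kernel size, and pooling choices are illustrative assumptions, not the exact CIAN configuration.

# Illustrative sketch of a character-level CNN embedding plus intra-sentence attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharCNNEmbedding(nn.Module):
    def __init__(self, n_chars, char_dim=16, word_dim=64, kernel=3):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.conv = nn.Conv1d(char_dim, word_dim, kernel_size=kernel, padding=1)

    def forward(self, chars):                     # chars: (batch, words, chars_per_word)
        b, w, c = chars.shape
        x = self.char_emb(chars).view(b * w, c, -1).transpose(1, 2)
        x = F.relu(self.conv(x))                  # (b*w, word_dim, chars_per_word)
        x = x.max(dim=2).values                   # max-pool over characters
        return x.view(b, w, -1)                   # (batch, words, word_dim)

class IntraAttention(nn.Module):
    def forward(self, h):                         # h: (batch, words, dim)
        scores = torch.bmm(h, h.transpose(1, 2))  # pairwise word-word scores
        weights = F.softmax(scores, dim=-1)
        context = torch.bmm(weights, h)           # attention-weighted sentence context
        return torch.cat([h, context], dim=-1)    # each word paired with its context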