Jingshu Liu


2022

Encouraging Neural Machine Translation to Satisfy Terminology Constraints.
Melissa Ailem | Jingshu Liu | Raheel Qader
Actes de la 29e Conférence sur le Traitement Automatique des Langues Naturelles. Volume 1 : conférence principale

We present a new approach to encourage neural machine translation to satisfy lexical constraints. Our method acts at the training step, thereby avoiding any extra computational overhead at inference time. The proposed method combines three main ingredients. The first consists in augmenting the training data to specify the constraints; intuitively, this encourages the model to learn a copy behavior when it encounters constraint terms. Compared to previous work, we use a simplified augmentation strategy without source factors. The second ingredient is constraint token masking, which makes it even easier for the model to learn the copy behavior and generalize better. The third is a modification of the standard cross-entropy loss to bias the model towards assigning high probabilities to constraint words. Empirical results show that our method improves upon related baselines in terms of both BLEU score and the percentage of generated constraint terms.
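The first two ingredients of the abstract can be sketched in a few lines. This is an illustration only, not the authors' code: the special tokens (`<mask>`, `<sep>`), function names, and the tokenized input format are assumptions.

```python
# Sketch of constraint-aware data augmentation: append the target-side
# constraint term to the source sentence, and optionally mask the
# source-side occurrence so the model learns to copy the appended term.
import random

MASK, SEP = "<mask>", "<sep>"  # illustrative special tokens

def replace_span(tokens, span, repl):
    """Replace the first occurrence of `span` in `tokens` with `repl`."""
    for i in range(len(tokens) - len(span) + 1):
        if tokens[i:i + len(span)] == span:
            return tokens[:i] + repl + tokens[i + len(span):]
    return tokens

def augment_source(src_tokens, constraints, mask_prob=0.5):
    """`constraints` is a list of (source_term, target_term) token-list pairs."""
    out = list(src_tokens)
    for src_term, tgt_term in constraints:
        if random.random() < mask_prob:      # constraint token masking
            out = replace_span(out, src_term, [MASK])
        out = out + [SEP] + tgt_term         # inline target-side constraint
    return out
```

For example, with `mask_prob=1.0`, the source `["the", "cat", "sat"]` with constraint `(["cat"], ["chat"])` becomes `["the", "<mask>", "sat", "<sep>", "chat"]`; the model is then trained to produce "chat" on the target side.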

Lingua Custodia’s Participation at the WMT 2022 Word-Level Auto-completion Shared Task
Melissa Ailem | Jingshu Liu | Jean-gabriel Barthelemy | Raheel Qader
Proceedings of the Seventh Conference on Machine Translation (WMT)

This paper presents Lingua Custodia’s submission to the WMT22 shared task on Word-Level Auto-completion (WLAC). We consider two directions, namely German-English and English-German. The WLAC task in Neural Machine Translation (NMT) consists in predicting a target word given a few human-typed characters, the source sentence to translate, and some translation context. Inspired by recent work in terminology control, we propose to treat the human-typed sequence as a constraint and to predict the right word starting with that sequence. To do so, the source side of the training data is augmented with both the constraints and the translation context. In addition, following recent advances in WLAC, we use a joint optimization strategy taking into account several types of translation context. The automatic and human evaluation results obtained with the submitted systems show the effectiveness of the proposed method.
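The final decision step of WLAC as described above can be sketched as follows: among scored candidate target words (e.g. from the model's output distribution), keep only those consistent with the human-typed characters and return the best one. The function name and the candidate format are assumptions for illustration.

```python
def complete_word(typed_prefix, scored_candidates):
    """scored_candidates: iterable of (word, score) pairs.
    Returns the highest-scoring word that starts with the
    human-typed prefix, or None if no candidate matches."""
    matches = [(w, s) for w, s in scored_candidates if w.startswith(typed_prefix)]
    return max(matches, key=lambda ws: ws[1])[0] if matches else None
```

For instance, `complete_word("tra", [("translate", 0.4), ("train", 0.5), ("model", 0.9)])` returns `"train"`: the higher-scoring "model" is filtered out by the prefix constraint.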

2021

Encouraging Neural Machine Translation to Satisfy Terminology Constraints
Melissa Ailem | Jingshu Liu | Raheel Qader
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Lingua Custodia’s Participation at the WMT 2021 Machine Translation Using Terminologies Shared Task
Melissa Ailem | Jingshu Liu | Raheel Qader
Proceedings of the Sixth Conference on Machine Translation

This paper describes Lingua Custodia’s submission to the WMT21 shared task on machine translation using terminologies. We consider three directions, namely English to French, Russian, and Chinese. We rely on a Transformer-based architecture as a building block, and we explore a method which introduces two main changes to the standard procedure to handle terminologies. The first one consists in augmenting the training data in such a way as to encourage the model to learn a copy behavior when it encounters terminology constraint terms. The second change is constraint token masking, whose purpose is to ease copy behavior learning and to improve model generalization. Empirical results show that our method satisfies most terminology constraints while maintaining high translation quality.

2020

BERT-XML: Large Scale Automated ICD Coding Using BERT Pretraining
Zachariah Zhang | Jingshu Liu | Narges Razavian
Proceedings of the 3rd Clinical Natural Language Processing Workshop

ICD coding is the task of classifying and coding all diagnoses, symptoms, and procedures associated with a patient’s visit. The process is often manual, extremely time-consuming, and expensive for hospitals, as clinical interactions are usually recorded in free-text medical notes. In this paper, we propose a machine learning model, BERT-XML, for large-scale automated ICD coding of EHR notes, utilizing recently developed unsupervised pretraining that has achieved state-of-the-art performance on a variety of NLP tasks. We train a BERT model from scratch on EHR notes, learning with a vocabulary better suited for EHR tasks and thus outperforming off-the-shelf models. We further adapt the BERT architecture for ICD coding with multi-label attention. We demonstrate the effectiveness of BERT-based models on the large-scale ICD code classification task, using millions of EHR notes to predict thousands of unique codes.
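The multi-label attention idea can be sketched with NumPy: each ICD label attends over the encoder's token representations with its own learned query, and each resulting label-specific document vector is scored independently. Shapes and parameter names here are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def label_attention(H, U, W):
    """
    H: (T, d) token representations from the encoder (e.g. BERT).
    U: (L, d) one learned attention query per label.
    W: (L, d) per-label scoring weights.
    Returns (L,) logits, one per ICD label.
    """
    A = softmax(U @ H.T, axis=-1)    # (L, T) attention over tokens, per label
    V = A @ H                        # (L, d) label-specific document vectors
    return (W * V).sum(axis=-1)      # (L,) one logit per label
```

Each logit is then passed through a sigmoid, so thousands of labels can be predicted independently for the same note.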

2018

Towards a unified framework for bilingual terminology extraction of single-word and multi-word terms
Jingshu Liu | Emmanuel Morin | Peña Saldarriaga
Proceedings of the 27th International Conference on Computational Linguistics

Extracting a bilingual terminology for multi-word terms from comparable corpora has not been widely researched. In this work we propose a unified framework for aligning bilingual terms independently of term length. We also introduce enhancements to the context-based and neural network-based approaches. Our experiments show the effectiveness of our enhancements over previous work, and the system can be adapted to specialized domains.

Alignement de termes de longueur variable en corpus comparables spécialisés (Alignment of variable length terms in specialized comparable corpora)
Jingshu Liu | Emmanuel Morin | Sebastián Peña Saldarriaga
Actes de la Conférence TALN. Volume 1 - Articles longs, articles courts de TALN

In this article, we propose an adaptation of the extended compositional approach capable of aligning terms of variable length from comparable corpora by modifying the representation of complex terms. We also propose new weighting schemes for the standard approach that improve on state-of-the-art results for both single-word and multi-word terms in specialized domains.