Jingshu Liu


2021

Encouraging Neural Machine Translation to Satisfy Terminology Constraints
Melissa Ailem | Jingshu Liu | Raheel Qader
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Lingua Custodia’s Participation at the WMT 2021 Machine Translation Using Terminologies Shared Task
Melissa Ailem | Jingshu Liu | Raheel Qader
Proceedings of the Sixth Conference on Machine Translation

This paper describes Lingua Custodia’s submission to the WMT21 shared task on machine translation using terminologies. We consider three directions, namely English to French, Russian, and Chinese. We rely on a Transformer-based architecture as a building block, and we explore a method which introduces two main changes to the standard procedure to handle terminologies. The first one consists in augmenting the training data in such a way as to encourage the model to learn a copy behavior when it encounters terminology constraint terms. The second change is constraint token masking, whose purpose is to ease copy behavior learning and to improve model generalization. Empirical results show that our method satisfies most terminology constraints while maintaining high translation quality.
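As a rough illustration of the first change, training-data augmentation for copy behavior can be sketched as below; the tag tokens, the masking probability, and the exact matching logic are illustrative assumptions, not the precise scheme used in the submission.

# Minimal sketch (assumed scheme): inline each matched source term with its
# required target translation, and sometimes mask the source term so the
# model learns to copy the appended target tokens into its output.
import random

def augment_with_terminology(src_tokens, constraints, mask_prob=0.5):
    out, i = [], 0
    while i < len(src_tokens):
        matched = False
        for src_term, tgt_term in constraints.items():
            term_toks = src_term.split()
            if src_tokens[i:i + len(term_toks)] == term_toks:
                if random.random() < mask_prob:
                    out.append("<mask>")          # constraint token masking
                else:
                    out.extend(term_toks)
                out.extend(["<trans>"] + tgt_term.split() + ["</trans>"])
                i += len(term_toks)
                matched = True
                break
        if not matched:
            out.append(src_tokens[i])
            i += 1
    return out

# e.g. force "taux d'intérêt" as the translation of "interest rate"
print(augment_with_terminology("the interest rate rises".split(),
                               {"interest rate": "taux d'intérêt"}))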

2020

BERT-XML: Large Scale Automated ICD Coding Using BERT Pretraining
Zachariah Zhang | Jingshu Liu | Narges Razavian
Proceedings of the 3rd Clinical Natural Language Processing Workshop

ICD coding is the task of classifying and coding all diagnoses, symptoms and procedures associated with a patient’s visit. The process is often manual, extremely time-consuming and expensive for hospitals, as clinical interactions are usually recorded in free-text medical notes. In this paper, we propose a machine learning model, BERT-XML, for large scale automated ICD coding of EHR notes, utilizing recently developed unsupervised pretraining that has achieved state of the art performance on a variety of NLP tasks. We train a BERT model from scratch on EHR notes, learning with a vocabulary better suited for EHR tasks and thus outperforming off-the-shelf models. We further adapt the BERT architecture for ICD coding with multi-label attention. We demonstrate the effectiveness of BERT-based models on the large scale ICD code classification task, using millions of EHR notes to predict thousands of unique codes.
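A minimal PyTorch sketch of the multi-label attention idea follows, assuming a BERT-style encoder that returns per-token hidden states; the class name, layer sizes, and single shared classifier are illustrative choices rather than the paper's exact architecture.

# Per-label attention over encoder states for multi-label ICD coding (sketch).
import torch
import torch.nn as nn

class MultiLabelAttentionHead(nn.Module):
    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        # One attention query vector per ICD code.
        self.label_queries = nn.Parameter(torch.randn(num_labels, hidden_size))
        # A classifier applied to each label's attended summary.
        self.classifier = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states, attention_mask):
        # hidden_states: (batch, seq, hidden); attention_mask: (batch, seq)
        scores = torch.einsum("bsh,lh->bls", hidden_states, self.label_queries)
        scores = scores.masked_fill(attention_mask[:, None, :] == 0, -1e9)
        weights = scores.softmax(dim=-1)               # (batch, labels, seq)
        summaries = torch.einsum("bls,bsh->blh", weights, hidden_states)
        return self.classifier(summaries).squeeze(-1)  # (batch, labels) logits

head = MultiLabelAttentionHead(hidden_size=768, num_labels=2000)
logits = head(torch.randn(2, 128, 768), torch.ones(2, 128, dtype=torch.long))
probs = torch.sigmoid(logits)  # independent probability per ICD code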

2018

Alignement de termes de longueur variable en corpus comparables spécialisés (Alignment of variable length terms in specialized comparable corpora)
Jingshu Liu | Emmanuel Morin | Sebastián Peña Saldarriaga
Actes de la Conférence TALN. Volume 1 - Articles longs, articles courts de TALN

In this article, we propose an adaptation of the extended compositional approach capable of aligning terms of variable length from comparable corpora, by modifying the representation of complex terms. We also propose new weighting modes for the standard approach that improve on state-of-the-art results for simple and complex terms in specialized domains.
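A toy sketch of the compositional idea, under the assumption that component words are translated with a bilingual dictionary and the recombinations are matched against candidate target terms (the dictionary, candidate list, and handling of function words are simplified here):

# Compositional alignment of a multi-word term (illustrative sketch).
from itertools import permutations, product

def compositional_candidates(term, dictionary):
    """Generate candidate translations by translating each component word."""
    translations = [dictionary.get(w, [w]) for w in term.split()]
    candidates = set()
    for combo in product(*translations):
        for order in permutations(combo):  # word order may differ across languages
            candidates.add(" ".join(order))
    return candidates

def align(term, dictionary, target_terms):
    """Return target terms that match one of the compositional candidates."""
    candidates = compositional_candidates(term, dictionary)
    return [t for t in target_terms if t in candidates]

dictionary = {"breast": ["sein"], "cancer": ["cancer"]}
print(align("breast cancer", dictionary, ["cancer du sein", "cancer sein"]))
# -> ['cancer sein']; a fuller system would also handle function words like "du"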

Towards a unified framework for bilingual terminology extraction of single-word and multi-word terms
Jingshu Liu | Emmanuel Morin | Sebastián Peña Saldarriaga
Proceedings of the 27th International Conference on Computational Linguistics

Extracting a bilingual terminology for multi-word terms from comparable corpora has not been widely researched. In this work we propose a unified framework for aligning bilingual terms independently of term length. We also introduce enhancements to the context-based and the neural-network-based approaches. Our experiments show the effectiveness of our enhancements over previous work, and the system can be adapted to specialized domains.
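As a loose illustration of length-independent alignment, the sketch below encodes any term, single-word or multi-word, as one fixed-size vector (here a plain average of word embeddings in an already shared bilingual space, which is an assumption) and ranks candidate translations by cosine similarity:

# Length-independent term alignment via fixed-size term vectors (sketch).
import numpy as np

def encode_term(term, embeddings, dim=100):
    """Average the embeddings of the term's words; zero vector if none known."""
    vectors = [embeddings[w] for w in term.split() if w in embeddings]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)

def rank_candidates(src_term, tgt_terms, src_emb, tgt_emb):
    """Rank target terms by cosine similarity in the shared embedding space."""
    q = encode_term(src_term, src_emb)
    scored = []
    for t in tgt_terms:
        v = encode_term(t, tgt_emb)
        denom = np.linalg.norm(q) * np.linalg.norm(v)
        scored.append((t, float(q @ v / denom) if denom else 0.0))
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Toy usage with random embeddings standing in for a mapped bilingual space.
rng = np.random.default_rng(0)
src_emb = {w: rng.normal(size=100) for w in ["web", "mining"]}
tgt_emb = {w: rng.normal(size=100) for w in ["fouille", "web", "toile"]}
print(rank_candidates("web mining", ["fouille du web", "toile"], src_emb, tgt_emb))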