Ehsan Zare Borzeshi


2019

pdf
ReWE: Regressing Word Embeddings for Regularization of Neural Machine Translation Systems
Inigo Jauregi Unanue | Ehsan Zare Borzeshi | Nazanin Esmaili | Massimo Piccardi
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Regularization of neural machine translation is still a significant problem, especially in low-resource settings. To alleviate this problem, we propose regressing word embeddings (ReWE) as a new regularization technique in a system that is jointly trained to predict the next word in the translation (categorical value) and its word embedding (continuous value). This joint training allows the proposed system to learn the distributional properties represented by the word embeddings, empirically improving the generalization to unseen sentences. Experiments over three translation datasets have shown a consistent improvement over a strong baseline, ranging between 0.91 and 2.4 BLEU points, and also a marked improvement over a state-of-the-art system.
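
The sketch below illustrates the joint objective described in the abstract: a standard cross-entropy loss over the vocabulary combined with a regression term on the predicted word embedding. Layer names, sizes, the weighting factor and the choice of cosine distance are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class ReWEHead(nn.Module):
    """Joint output head: categorical word prediction + word-embedding regression."""
    def __init__(self, hidden_size, vocab_size, emb_size):
        super().__init__()
        self.vocab_proj = nn.Linear(hidden_size, vocab_size)  # categorical branch
        self.emb_proj = nn.Linear(hidden_size, emb_size)      # continuous (ReWE) branch

    def forward(self, decoder_state):
        return self.vocab_proj(decoder_state), self.emb_proj(decoder_state)

def rewe_loss(logits, pred_emb, target_ids, target_emb, lam=0.2):
    """Cross-entropy over the vocabulary plus a regression term that pulls the
    predicted embedding towards the target word's pre-trained embedding."""
    ce = nn.functional.cross_entropy(logits, target_ids)
    reg = 1.0 - nn.functional.cosine_similarity(pred_emb, target_emb, dim=-1).mean()
    return ce + lam * reg
```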

2018

pdf bib
A Shared Attention Mechanism for Interpretation of Neural Automatic Post-Editing Systems
Inigo Jauregi Unanue | Ehsan Zare Borzeshi | Massimo Piccardi
Proceedings of the 2nd Workshop on Neural Machine Translation and Generation

Automatic post-editing (APE) systems aim to correct the systematic errors made by machine translation systems. In this paper, we propose a neural APE system that encodes the source (src) and machine-translated (mt) sentences with two separate encoders, but leverages a shared attention mechanism to better understand how the two inputs contribute to the generation of the post-edited (pe) sentences. Our empirical observations have shown that when the mt is incorrect, the attention shifts weight toward tokens in the src sentence to properly edit the incorrect translation. The model has been trained and evaluated on the official data from the WMT16 and WMT17 APE IT-domain English-German shared tasks. Additionally, we have used the extra 500K artificial data provided by the shared task. Our system has been able to reproduce the accuracies of systems trained with the same data, while at the same time providing better interpretability.
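
A minimal sketch of the shared attention idea: a single attention module scores the concatenated src and mt encoder states, so the normalized weights directly show how much each input contributes at every decoding step. The scoring function, names and sizes are assumptions and may differ from the published system.

```python
import torch
import torch.nn as nn

class SharedAttention(nn.Module):
    """One attention module applied jointly to the src and mt encoder states."""
    def __init__(self, hidden_size):
        super().__init__()
        self.score = nn.Linear(hidden_size * 2, 1)

    def forward(self, dec_state, src_states, mt_states):
        # Concatenate both encoders' outputs along the time axis and
        # normalize the attention weights over the joint sequence, so that
        # weight can shift between src and mt tokens.
        enc_states = torch.cat([src_states, mt_states], dim=1)       # (B, Tsrc+Tmt, H)
        query = dec_state.unsqueeze(1).expand_as(enc_states)         # (B, Tsrc+Tmt, H)
        scores = self.score(torch.cat([query, enc_states], dim=-1))  # (B, Tsrc+Tmt, 1)
        weights = torch.softmax(scores.squeeze(-1), dim=1)
        context = torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)
        return context, weights
```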

pdf
English-Basque Statistical and Neural Machine Translation
Inigo Jauregi Unanue | Lierni Garmendia Arratibel | Ehsan Zare Borzeshi | Massimo Piccardi
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

pdf
BiLSTM-CRF for Persian Named-Entity Recognition ArmanPersoNERCorpus: the First Entity-Annotated Persian Dataset
Hanieh Poostchi | Ehsan Zare Borzeshi | Massimo Piccardi
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2016

pdf
PersoNER: Persian Named-Entity Recognition
Hanieh Poostchi | Ehsan Zare Borzeshi | Mohammad Abdous | Massimo Piccardi
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Named-Entity Recognition (NER) is still a challenging task for languages with low digital resources. The main difficulties arise from the scarcity of annotated corpora and the consequent problematic training of an effective NER pipeline. To bridge this gap, in this paper we target the Persian language, which is spoken by a population of over a hundred million people worldwide. We first present and provide ArmanPersoNERCorpus, the first manually-annotated Persian NER corpus. Then, we introduce PersoNER, an NER pipeline for Persian that leverages a word embedding and a sequential max-margin classifier. The experimental results show that the proposed approach is capable of achieving competitive MUC7 and CoNLL scores while outperforming two alternatives based on a CRF and a recurrent neural network.
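
As a rough illustration of the pipeline's two ingredients, the sketch below combines pre-trained word embeddings with a classifier trained under a multi-class hinge (max-margin) loss. The published system uses a *sequential* max-margin classifier; label-transition modelling is omitted here to keep the example short, and all names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class MaxMarginTagger(nn.Module):
    """Simplified per-token NER tagger over pre-trained word embeddings."""
    def __init__(self, pretrained_embeddings, num_tags):
        super().__init__()
        self.emb = nn.Embedding.from_pretrained(pretrained_embeddings, freeze=True)
        self.scorer = nn.Linear(pretrained_embeddings.size(1), num_tags)
        self.loss_fn = nn.MultiMarginLoss()  # hinge / max-margin objective

    def forward(self, token_ids):
        return self.scorer(self.emb(token_ids))      # (B, T, num_tags)

    def training_loss(self, token_ids, tag_ids):
        scores = self(token_ids).view(-1, self.scorer.out_features)
        return self.loss_fn(scores, tag_ids.view(-1))
```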

pdf bib
Bidirectional LSTM-CRF for Clinical Concept Extraction
Raghavendra Chalapathy | Ehsan Zare Borzeshi | Massimo Piccardi
Proceedings of the Clinical Natural Language Processing Workshop (ClinicalNLP)

Automated extraction of concepts from patient clinical records is an essential facilitator of clinical research. For this reason, the 2010 i2b2/VA Natural Language Processing Challenges for Clinical Records introduced a concept extraction task aimed at identifying and classifying concepts into predefined categories (i.e., treatments, tests and problems). State-of-the-art concept extraction approaches heavily rely on handcrafted features and domain-specific resources which are hard to collect and define. For this reason, this paper proposes an alternative, streamlined approach: a recurrent neural network (the bidirectional LSTM with CRF decoding) initialized with general-purpose, off-the-shelf word embeddings. The experimental results achieved on the 2010 i2b2/VA reference corpora using the proposed framework outperform all recent methods and rank close to the best submission from the original 2010 i2b2/VA challenge.
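
The sketch below shows the architecture named in the abstract: general-purpose pre-trained word embeddings feeding a bidirectional LSTM whose emission scores are decoded by a CRF layer. It assumes the third-party pytorch-crf package and illustrative hidden sizes; it is not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # third-party pytorch-crf package (assumed available)

class BiLSTMCRF(nn.Module):
    """Bidirectional LSTM with CRF decoding over pre-trained word embeddings."""
    def __init__(self, pretrained_embeddings, num_tags, hidden_size=128):
        super().__init__()
        self.emb = nn.Embedding.from_pretrained(pretrained_embeddings, freeze=False)
        self.lstm = nn.LSTM(pretrained_embeddings.size(1), hidden_size,
                            batch_first=True, bidirectional=True)
        self.emissions = nn.Linear(2 * hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, token_ids, tag_ids, mask):
        scores = self.emissions(self.lstm(self.emb(token_ids))[0])
        return -self.crf(scores, tag_ids, mask=mask)   # negative log-likelihood

    def decode(self, token_ids, mask):
        scores = self.emissions(self.lstm(self.emb(token_ids))[0])
        return self.crf.decode(scores, mask=mask)      # best tag sequence per sentence
```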

pdf bib
An Investigation of Recurrent Neural Architectures for Drug Name Recognition
Raghavendra Chalapathy | Ehsan Zare Borzeshi | Massimo Piccardi
Proceedings of the Seventh International Workshop on Health Text Mining and Information Analysis