Hassan S. Shavarani

Also published as: Hassan Shavarani


2021

Translation-based Supervision for Policy Generation in Simultaneous Neural Machine Translation
Ashkan Alinejad | Hassan S. Shavarani | Anoop Sarkar
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

In simultaneous machine translation, finding an agent whose sequence of read and write actions maintains high translation quality while minimizing the average lag in producing target tokens remains an extremely challenging problem. We propose a novel supervised learning approach for training an agent that can detect the minimum number of reads required to generate each target token: during training, simultaneous translations are compared against full-sentence translations to produce oracle action sequences, which can then be used to train a supervised model for action generation at inference time. Our approach provides an alternative to current heuristic methods in simultaneous translation by introducing a new training objective, which is easier to optimize than previous attempts that train the agent with reinforcement learning. Our experimental results show that this training method for action generation produces much higher quality translations while minimizing the average lag in simultaneous translation.
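The oracle construction described in the abstract can be sketched in a few lines: for each target token of the full-sentence translation, the agent READs source tokens until translating the current source prefix already yields that token, then WRITEs it. The sketch below assumes a hypothetical `translate` function that maps a (possibly empty) source-token prefix to a list of target tokens; it illustrates the greedy idea only, not the paper's exact procedure.

```python
def oracle_actions(src_tokens, translate):
    """Greedy oracle sketch: for each target token, read the fewest source
    tokens such that translating the current prefix already yields it."""
    full = translate(src_tokens)  # full-sentence translation used as the oracle target
    actions, reads = [], 0
    for t, tok in enumerate(full):
        # READ until the prefix translation matches target token t,
        # or the source is exhausted.
        while reads < len(src_tokens):
            prefix = translate(src_tokens[:reads])
            if len(prefix) > t and prefix[t] == tok:
                break
            actions.append("READ")
            reads += 1
        actions.append("WRITE")  # emit target token t
    return actions
```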

Better Neural Machine Translation by Extracting Linguistic Information from BERT
Hassan S. Shavarani | Anoop Sarkar
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Work on adding linguistic information (syntax or semantics) to neural machine translation (NMT) has mostly focused on using point estimates from pre-trained models. Directly using the capacity of massive pre-trained contextual word embedding models such as BERT (Devlin et al., 2019) has been only marginally useful in NMT because effective fine-tuning is difficult to obtain for NMT without making training brittle and unreliable. We augment NMT by extracting dense, fine-tuned, vector-based linguistic information from BERT instead of using point estimates. Experimental results show that our method of incorporating linguistic information helps NMT generalize better in a variety of training contexts and is no more difficult to train than conventional Transformer-based NMT.
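As a rough illustration of extracting dense contextual vectors from BERT (rather than fine-tuning it end-to-end inside the NMT system), the sketch below uses the Hugging Face `transformers` API. How the extracted vectors are injected into the NMT encoder (here, simply returned for later concatenation) is an assumption for illustration, not the paper's architecture.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
bert = BertModel.from_pretrained("bert-base-cased")
bert.eval()  # keep BERT frozen here; no brittle end-to-end fine-tuning

def bert_features(sentence: str) -> torch.Tensor:
    """Return one contextual vector per wordpiece (dense features, not point estimates)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = bert(**inputs)
    return outputs.last_hidden_state.squeeze(0)  # shape: (num_wordpieces, 768)

feats = bert_features("The cat sat on the mat .")
# e.g. concatenate `feats` with the NMT encoder's token embeddings downstream
```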

2020

Multi-class Multilingual Classification of Wikipedia Articles Using Extended Named Entity Tag Set
Hassan S. Shavarani | Satoshi Sekine
Proceedings of the Twelfth Language Resources and Evaluation Conference

Wikipedia is a rich source of general world knowledge, which can guide NLP models to better understand the motivation behind their predictions. Structuring Wikipedia is the initial step towards this goal, as it facilitates fine-grained classification of articles. In this work, we introduce the Shinra 5-Language Categorization Dataset (SHINRA-5LDS), a large multilingual, multi-label collection of Wikipedia articles in Japanese, English, French, German, and Farsi annotated with the Extended Named Entity (ENE) tag set. We evaluate the dataset using the best models available for ENE label classification and show that currently available classification models struggle with large datasets that use fine-grained tag sets.
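For illustration, a multi-label setup over a fine-grained tag set such as ENE might look like the sketch below. The encoder dimension, label count, and loss function are assumptions chosen for the example, not the dataset's official baseline.

```python
import torch
import torch.nn as nn

NUM_ENE_LABELS = 200  # assumption: a few hundred fine-grained ENE types

class MultiLabelClassifier(nn.Module):
    def __init__(self, encoder_dim: int = 768, num_labels: int = NUM_ENE_LABELS):
        super().__init__()
        self.head = nn.Linear(encoder_dim, num_labels)

    def forward(self, doc_vector: torch.Tensor) -> torch.Tensor:
        # one independent logit per label: an article may carry several ENE tags
        return self.head(doc_vector)

model = MultiLabelClassifier()
loss_fn = nn.BCEWithLogitsLoss()  # standard multi-label objective
logits = model(torch.randn(4, 768))                     # batch of 4 encoded articles
targets = torch.randint(0, 2, (4, NUM_ENE_LABELS)).float()
loss = loss_fn(logits, targets)
```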

2018

Top-down Tree Structured Decoding with Syntactic Connections for Neural Machine Translation and Parsing
Jetic Gū | Hassan S. Shavarani | Anoop Sarkar
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

The addition of syntax-aware decoding in Neural Machine Translation (NMT) systems requires an effective tree-structured neural network, a syntax-aware attention model and a language generation model that is sensitive to sentence structure. Recent approaches resort to sequential decoding, either by adding neural network units to capture bottom-up structural information or by serialising structured data into sequences. We exploit a top-down tree-structured model called DRNN (Doubly-Recurrent Neural Networks), first proposed by Alvarez-Melis and Jaakkola (2017), to create an NMT model called Seq2DRNN that combines a sequential encoder with tree-structured decoding augmented with a syntax-aware attention model. Unlike previous approaches to syntax-based NMT, which use dependency parsing models, our method uses constituency parsing, which we argue provides useful information for translation. In addition, we use the syntactic structure of the sentence to add new connections to the tree-structured decoder neural network (Seq2DRNN+SynC). We compare our NMT model with sequential and state-of-the-art syntax-based NMT models and show that our model produces more fluent translations with better reordering. Since our model is capable of performing translation and constituency parsing at the same time, we also compare our parsing accuracy against other neural parsing models.
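A single doubly-recurrent decoding step, in the spirit of Alvarez-Melis and Jaakkola (2017), can be sketched as follows: each tree node combines an ancestral state (inherited from its parent) with a fraternal state (inherited from its previous sibling). Dimensions, the GRU cells, and the combination layer are illustrative assumptions, not the Seq2DRNN architecture itself.

```python
import torch
import torch.nn as nn

class DRNNCell(nn.Module):
    def __init__(self, emb_dim: int = 256, hid_dim: int = 256, vocab: int = 10000):
        super().__init__()
        self.ancestral = nn.GRUCell(emb_dim, hid_dim)   # parent -> child recurrence
        self.fraternal = nn.GRUCell(emb_dim, hid_dim)   # sibling -> sibling recurrence
        self.combine = nn.Linear(2 * hid_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab)

    def forward(self, parent_emb, sibling_emb, h_parent, h_sibling):
        h_a = self.ancestral(parent_emb, h_parent)      # update ancestral state
        h_f = self.fraternal(sibling_emb, h_sibling)    # update fraternal state
        h = torch.tanh(self.combine(torch.cat([h_a, h_f], dim=-1)))
        # token logits for this node, plus states passed to children / next sibling
        return self.out(h), h_a, h_f
```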

Simultaneous Translation using Optimized Segmentation
Maryam Siahbani | Hassan Shavarani | Ashkan Alinejad | Anoop Sarkar
Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track)

2015

Learning segmentations that balance latency versus quality in spoken language translation
Hassan Shavarani | Maryam Siahbani | Ramtin Mehdizadeh Seraj | Anoop Sarkar
Proceedings of the 12th International Workshop on Spoken Language Translation: Papers