Wang Ling


2020

pdf
The DeepMind Chinese–English Document Translation System at WMT2020
Lei Yu | Laurent Sartran | Po-Sen Huang | Wojciech Stokowiec | Domenic Donato | Srivatsan Srinivasan | Alek Andreev | Wang Ling | Sona Mokra | Agustin Dal Lago | Yotam Doron | Susannah Young | Phil Blunsom | Chris Dyer
Proceedings of the Fifth Conference on Machine Translation

This paper describes the DeepMind submission to the Chinese–English constrained data track of the WMT2020 Shared Task on News Translation. The submission employs a noisy channel factorization as the backbone of a document translation system. This approach allows the flexible combination of a number of independent component models, which are further augmented with back-translation, distillation, fine-tuning with in-domain data, Monte-Carlo Tree Search decoding, and improved uncertainty estimation. To address persistent issues with the premature truncation of long sequences, we included specialized length models and sentence segmentation techniques. Our final system provides an improvement of 9.9 BLEU points over a baseline Transformer on our test set (newstest2019).

pdf
Better Document-Level Machine Translation with Bayes’ Rule
Lei Yu | Laurent Sartran | Wojciech Stokowiec | Wang Ling | Lingpeng Kong | Phil Blunsom | Chris Dyer
Transactions of the Association for Computational Linguistics, Volume 8

We show that Bayes’ rule provides an effective mechanism for creating document translation models that can be learned from only parallel sentences and monolingual documents, a compelling benefit because parallel documents are not always available. In our formulation, the posterior probability of a candidate translation is the product of the unconditional (prior) probability of the candidate output document and the “reverse translation probability” of translating the candidate output back into the source language. Our proposed model uses a powerful autoregressive language model as the prior on target language documents, but it assumes that each sentence is translated independently from the target to the source language. Crucially, at test time, when a source document is observed, the document language model prior induces dependencies between the translations of the source sentences in the posterior. The model’s independence assumption not only enables efficient use of available data, but it additionally admits a practical left-to-right beam-search algorithm for carrying out inference. Experiments show that our model benefits from using cross-sentence context in the language model, and it outperforms existing document translation approaches.
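
As a minimal sketch of the factorization described in this abstract (the notation below is ours, not the paper's): for a source document x = (x_1, ..., x_n) and a candidate translation y = (y_1, ..., y_n), with a document-level target language model p(y) and sentence-level reverse translation models p(x_i | y_i),

\[
p(y \mid x) \;\propto\; p(y) \prod_{i=1}^{n} p(x_i \mid y_i).
\]

At test time the document prior p(y) couples the translations of the individual sentences, even though each reverse translation factor conditions on only a single sentence pair.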

2017

pdf
Reference-Aware Language Models
Zichao Yang | Phil Blunsom | Chris Dyer | Wang Ling
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

We propose a general class of language models that treat reference as a discrete stochastic latent variable. This decision allows for the creation of entity mentions by accessing external databases of referents (required by, e.g., dialogue generation) or past internal state (required to explicitly model coreferentiality). Beyond simple copying, our coreference model can additionally refer to a referent using varied mention forms (e.g., a reference to “Jane” can be realized as “she”), a characteristic feature of reference in natural languages. Experiments on three representative applications show our model variants outperform models based on deterministic attention and standard language modeling baselines.

pdf
Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems
Wang Ling | Dani Yogatama | Chris Dyer | Phil Blunsom
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Solving algebraic word problems requires executing a series of arithmetic operations—a program—to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.

2016

pdf
INESC-ID at SemEval-2016 Task 4-A: Reducing the Problem of Out-of-Embedding Words
Silvio Amir | Ramon F. Astudillo | Wang Ling | Mário J. Silva | Isabel Trancoso
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)

pdf
Semantic Parsing with Semi-Supervised Sequential Autoencoders
Tomáš Kočiský | Gábor Melis | Edward Grefenstette | Chris Dyer | Wang Ling | Phil Blunsom | Karl Moritz Hermann
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

pdf
Neural Network-Based Abstract Generation for Opinions and Arguments
Lu Wang | Wang Ling
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Mining Parallel Corpora from Sina Weibo and Twitter
Wang Ling | Luís Marujo | Chris Dyer | Alan W. Black | Isabel Trancoso
Computational Linguistics, Volume 42, Issue 2 - June 2016

pdf
Learning the Curriculum with Bayesian Optimization for Task-Specific Word Representation Learning
Yulia Tsvetkov | Manaal Faruqui | Wang Ling | Brian MacWhinney | Chris Dyer
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf
Latent Predictor Networks for Code Generation
Wang Ling | Phil Blunsom | Edward Grefenstette | Karl Moritz Hermann | Tomáš Kočiský | Fumin Wang | Andrew Senior
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2015

pdf
INESC-ID: A Regression Model for Large Scale Twitter Sentiment Lexicon Induction
Silvio Amir | Ramon F. Astudillo | Wang Ling | Bruno Martins | Mario J. Silva | Isabel Trancoso
Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015)

pdf
INESC-ID: Sentiment Analysis without Hand-Coded Features or Linguistic Resources using Embedding Subspaces
Ramon F. Astudillo | Silvio Amir | Wang Ling | Bruno Martins | Mario J. Silva | Isabel Trancoso
Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015)

pdf
Two/Too Simple Adaptations of Word2Vec for Syntax Problems
Wang Ling | Chris Dyer | Alan W. Black | Isabel Trancoso
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Not All Contexts Are Created Equal: Better Word Representations with Variable Attention
Wang Ling | Yulia Tsvetkov | Silvio Amir | Ramón Fermandez | Chris Dyer | Alan W Black | Isabel Trancoso | Chu-Cheng Lin
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

pdf
Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation
Wang Ling | Chris Dyer | Alan W Black | Isabel Trancoso | Ramón Fermandez | Silvio Amir | Luís Marujo | Tiago Luís
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

pdf
Evaluation of Word Vector Representations by Subspace Alignment
Yulia Tsvetkov | Manaal Faruqui | Wang Ling | Guillaume Lample | Chris Dyer
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

pdf
Transition-Based Dependency Parsing with Stack Long Short-Term Memory
Chris Dyer | Miguel Ballesteros | Wang Ling | Austin Matthews | Noah A. Smith
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

pdf
Learning Word Representations from Scarce and Noisy Data with Embedding Subspaces
Ramon F. Astudillo | Silvio Amir | Wang Ling | Mário Silva | Isabel Trancoso
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

pdf
Automatic Keyword Extraction on Twitter
Luís Marujo | Wang Ling | Isabel Trancoso | Chris Dyer | Alan W. Black | Anatole Gershman | David Martins de Matos | João Neto | Jaime Carbonell
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

2014

pdf
Dual Subtitles as Parallel Corpora
Shikun Zhang | Wang Ling | Chris Dyer
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

In this paper, we leverage the existence of dual subtitles as a source of parallel data. Dual subtitles present viewers with two languages simultaneously and are generally aligned at the segment level, which removes the need to perform this alignment automatically. This is desirable because the extracted parallel data does not contain the alignment errors present in previous work, which aligns separate subtitle files for the same movie. We present a simple heuristic to detect and extract dual subtitles and show that more than 20 million sentence pairs can be extracted for the Mandarin-English language pair. We also show that extracting data from this source can be a viable solution for improving machine translation systems in the domain of subtitles.

pdf
Linguistic Evaluation of Support Verb Constructions by OpenLogos and Google Translate
Anabela Barreiro | Johanna Monti | Brigitte Orliac | Susanne Preuß | Kutz Arrieta | Wang Ling | Fernando Batista | Isabel Trancoso
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

This paper presents a systematic human evaluation of translations of English support verb constructions (SVCs) produced by a rule-based machine translation (RBMT) system (OpenLogos) and a statistical machine translation (SMT) system (Google Translate) for five languages: French, German, Italian, Portuguese and Spanish. We classify support verb constructions by means of their syntactic structure and semantic behavior and present a qualitative analysis of their translation errors. The study aims to verify how machine translation (MT) systems translate fine-grained linguistic phenomena, and how well-equipped they are to produce high-quality translations. Another goal of the linguistically motivated quality analysis of SVC raw output is to reinforce the need for better system hybridization, which leverages the strengths of RBMT to the benefit of SMT, especially in improving the translation of multiword units. Taking multiword units into account, we propose an effective method to achieve MT hybridization based on the integration of semantico-syntactic knowledge into SMT.

pdf
Crowdsourcing High-Quality Parallel Data Extraction from Twitter
Wang Ling | Luís Marujo | Chris Dyer | Alan W. Black | Isabel Trancoso
Proceedings of the Ninth Workshop on Statistical Machine Translation

2013

pdf
Paraphrasing 4 Microblog Normalization
Wang Ling | Chris Dyer | Alan W Black | Isabel Trancoso
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

pdf
Microblogs as Parallel Corpora
Wang Ling | Guang Xiang | Chris Dyer | Alan Black | Isabel Trancoso
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf
The CMU Machine Translation Systems at WMT 2013: Syntax, Synthetic Translation Options, and Pseudo-References
Waleed Ammar | Victor Chahuneau | Michael Denkowski | Greg Hanneman | Wang Ling | Austin Matthews | Kenton Murray | Nicola Segall | Alon Lavie | Chris Dyer
Proceedings of the Eighth Workshop on Statistical Machine Translation

2012

pdf
Improving Relative-Entropy Pruning using Statistical Significance
Wang Ling | Nadi Tomeh | Guang Xiang | Isabel Trancoso | Alan Black
Proceedings of COLING 2012: Posters

pdf
Recognition of Named-Event Passages in News Articles
Luis Marujo | Wang Ling | Anatole Gershman | Jaime Carbonell | João P. Neto | David Matos
Proceedings of COLING 2012: Demonstration Papers

pdf
Entropy-based Pruning for Phrase-based Machine Translation
Wang Ling | João Graça | Isabel Trancoso | Alan Black
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

2011

pdf
BP2EP - Adaptation of Brazilian Portuguese texts to European Portuguese
Luis Marujo | Nuno Grazina | Tiago Luis | Wang Ling | Luisa Coheur | Isabel Trancoso
Proceedings of the 15th Annual Conference of the European Association for Machine Translation

pdf bib
Named entity translation using anchor texts
Wang Ling | Pável Calado | Bruno Martins | Isabel Trancoso | Alan Black | Luísa Coheur
Proceedings of the 8th International Workshop on Spoken Language Translation: Papers

This work describes a process to extract Named Entity (NE) translations from the text available in web links (anchor texts). It translates an NE by retrieving a list of web documents in the target language, extracting the anchor texts from the links to those documents, and finding the best translation among the anchor texts using a combination of features, some of which are specific to anchor texts. Experiments performed on a manually built corpus suggest that over 70% of the NEs, ranging from unpopular to popular entities, can be translated correctly using solely anchor texts. Tests on a machine translation task indicate that the system can be used to improve the quality of the translations produced by state-of-the-art statistical machine translation systems.

pdf
Reordering Modeling using Weighted Alignment Matrices
Wang Ling | Tiago Luís | João Graça | Isabel Trancoso | Luísa Coheur
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

pdf
Discriminative Phrase-based Lexicalized Reordering Models using Weighted Reordering Graphs
Wang Ling | João Graça | David Martins de Matos | Isabel Trancoso | Alan W Black
Proceedings of 5th International Joint Conference on Natural Language Processing

2010

pdf
The INESC-ID machine translation system for the IWSLT 2010
Wang Ling | Tiago Luís | João Graça | Luísa Coheur | Isabel Trancoso
Proceedings of the 7th International Workshop on Spoken Language Translation: Evaluation Campaign

In this paper we describe the Instituto de Engenharia de Sistemas e Computadores Investigação e Desenvolvimento (INESC-ID) system that participated in the IWSLT 2010 evaluation campaign. Our main goal for this evaluation was to employ several state-of-the-art methods for phrase-based machine translation in order to improve translation quality. Aside from the IBM Model 4 alignment model, two constrained alignment models were tested, which produced better overall results. These results were further improved by using weighted alignment matrices during phrase extraction, rather than the single best alignment. Finally, we tested several filters that ruled out phrase pairs based on punctuation. Our system was evaluated on the BTEC and DIALOG tasks, achieving a better overall ranking in the DIALOG task.

pdf
Towards a general and extensible phrase-extraction algorithm
Wang Ling | Tiago Luís | João Graça | Luísa Coheur | Isabel Trancoso
Proceedings of the 7th International Workshop on Spoken Language Translation: Papers

Phrase-based systems depend heavily on the quality of their phrase tables, and the process of phrase extraction is therefore a fundamental step. In this paper we present a general and extensible phrase extraction algorithm in which we have highlighted several control points. Instantiating these control points allows the simulation of previous approaches, as a different strategy or heuristic can be tested at each point. We show how previous approaches fit in this algorithm, compare several of them and, in addition, propose alternative heuristics, showing their impact on the final translation results. Considering two different test scenarios from the IWSLT 2010 competition (BTEC, Fr-En and DIALOG, Cn-En), we obtained improvements of 2.4 and 2.8 BLEU points, respectively.
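
As a hypothetical illustration of this pluggable control-point design (the function names and the specific predicates below are ours, not the authors' actual control points), a phrase extractor can expose its consistency check and length limit as swappable parameters:

from typing import Callable, Iterable, List, Set, Tuple

Alignment = Set[Tuple[int, int]]  # word alignment links as (source index, target index)

def default_consistent(links: Alignment, s_lo: int, s_hi: int, t_lo: int, t_hi: int) -> bool:
    # Classic consistency check: no alignment link may cross the phrase boundary,
    # and at least one link must fall inside the candidate block.
    inside = False
    for i, j in links:
        in_src = s_lo <= i <= s_hi
        in_tgt = t_lo <= j <= t_hi
        if in_src != in_tgt:
            return False
        inside = inside or (in_src and in_tgt)
    return inside

def extract_phrases(src: List[str], tgt: List[str], links: Alignment,
                    consistent: Callable[[Alignment, int, int, int, int], bool] = default_consistent,
                    max_len: int = 7) -> Iterable[Tuple[str, str]]:
    # `consistent` and `max_len` act as control points: plugging in a different
    # predicate or limit reproduces a different extraction heuristic.
    for s_lo in range(len(src)):
        for s_hi in range(s_lo, min(s_lo + max_len, len(src))):
            for t_lo in range(len(tgt)):
                for t_hi in range(t_lo, min(t_lo + max_len, len(tgt))):
                    if consistent(links, s_lo, s_hi, t_lo, t_hi):
                        yield (" ".join(src[s_lo:s_hi + 1]),
                               " ".join(tgt[t_lo:t_hi + 1]))

# Example: list(extract_phrases(["o", "gato"], ["the", "cat"], {(0, 0), (1, 1)}))
# -> [("o", "the"), ("o gato", "the cat"), ("gato", "cat")]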