Liang Huang


2022

pdf
PaddleSpeech: An Easy-to-Use All-in-One Speech Toolkit
Hui Zhang | Tian Yuan | Junkun Chen | Xintong Li | Renjie Zheng | Yuxin Huang | Xiaojie Chen | Enlei Gong | Zeyu Chen | Xiaoguang Hu | Dianhai Yu | Yanjun Ma | Liang Huang
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations

PaddleSpeech is an open-source all-in-one speech toolkit. It aims at facilitating the development and research of speech processing technologies by providing an easy-to-use command-line interface and a simple code structure. This paper describes the design philosophy and core architecture of PaddleSpeech to support several essential speech-to-text and text-to-speech tasks. PaddleSpeech achieves competitive or state-of-the-art performance on various speech datasets and implements the most popular methods. It also provides recipes and pretrained models to quickly reproduce the experimental results in this paper. PaddleSpeech is publicly available at https://github.com/PaddlePaddle/PaddleSpeech.
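
As a concrete illustration of the easy-to-use interface, here is a minimal sketch of the toolkit's high-level Python API, following the project README; the executor classes and argument names (ASRExecutor, TTSExecutor, audio_file, output) are assumptions that may differ across versions.

# Minimal PaddleSpeech sketch: one-call ASR and TTS via the CLI executors.
from paddlespeech.cli.asr.infer import ASRExecutor
from paddlespeech.cli.tts.infer import TTSExecutor

asr = ASRExecutor()
print(asr(audio_file="input_16k.wav"))   # speech-to-text on a 16 kHz WAV file

tts = TTSExecutor()
tts(text="Hello from PaddleSpeech!", output="output.wav")   # text-to-speech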

pdf bib
Findings of the Third Workshop on Automatic Simultaneous Translation
Ruiqing Zhang | Chuanqiang Zhang | Zhongjun He | Hua Wu | Haifeng Wang | Liang Huang | Qun Liu | Julia Ive | Wolfgang Macherey
Proceedings of the Third Workshop on Automatic Simultaneous Translation

This paper reports the results of the shared task we hosted at the Third Workshop on Automatic Simultaneous Translation (AutoSimTrans). The shared task aims to promote the development of text-to-text and speech-to-text simultaneous translation, and includes Chinese-English and English-Spanish tracks. The number of systems submitted this year has increased fourfold compared with last year. Additionally, the top-ranked system in the speech-to-text track is the first end-to-end submission we have received in the past three years, and it shows great potential. This paper reports the results and descriptions of the 14 participating teams, compares different evaluation metrics, and revisits the ranking method.

2021

pdf
Direct Simultaneous Speech-to-Text Translation Assisted by Synchronized Streaming ASR
Junkun Chen | Mingbo Ma | Renjie Zheng | Liang Huang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

pdf
Improving Simultaneous Translation by Incorporating Pseudo-References with Fewer Reorderings
Junkun Chen | Renjie Zheng | Atsuhito Kita | Mingbo Ma | Liang Huang
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Simultaneous translation is vastly different from full-sentence translation, in the sense that it starts translating before the source sentence ends, with only a few words' delay. However, due to the lack of large-scale, high-quality simultaneous translation datasets, most such systems are still trained on conventional full-sentence bitexts. This is far from ideal for the simultaneous scenario due to the abundance of unnecessary long-distance reorderings in those bitexts. We propose a novel method that rewrites the target side of existing full-sentence corpora into simultaneous-style translation. Experiments on Zh→En and Ja→En simultaneous translation show substantial improvements (up to +2.7 BLEU) with the addition of these generated pseudo-references.

pdf bib
Proceedings of the Second Workshop on Automatic Simultaneous Translation
Hua Wu | Colin Cherry | Liang Huang | Zhongjun He | Qun Liu | Maha Elbayad | Mark Liberman | Haifeng Wang | Mingbo Ma | Ruiqing Zhang
Proceedings of the Second Workshop on Automatic Simultaneous Translation

2020

pdf bib
Proceedings of the First Workshop on Automatic Simultaneous Translation
Hua Wu | Colin Cherry | Liang Huang | Zhongjun He | Mark Liberman | James Cross | Yang Liu
Proceedings of the First Workshop on Automatic Simultaneous Translation

pdf
Opportunistic Decoding with Timely Correction for Simultaneous Translation
Renjie Zheng | Mingbo Ma | Baigong Zheng | Kaibo Liu | Liang Huang
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Simultaneous translation has many important application scenarios and has recently attracted much attention from both academia and industry. Most existing frameworks, however, have difficulty balancing translation quality and latency; i.e., the decoding policy is usually either too aggressive or too conservative. We propose an opportunistic decoding technique with timely correction ability, which always (over-)generates a certain amount of extra words at each step to keep the audience on track with the latest information. At the same time, it also corrects, in a timely fashion, the mistakes in the previously overgenerated words when observing more source context, to ensure high translation quality. Experiments show our technique achieves substantial reduction in latency and up to +3.1 increase in BLEU, with revision rate under 8%, in Chinese-to-English and English-to-Chinese translation.
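
A minimal sketch of this decoding loop under assumed interfaces: extend() stands in for the real NMT decoder (force-decoding the committed prefix and then continuing for a given number of words), display() for the subtitle renderer, and the wait-k commit schedule and overgeneration window `extra` are illustrative.

def opportunistic_decode(extend, display, source, k=3, extra=2):
    # extend(src_prefix, tgt_prefix, n): force-decode tgt_prefix, then
    # continue for n more target words (stand-in for the real decoder).
    committed = []
    for t in range(1, len(source) + 1):
        n_new = max(0, t - k + 1) - len(committed)  # words now safe to commit
        hyp = extend(source[:t], committed, n_new + extra)
        committed = hyp[:len(committed) + n_new]    # commit point never retreats
        display(hyp)  # audience sees committed words plus a revisable,
                      # over-generated tail that later steps may still correct
    return committed  # (final flush of the remaining tail omitted for brevity)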

pdf
Simultaneous Translation Policies: From Fixed to Adaptive
Baigong Zheng | Kaibo Liu | Renjie Zheng | Mingbo Ma | Hairong Liu | Liang Huang
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Adaptive policies are better than fixed policies for simultaneous translation, since they can flexibly balance the tradeoff between translation quality and latency based on the current context information. But previous methods for obtaining adaptive policies either rely on a complicated training process or underperform simple fixed policies. We design an algorithm to achieve adaptive policies via a simple heuristic composition of a set of fixed policies. Experiments on Chinese→English and German→English show that our adaptive policies can outperform fixed ones by up to 4 BLEU points at the same latency, and, more surprisingly, they even surpass the BLEU score of full-sentence translation in greedy mode (and come very close to beam mode), but with much lower latency.
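
One decision step of the heuristic composition, as a hedged sketch: models[k] is assumed to be a trained wait-k model returning its next-word prediction and probability, and the threshold rho and the lag bounds k_min, k_max are illustrative hyperparameters.

def adaptive_step(models, src, tgt, rho=0.9, k_min=1, k_max=10, src_done=False):
    # models[k](src_prefix, tgt_prefix) -> (word, prob): prediction of the
    # fixed wait-k policy closest to the current lag (assumed interface).
    k = len(src) - len(tgt)            # current lag behind the speaker
    if src_done or k >= k_max:
        return "WRITE"                 # input finished or lag too large
    if k <= k_min:
        return "READ"                  # lag too small: gather more context
    _, prob = models[k](src, tgt)
    return "WRITE" if prob >= rho else "READ"  # write only when confident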

pdf
Simultaneous Translation
Liang Huang | Colin Cherry | Mingbo Ma | Naveen Arivazhagan | Zhongjun He
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts

Simultaneous translation, which performs translation concurrently with the source speech, is widely useful in many scenarios such as international conferences, negotiations, press releases, legal proceedings, and medicine. This problem has long been considered one of the hardest problems in AI and one of its holy grails. Recently, with rapid improvements in machine translation, speech recognition, and speech synthesis, there has been exciting progress towards simultaneous translation. This tutorial will focus on the design and evaluation of policies for simultaneous translation, to leave attendees with a deep technical understanding of the history, the recent advances, and the remaining challenges in this field.

pdf
Context-aware Stand-alone Neural Spelling Correction
Xiangci Li | Hairong Liu | Liang Huang
Findings of the Association for Computational Linguistics: EMNLP 2020

Existing natural language processing systems are vulnerable to noisy inputs resulting from misspellings. In contrast, humans can easily infer the corresponding correct words from misspellings and the surrounding context. Inspired by this, we address the stand-alone spelling correction problem, which only corrects the spelling of each token without additional token insertion or deletion, by utilizing both spelling information and global context representations. We present a simple yet powerful solution that jointly detects and corrects misspellings as a sequence labeling task by fine-tuning a pre-trained language model. Our solution outperforms the previous state-of-the-art result by 12.8% absolute F0.5 score.
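
One way to instantiate the sequence-labeling formulation on a pre-trained language model; this is a hedged sketch, not the paper's exact architecture: the backbone (bert-base-cased) and the idea of labeling each token with the id of its correct word in a fixed correction vocabulary are assumptions for illustration.

from transformers import AutoModelForTokenClassification, AutoTokenizer

correction_vocab_size = 50000   # size of the correction vocabulary (illustrative)
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=correction_vocab_size)
# Each input token is labeled with the id of its correct word, so joint
# detection and correction reduce to ordinary token classification,
# fine-tuned with the usual per-token cross-entropy loss.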

pdf
Incremental Text-to-Speech Synthesis with Prefix-to-Prefix Framework
Mingbo Ma | Baigong Zheng | Kaibo Liu | Renjie Zheng | Hairong Liu | Kainan Peng | Kenneth Church | Liang Huang
Findings of the Association for Computational Linguistics: EMNLP 2020

Text-to-speech synthesis (TTS) has witnessed rapid progress in recent years, with neural methods becoming capable of producing audio with high naturalness. However, these efforts still suffer from two types of latencies: (a) the computational latency (synthesizing time), which grows linearly with the sentence length, and (b) the input latency in scenarios where the input text is incrementally available (such as in simultaneous translation, dialog generation, and assistive technologies). To reduce these latencies, we propose a neural incremental TTS approach using the prefix-to-prefix framework from simultaneous translation. We synthesize speech in an online fashion, playing one segment of audio while generating the next, resulting in O(1) rather than O(n) latency. Experiments on English and Chinese TTS show that our approach achieves speech naturalness similar to full-sentence TTS, but with only a constant (1-2 word) latency.
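
A toy sketch of the online synthesis loop, under stated assumptions: synthesize() and play() stand in for the real incremental synthesizer and audio output, and the lookahead k (in words) is illustrative.

def incremental_tts(synthesize, play, text_stream, k=2):
    # synthesize(words, i): audio for the i-th word given the prefix so far;
    # play(audio): send one chunk to the speaker (both assumed interfaces).
    words, i = [], 0
    for w in text_stream:
        words.append(w)
        while i + k < len(words):        # enough lookahead to voice word i
            play(synthesize(words, i))   # play this chunk while the next
            i += 1                       # one is generated: O(1) latency
    while i < len(words):                # flush the tail once input ends
        play(synthesize(words, i))
        i += 1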

pdf
Fluent and Low-latency Simultaneous Speech-to-Speech Translation with Self-adaptive Training
Renjie Zheng | Mingbo Ma | Baigong Zheng | Kaibo Liu | Jiahong Yuan | Kenneth Church | Liang Huang
Findings of the Association for Computational Linguistics: EMNLP 2020

Simultaneous speech-to-speech translation is an extremely challenging but widely useful scenario that aims to generate target-language speech only a few seconds behind the source-language speech. In addition, we have to continuously translate speech consisting of multiple sentences, but all recent solutions merely focus on the single-sentence scenario. As a result, current approaches accumulate more and more latency in later sentences when the speaker talks faster, and introduce unnatural pauses into the translated speech when the speaker talks slower. To overcome these issues, we propose Self-Adaptive Translation, which flexibly adjusts the length of translations to accommodate different source speech rates. At similar levels of translation quality (as measured by BLEU), our method generates more fluent target speech with lower latency than the baseline, in both Zh↔En directions.

2019

pdf
Simpler and Faster Learning of Adaptive Policies for Simultaneous Translation
Baigong Zheng | Renjie Zheng | Mingbo Ma | Liang Huang
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Simultaneous translation is widely useful but remains challenging. Previous work falls into two main categories: (a) fixed-latency policies such as Ma et al. (2019) and (b) adaptive policies such as Gu et al. (2017). The former are simple and effective, but have to aggressively predict future content due to diverging source-target word order; the latter do not anticipate, but suffer from unstable and inefficient training. To combine the merits of both approaches, we propose a simple supervised-learning framework to learn an adaptive policy from oracle READ/WRITE sequences generated from parallel text. At each step, such an oracle sequence chooses to WRITE the next target word if the available source sentence context provides enough information to do so, otherwise READ the next source word. Experiments on German↔English show that our method, without retraining the underlying NMT model, can learn flexible policies with better BLEU scores and similar latencies compared to previous work.
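
A hedged sketch of generating one oracle READ/WRITE sequence from a parallel sentence pair: rank_of() stands in for querying the underlying full-sentence NMT model, and the rank threshold top_r is illustrative.

def oracle_actions(rank_of, src, tgt, top_r=1):
    # rank_of(src_prefix, tgt_prefix, word): rank of `word` in the model's
    # next-word distribution given the prefixes (assumed interface).
    i, j, actions = 0, 0, []
    while j < len(tgt):
        if i < len(src) and rank_of(src[:i], tgt[:j], tgt[j]) > top_r:
            actions.append("READ"); i += 1    # context insufficient: read on
        else:
            actions.append("WRITE"); j += 1   # context suffices: emit tgt[j]
    return actions   # supervision for the adaptive READ/WRITE classifier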

pdf
Speculative Beam Search for Simultaneous Translation
Renjie Zheng | Mingbo Ma | Baigong Zheng | Liang Huang
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Beam search is universally used in (full-sentence) machine translation but its application to simultaneous translation remains highly non-trivial, where output words are committed on the fly. In particular, the recently proposed wait-k policy (Ma et al., 2018) is a simple and effective method that (after an initial wait) commits one output word on receiving each input word, making beam search seemingly inapplicable. To address this challenge, we propose a new speculative beam search algorithm that hallucinates several steps into the future in order to reach a more accurate decision by implicitly benefiting from a target language model. This idea makes beam search applicable for the first time to the generation of a single word in each step. Experiments over diverse language pairs show large improvement compared to previous work.
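
The core of the idea in a few lines, as a sketch under assumed interfaces: beam_search() stands in for the real decoder, and the lookahead window w and beam size are illustrative.

def speculative_commit(beam_search, src_prefix, tgt_prefix, w=3, beam=5):
    # beam_search(src, tgt, steps, beam): best `steps`-word continuation of
    # tgt under the NMT model (assumed interface). The search hallucinates
    # w steps into the future, but only the first word is committed.
    future = beam_search(src_prefix, tgt_prefix, w, beam)
    return future[0]   # the speculative tail is discarded and redecoded later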

pdf
Learning to Stop in Structured Prediction for Neural Machine Translation
Mingbo Ma | Renjie Zheng | Liang Huang
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Beam search optimization (Wiseman and Rush, 2016) resolves many issues in neural machine translation. However, this method lacks a principled stopping criterion and does not learn how to stop during training, and in practice the model naturally prefers longer hypotheses at test time since it uses the raw score instead of a probability-based score. We propose a novel ranking method which enables an optimal beam search stopping criterion. We further introduce a structured prediction loss function which penalizes suboptimal finished candidates produced by beam search during training. Experiments of neural machine translation on both synthetic data and real languages (German→English and Chinese→English) demonstrate that our proposed methods lead to better lengths and BLEU scores.

pdf
STACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework
Mingbo Ma | Liang Huang | Hao Xiong | Renjie Zheng | Kaibo Liu | Baigong Zheng | Chuanqiang Zhang | Zhongjun He | Hairong Liu | Xing Li | Hua Wu | Haifeng Wang
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Simultaneous translation, which translates sentences before they are finished, is useful in many scenarios but is notoriously difficult due to word-order differences. While the conventional seq-to-seq framework is only suitable for full-sentence translation, we propose a novel prefix-to-prefix framework for simultaneous translation that implicitly learns to anticipate in a single translation model. Within this framework, we present a very simple yet surprisingly effective “wait-k” policy trained to generate the target sentence concurrently with the source sentence, but always k words behind. Experiments show our strategy achieves low latency and reasonable quality (compared to full-sentence translation) on 4 directions: zh↔en and de↔en.
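
A minimal sketch of wait-k decoding; next_word() stands in for the trained prefix-to-prefix model, and returning None marks the end of the target sentence (assumed interface).

def wait_k_decode(next_word, source_stream, k=3):
    # Implements the schedule g(t) = min(k + t - 1, |source|): the t-th
    # target word is predicted from only the first k + t - 1 source words.
    src, tgt, stream, done = [], [], iter(source_stream), False
    while True:
        while not done and len(src) < len(tgt) + k:
            try:
                src.append(next(stream))   # READ until k words ahead
            except StopIteration:
                done = True                # source over: finish full-sentence style
        w = next_word(src, tgt)            # WRITE one word, always k behind
        if w is None:
            return tgt
        tgt.append(w)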

pdf
Robust Neural Machine Translation with Joint Textual and Phonetic Embedding
Hairong Liu | Mingbo Ma | Liang Huang | Hao Xiong | Zhongjun He
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Neural machine translation (NMT) is notoriously sensitive to noise, but noise is almost inevitable in practice. One special kind of noise is homophone noise, where words are replaced by other words with similar pronunciations. We propose to improve the robustness of NMT to homophone noise by 1) jointly embedding both textual and phonetic information of source sentences, and 2) augmenting the training dataset with homophone noise. Interestingly, to achieve better translation quality and more robustness, we found that most (though not all) of the weight should be put on the phonetic rather than textual information. Experiments show that our method not only significantly improves the robustness of NMT to homophone noise, but also surprisingly improves translation quality on some clean test sets.
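
The joint embedding can be pictured as a convex combination of the two views of each source token; a minimal sketch, with the phonetic weight beta = 0.95 purely illustrative of "most though not all weight on phonetics".

import numpy as np

def joint_source_embedding(text_vec, phon_vec, beta=0.95):
    # Combine a token's textual embedding with its phonetic (e.g., pinyin)
    # embedding; beta is the weight on the phonetic side (illustrative).
    return (1.0 - beta) * np.asarray(text_vec) + beta * np.asarray(phon_vec)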

pdf
Simultaneous Translation with Flexible Policy via Restricted Imitation Learning
Baigong Zheng | Renjie Zheng | Mingbo Ma | Liang Huang
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Simultaneous translation is widely useful but remains one of the most difficult tasks in NLP. Previous work either uses fixed-latency policies or trains a complicated two-stage model using reinforcement learning. We propose a much simpler single model that adds a “delay” token to the target vocabulary, and design a restricted dynamic oracle to greatly simplify training. Experiments on Chinese↔English simultaneous translation show that our work leads to flexible policies that achieve better BLEU scores and lower latencies compared to both fixed and RL-learned policies.
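
A hedged sketch of decoding with the extended vocabulary: next_token() stands in for the single trained model, and the allow_delay flag mimics masking the delay token once the source is exhausted (the restricted dynamic oracle imposes analogous constraints during training).

def delay_token_decode(next_token, source_stream, DELAY="<delay>", EOS="</s>"):
    # next_token(src, tgt, allow_delay) -> a token from the target vocabulary
    # extended with <delay> (assumed interface).
    src, tgt = [], []
    stream, src_done = iter(source_stream), False
    while True:
        tok = next_token(src, tgt, not src_done)
        if tok == DELAY:                 # READ: wait for one more source word
            try:
                src.append(next(stream))
            except StopIteration:
                src_done = True          # <delay> is masked from now on
        elif tok == EOS:
            return tgt
        else:
            tgt.append(tok)              # WRITE: commit one target word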

pdf
Robust Machine Translation with Domain Sensitive Pseudo-Sources: Baidu-OSU WMT19 MT Robustness Shared Task System Report
Renjie Zheng | Hairong Liu | Mingbo Ma | Baigong Zheng | Liang Huang
Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)

This paper describes the machine translation system developed jointly by Baidu Research and Oregon State University for the WMT 2019 Machine Translation Robustness Shared Task. Translation of social media is a very challenging problem, since its style is very different from normal parallel corpora (e.g., news) and it also includes various types of noise. To make things worse, the amount of social media parallel data is extremely limited. In this paper, we use a domain-sensitive training method which leverages a large amount of parallel data from popular domains together with a small amount of parallel data from social media. Furthermore, we generate a parallel dataset with pseudo noisy source sentences which are back-translated from monolingual data using a model trained in a similarly domain-sensitive way. In this way, we achieve more than 10 BLEU improvement in both En-Fr and Fr-En translation compared with the baseline methods.

2018

pdf
Ensemble Sequence Level Training for Multimodal MT: OSU-Baidu WMT18 Multimodal Machine Translation System Report
Renjie Zheng | Yilin Yang | Mingbo Ma | Liang Huang
Proceedings of the Third Conference on Machine Translation: Shared Task Papers

This paper describes the multimodal machine translation systems developed jointly by Oregon State University and Baidu Research for the WMT 2018 Shared Task on multimodal translation. In this paper, we introduce a simple approach to incorporate image information by feeding image features to the decoder side. We also explore different sequence-level training methods, including scheduled sampling and reinforcement learning, which lead to substantial improvements. Our systems ensemble several models using different architectures and training methods, and achieve the best performance for three subtasks: En-De and En-Cs in task 1 and (En+De+Fr)-Cs in task 1B.

pdf
Linear-time Constituency Parsing with RNNs and Dynamic Programming
Juneki Hong | Liang Huang
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Recently, span-based constituency parsing has achieved competitive accuracies with extremely simple models by using bidirectional RNNs to model “spans”. However, the minimal span parser of Stern et al. (2017a), which holds the current state-of-the-art accuracy, is a chart parser running in cubic time, O(n³), which is too slow for longer sentences and for applications beyond sentence boundaries such as end-to-end discourse parsing and joint sentence boundary detection and parsing. We propose a linear-time constituency parser with RNNs and dynamic programming using a graph-structured stack and beam search, which runs in time O(nb²) where b is the beam size. We further speed this up to O(nb log b) by integrating cube pruning. Compared with chart parsing baselines, this linear-time parser is substantially faster for long sentences on the Penn Treebank and orders of magnitude faster for discourse parsing, and achieves the highest F1 accuracy on the Penn Treebank among single-model end-to-end systems.

pdf
Large Margin Neural Language Model
Jiaji Huang | Yi Li | Wei Ping | Liang Huang
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

We propose a large margin criterion for training neural language models. Conventionally, neural language models are trained by minimizing perplexity (PPL) on grammatical sentences. However, we demonstrate that PPL may not be the best metric to optimize in some tasks, and further propose a large margin formulation. The proposed method aims to enlarge the margin between the “good” and “bad” sentences in a task-specific sense. It is trained end-to-end and can be widely applied to tasks that involve re-scoring of generated text. Compared with minimum-PPL training, our method gains up to 1.1 WER reduction for speech recognition and 1.0 BLEU increase for machine translation.
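
A minimal sketch of the criterion as a pairwise hinge loss on sentence-level scores; the pairing of "good" and "bad" sentences and the margin value are illustrative, not the paper's exact formulation.

import torch

def large_margin_lm_loss(score_good, score_bad, margin=1.0):
    # score_good / score_bad: sentence-level LM scores (e.g., summed
    # log-probabilities) for paired good/bad sentences. Push each good
    # sentence to outscore its bad counterpart by at least `margin`.
    diff = torch.as_tensor(margin - (score_good - score_bad))
    return torch.clamp(diff, min=0.0).mean()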

pdf
Breaking the Beam Search Curse: A Study of (Re-)Scoring Methods and Stopping Criteria for Neural Machine Translation
Yilin Yang | Liang Huang | Mingbo Ma
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Beam search is widely used in neural machine translation, and usually improves translation quality compared to greedy search. However, it has been widely observed that beam sizes larger than 5 hurt translation quality. We explain why this happens, and propose several methods to address this problem. Furthermore, we discuss the optimal stopping criteria for these methods. Results show that our hyperparameter-free methods outperform the widely-used hyperparameter-free heuristic of length normalization by +2.0 BLEU, and achieve the best results among all methods on Chinese-to-English translation.
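
For reference, the length-normalization heuristic that serves as the baseline here simply ranks finished hypotheses by average per-word log-probability (a sketch of the standard heuristic, not the paper's proposed methods).

def length_normalized_score(logprob_sum, length):
    # Average per-word log-probability, so longer hypotheses are not
    # penalized merely for accumulating more (negative) log-probs.
    return logprob_sum / length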

pdf
Multi-Reference Training with Pseudo-References for Neural Translation and Text Generation
Renjie Zheng | Mingbo Ma | Liang Huang
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Neural text generation, including neural machine translation, image captioning, and summarization, has been quite successful recently. However, during training time, typically only one reference is considered for each example, even though there are often multiple references available, e.g., 4 references in NIST MT evaluations, and 5 references in image captioning data. We first investigate several different ways of utilizing multiple human references during training. But more importantly, we then propose an algorithm to generate exponentially many pseudo-references by first compressing existing human references into lattices and then traversing them to generate new pseudo-references. These approaches lead to substantial improvements over strong baselines in both machine translation (+1.5 BLEU) and image captioning (+3.1 BLEU / +11.7 CIDEr).

pdf
Speeding Up Neural Machine Translation Decoding by Cube Pruning
Wen Zhang | Liang Huang | Yang Feng | Lei Shen | Qun Liu
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Although neural machine translation has achieved promising results, it suffers from slow translation speed. The direct consequence is that a trade-off has to be made between translation quality and speed, so its performance cannot be fully realized. We apply cube pruning, a popular technique for speeding up dynamic programming, to neural machine translation to speed up decoding. To construct the equivalence classes, similar target hidden states are combined, leading to fewer RNN expansion operations on the target side and fewer softmax operations over the large target vocabulary. The experiments show that, at the same or even better translation quality, our method translates faster than naive beam search by 3.3x on GPUs and 3.5x on CPUs.

2017

pdf
Group Sparse CNNs for Question Classification with Answer Sets
Mingbo Ma | Liang Huang | Bing Xiang | Bowen Zhou
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Question classification is an important task with wide applications. However, traditional techniques treat questions as general sentences, ignoring the corresponding answer data. In order to incorporate answer information into question modeling, we first introduce novel group sparse autoencoders which refine question representations by utilizing group information in the answer set. We then propose novel group sparse CNNs which naturally learn question representations with respect to their answers by implanting group sparse autoencoders into traditional CNNs. The proposed model significantly outperforms strong baselines on four datasets.

pdf
OSU Multimodal Machine Translation System Report
Mingbo Ma | Dapeng Li | Kai Zhao | Liang Huang
Proceedings of the Second Conference on Machine Translation

pdf bib
Fast(er) Exact Decoding and Global Training for Transition-Based Dependency Parsing via a Minimal Feature Set
Tianze Shi | Liang Huang | Lillian Lee
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

We first present a minimal feature set for transition-based dependency parsing, continuing a recent trend started by Kiperwasser and Goldberg (2016a) and Cross and Huang (2016a) of using bi-directional LSTM features. We plug our minimal feature set into the dynamic-programming framework of Huang and Sagae (2010) and Kuhlmann et al. (2011) to produce the first implementation of worst-case O(n³) exact decoders for arc-hybrid and arc-eager transition systems. With our minimal features, we also present O(n³) global training methods. Finally, using ensembles including our new parsers, we achieve the best unlabeled attachment score reported (to our knowledge) on the Chinese Treebank and the “second-best-in-class” result on the English Penn Treebank.

pdf
Joint Syntacto-Discourse Parsing and the Syntacto-Discourse Treebank
Kai Zhao | Liang Huang
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

Discourse parsing has long been treated as a stand-alone problem independent from constituency or dependency parsing. Most attempts at this problem rely on annotated text segmentations (Elementary Discourse Units, EDUs) and sophisticated sparse or continuous features to extract syntactic information. In this paper we propose the first end-to-end discourse parser that jointly parses at both the syntax and discourse levels, as well as the first syntacto-discourse treebank, built by integrating the Penn Treebank and the RST Treebank. Built upon our recent span-based constituency parser, this joint syntacto-discourse parser requires no preprocessing efforts such as segmentation or feature extraction, making discourse parsing more convenient. Empirically, our parser achieves the state-of-the-art end-to-end discourse parsing accuracy.

pdf
When to Finish? Optimal Beam Search for Neural Text Generation (modulo beam size)
Liang Huang | Kai Zhao | Mingbo Ma
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

In neural text generation such as neural machine translation, summarization, and image captioning, beam search is widely used to improve the output text quality. However, in the neural generation setting, hypotheses can finish in different steps, which makes it difficult to decide when to end beam search to ensure optimality. We propose a provably optimal beam search algorithm that will always return the optimal-score complete hypothesis (modulo beam size), and finish as soon as the optimality is established. To counter neural generation’s tendency for shorter hypotheses, we also introduce a bounded length reward mechanism which allows a modified version of our beam search algorithm to remain optimal. Experiments on neural machine translation demonstrate that our principled beam search algorithm leads to improvement in BLEU score over previously proposed alternatives.
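
The optimality test behind the stopping rule can be sketched in one comparison; a minimal illustration, assuming hypotheses are scored by log-probabilities (so scores only decrease as hypotheses grow).

def can_stop(best_finished_score, beam_scores):
    # Once the best finished hypothesis outscores every unfinished one on
    # the beam, no future extension can overtake it (scores are monotonically
    # non-increasing), so search may end. The paper's bounded length reward
    # modifies the scores so that this monotonicity argument still applies.
    return best_finished_score >= max(beam_scores)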

2016

pdf bib
Span-Based Constituency Parsing with a Structure-Label System and Provably Optimal Dynamic Oracles
James Cross | Liang Huang
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

pdf
Textual Entailment with Structured Attentions and Composition
Kai Zhao | Liang Huang | Mingbo Ma
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Deep learning techniques are increasingly popular in the textual entailment task, overcoming the fragility of traditional discrete models with hard alignments and logics. In particular, the recently proposed attention models (Rocktäschel et al., 2015; Wang and Jiang, 2015) achieve state-of-the-art accuracy by computing soft word alignments between the premise and hypothesis sentences. However, there remains a major limitation: this line of work completely ignores syntax and recursion, which have proven helpful in many traditional efforts. We show that it is beneficial to extend the attention model to tree nodes between premise and hypothesis. More importantly, this subtree-level attention reveals information about the entailment relation. We study the recursive composition of this subtree-level entailment relation, which can be viewed as a soft version of the Natural Logic framework (MacCartney and Manning, 2009). Experiments show that our structured attention and entailment composition model can correctly identify and infer entailment relations from the bottom up, and bring significant improvements in accuracy.

pdf
Incremental Parsing with Minimal Features Using Bi-Directional LSTM
James Cross | Liang Huang
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

pdf
Event Nugget Detection with Forward-Backward Recurrent Neural Networks
Reza Ghaeini | Xiaoli Fern | Liang Huang | Prasad Tadepalli
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2015

pdf
A pilot study towards end-to-end MT training
Feifei Zhai | Liang Huang
Proceedings of Machine Translation Summit XV: Papers

pdf
Discriminative Unsupervised Alignment of Natural Language Instructions with Corresponding Video Segments
Iftekhar Naim | Young C. Song | Qiguang Liu | Liang Huang | Henry Kautz | Jiebo Luo | Daniel Gildea
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Shift-Reduce Constituency Parsing with Dynamic Programming and POS Tag Lattice
Haitao Mi | Liang Huang
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Type-Driven Incremental Semantic Parsing with Polymorphism
Kai Zhao | Liang Huang
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Dependency-based Convolutional Neural Networks for Sentence Embedding
Mingbo Ma | Liang Huang | Bowen Zhou | Bing Xiang
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

pdf
Scalable Large-Margin Structured Learning: Theory and Algorithms
Liang Huang | Kai Zhao
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing: Tutorial Abstracts

pdf
Search-Aware Tuning for Hierarchical Phrase-based Decoding
Feifei Zhai | Liang Huang | Kai Zhao
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

pdf
Automatic Adaptation of Annotations
Wenbin Jiang | Yajuan Lü | Liang Huang | Qun Liu
Computational Linguistics, Volume 41, Issue 1 - March 2015

2014

pdf
Search-Aware Tuning for Machine Translation
Lemao Liu | Liang Huang
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

pdf
Hierarchical MT Training using Max-Violation Perceptron
Kai Zhao | Liang Huang | Haitao Mi | Abe Ittycheriah
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

pdf bib
Scalable Large-Margin Structured Learning: Theory and Algorithms
Liang Huang | Kai Zhao | Lemao Liu
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: Tutorials

pdf
A Structured Language Model for Incremental Tree-to-String Translation
Heng Yu | Haitao Mi | Liang Huang | Qun Liu
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

2013

pdf
Optimal Incremental Parsing via Best-First Dynamic Programming
Kai Zhao | James Cross | Liang Huang
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

pdf
Online Learning for Inexact Hypergraph Search
Hao Zhang | Liang Huang | Kai Zhao | Ryan McDonald
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

pdf
Max-Violation Perceptron and Forced Decoding for Scalable MT Training
Heng Yu | Liang Huang | Haitao Mi | Kai Zhao
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

pdf bib
Proceedings of the 13th International Conference on Parsing Technologies (IWPT 2013)
Harry Bunt | Khalil Sima'an | Liang Huang
Proceedings of the 13th International Conference on Parsing Technologies (IWPT 2013)

pdf
Minibatch and Parallelization for Online Large Margin Structured Learning
Kai Zhao | Liang Huang
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Joint Event Extraction via Structured Prediction with Global Features
Qi Li | Heng Ji | Liang Huang
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf
Efficient Implementation of Beam-Search Incremental Parsers
Yoav Goldberg | Kai Zhao | Liang Huang
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2012

pdf
Structured Perceptron with Inexact Search
Liang Huang | Suphan Fayong | Yang Guo
Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Smaller Alignment Models for Better Translations: Unsupervised Word Alignment with the l0-norm
Ashish Vaswani | Liang Huang | David Chiang
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2011

pdf
Rule Markov Models for Fast Tree-to-String Translation
Ashish Vaswani | Haitao Mi | Liang Huang | David Chiang
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

2010

pdf
Efficient Incremental Decoding for Tree-to-String Translation
Liang Huang | Haitao Mi
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

pdf
Machine Translation with Lattices and Forests
Haitao Mi | Liang Huang | Qun Liu
Coling 2010: Posters

pdf
Dynamic Programming for Linear-Time Incremental Parsing
Liang Huang | Kenji Sagae
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

pdf bib
Tree-Based and Forest-Based Translation
Yang Liu | Liang Huang
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts

2009

pdf
Dynamic Programming-based Search Algorithms in NLP
Liang Huang
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Tutorial Abstracts

pdf
Automatic Adaptation of Annotation Standards: Chinese Word Segmentation and POS Tagging – A Case Study
Wenbin Jiang | Liang Huang | Qun Liu
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP

pdf
Bilingually-Constrained (Monolingual) Shift-Reduce Parsing
Liang Huang | Wenbin Jiang | Qun Liu
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

pdf
Binarization of Synchronous Context-Free Grammars
Liang Huang | Hao Zhang | Daniel Gildea | Kevin Knight
Computational Linguistics, Volume 35, Number 4, December 2009

2008

pdf bib
Coling 2008: Advanced Dynamic Programming in Computational Linguistics: Theory, Algorithms and Applications - Tutorial notes
Liang Huang
Coling 2008: Advanced Dynamic Programming in Computational Linguistics: Theory, Algorithms and Applications - Tutorial notes

pdf bib
Advanced Dynamic Programming in Semiring and Hypergraph Frameworks
Liang Huang
Coling 2008: Advanced Dynamic Programming in Computational Linguistics: Theory, Algorithms and Applications - Tutorial notes

pdf
Forest-Based Translation
Haitao Mi | Liang Huang | Qun Liu
Proceedings of ACL-08: HLT

pdf
Forest Reranking: Discriminative Parsing with Non-Local Features
Liang Huang
Proceedings of ACL-08: HLT

pdf
A Cascaded Linear Model for Joint Chinese Word Segmentation and Part-of-Speech Tagging
Wenbin Jiang | Liang Huang | Qun Liu | Yajuan Lü
Proceedings of ACL-08: HLT

pdf
Forest-based Translation Rule Extraction
Haitao Mi | Liang Huang
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

2007

pdf
Forest Rescoring: Faster Decoding with Integrated Language Models
Liang Huang | David Chiang
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

pdf
Binarization, Synchronous Binarization, and Target-side Binarization
Liang Huang
Proceedings of SSST, NAACL-HLT 2007 / AMTA Workshop on Syntax and Structure in Statistical Translation

2006

pdf
Synchronous Binarization for Machine Translation
Hao Zhang | Liang Huang | Daniel Gildea | Kevin Knight
Proceedings of the Human Language Technology Conference of the NAACL, Main Conference

pdf
Efficient Algorithms for Richer Formalisms: Parsing and Machine Translation
Liang Huang
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Doctoral Consortium

pdf
Statistical Syntax-Directed Translation with Extended Domain of Locality
Liang Huang | Kevin Knight | Aravind Joshi
Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers

In syntax-directed translation, the source-language input is first parsed into a parse tree, which is then recursively converted into a string in the target language. We model this conversion by an extended tree-to-string transducer that has multi-level trees on the source side, which gives our system more expressive power and flexibility. We also define a direct probability model and use a linear-time dynamic programming algorithm to search for the best derivation. The model is then extended to the general log-linear framework in order to incorporate other features like n-gram language models. We devise a simple yet effective algorithm to generate non-duplicate k-best translations for n-gram rescoring. Preliminary experiments on English-to-Chinese translation show a significant improvement in terms of translation quality compared to a state-of-the-art phrase-based system.

pdf bib
A Syntax-Directed Translator with Extended Domain of Locality
Liang Huang | Kevin Knight | Aravind Joshi
Proceedings of the Workshop on Computationally Hard Problems and Joint Inference in Speech and Language Processing

2005

pdf
Better k-best Parsing
Liang Huang | David Chiang
Proceedings of the Ninth International Workshop on Parsing Technology

pdf
Machine Translation as Lexicalized Parsing with Hooks
Liang Huang | Hao Zhang | Daniel Gildea
Proceedings of the Ninth International Workshop on Parsing Technology

2002

pdf
PCFG Parsing for Restricted Classical Chinese Texts
Liang Huang | Yinan Peng | Huan Wang | Zhenyu Wu
COLING-02: The First SIGHAN Workshop on Chinese Language Processing