Andreas Maletti


2021

Strong Equivalence of TAG and CCG
Lena Katharina Schiffer | Andreas Maletti
Transactions of the Association for Computational Linguistics, Volume 9

Tree-adjoining grammar (TAG) and combinatory categorial grammar (CCG) are two well-established mildly context-sensitive grammar formalisms that are known to have the same expressive power on strings (i.e., they generate the same class of string languages). It is demonstrated that their expressive power on trees also essentially coincides. In fact, CCG without lexicon entries for the empty string and with only first-order rules of degree at most 2 suffices for its full expressive power.

2019

Proceedings of the 14th International Conference on Finite-State Methods and Natural Language Processing
Heiko Vogler | Andreas Maletti
Proceedings of the 14th International Conference on Finite-State Methods and Natural Language Processing

2018

Recurrent Neural Networks as Weighted Language Recognizers
Yining Chen | Sorcha Gilroy | Andreas Maletti | Jonathan May | Kevin Knight
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

We investigate the computational complexity of various problems for simple recurrent neural networks (RNNs) as formal models for recognizing weighted languages. We focus on single-layer RNNs with ReLU activation, rational weights, and softmax output, which are commonly used in natural language processing applications. We show that most problems for such RNNs are undecidable, including consistency, equivalence, minimization, and the determination of the highest-weighted string. However, for consistent RNNs the last problem becomes decidable, although the length of the solution can surpass all computable bounds. If, additionally, the string is limited to polynomial length, the problem becomes NP-complete. In summary, this shows that approximations and heuristic algorithms are necessary in practical applications of those RNNs.

2016

Proceedings of the SIGFSM Workshop on Statistical NLP and Weighted Automata
Bryan Jurish | Andreas Maletti | Kay-Michael Würzner | Uwe Springmann
Proceedings of the SIGFSM Workshop on Statistical NLP and Weighted Automata

2015

Discontinuous Statistical Machine Translation with Target-Side Dependency Syntax
Nina Seemann | Andreas Maletti
Proceedings of the Tenth Workshop on Statistical Machine Translation

Extended Tree Transducers in Natural Language Processing
Andreas Maletti
Proceedings of the 12th International Conference on Finite-State Methods and Natural Language Processing 2015 (FSMNLP 2015 Düsseldorf)

String-to-Tree Multi Bottom-up Tree Transducers
Nina Seemann | Fabienne Braune | Andreas Maletti
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

A systematic evaluation of MBOT in statistical machine translation
Nina Seemann | Fabienne Braune | Andreas Maletti
Proceedings of Machine Translation Summit XV: Papers

2014

A tunable language model for statistical machine translation
Junfei Guo | Juan Liu | Qi Han | Andreas Maletti
Proceedings of the 11th Conference of the Association for Machine Translation in the Americas: MT Researchers Track

A novel variation of the modified Kneser-Ney model using monomial discounting is presented and integrated into the Moses statistical machine translation toolkit. The language model is trained on a large training set as usual, but its new discount parameters are tuned on the small development set. An in-domain and cross-domain evaluation of the language model is performed based on perplexity, in which sizable improvements are obtained. Additionally, the performance of the language model is evaluated in several major machine translation tasks, including Chinese-to-English. In those tests, the test data is from a (slightly) different domain than the training data. The experimental results indicate that the new model significantly outperforms a baseline model using SRILM in those domain adaptation scenarios. The new language model is thus ideally suited for domain adaptation without sacrificing performance in in-domain experiments.

Proceedings of the 2014 Joint Meeting of SIGMORPHON and SIGFSM
Özlem Çetinoğlu | Jeffrey Heinz | Andreas Maletti | Jason Riggle
Proceedings of the 2014 Joint Meeting of SIGMORPHON and SIGFSM

2013

Shallow Local Multi-Bottom-up Tree Transducers in Statistical Machine Translation
Fabienne Braune | Nina Seemann | Daniel Quernheim | Andreas Maletti
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2012

Strong Lexicalization of Tree Adjoining Grammars
Andreas Maletti | Joost Engelfriet
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Every sensible extended top-down tree transducer is a multi bottom-up tree transducer
Andreas Maletti
Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Preservation of Recognizability for Weighted Linear Extended Top-Down Tree Transducers
Nina Seemann | Daniel Quernheim | Fabienne Braune | Andreas Maletti
Proceedings of the Workshop on Applications of Tree Automata Techniques in Natural Language Processing

Composing extended top-down tree transducers
Aurélie Lagoutte | Fabienne Braune | Daniel Quernheim | Andreas Maletti
Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics

2011

How to train your multi bottom-up tree transducer
Andreas Maletti
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

Proceedings of the 9th International Workshop on Finite State Methods and Natural Language Processing
Andreas Maletti | Matthieu Constant
Proceedings of the 9th International Workshop on Finite State Methods and Natural Language Processing

2010

Why Synchronous Tree Substitution Grammars?
Andreas Maletti
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

A Tree Transducer Model for Synchronous Tree-Adjoining Grammars
Andreas Maletti
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

Preservation of Recognizability for Synchronous Tree Substitution Grammars
Zoltán Fülöp | Andreas Maletti | Heiko Vogler
Proceedings of the 2010 Workshop on Applications of Tree Automata in Natural Language Processing

Parsing and Translation Algorithms Based on Weighted Extended Tree Transducers
Andreas Maletti | Giorgio Satta
Proceedings of the 2010 Workshop on Applications of Tree Automata in Natural Language Processing

2009

Parsing Algorithms based on Tree Automata
Andreas Maletti | Giorgio Satta
Proceedings of the 11th International Conference on Parsing Technologies (IWPT’09)