Kuzman Ganchev


2023

Text-Blueprint: An Interactive Platform for Plan-based Conditional Generation
Fantine Huot | Joshua Maynez | Shashi Narayan | Reinald Kim Amplayo | Kuzman Ganchev | Annie Priyadarshini Louis | Anders Sandholm | Dipanjan Das | Mirella Lapata
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations

While conditional generation models can now generate natural language well enough to create fluent text, it is still difficult to control the generation process, leading to irrelevant, repetitive, and hallucinated content. Recent work shows that planning can be a useful intermediate step to render conditional generation less opaque and more grounded. We present a web browser-based demonstration for query-focused summarization that uses a sequence of question-answer pairs as a blueprint plan for guiding text generation (i.e., what to say and in what order). We illustrate how users may interact with the generated text and associated plan visualizations, e.g., by editing and modifying the plan to improve or control the generated output. A short video demonstrating our system is available at https://goo.gle/text-blueprint-demo
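
In this demo, a blueprint is simply an ordered list of question-answer pairs that the model conditions on alongside the input document. A minimal sketch of that representation and of the kind of plan edit a user might make (the class and method names here are illustrative, not the demo's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Blueprint:
    """An ordered plan: each QA pair marks what to say and in what order."""
    qa_pairs: list[tuple[str, str]] = field(default_factory=list)

    def to_prompt(self) -> str:
        # Serialize the plan so a seq2seq model can condition on it.
        return " ".join(f"Q: {q} A: {a}" for q, a in self.qa_pairs)

# Hypothetical editing step: the user reorders the plan before
# regenerating, controlling the order of content in the summary.
plan = Blueprint([("Who won the race?", "Team A"),
                  ("When was it held?", "June 2022")])
plan.qa_pairs.reverse()        # edit the plan
prompt = plan.to_prompt()      # feed this alongside the source document
```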

Conditional Generation with a Question-Answering Blueprint
Shashi Narayan | Joshua Maynez | Reinald Kim Amplayo | Kuzman Ganchev | Annie Louis | Fantine Huot | Anders Sandholm | Dipanjan Das | Mirella Lapata
Transactions of the Association for Computational Linguistics, Volume 11

The ability to convey relevant and faithful information is critical for many tasks in conditional generation and yet remains elusive for neural seq-to-seq models whose outputs often reveal hallucinations and fail to correctly cover important details. In this work, we advocate planning as a useful intermediate representation for rendering conditional generation less opaque and more grounded. We propose a new conceptualization of text plans as a sequence of question-answer (QA) pairs and enhance existing datasets (e.g., for summarization) with a QA blueprint operating as a proxy for content selection (i.e., what to say) and planning (i.e., in what order). We obtain blueprints automatically by exploiting state-of-the-art question generation technology and convert input-output pairs into input-blueprint-output tuples. We develop Transformer-based models, each varying in how it incorporates the blueprint into the generated output (e.g., as a global plan or iteratively). Evaluation across metrics and datasets demonstrates that blueprint models are more factual than alternatives that do not resort to planning, and that they allow tighter control of the generation output.
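
The dataset enhancement the abstract describes, turning (input, output) pairs into (input, blueprint, output) tuples via question generation, can be sketched as follows; `question_generator` is a stand-in for the trained QG system the paper uses, and the serialization format is illustrative:

```python
def make_blueprint_example(document: str, summary: str, question_generator):
    """Turn an (input, output) pair into an (input, blueprint, output) tuple.

    The blueprint is a sequence of QA pairs derived from the *target*
    summary, acting as a proxy for content selection and ordering.
    """
    qa_pairs = question_generator(summary)   # e.g., [(q1, a1), (q2, a2), ...]
    blueprint = " ".join(f"Q: {q} A: {a}" for q, a in qa_pairs)
    return {"input": document, "blueprint": blueprint, "output": summary}
```

At training time, a model can then learn either to emit the whole blueprint before the summary (a global plan) or to interleave planning and generation step by step, the two variants the abstract mentions.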

QAmeleon: Multilingual QA with Only 5 Examples
Priyanka Agrawal | Chris Alberti | Fantine Huot | Joshua Maynez | Ji Ma | Sebastian Ruder | Kuzman Ganchev | Dipanjan Das | Mirella Lapata
Transactions of the Association for Computational Linguistics, Volume 11

The availability of large, high-quality datasets has been a major driver of recent progress in question answering (QA). Such annotated datasets, however, are difficult and costly to collect, and rarely exist in languages other than English, rendering QA technology inaccessible to underrepresented languages. An alternative to building large monolingual training datasets is to leverage pre-trained language models (PLMs) under a few-shot learning setting. Our approach, QAmeleon, uses a PLM to automatically generate multilingual data upon which QA models are fine-tuned, thus avoiding costly annotation. Prompt tuning the PLM with only five examples per language delivers accuracy superior to translation-based baselines; it bridges nearly 60% of the gap between an English-only baseline and a fully-supervised upper bound fine-tuned on almost 50,000 hand-labeled examples; and consistently leads to improvements compared to directly fine-tuning a QA model on labeled examples in low-resource settings. Experiments on the TyDiQA-GoldP and MLQA benchmarks show that few-shot prompt tuning for data synthesis scales across languages and is a viable alternative to large-scale annotation.
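
At a high level the approach is a three-stage pipeline: prompt-tune, synthesize, fine-tune. A sketch under that reading, with all three stage functions passed in as placeholders rather than drawn from any real library:

```python
def qameleon_pipeline(plm, seeds_by_language, passages_by_language,
                      prompt_tune, synthesize, finetune_qa, qa_model):
    """Illustrative QAmeleon-style pipeline; `prompt_tune`, `synthesize`,
    and `finetune_qa` are placeholder callables, not the paper's actual API.
    """
    synthetic = []
    for lang, seed in seeds_by_language.items():   # ~5 examples per language
        # 1. Prompt-tune the PLM on a handful of labeled examples;
        #    only the soft prompt is updated, the PLM stays frozen.
        tuned = prompt_tune(plm, seed)
        # 2. Generate synthetic (passage, question, answer) data in `lang`.
        synthetic.extend(synthesize(tuned, passages_by_language[lang]))
    # 3. Fine-tune a standard QA model on the pooled synthetic data.
    return finetune_qa(qa_model, synthetic)
```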

2018

State-of-the-art Chinese Word Segmentation with Bi-LSTMs
Ji Ma | Kuzman Ganchev | David Weiss
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

A wide variety of neural-network architectures have been proposed for the task of Chinese word segmentation. Surprisingly, we find that a bidirectional LSTM model, when combined with standard deep learning techniques and best practices, can achieve better accuracy on many popular datasets than models based on more complex neural-network architectures. Furthermore, our error analysis shows that out-of-vocabulary words remain challenging for neural-network models, and many of the remaining errors are unlikely to be fixed through architecture changes. Instead, more effort should be devoted to exploring resources for further improvement.
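
For concreteness, a minimal character-level BiLSTM segmenter of the kind studied here might look as follows in PyTorch (a sketch only: the paper's actual model, feature set such as character bigrams, and hyperparameters differ):

```python
import torch
import torch.nn as nn

class BiLSTMSegmenter(nn.Module):
    """Character-level BiLSTM tagger: one BMES tag per input character."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, n_tags=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, n_tags)  # B/M/E/S tags

    def forward(self, char_ids):               # (batch, seq_len)
        h, _ = self.lstm(self.embed(char_ids))
        return self.out(h)                     # (batch, seq_len, n_tags)
```

Word boundaries are then read off the predicted tag sequence: a word starts at each B or S tag and ends at the following E or S.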

2016

Globally Normalized Transition-Based Neural Networks
Daniel Andor | Chris Alberti | David Weiss | Aliaksei Severyn | Alessandro Presta | Kuzman Ganchev | Slav Petrov | Michael Collins
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2015

Semantic Role Labeling with Neural Network Factors
Nicholas FitzGerald | Oscar Täckström | Kuzman Ganchev | Dipanjan Das
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

Efficient Inference and Structured Learning for Semantic Role Labeling
Oscar Täckström | Kuzman Ganchev | Dipanjan Das
Transactions of the Association for Computational Linguistics, Volume 3

We present a dynamic programming algorithm for efficient constrained inference in semantic role labeling. The algorithm tractably captures a majority of the structural constraints examined by prior work in this area, which has resorted to either approximate methods or off-the-shelf integer linear programming solvers. In addition, it allows training a globally-normalized log-linear model with respect to constrained conditional likelihood. We show that the dynamic program is several times faster than an off-the-shelf integer linear programming solver, while reaching the same solution. Furthermore, we show that our structured model results in significant improvements over its local counterpart, achieving state-of-the-art results on both PropBank- and FrameNet-annotated corpora.
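
The flavor of the dynamic program can be seen on the simplest structural constraint it captures, that argument spans must not overlap. The sketch below handles only that constraint, whereas the paper's algorithm folds further constraints (e.g., unique core roles) into the same left-to-right pass:

```python
def best_non_overlapping_args(n, span_scores):
    """Best-scoring set of non-overlapping argument spans over n tokens.

    `span_scores[(i, j, role)]` is the local score of labeling the
    half-open span [i, j) with `role`. A sketch of the no-overlap
    constraint only, not the paper's full constrained inference.
    """
    best = [0.0] * (n + 1)        # best[j]: best score for prefix [0, j)
    back = [None] * (n + 1)
    for j in range(1, n + 1):
        best[j], back[j] = best[j - 1], None     # token j-1 left unlabeled
        for (i, end, role), s in span_scores.items():
            if end == j and best[i] + s > best[j]:
                best[j], back[j] = best[i] + s, (i, j, role)
    # Recover the argument set by following backpointers.
    args, j = [], n
    while j > 0:
        if back[j] is None:
            j -= 1
        else:
            i, _, role = back[j]
            args.append((i, j, role))
            j = i
    return best[n], list(reversed(args))

# Toy usage: the two ARG1 candidates overlap ARG0 differently;
# the DP picks ARG0 plus the compatible ARG1 (total score 2.5).
scores = {(0, 2, "ARG0"): 1.5, (1, 4, "ARG1"): 2.0, (2, 4, "ARG1"): 1.0}
total, args = best_non_overlapping_args(4, scores)
```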

2014

Semantic Frame Identification with Distributed Word Representations
Karl Moritz Hermann | Dipanjan Das | Jason Weston | Kuzman Ganchev
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2013

Cross-Lingual Discriminative Learning of Sequence Models with Posterior Regularization
Kuzman Ganchev | Dipanjan Das
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

Universal Dependency Annotation for Multilingual Parsing
Ryan McDonald | Joakim Nivre | Yvonne Quirmbach-Brundage | Yoav Goldberg | Dipanjan Das | Kuzman Ganchev | Keith Hall | Slav Petrov | Hao Zhang | Oscar Täckström | Claudia Bedini | Núria Bertomeu Castelló | Jungmee Lee
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2012

Using Search-Logs to Improve Query Tagging
Kuzman Ganchev | Keith Hall | Ryan McDonald | Slav Petrov
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2011

Rich Prior Knowledge in Learning for Natural Language Processing
Gregory Druck | Kuzman Ganchev | João Graça
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts

Proceedings of the RANLP 2011 Workshop on Information Extraction and Knowledge Acquisition
Preslav Nakov | Zornitsa Kozareva | Kuzman Ganchev | Jerry Hobbs
Proceedings of the RANLP 2011 Workshop on Information Extraction and Knowledge Acquisition

2010

Learning Tractable Word Alignment Models with Complex Constraints
João V. Graça | Kuzman Ganchev | Ben Taskar
Computational Linguistics, Volume 36, Issue 3 - September 2010

Sparsity in Dependency Grammar Induction
Jennifer Gillenwater | Kuzman Ganchev | João Graça | Fernando Pereira | Ben Taskar
Proceedings of the ACL 2010 Conference Short Papers

2009

Dependency Grammar Induction via Bitext Projection Constraints
Kuzman Ganchev | Jennifer Gillenwater | Ben Taskar
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP

Edlin: an Easy to Read Linear Learning Framework
Kuzman Ganchev | Georgi Georgiev
Proceedings of the International Conference RANLP-2009

Feature-Rich Named Entity Recognition for Bulgarian Using Conditional Random Fields
Georgi Georgiev | Preslav Nakov | Kuzman Ganchev | Petya Osenova | Kiril Simov
Proceedings of the International Conference RANLP-2009

Tunable Domain-Independent Event Extraction in the MIRA Framework
Georgi Georgiev | Kuzman Ganchev | Vassil Momchev | Deyan Peychev | Preslav Nakov | Angus Roberts
Proceedings of the BioNLP 2009 Workshop Companion Volume for Shared Task

A Joint Model for Normalizing Gene and Organism Mentions in Text
Georgi Georgiev | Preslav Nakov | Kuzman Ganchev | Deyan Peychev | Vassil Momchev
Proceedings of the Workshop on Biomedical Information Extraction

2008

Better Alignments = Better Translations?
Kuzman Ganchev | João V. Graça | Ben Taskar
Proceedings of ACL-08: HLT

Small Statistical Models by Random Feature Mixing
Kuzman Ganchev | Mark Dredze
Proceedings of the ACL-08: HLT Workshop on Mobile Language Processing

2007

Frustratingly Hard Domain Adaptation for Dependency Parsing
Mark Dredze | John Blitzer | Partha Pratim Talukdar | Kuzman Ganchev | João Graça | Fernando Pereira
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

Transductive Structured Classification through Constrained Min-Cuts
Kuzman Ganchev | Fernando Pereira
Proceedings of the Second Workshop on TextGraphs: Graph-Based Algorithms for Natural Language Processing

Automatic Code Assignment to Medical Text
Koby Crammer | Mark Dredze | Kuzman Ganchev | Partha Pratim Talukdar | Steven Carroll
Biological, translational, and clinical language processing

Semi-Automated Named Entity Annotation
Kuzman Ganchev | Fernando Pereira | Mark Mandel | Steven Carroll | Peter White
Proceedings of the Linguistic Annotation Workshop