Yasemin Altun


2022

LAD: Language Models as Data for Zero-Shot Dialog
Shikib Mehri | Yasemin Altun | Maxine Eskenazi
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue

To facilitate zero-shot generalization in task-oriented dialog, this paper proposes Language Models as Data (LAD). LAD is a paradigm for creating diverse and accurate synthetic data which conveys the necessary structural constraints and can be used to train a downstream neural dialog model. LAD leverages GPT-3 to induce linguistic diversity. LAD achieves significant performance gains in zero-shot settings on intent prediction (+15%), slot filling (+31.4 F-1) and next action prediction (+10 F-1). Furthermore, an interactive human evaluation shows that training with LAD is competitive with training on human dialogs.
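The core idea above, generating structurally accurate seed data from a task schema before an LM diversifies the phrasing, can be sketched as follows. This is a minimal stand-in, not the paper's pipeline: the schema, slot values, and function names are all hypothetical, and the GPT-3 paraphrasing step is omitted.

```python
import random

# Hypothetical mini-schema: each intent maps to a slot-typed template.
SCHEMA = {
    "book_flight": "book a flight from {origin} to {dest}",
    "check_weather": "what is the weather in {city}",
}
SLOT_VALUES = {
    "origin": ["Boston", "Paris"],
    "dest": ["Tokyo", "Lisbon"],
    "city": ["Ankara", "Oslo"],
}

def generate_seed_examples(schema, slot_values, rng):
    """Instantiate each intent template with sampled slot values,
    yielding (utterance, intent, slots) triples -- the structurally
    accurate seed data that an LM would then rephrase for diversity."""
    examples = []
    for intent, template in schema.items():
        slots = {name: rng.choice(values)
                 for name, values in slot_values.items()
                 if "{" + name + "}" in template}
        examples.append((template.format(**slots), intent, slots))
    return examples

rng = random.Random(0)
data = generate_seed_examples(SCHEMA, SLOT_VALUES, rng)
```

Each triple carries both the surface utterance and its gold labels, so the synthetic set can directly supervise downstream intent, slot, and action models.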

2021

Translate & Fill: Improving Zero-Shot Multilingual Semantic Parsing with Synthetic Data
Massimo Nicosia | Zhongdi Qu | Yasemin Altun
Findings of the Association for Computational Linguistics: EMNLP 2021

While multilingual pretrained language models (LMs) fine-tuned on a single language have shown substantial cross-lingual task transfer capabilities, there is still a wide performance gap in semantic parsing tasks when target language supervision is available. In this paper, we propose a novel Translate-and-Fill (TaF) method to produce silver training data for a multilingual semantic parser. This method simplifies the popular Translate-Align-Project (TAP) pipeline and consists of a sequence-to-sequence filler model that constructs a full parse conditioned on an utterance and a view of the same parse. Our filler is trained on English data only but can accurately complete instances in other languages (i.e., translations of the English training utterances), in a zero-shot fashion. Experimental results on three multilingual semantic parsing datasets show that data augmentation with TaF reaches accuracies competitive with similar systems which rely on traditional alignment techniques.
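The view-and-fill idea can be illustrated with a toy example. The sketch below is hypothetical throughout: the learned sequence-to-sequence filler is replaced by a tiny gazetteer lookup, and the parse notation and function names are invented for illustration.

```python
def make_view(parse, slot_values):
    """Replace concrete argument values in a parse with typed
    placeholders, producing the 'view' the filler conditions on."""
    view = parse
    for slot, value in slot_values.items():
        view = view.replace(value, f"<{slot}>")
    return view

def fill(view, utterance, gazetteer):
    """Toy stand-in for the seq2seq filler: complete each placeholder
    with a span from the (possibly non-English) utterance, here found
    via a small gazetteer of known values per slot type."""
    for slot, candidates in gazetteer.items():
        placeholder = f"<{slot}>"
        if placeholder in view:
            for cand in candidates:
                if cand in utterance:
                    view = view.replace(placeholder, cand)
                    break
    return view

parse = 'weather(city="Berlin")'
view = make_view(parse, {"city": "Berlin"})
# view == 'weather(city="<city>")'
filled = fill(view, "wie ist das Wetter in München",
              {"city": ["Berlin", "München"]})
# filled == 'weather(city="München")'
```

The point of the decomposition is that the view is language-independent: only the filling step must cope with the translated utterance, which is why an English-trained filler can transfer zero-shot.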

2019

Answering Conversational Questions on Structured Data without Logical Forms
Thomas Mueller | Francesco Piccinno | Peter Shaw | Massimo Nicosia | Yasemin Altun
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

We present a novel approach to answering sequential questions based on structured objects such as knowledge bases or tables without using a logical form as an intermediate representation. We encode tables as graphs using a graph neural network model based on the Transformer architecture. The answers are then selected from the encoded graph using a pointer network. This model is appropriate for processing conversations around structured data, where the attention mechanism that selects the answers to a question can also be used to resolve conversational references. We demonstrate the validity of this approach with competitive results on the Sequential Question Answering (SQA) task.
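The pointer-style selection step, scoring every encoded table cell against the question and picking the best match, can be sketched in a few lines. This is a bare-bones illustration with hand-picked vectors, not the paper's Transformer-based graph encoder.

```python
import math

def attention_pointer(query, cell_encodings):
    """Score each encoded table cell against the question vector with a
    dot product, softmax-normalize, and return (probs, argmax index) --
    a minimal analogue of pointer-network answer selection."""
    scores = [sum(q * c for q, c in zip(query, cell))
              for cell in cell_encodings]
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    probs = [e / z for e in exps]
    return probs, max(range(len(probs)), key=probs.__getitem__)

probs, best = attention_pointer([1.0, 0.0],
                                [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]])
# best == 0: the first cell best matches the question vector
```

Because the same attention distribution is computed at every conversation turn, it can also weigh cells mentioned earlier, which is how the mechanism doubles as a resolver for conversational references.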

Generating Logical Forms from Graph Representations of Text and Entities
Peter Shaw | Philip Massey | Angelica Chen | Francesco Piccinno | Yasemin Altun
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Structured information about entities is critical for many semantic parsing tasks. We present an approach that uses a Graph Neural Network (GNN) architecture to incorporate information about relevant entities and their relations during parsing. Combined with a decoder copy mechanism, this approach provides a conceptually simple mechanism to generate logical forms with entities. We demonstrate that this approach is competitive with the state-of-the-art across several tasks without pre-training, and outperforms existing approaches when combined with BERT pre-training.
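A single round of message passing over an entity graph, the operation at the heart of the GNN encoder described above, can be sketched as neighbour averaging. This is a deliberate simplification: the paper's architecture is Transformer-based, and the mean-aggregation and function name here are illustrative assumptions.

```python
def gnn_layer(node_feats, edges):
    """One round of message passing: each node's new feature is the
    mean of its own feature and its neighbours' features -- a minimal
    stand-in for a learned GNN layer over entities and relations."""
    neighbours = {i: [] for i in range(len(node_feats))}
    for u, v in edges:                 # undirected edges
        neighbours[u].append(v)
        neighbours[v].append(u)
    out = []
    for i, feat in enumerate(node_feats):
        msgs = [node_feats[j] for j in neighbours[i]] + [feat]
        dim = len(feat)
        out.append([sum(m[d] for m in msgs) / len(msgs)
                    for d in range(dim)])
    return out
```

After a few such rounds each entity's representation reflects its relational context, and a decoder with a copy mechanism can then emit entity tokens directly from these node states when generating the logical form.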

2013

Overcoming the Lack of Parallel Data in Sentence Compression
Katja Filippova | Yasemin Altun
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

2007

Semi-Markov Models for Sequence Segmentation
Qinfeng Shi | Yasemin Altun | Alex Smola | S.V.N. Vishwanathan
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

2006

Broad-Coverage Sense Disambiguation and Information Extraction with a Supersense Sequence Tagger
Massimiliano Ciaramita | Yasemin Altun
Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing

2004

Using Conditional Random Fields to Predict Pitch Accents in Conversational Speech
Michelle Gregory | Yasemin Altun
Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)

2003

Investigating Loss Functions and Optimization Methods for Discriminative Learning of Label Sequences
Yasemin Altun | Mark Johnson | Thomas Hofmann
Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing

2000

Reading Comprehension Programs in a Statistical-Language-Processing Class
Eugene Charniak | Yasemin Altun | Rodrigo de Salvo Braz | Benjamin Garrett | Margaret Kosmala | Tomer Moscovich | Lixin Pang | Changhee Pyo | Ye Sun | Wei Wy | Zhongfa Yang | Shawn Zeiler | Lisa Zorn
ANLP-NAACL 2000 Workshop: Reading Comprehension Tests as Evaluation for Computer-Based Language Understanding Systems