Rudolf Rosa


2022

GPT-2-based Human-in-the-loop Theatre Play Script Generation
Rudolf Rosa | Patrícia Schmidtová | Ondřej Dušek | Tomáš Musil | David Mareček | Saad Obaid | Marie Nováková | Klára Vosecká | Josef Doležal
Proceedings of the 4th Workshop on Narrative Understanding (WNU 2022)

We experiment with adapting generative language models for the generation of long coherent narratives in the form of theatre plays. Since fully automatic generation of whole plays is not currently feasible, we created an interactive tool that allows a human user to steer the generation while keeping interventions to a minimum. We pursue two approaches to long-text generation: flat generation with summarization of context, and a hierarchical text-to-text two-stage approach, where a synopsis is generated first and then used to condition the generation of the final script. Our preliminary results and discussions with theatre professionals show improvements over vanilla language model generation, but also identify important limitations of our approach.
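
As a rough illustration of the hierarchical two-stage approach, the sketch below chains two generation calls with vanilla GPT-2 via the Hugging Face transformers library. The project fine-tunes dedicated models on purpose-built data, so the model name, prompts, and sampling settings here are illustrative assumptions only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def generate(prompt: str, max_new_tokens: int = 200) -> str:
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Stage 1: generate a synopsis from a (hypothetical) title.
synopsis = generate("Title: The Empty Stage\nSynopsis:")
# Stage 2: condition the generation of the final script on the synopsis.
script = generate(synopsis + "\n\nScript:\n", max_new_tokens=400)
print(script)
```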

THEaiTRobot: An Interactive Tool for Generating Theatre Play Scripts
Rudolf Rosa | Patrícia Schmidtová | Alisa Zakhtarenko | Ondřej Dušek | Tomáš Musil | David Mareček | Saad Ul Islam | Marie Nováková | Klára Vosecká | Daniel Hrbek | David Košťák
Proceedings of the 15th International Conference on Natural Language Generation: System Demonstrations

We present a free online demo of THEaiTRobot, an open-source bilingual tool for interactively generating theatre play scripts, in two versions. THEaiTRobot 1.0 uses the GPT-2 language model with minimal adjustments. THEaiTRobot 2.0 uses two models created by fine-tuning GPT-2 on purposefully collected and processed datasets and several other components, generating play scripts in a hierarchical fashion (title → synopsis → script). The underlying tool is used in the THEaiTRE project to generate scripts for plays, which are then performed on stage by a professional theatre.

TEAM UFAL @ CreativeSumm 2022: BART and SamSum based few-shot approach for creative Summarization
Rishu Kumar | Rudolf Rosa
Proceedings of The Workshop on Automatic Summarization for Creative Writing

This system description paper details TEAM UFAL’s approach to the SummScreen - TVMegaSite subtask of the CreativeSumm shared task. The subtask deals with creating summaries for dialogues from TV soap operas. We utilized a BART-based pre-trained model fine-tuned on the SamSum dialogue summarization dataset. A few examples from the AutoMin dataset and from the dataset provided by the organizers were also inserted into the data as a few-shot learning objective. The additional data was manually broken into chunks based on different boundaries in the summary and the dialogue file. For inference we chose a strategy similar to that of the top-performing team at AutoMin 2021: the data is split into chunks, either at [SCENE_CHANGE] markers or when a pre-defined token length is exceeded, to fit one example within the maximum input length of the pre-trained model. The final training strategy was chosen based on how natural the responses looked rather than on how well the model performed on automated evaluation metrics such as ROUGE.
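
As a minimal sketch of this chunking strategy, assuming whitespace word counts as a crude proxy for subword tokens and a hypothetical 1024-token limit:

```python
MAX_TOKENS = 1024

def chunk_transcript(lines, max_tokens=MAX_TOKENS):
    """Split a transcript at [SCENE_CHANGE] markers, starting a new chunk
    whenever the current one would exceed the model's input limit."""
    chunks, current, length = [], [], 0
    for line in lines:
        n = len(line.split())  # crude stand-in for the real token count
        if line.strip() == "[SCENE_CHANGE]" or length + n > max_tokens:
            if current:
                chunks.append("\n".join(current))
            current, length = [], 0
        if line.strip() != "[SCENE_CHANGE]":
            current.append(line)
            length += n
    if current:
        chunks.append("\n".join(current))
    return chunks
```

Each chunk would then be summarized separately by the fine-tuned model and the partial summaries concatenated.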

2020

On the Language Neutrality of Pre-trained Multilingual Representations
Jindřich Libovický | Rudolf Rosa | Alexander Fraser
Findings of the Association for Computational Linguistics: EMNLP 2020

Multilingual contextual embeddings, such as multilingual BERT and XLM-RoBERTa, have proved useful for many multilingual tasks. Previous work probed the cross-linguality of the representations indirectly, using zero-shot transfer learning on morphological and syntactic tasks. We instead investigate the language neutrality of multilingual contextual embeddings directly, with respect to lexical semantics. Our results show that contextual embeddings are more language-neutral and, in general, more informative than aligned static word-type embeddings, which are explicitly trained for language neutrality. Contextual embeddings are still only moderately language-neutral by default, so we propose two simple methods for achieving stronger language neutrality: first, unsupervised centering of the representation for each language, and second, fitting an explicit projection on small parallel data. In addition, we show how to reach state-of-the-art accuracy on language identification and how to match the performance of statistical methods for word alignment of parallel sentences without using parallel data.
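
The unsupervised centering method fits in a few lines: subtract each language's mean vector so that representations from all languages share a common origin. A minimal sketch, assuming precomputed vectors grouped by language (the random arrays stand in for real embeddings):

```python
import numpy as np

def center_by_language(embeddings_by_lang):
    """embeddings_by_lang: dict mapping language code -> (n_i, d) array."""
    return {lang: vecs - vecs.mean(axis=0, keepdims=True)
            for lang, vecs in embeddings_by_lang.items()}

# Stand-in data: two "languages" whose embeddings are offset from each other.
rng = np.random.default_rng(0)
data = {"en": rng.normal(1.0, 1.0, (100, 768)),
        "cs": rng.normal(-1.0, 1.0, (100, 768))}
centered = center_by_language(data)
```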

Universal Dependencies According to BERT: Both More Specific and More General
Tomasz Limisiewicz | David Mareček | Rudolf Rosa
Findings of the Association for Computational Linguistics: EMNLP 2020

This work focuses on analyzing the form and extent of syntactic abstraction captured by BERT by extracting labeled dependency trees from self-attentions. Previous work showed that individual BERT heads tend to encode particular dependency relation types. We extend these findings by explicitly comparing BERT relations to Universal Dependencies (UD) annotations, showing that they often do not match one-to-one. We suggest a method for relation identification and syntactic tree construction. Our approach produces significantly more consistent dependency trees than previous work, showing that it better explains the syntactic abstractions in BERT. At the same time, it can be successfully applied with only a minimal amount of supervision and generalizes well across languages.
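
As an illustration of reading syntax off self-attention, the sketch below extracts, for one arbitrarily chosen head, the most-attended-to token as a candidate governor of each position; the paper's actual relation identification and tree construction are more involved than this.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased", output_attentions=True)

inputs = tokenizer("The cat sat on the mat .", return_tensors="pt")
with torch.no_grad():
    attentions = model(**inputs).attentions  # one tensor per layer

layer, head = 7, 9                  # arbitrary indices, for illustration only
att = attentions[layer][0, head]    # (seq_len, seq_len) attention matrix
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for i, tok in enumerate(tokens):
    print(f"{tok:>8} -> {tokens[int(att[i].argmax())]}")  # candidate governor
```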

Predicting Typological Features in WALS using Language Embeddings and Conditional Probabilities: ÚFAL Submission to the SIGTYP 2020 Shared Task
Martin Vastl | Daniel Zeman | Rudolf Rosa
Proceedings of the Second Workshop on Computational Research in Linguistic Typology

We present our submission to the SIGTYP 2020 Shared Task on the prediction of typological features. We submit a constrained system, predicting typological features based only on the WALS database. We investigate two approaches. The simpler of the two is a system that estimates the correlation of feature values within languages by computing conditional probabilities and mutual information. The second approach is to train a neural predictor operating on precomputed language embeddings based on WALS features. Our submitted system combines the two approaches based on their self-estimated confidence scores. We reach an accuracy of 70.7% on the test data and rank first in the shared task.
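
A toy sketch of the conditional-probability component: for each known feature of a language, estimate P(target value | known feature = value) from the other languages' WALS rows, and predict the value with the highest accumulated score. The data layout (one feature-to-value dict per language) is an assumption for illustration.

```python
from collections import Counter, defaultdict

def predict_feature(target, known, languages):
    """target: feature to predict; known: observed features of the language
    in question; languages: list of feature->value dicts, one per language."""
    scores = defaultdict(float)
    for feat, val in known.items():
        counts = Counter(lang[target] for lang in languages
                         if lang.get(feat) == val and target in lang)
        total = sum(counts.values())
        for tval, c in counts.items():
            scores[tval] += c / total  # add P(target=tval | feat=val)
    return max(scores, key=scores.get) if scores else None
```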

Eyes on the Parse: Using Gaze Features in Syntactic Parsing
Abhishek Agrawal | Rudolf Rosa
Proceedings of the Second Workshop on Beyond Vision and LANguage: inTEgrating Real-world kNowledge (LANTERN)

In this paper, we explore the potential benefits of leveraging eye-tracking information for dependency parsing on the English part of the Dundee corpus. To achieve this, we cast dependency parsing as a sequence labelling task and augment the neural sequence labelling model with eye-tracking features. We also augment a graph-based parser with eye-tracking features and parse the Dundee corpus to corroborate our findings from the sequence labelling parser. We then experiment with a variety of parser setups, ranging from parsing with all features to a delexicalized parser. Our experiments show that for a parser with all features, the improvements in LAS score are positive but not significant, whereas our delexicalized parser significantly outperforms the baseline we established. We also analyze the contribution of various eye-tracking features to the different parser setups and find that the eye-tracking features contain complementary information; augmenting the parser with several gaze features grouped together thus provides better performance than any individual gaze feature.
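
Augmenting a model with gaze features essentially amounts to concatenating per-token eye-tracking measures to the word representations before the encoder, roughly as below; the dimensions and the number of gaze features are illustrative assumptions.

```python
import torch

word_emb = torch.randn(1, 12, 300)  # (batch, tokens, word embedding dim)
gaze = torch.randn(1, 12, 5)        # e.g. fixation counts and durations
encoder_input = torch.cat([word_emb, gaze], dim=-1)  # (1, 12, 305)
```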

2019

From Balustrades to Pierre Vinken: Looking for Syntax in Transformer Self-Attentions
David Mareček | Rudolf Rosa
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

We inspect the multi-head self-attention in Transformer NMT encoders for three source languages, looking for patterns that could have a syntactic interpretation. In many of the attention heads, we frequently find sequences of consecutive states attending to the same position, which resemble syntactic phrases. We propose a transparent deterministic method of quantifying the amount of syntactic information present in the self-attentions, based on automatically building and evaluating phrase-structure trees from the phrase-like sequences. We compare the resulting trees to existing constituency treebanks, both manually and by computing precision and recall.
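
The phrase-like sequences can be found with a single pass over one head's attention matrix, grouping maximal runs of consecutive tokens whose attention peaks at the same position. A minimal sketch (the paper's scoring and tree-building steps are omitted):

```python
import numpy as np

def phrase_candidates(att):
    """att: (seq_len, seq_len) attention matrix of a single head."""
    targets = att.argmax(axis=1)  # most-attended position for each token
    phrases, start = [], 0
    for i in range(1, len(targets) + 1):
        if i == len(targets) or targets[i] != targets[start]:
            if i - start > 1:     # keep runs of two or more tokens
                phrases.append((start, i - 1))
            start = i
    return phrases
```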

Attempting to separate inflection and derivation using vector space representations
Rudolf Rosa | Zdeněk Žabokrtský
Proceedings of the Second International Workshop on Resources and Tools for Derivational Morphology

2018

CUNI x-ling: Parsing Under-Resourced Languages in CoNLL 2018 UD Shared Task
Rudolf Rosa | David Mareček
Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies

This is a system description paper for the CUNI x-ling submission to the CoNLL 2018 UD Shared Task. We focused on parsing under-resourced languages, with no or little training data available. We employed a wide range of approaches, including simple word-based treebank translation, combination of delexicalized parsers, and exploitation of available morphological dictionaries, with a dedicated setup tailored to each of the languages. In the official evaluation, our submission was identified as the clear winner of the Low-resource languages category.

Extracting Syntactic Trees from Transformer Encoder Self-Attentions
David Mareček | Rudolf Rosa
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

This is work in progress on extracting sentence tree structures from the encoder’s self-attention weights when translating into another language using the Transformer neural network architecture. We visualize the structures and discuss their characteristics with respect to existing syntactic theories and annotations.

2017

Slavic Forest, Norwegian Wood
Rudolf Rosa | Daniel Zeman | David Mareček | Zdeněk Žabokrtský
Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial)

We once had a corp, or should we say, it once had us
They showed us its tags, isn’t it great, unified tags
They asked us to parse and they told us to use everything
So we looked around and we noticed there was near nothing
We took other langs, bitext aligned: words one-to-one
We played for two weeks, and then they said, here is the test
The parser kept training till morning, just until deadline
So we had to wait and hope what we get would be just fine
And, when we awoke, the results were done, we saw we’d won
So, we wrote this paper, isn’t it good, Norwegian wood.

Findings of the WMT 2017 Biomedical Translation Shared Task
Antonio Jimeno Yepes | Aurélie Névéol | Mariana Neves | Karin Verspoor | Ondřej Bojar | Arthur Boyer | Cristian Grozea | Barry Haddow | Madeleine Kittner | Yvonne Lichtblau | Pavel Pecina | Roland Roller | Rudolf Rosa | Amy Siu | Philippe Thomas | Saskia Trescher
Proceedings of the Second Conference on Machine Translation

CUNI Experiments for WMT17 Metrics Task
David Mareček | Ondřej Bojar | Ondřej Hübsch | Rudolf Rosa | Dušan Variš
Proceedings of the Second Conference on Machine Translation

Error Analysis of Cross-lingual Tagging and Parsing
Rudolf Rosa | Zdeněk Žabokrtský
Proceedings of the 16th International Workshop on Treebanks and Linguistic Theories

2016

TectoMT – a deep linguistic core of the combined Chimera MT system
Martin Popel | Roman Sudarikov | Ondřej Bojar | Rudolf Rosa | Jan Hajič
Proceedings of the 19th Annual Conference of the European Association for Machine Translation: Projects/Products

Dictionary-based Domain Adaptation of MT Systems without Retraining
Rudolf Rosa | Roman Sudarikov | Michal Novák | Martin Popel | Ondřej Bojar
Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers

Moses & Treex Hybrid MT Systems Bestiary
Rudolf Rosa | Martin Popel | Ondřej Bojar | David Mareček | Ondřej Dušek
Proceedings of the 2nd Deep Machine Translation Workshop

2015

KLcpos3 - a Language Similarity Measure for Delexicalized Parser Transfer
Rudolf Rosa | Zdeněk Žabokrtský
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Targeted Paraphrasing on Deep Syntactic Layer for MT Evaluation
Petra Barančíková | Rudolf Rosa
Proceedings of the Third International Conference on Dependency Linguistics (Depling 2015)

Multi-source Cross-lingual Delexicalized Parser Transfer: Prague or Stanford?
Rudolf Rosa
Proceedings of the Third International Conference on Dependency Linguistics (Depling 2015)

MSTParser Model Interpolation for Multi-Source Delexicalized Transfer
Rudolf Rosa | Zdeněk Žabokrtský
Proceedings of the 14th International Conference on Parsing Technologies

New Language Pairs in TectoMT
Ondřej Dušek | Luís Gomes | Michal Novák | Martin Popel | Rudolf Rosa
Proceedings of the Tenth Workshop on Statistical Machine Translation

Translation Model Interpolation for Domain Adaptation in TectoMT
Rudolf Rosa | Ondřej Dušek | Michal Novák | Martin Popel
Proceedings of the 1st Deep Machine Translation Workshop

2014

CUNI in WMT14: Chimera Still Awaits Bellerophon
Aleš Tamchyna | Martin Popel | Rudolf Rosa | Ondřej Bojar
Proceedings of the Ninth Workshop on Statistical Machine Translation

Machine Translation of Medical Texts in the Khresmoi Project
Ondřej Dušek | Jan Hajič | Jaroslava Hlaváčová | Michal Novák | Pavel Pecina | Rudolf Rosa | Aleš Tamchyna | Zdeňka Urešová | Daniel Zeman
Proceedings of the Ninth Workshop on Statistical Machine Translation

HamleDT 2.0: Thirty Dependency Treebanks Stanfordized
Rudolf Rosa | Jan Mašek | David Mareček | Martin Popel | Daniel Zeman | Zdeněk Žabokrtský
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

We present HamleDT 2.0 (HArmonized Multi-LanguagE Dependency Treebank). HamleDT 2.0 is a collection of 30 existing treebanks harmonized into a common annotation style, the Prague Dependencies, and further transformed into Stanford Dependencies, a treebank annotation style that has become popular in recent years. We use the newest basic Universal Stanford Dependencies, without added language-specific subtypes. We describe both annotation styles, including the adjustments that were necessary to make, and provide details about the conversion process. We also discuss the differences between the two styles, evaluating their advantages and disadvantages, and note the effects of the differences on the conversion. We regard the stanfordization as generally successful, although we admit several shortcomings, especially in the distinction between direct and indirect objects, which have to be addressed in the future. We release part of HamleDT 2.0 freely; we are not allowed to redistribute the whole dataset, but we do provide the conversion pipeline.

Improving Evaluation of English-Czech MT through Paraphrasing
Petra Barančíková | Rudolf Rosa | Aleš Tamchyna
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

In this paper, we present a method for improving the accuracy of machine translation evaluation of Czech sentences. Given a reference sentence, our algorithm transforms it by targeted paraphrasing into a new synthetic reference sentence that is closer in wording to the machine translation output but at the same time preserves the meaning of the original reference sentence. Grammatical correctness of the new reference sentence is ensured by applying Depfix, a system for post-editing English-to-Czech machine translation outputs, to the newly created paraphrases; we adjusted it to fix the errors in paraphrased sentences. Because our paraphrases come from a noisy source, we experiment with adding word alignment. However, the alignment reduces the number of paraphrases found, and the best results were achieved by a simple greedy method using only intensively filtered one-word paraphrases. BLEU scores computed using these new reference sentences show significantly higher correlation with human judgment than scores computed on the original reference sentences.
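
A toy sketch of the greedy one-word substitution step: a reference word is replaced by a single-word paraphrase only if that paraphrase occurs in the MT hypothesis, pulling the synthetic reference closer to the hypothesis wording. The paraphrase table below is a hypothetical stand-in for the filtered paraphrase resource.

```python
def targeted_paraphrase(reference, hypothesis, paraphrases):
    """paraphrases: dict mapping a word to its single-word paraphrases."""
    hyp_words = set(hypothesis.split())
    out = []
    for word in reference.split():
        # substitute only when the word itself is absent from the hypothesis
        # but one of its paraphrases is present there
        options = [p for p in paraphrases.get(word, []) if p in hyp_words]
        out.append(options[0] if options and word not in hyp_words else word)
    return " ".join(out)
```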

2013

Chimera – Three Heads for English-to-Czech Translation
Ondřej Bojar | Rudolf Rosa | Aleš Tamchyna
Proceedings of the Eighth Workshop on Statistical Machine Translation

Deepfix: Statistical Post-editing of Statistical Machine Translation Using Deep Syntactic Analysis
Rudolf Rosa | David Mareček | Aleš Tamchyna
51st Annual Meeting of the Association for Computational Linguistics Proceedings of the Student Research Workshop

2012

DEPFIX: A System for Automatic Correction of Czech MT Outputs
Rudolf Rosa | David Mareček | Ondřej Dušek
Proceedings of the Seventh Workshop on Statistical Machine Translation

Using Parallel Features in Parsing of Machine-Translated Sentences for Correction of Grammatical Errors
Rudolf Rosa | Ondřej Dušek | David Mareček | Martin Popel
Proceedings of the Sixth Workshop on Syntax, Semantics and Structure in Statistical Translation

2011

Two-step translation with grammatical post-processing
David Mareček | Rudolf Rosa | Petra Galuščáková | Ondřej Bojar
Proceedings of the Sixth Workshop on Statistical Machine Translation