Vladislav Maraev


2023

Because is why: Children’s acquisition of topoi through why questions
Christine Howes | Ellen Breitholtz | Vladislav Maraev
Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD)

In this paper we look at how children learn the underlying principles of commonsense reasoning, sometimes referred to as topoi, which are prevalent in everyday dialogue. By examining the utterances of two children in the CHILDES corpus for whom there is extensive longitudinal data, we show how children can elicit topoi from their parents by asking why-questions. This strategy for the rapid acquisition of topoi peaks at around age three, suggesting that it is a critical step in becoming a fully competent language user.
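The kind of longitudinal corpus analysis described here can be approximated with a short script. The sketch below is not the authors' pipeline; it assumes CHILDES transcripts in CHAT format with a *CHI: tier for the target child, and simply counts the child's utterances containing "why" per transcript file, the sort of per-session signal in which a peak around age three would show up.

    # Minimal sketch (not the authors' code): count a child's why-questions
    # in CHAT-style transcripts. Assumes each line of a .cha file starts
    # with a speaker tier such as "*CHI:" and that file names identify the
    # recording session.
    import re
    from pathlib import Path
    from collections import Counter

    WHY_PATTERN = re.compile(r"\bwhy\b", re.IGNORECASE)

    def count_why_questions(transcript_dir: str) -> Counter:
        """Map each transcript file stem to the number of child
        utterances containing 'why'."""
        counts = Counter()
        for path in Path(transcript_dir).glob("*.cha"):
            for line in path.read_text(encoding="utf-8").splitlines():
                # Keep only the target child's tier and look for 'why'.
                if line.startswith("*CHI:") and WHY_PATTERN.search(line):
                    counts[path.stem] += 1
        return counts

    if __name__ == "__main__":
        for session, n in sorted(count_why_questions("transcripts").items()):
            print(session, n)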

2022

In Search of Meaning and Its Representations for Computational Linguistics
Simon Dobnik | Robin Cooper | Adam Ek | Bill Noble | Staffan Larsson | Nikolai Ilinykh | Vladislav Maraev | Vidya Somashekarappa
Proceedings of the 2022 CLASP Conference on (Dis)embodiment

In this paper we examine different meaning representations that are commonly used in natural language applications today and discuss their limitations, both in terms of which aspects of natural language meaning they model and in terms of the aspects of the applications for which they are used.

2021

Can the Transformer Learn Nested Recursion with Symbol Masking?
Jean-Philippe Bernardy | Adam Ek | Vladislav Maraev
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Why Should I Turn Left? Towards Active Explainability for Spoken Dialogue Systems.
Vladislav Maraev | Ellen Breitholtz | Christine Howes | Jean-Philippe Bernardy
Proceedings of the Reasoning and Interaction Conference (ReInAct 2021)

In this paper we argue that dialogue systems can make use of enthymematic reasoning to actively explain their decisions. We motivate why this is an appropriate strategy and integrate it within our own proof-theoretic dialogue manager framework based on linear logic. In particular, this enables a dialogue system to provide reasonable answers to why-questions that query information previously given by the system.
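As a rough illustration of the idea of active explainability (not the paper's proof-theoretic, linear-logic dialogue manager), a system can record which enthymeme licensed each of its moves and answer a later why-question by surfacing the premise and the underlying topos. All names in the sketch below are invented for the example.

    # Toy illustration only: remember the enthymeme behind each system move
    # so that a later "why?" can be answered with its premise and topos.
    from dataclasses import dataclass

    @dataclass
    class Enthymeme:
        premise: str      # what was observed in the dialogue/context
        conclusion: str   # the system's move or instruction
        topos: str        # the general principle backing the inference

    class ExplainableDM:
        def __init__(self):
            self._justifications = {}   # conclusion -> Enthymeme

        def decide(self, enthymeme: Enthymeme) -> str:
            # Make a move and remember why it was made.
            self._justifications[enthymeme.conclusion] = enthymeme
            return enthymeme.conclusion

        def explain(self, move: str) -> str:
            e = self._justifications.get(move)
            if e is None:
                return "I have no recorded reason for that."
            return f"Because {e.premise}, and generally: {e.topos}."

    dm = ExplainableDM()
    dm.decide(Enthymeme(premise="the destination is to the west",
                        conclusion="turn left",
                        topos="if the goal lies in a direction, head that way"))
    print(dm.explain("turn left"))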

Large-scale text pre-training helps with dialogue act recognition, but not without fine-tuning
Bill Noble | Vladislav Maraev
Proceedings of the 14th International Conference on Computational Semantics (IWCS)

We use dialogue act recognition (DAR) to investigate how well BERT represents utterances in dialogue, and how fine-tuning and large-scale pre-training contribute to its performance. We find that while both standard BERT pre-training and pre-training on dialogue-like data are useful, task-specific fine-tuning is essential for good performance.
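The setup being compared can be sketched with standard tooling: a pre-trained BERT encoder with a classification head, fine-tuned on dialogue act labels. The tag set and utterances below are placeholders rather than the paper's data, and the snippet only shows a single fine-tuning step with Hugging Face Transformers and PyTorch.

    # Minimal sketch (placeholder data, not the paper's corpus): fine-tune
    # a pre-trained BERT classifier on dialogue act tags.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    DA_TAGS = ["statement", "question", "backchannel", "agreement"]
    utterances = ["okay", "do you like it?", "uh-huh", "i think so too"]
    labels = torch.tensor([0, 1, 2, 3])

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=len(DA_TAGS))

    batch = tokenizer(utterances, padding=True, truncation=True,
                      return_tensors="pt")
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    model.train()
    outputs = model(**batch, labels=labels)   # cross-entropy loss over DA tags
    outputs.loss.backward()
    optimizer.step()
    print(float(outputs.loss))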

2020

Dialogue management with linear logic: the role of metavariables in questions and clarifications
Vladislav Maraev | Jean-Philippe Bernardy | Jonathan Ginzburg
Traitement Automatique des Langues, Volume 61, Numéro 3 : Dialogue et systèmes de dialogue [Dialogue and dialogue systems]

2019

Proceedings of the 13th International Conference on Computational Semantics - Student Papers
Simon Dobnik | Stergios Chatzikyriakidis | Vera Demberg | Kathrein Abu Kwaik | Vladislav Maraev
Proceedings of the 13th International Conference on Computational Semantics - Student Papers

2017

Ways of Asking and Replying in Duplicate Question Detection
João António Rodrigues | Chakaveh Saedi | Vladislav Maraev | João Silva | António Branco
Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017)

This paper presents the results of systematic experimentation on how different types of questions affect duplicate question detection, across both a number of established approaches and a novel, superior one for this language processing task. The study offers new insight into how robust the different detection methods are under different conditions of application, including conditions that approximate real usage scenarios.
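A simple unsupervised baseline of the family such comparisons typically include (not the paper's novel approach) is TF-IDF vectors with cosine similarity, thresholded to decide whether two questions are duplicates. The question pairs and the 0.5 threshold below are arbitrary illustrations.

    # Baseline sketch: TF-IDF + cosine similarity for duplicate question
    # detection; the threshold is arbitrary and would normally be tuned.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    pairs = [
        ("How do I reset my password?", "What is the way to change my password?"),
        ("How do I reset my password?", "Where can I download the manual?"),
    ]

    vectorizer = TfidfVectorizer().fit([q for pair in pairs for q in pair])

    for q1, q2 in pairs:
        vectors = vectorizer.transform([q1, q2])
        score = cosine_similarity(vectors[0], vectors[1])[0, 0]
        print(f"{score:.2f}  duplicate={score > 0.5}  |  {q1} / {q2}")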