Aleksandr Chuklin


2022

Text Generation with Text-Editing Models
Eric Malmi | Yue Dong | Jonathan Mallinson | Aleksandr Chuklin | Jakub Adamek | Daniil Mirylenka | Felix Stahlberg | Sebastian Krause | Shankar Kumar | Aliaksei Severyn
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Tutorial Abstracts

Text-editing models have recently become a prominent alternative to seq2seq models for monolingual text-generation tasks such as grammatical error correction, text simplification, and style transfer. These tasks share a common trait: they exhibit a large amount of textual overlap between the source and target texts. Text-editing models take advantage of this observation and learn to generate the output by predicting edit operations applied to the source sequence. In contrast, seq2seq models generate outputs word by word from scratch, which makes them slow at inference time. Text-editing models provide several benefits over seq2seq models, including faster inference, higher sample efficiency, and better control and interpretability of the outputs. This tutorial provides a comprehensive overview of text-editing models and current state-of-the-art approaches, analyzing their pros and cons. We discuss challenges related to deployment and how these models help to mitigate hallucination and bias, both pressing challenges in the field of text generation.
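The edit-prediction idea described in the abstract can be illustrated with a minimal sketch. This is not code from the tutorial; the tag set and the apply_edit_tags helper below are hypothetical, loosely modeled on KEEP/DELETE tagging schemes. Instead of generating every output token, a text-editing model predicts one edit tag per source token, and the output is obtained by applying those tags to the source sequence.

    from typing import List, Tuple

    def apply_edit_tags(source_tokens: List[str],
                        tags: List[Tuple[str, str]]) -> List[str]:
        """Apply per-token edit tags to a source sequence.

        Each tag is (operation, insertion): operation is "KEEP" or "DELETE",
        and insertion is a phrase to add before the token ("" for none).
        """
        output = []
        for token, (operation, insertion) in zip(source_tokens, tags):
            if insertion:                       # insert a phrase before this position
                output.extend(insertion.split())
            if operation == "KEEP":             # copy the source token unchanged
                output.append(token)
            # operation == "DELETE": drop the source token
        return output

    # Grammatical error correction: only one position needs an edit, the rest are kept.
    source = ["He", "go", "to", "school", "yesterday", "."]
    tags = [("KEEP", ""), ("DELETE", "went"), ("KEEP", ""),
            ("KEEP", ""), ("KEEP", ""), ("KEEP", "")]
    print(" ".join(apply_edit_tags(source, tags)))  # -> He went to school yesterday .

Because most predicted tags are simply KEEP, the model only has to produce a small amount of new text, which is what underlies the faster inference and higher sample efficiency mentioned above.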

2021

Building and Evaluating Open-Domain Dialogue Corpora with Clarifying Questions
Mohammad Aliannejadi | Julia Kiseleva | Aleksandr Chuklin | Jeff Dalton | Mikhail Burtsev
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Enabling open-domain dialogue systems to ask clarifying questions when appropriate is an important direction for improving the quality of system responses. Namely, when a user request is not specific enough for a conversational system to provide an answer right away, it is desirable to ask a clarifying question to increase the chances of retrieving a satisfying answer. To address the problem of asking clarifying questions in open-domain dialogues: (1) we collect and release a new dataset focused on open-domain single- and multi-turn conversations, (2) we benchmark several state-of-the-art neural baselines, and (3) we propose a pipeline consisting of offline and online steps for evaluating the quality of clarifying questions in various dialogues. These contributions are suitable as a foundation for further research.

2020

Proceedings of the 5th International Workshop on Search-Oriented Conversational AI (SCAI)
Jeff Dalton | Aleksandr Chuklin | Julia Kiseleva | Mikhail Burtsev
Proceedings of the 5th International Workshop on Search-Oriented Conversational AI (SCAI)

2018

Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI
Aleksandr Chuklin | Jeff Dalton | Julia Kiseleva | Alexey Borisov | Mikhail Burtsev
Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI