Branislav Bédi


2022

Reading Assistance through LARA, the Learning And Reading Assistant
Elham Akhlaghi | Ingibjörg Iða Auðunardóttir | Branislav Bédi | Hakeem Beedar | Harald Berthelsen | Cathy Chua | Catia Cucchiarini | Brynjarr Eyjólfsson | Nedelina Ivanova | Christèle Maizonniaux | Neasa Ní Chiaráin | Manny Rayner | John Sloan | Sigurður Vigfússon | Ghil’ad Zuckermann
Proceedings of the 2nd Workshop on Tools and Resources to Empower People with REAding DIfficulties (READI) within the 13th Language Resources and Evaluation Conference

We present an overview of LARA, the Learning And Reading Assistant, an open source platform for the easy creation and use of multimedia annotated texts designed to support the improvement of reading skills. The paper is divided into three parts. In the first, we give a brief summary of LARA’s processing. In the second, we describe some generic functionality especially relevant to reading assistance: support for phonetically annotated texts, support for image-based texts, and integrated production of text-to-speech (TTS) generated audio. In the third, we outline some of the larger projects so far carried out with LARA, involving the development of content for learning second and foreign (L2) languages such as Icelandic, Farsi, Irish, Old Norse and the Australian Aboriginal language Barngarla, where the issues involved overlap with those that arise when trying to help students improve first-language (L1) reading skills. All software and almost all content are freely available.

Using the LARA Little Prince to compare human and TTS audio quality
Elham Akhlaghi | Ingibjörg Iða Auðunardóttir | Anna Bączkowska | Branislav Bédi | Hakeem Beedar | Harald Berthelsen | Cathy Chua | Catia Cucchiarini | Hanieh Habibi | Ivana Horváthová | Junta Ikeda | Christèle Maizonniaux | Neasa Ní Chiaráin | Chadi Raheb | Manny Rayner | John Sloan | Nikos Tsourakis | Chunlin Yao
Proceedings of the Thirteenth Language Resources and Evaluation Conference

A popular idea in Computer Assisted Language Learning (CALL) is to use multimodal annotated texts, with annotations typically including embedded audio and translations, to support L2 learning through reading. An important question is how to create good quality audio, which can be done either through human recording or by a Text-To-Speech (TTS) engine. We may reasonably expect TTS to be quicker and easier, but human recording to be of higher quality. Here, we report a study using the open source LARA platform and covering ten languages. Samples of audio totalling about five minutes, representing the same four passages taken from LARA versions of Saint-Exupéry’s “Le petit prince”, were provided for each language in both human and TTS form; the passages were chosen to instantiate the 2x2 cross product of the conditions dialogue/not-dialogue and humour/not-humour. 251 subjects used a web form to compare human and TTS versions of each item and to rate the voices as a whole. For the three languages where TTS did best, English, French and Irish, the evidence from this study and the previous one it extended suggests that TTS audio is now pedagogically adequate and roughly comparable with a non-professional human voice in terms of exemplifying correct pronunciation and prosody. It was, however, still judged substantially less natural and less pleasant to listen to. No clear evidence was found to support the hypothesis that dialogue and humour pose special problems for TTS. All data and software will be made freely available.

Using LARA to create image-based and phonetically annotated multimodal texts for endangered languages
Branislav Bédi | Hakeem Beedar | Belinda Chiera | Nedelina Ivanova | Christèle Maizonniaux | Neasa Ní Chiaráin | Manny Rayner | John Sloan | Ghil’ad Zuckermann
Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages

We describe recent extensions to the open source Learning And Reading Assistant (LARA) supporting image-based and phonetically annotated texts. We motivate the utility of these extensions both in general and specifically in relation to endangered and archaic languages, and illustrate with examples from the revived Australian language Barngarla, Icelandic Sign Language, Irish Gaelic, Old Norse manuscripts and Egyptian hieroglyphics.

2021

LARA in the Service of Revivalistics and Documentary Linguistics: Community Engagement and Endangered Languages
Ghil’ad Zuckermann | Sigurður Vigfússon | Manny Rayner | Neasa Ní Chiaráin | Nedelina Ivanova | Hanieh Habibi | Branislav Bédi
Proceedings of the 4th Workshop on the Use of Computational Methods in the Study of Endangered Languages, Volume 1 (Papers)

2020

Constructing Multimodal Language Learner Texts Using LARA: Experiences with Nine Languages
Elham Akhlaghi | Branislav Bédi | Fatih Bektaş | Harald Berthelsen | Matthias Butterweck | Cathy Chua | Catia Cucchiarini | Gülşen Eryiğit | Johanna Gerlach | Hanieh Habibi | Neasa Ní Chiaráin | Manny Rayner | Steinþór Steingrímsson | Helmer Strik
Proceedings of the Twelfth Language Resources and Evaluation Conference

LARA (Learning and Reading Assistant) is an open source platform whose purpose is to support easy conversion of plain texts into multimodal online versions suitable for use by language learners. This involves semi-automatically tagging the text, adding other annotations and recording audio. The platform supports the creation of texts in multiple languages via crowdsourcing techniques; these texts can then be used for teaching a language through reading and listening. We present results of initial experiments by various collaborators in which we measure the time required to produce substantial LARA resources, up to the length of short novels, in Dutch, English, Farsi, French, German, Icelandic, Irish, Swedish and Turkish. The first results are encouraging. Although there are some startup problems, the conversion task seems manageable for the languages tested so far. The resulting enriched texts are posted online and are freely available in both source and compiled form.