Preeti Rao
2025
Predicting Prosodic Boundaries for Children’s Texts
Mansi Dhamne | Sneha Raman | Preeti Rao
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Reading fluency in any language requires not only accurate word decoding but also natural prosodic phrasing, i.e., the grouping of words into rhythmically and syntactically coherent units. This holds for both reading aloud and silent reading. While adults pause meaningfully at clause or punctuation boundaries, children aged 8–13 often insert inappropriate pauses due to limited breath control and underdeveloped prosodic awareness. We present a text-based model to predict cognitively appropriate pause locations in children’s reading material. Using a curated dataset of 54 leveled English stories annotated for potential pauses, or prosodic boundaries, by 21 fluent speakers, we find that nearly 30% of pauses occur at non-punctuation locations in the text, highlighting the limitations of using punctuation-based cues alone. Our model combines lexical, syntactic, and contextual features with a novel breath duration feature that captures syllable load since the last major boundary. This cognitively motivated approach can model both allowed and “forbidden” pauses. The proposed framework supports applications such as child-directed TTS and oral reading fluency assessment, where the proper grouping of words is considered critical to reading comprehension.
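To make the breath duration feature concrete, below is a minimal sketch (not the paper’s implementation) of one plausible way to compute a “syllable load since the last major boundary” value for each inter-word gap; the vowel-group syllable counter and the choice of boundary positions are illustrative assumptions.

```python
import re

def count_syllables(word: str) -> int:
    """Crude syllable estimate: number of contiguous vowel groups (minimum 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def syllable_load_since_boundary(words, major_boundaries):
    """For the gap after each words[i], return the syllables accumulated since
    the most recent major boundary; the running load resets at the indices
    listed in `major_boundaries` (e.g., sentence-final punctuation)."""
    loads, running = [], 0
    for i, word in enumerate(words):
        running += count_syllables(word)
        loads.append(running)
        if i in major_boundaries:
            running = 0
    return loads

# Toy usage: assume a major boundary after "tired," (index 4).
words = ["The", "little", "fox", "was", "tired,", "so", "it", "stopped", "to", "rest"]
print(syllable_load_since_boundary(words, {4}))
# -> [1, 3, 4, 5, 7, 1, 2, 4, 5, 6]  (the load resets after index 4)
```

A feature like this lets a classifier learn that a pause becomes more appropriate as the accumulated syllable load since the last major boundary grows, independently of punctuation.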
2024
STORiCo: Storytelling TTS for Hindi with Character Voice Modulation
Pavan Tankala | Preethi Jyothi | Preeti Rao | Pushpak Bhattacharyya
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers)
We present a new Hindi text-to-speech (TTS) dataset and demonstrate its utility for the expressive synthesis of children’s audio stories. The dataset comprises narration by a single female speaker who modifies her voice to produce different story characters. Annotations for dialogue identification, character labelling, and character attribution are provided, all of which are expected to facilitate the learning of character voices and speaking styles. Experiments are conducted using different versions of the annotated dataset that enable training a multi-speaker TTS model on the single-speaker data. Subjective tests show that the multi-speaker model improves expressiveness and character voice consistency compared to the baseline single-speaker TTS. With the multi-speaker model, objective evaluations show comparable word error rates, better speaker voice consistency, and higher correlations with ground-truth emotion attributes. We release a new 16.8-hour storytelling speech dataset in Hindi and propose effective solutions for expressive TTS with narrator voice modulation and character voice consistency.
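As a rough illustration of how character-attributed annotations can feed a multi-speaker TTS recipe, the sketch below maps each annotated utterance to a pseudo-speaker ID derived from its character label; the field names and character inventory are assumptions for illustration, not the released dataset’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    audio_path: str   # path to the utterance audio clip
    text: str         # Hindi transcript (Devanagari)
    character: str    # attributed story character, or "narrator"

def pseudo_speaker_id(utt: Utterance) -> str:
    """Treat each attributed character as a distinct 'speaker' so a standard
    multi-speaker TTS pipeline can be trained on single-narrator recordings."""
    return utt.character

corpus = [
    Utterance("clips/0001.wav", "एक समय की बात है...", "narrator"),
    Utterance("clips/0002.wav", "मुझे बहुत भूख लगी है!", "lion"),
]
speakers = sorted({pseudo_speaker_id(u) for u in corpus})
print(speakers)  # ['lion', 'narrator']
```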
2012
Automatic pronunciation assessment for language learners with acoustic-phonetic features
Vaishali Patil | Preeti Rao
Proceedings of the Workshop on Speech and Language Processing Tools in Education
Co-authors
- Pushpak Bhattacharyya 1
- Mansi Dhamne 1
- Preethi Jyothi 1
- Vaishali Patil 1
- Sneha Raman 1