Abstract
The language acquisition literature shows that children do not build their lexicon by segmenting the spoken input into phonemes and then building words up from them, but rather adopt a top-down approach: they first segment word-like units and then break them down into smaller units. This suggests that the ideal way of learning a language is to start from full semantic units. In this paper, we investigate whether this is also the case for a neural model of Visually Grounded Speech trained on a speech-image retrieval task. We evaluate how well such a network learns a reliable speech-to-image mapping when provided with phone, syllable, or word boundary information. We present a simple way to introduce such information into an RNN-based model and investigate which type of boundary is the most efficient. We also explore at which level of the network’s architecture such information should be introduced so as to maximise its performance. Finally, we show that using multiple boundary types at once in a hierarchical structure, whereby low-level segments are used to recompose high-level segments, is beneficial and yields better results than using low-level or high-level segments in isolation.
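The paper's exact architecture is not reproduced on this page; as a rough illustration of the general idea, the PyTorch sketch below shows one common way to feed externally supplied segment boundaries (phone, syllable, or word) into a recurrent encoder: frame-level GRU states are sampled at segment-final frames, and a second GRU recomposes higher-level structure from those segment vectors, loosely mirroring the hierarchical variant mentioned in the abstract. All class names, dimensions, and the boundary format here are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BoundaryPoolingEncoder(nn.Module):
    """Hypothetical sketch: a two-level GRU encoder in which externally
    supplied segment boundaries (e.g. word boundaries from a forced
    alignment) determine where the frame-level states are summarised."""

    def __init__(self, feat_dim=39, hidden_dim=256):
        super().__init__()
        self.frame_rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.segment_rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, frames, boundaries):
        # frames:     (1, T, feat_dim) acoustic features for one utterance
        # boundaries: list of (start, end) frame indices, one per segment
        states, _ = self.frame_rnn(frames)            # (1, T, hidden_dim)
        # Keep only the state at each segment's final frame: by then the
        # RNN has "seen" the whole segment, so it acts as a segment summary.
        seg_vecs = torch.stack([states[0, end - 1] for _, end in boundaries])
        # A second RNN recomposes higher-level structure from the
        # segment-level vectors (the hierarchical use of multiple
        # boundary types described in the abstract).
        out, _ = self.segment_rnn(seg_vecs.unsqueeze(0))
        return out[0, -1]                             # utterance embedding
```

In a retrieval setup of this kind, the resulting utterance embedding would be matched against an image embedding with a ranking loss; in the paper's setting the boundaries come from prior segmentation rather than being learned.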
- Anthology ID:
- 2020.conll-1.22
- Volume:
- Proceedings of the 24th Conference on Computational Natural Language Learning
- Month:
- November
- Year:
- 2020
- Address:
- Online
- Venue:
- CoNLL
- SIG:
- SIGNLL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 291–301
- URL:
- https://aclanthology.org/2020.conll-1.22
- DOI:
- 10.18653/v1/2020.conll-1.22
- Cite (ACL):
- William Havard, Laurent Besacier, and Jean-Pierre Chevrot. 2020. Catplayinginthesnow: Impact of Prior Segmentation on a Model of Visually Grounded Speech. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 291–301, Online. Association for Computational Linguistics.
- Cite (Informal):
- Catplayinginthesnow: Impact of Prior Segmentation on a Model of Visually Grounded Speech (Havard et al., CoNLL 2020)
- PDF:
- https://aclanthology.org/2020.conll-1.22.pdf
- Data
- COCO