Discrete representations in neural models of spoken language
Bertrand Higy, Lieke Gelderloos, Afra Alishahi, Grzegorz Chrupała
Abstract
The distributed and continuous representations used by neural networks are at odds with the typically symbolic representations employed in linguistics. Vector quantization has been proposed as a way to induce discrete neural representations that are closer in nature to their linguistic counterparts. However, it is not clear which metrics are best suited to analyze such discrete representations. We compare the merits of four commonly used metrics in the context of weakly supervised models of spoken language, examining the results they yield for two different models while systematically varying the placement and size of the discretization layer. We find that different evaluation regimes can give inconsistent results. While in most cases we can attribute these inconsistencies to the properties of the individual metrics, one point of concern remains: the use of minimal pairs of phoneme triples as stimuli disadvantages larger discrete unit inventories, unlike metrics applied to complete utterances. Furthermore, while vector quantization in general induces representations that correlate with the units posited in linguistics, the strength of this correlation is only moderate.
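The abstract does not spell out the quantization mechanism, but vector quantization in this line of work typically follows the VQ-VAE recipe of van den Oord et al. (2017): continuous activations are snapped to the nearest entry of a learned codebook, with gradients passed through via the straight-through estimator. The PyTorch sketch below is illustrative only; the codebook size, commitment weight, and layer placement are assumptions, not details taken from the paper (see the linked bhigy/discrete-repr repository for the authors' actual code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Illustrative VQ-VAE-style quantization layer (not the paper's code).

    Maps each continuous frame vector to its nearest codebook entry,
    yielding discrete unit IDs alongside the quantized vectors.
    """

    def __init__(self, num_codes: int = 32, dim: int = 64, beta: float = 0.25):
        super().__init__()
        # num_codes controls the size of the discrete unit inventory,
        # one of the factors the paper varies systematically.
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment-loss weight (assumed value)

    def forward(self, z: torch.Tensor):
        # z: (batch, time, dim) continuous activations from an encoder layer.
        # Squared Euclidean distance from every frame to every codebook entry.
        dist = (z.unsqueeze(-2) - self.codebook.weight).pow(2).sum(-1)
        codes = dist.argmin(-1)            # (batch, time) discrete unit IDs
        q = self.codebook(codes)           # (batch, time, dim) quantized vectors
        # Codebook loss pulls codes toward encoder outputs; commitment loss
        # keeps encoder outputs close to their assigned codes.
        loss = F.mse_loss(q, z.detach()) + self.beta * F.mse_loss(z, q.detach())
        # Straight-through estimator: forward pass uses q, gradients flow to z.
        q = z + (q - z).detach()
        return q, codes, loss
```

A layer like this can be dropped between two layers of a speech encoder; the returned `codes` are the discrete representations that metrics of the kind studied in the paper would evaluate.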
- Anthology ID:
- 2021.blackboxnlp-1.11
- Volume:
- Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
- Month:
- November
- Year:
- 2021
- Address:
- Punta Cana, Dominican Republic
- Editors:
- Jasmijn Bastings, Yonatan Belinkov, Emmanuel Dupoux, Mario Giulianelli, Dieuwke Hupkes, Yuval Pinter, Hassan Sajjad
- Venue:
- BlackboxNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 163–176
- URL:
- https://preview.aclanthology.org/build-pipeline-with-new-library/2021.blackboxnlp-1.11/
- DOI:
- 10.18653/v1/2021.blackboxnlp-1.11
- Cite (ACL):
- Bertrand Higy, Lieke Gelderloos, Afra Alishahi, and Grzegorz Chrupała. 2021. Discrete representations in neural models of spoken language. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 163–176, Punta Cana, Dominican Republic. Association for Computational Linguistics.
- Cite (Informal):
- Discrete representations in neural models of spoken language (Higy et al., BlackboxNLP 2021)
- PDF:
- https://preview.aclanthology.org/build-pipeline-with-new-library/2021.blackboxnlp-1.11.pdf
- Code:
- bhigy/discrete-repr
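The abstract's concern about "minimal pairs of phoneme triples" refers to ABX-style discrimination tasks common in this literature (e.g. in the ZeroSpeech benchmarks); since the abstract does not name the exact metric, the following is a hedged sketch of such an evaluation over discrete unit sequences, not the paper's implementation. A triple counts as an error when X, drawn from the same category as A, is nevertheless closer (by Levenshtein distance over unit IDs) to B.

```python
from itertools import product

def edit_distance(a, b):
    """Levenshtein distance between two sequences of discrete unit IDs."""
    d = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(b) + 1):
            prev, d[j] = d[j], min(d[j] + 1,                       # deletion
                                   d[j - 1] + 1,                   # insertion
                                   prev + (a[i - 1] != b[j - 1]))  # substitution
    return d[-1]

def abx_error(a_items, b_items, x_items):
    """ABX-style error rate over discrete unit sequences.

    a_items and x_items share a category (e.g. the same phoneme triple),
    b_items belong to a contrasting category; ties count as half an error.
    """
    errors = 0.0
    triples = list(product(a_items, b_items, x_items))
    for a, b, x in triples:
        da, db = edit_distance(a, x), edit_distance(b, x)
        errors += (da > db) + 0.5 * (da == db)
    return errors / len(triples)
```

One property of such string-based scoring is visible directly in the code: with a larger unit inventory, nominally equivalent realizations of the same triple are more likely to map to different unit IDs, inflating `edit_distance` and thus the error rate, which is consistent with the abstract's observation that minimal-pair stimuli disadvantage larger inventories.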