Benjamin Bolte
2021
On the Influence of Masking Policies in Intermediate Pre-training
Qinyuan Ye | Belinda Z. Li | Sinong Wang | Benjamin Bolte | Hao Ma | Wen-tau Yih | Xiang Ren | Madian Khabsa
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Current NLP models are predominantly trained through a two-stage “pre-train then fine-tune” pipeline. Prior work has shown that inserting an intermediate pre-training stage, using heuristic masking policies for masked language modeling (MLM), can significantly improve final performance. However, it is still unclear (1) in what cases such intermediate pre-training is helpful, (2) whether hand-crafted heuristic objectives are optimal for a given task, and (3) whether a masking policy designed for one task is generalizable beyond that task. In this paper, we perform a large-scale empirical study to investigate the effect of various masking policies in intermediate pre-training with nine selected tasks across three categories. Crucially, we introduce methods to automate the discovery of optimal masking policies via direct supervision or meta-learning. We conclude that the success of intermediate pre-training is dependent on appropriate pre-train corpus, selection of output format (i.e., masked spans or full sentence), and clear understanding of the role that MLM plays for the downstream task. In addition, we find our learned masking policies outperform the heuristic of masking named entities on TriviaQA, and policies learned from one task can positively transfer to other tasks in certain cases, inviting future research in this direction.
On Generative Spoken Language Modeling from Raw Audio
Kushal Lakhotia | Eugene Kharitonov | Wei-Ning Hsu | Yossi Adi | Adam Polyak | Benjamin Bolte | Tu-Anh Nguyen | Jade Copet | Alexei Baevski | Abdelrahman Mohamed | Emmanuel Dupoux
Transactions of the Association for Computational Linguistics, Volume 9
We introduce Generative Spoken Language Modeling, the task of learning the acoustic and linguistic characteristics of a language from raw audio (no text, no labels), and a set of metrics to automatically evaluate the learned representations at acoustic and linguistic levels for both encoding and generation. We set up baseline systems consisting of a discrete speech encoder (returning pseudo-text units), a generative language model (trained on pseudo-text), and a speech decoder (generating a waveform from pseudo-text) all trained without supervision and validate the proposed metrics with human evaluation. Across 3 speech encoders (CPC, wav2vec 2.0, HuBERT), we find that the number of discrete units (50, 100, or 200) matters in a task-dependent and encoder-dependent way, and that some combinations approach text-based systems.