Benjamin Odry
2022
Explaining the Effectiveness of Multi-Task Learning for Efficient Knowledge Extraction from Spine MRI Reports
Arijit Sehanobish | McCullen Sandora | Nabila Abraham | Jayashri Pawar | Danielle Torres | Anasuya Das | Murray Becker | Richard Herzog | Benjamin Odry | Ron Vianu
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track
Pretrained Transformer-based models fine-tuned on domain-specific corpora have changed the landscape of NLP. However, training or fine-tuning these models for individual tasks can be time-consuming and resource-intensive. Thus, a lot of current research is focused on using transformers for multi-task learning (Raffel et al., 2020) and how to group the tasks to help a multi-task model learn effective representations that can be shared across tasks (Standley et al., 2020; Fifty et al., 2021). In this work, we show that a single multi-task model can match the performance of task-specific models when the task-specific models show similar representations across all of their hidden layers and their gradients are aligned, i.e., their gradients follow the same direction. We hypothesize that the above observations explain the effectiveness of multi-task learning. We validate our observations on our internal radiologist-annotated datasets on the cervical and lumbar spine. Our method is simple and intuitive, and can be used in a wide range of NLP problems.
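The gradient-alignment criterion mentioned in the abstract can be illustrated with a short sketch: compute each task's gradient with respect to a shared encoder and measure their cosine similarity. This is only a minimal illustration, not the paper's code; the encoder, task heads, and dimensions below are hypothetical stand-ins.

```python
# Minimal sketch (not the authors' implementation): measuring whether two tasks'
# gradients on a shared encoder are aligned, i.e. point in a similar direction.
import torch
import torch.nn as nn

encoder = nn.Linear(768, 768)   # stand-in for a shared Transformer encoder
head_a = nn.Linear(768, 3)      # hypothetical task A head
head_b = nn.Linear(768, 5)      # hypothetical task B head

x = torch.randn(16, 768)        # dummy batch of pooled report embeddings
y_a = torch.randint(0, 3, (16,))
y_b = torch.randint(0, 5, (16,))

def shared_grad(loss):
    """Flatten the gradient of `loss` w.r.t. the shared encoder parameters."""
    grads = torch.autograd.grad(loss, tuple(encoder.parameters()), retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

h = encoder(x)
loss_a = nn.functional.cross_entropy(head_a(h), y_a)
loss_b = nn.functional.cross_entropy(head_b(h), y_b)

g_a, g_b = shared_grad(loss_a), shared_grad(loss_b)
alignment = nn.functional.cosine_similarity(g_a, g_b, dim=0)
print(f"gradient alignment (cosine): {alignment.item():.3f}")
```

A cosine similarity near 1 indicates the two tasks push the shared encoder in the same direction, which is the alignment condition the abstract associates with multi-task models matching task-specific ones.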
Meta-learning Pathologies from Radiology Reports using Variance Aware Prototypical Networks
Arijit Sehanobish | Kawshik Kannan | Nabila Abraham | Anasuya Das | Benjamin Odry
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
Large pretrained Transformer-based language models like BERT and GPT have changed the landscape of Natural Language Processing (NLP). However, fine-tuning such models still requires a large number of training examples for each target task, so annotating multiple datasets and training these models on various downstream tasks becomes time-consuming and expensive. In this work, we propose a simple extension of Prototypical Networks for few-shot text classification. Our main idea is to replace the class prototypes with Gaussians and introduce a regularization term that encourages the examples to be clustered near the appropriate class centroids. Experimental results show that our method outperforms various strong baselines on 13 public and 4 internal datasets. Furthermore, we use the class distributions as a tool for detecting potential out-of-distribution (OOD) data points during deployment.
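A minimal sketch of the variance-aware prototype idea described above, under the assumption of diagonal Gaussian class distributions fit on the support set; the function names and toy episode are illustrative, not the paper's implementation.

```python
# Minimal sketch (assumptions, not the paper's code): each class is modeled as a
# diagonal Gaussian fit on the support set, and queries are scored by a negative
# variance-scaled squared distance to each class centroid.
import torch

def gaussian_prototypes(support_emb, support_labels, num_classes, eps=1e-4):
    """Return per-class means and diagonal variances from support embeddings."""
    means, variances = [], []
    for c in range(num_classes):
        emb_c = support_emb[support_labels == c]
        means.append(emb_c.mean(dim=0))
        variances.append(emb_c.var(dim=0, unbiased=False) + eps)
    return torch.stack(means), torch.stack(variances)

def log_probs(query_emb, means, variances):
    """Class scores: negative variance-scaled squared distance to each centroid."""
    diff = query_emb.unsqueeze(1) - means.unsqueeze(0)        # (Q, C, D)
    dist = (diff ** 2 / variances.unsqueeze(0)).sum(dim=-1)   # (Q, C)
    return torch.log_softmax(-dist, dim=-1)

# Toy episode: 2 classes, 5 support and 3 query examples each, 32-dim embeddings.
support = torch.randn(10, 32)
labels = torch.tensor([0] * 5 + [1] * 5)
queries = torch.randn(6, 32)

means, variances = gaussian_prototypes(support, labels, num_classes=2)
scores = log_probs(queries, means, variances)
# A clustering regularizer of the kind the abstract describes could add each
# support example's distance to its own class centroid to the episode loss.
print(scores.shape)  # torch.Size([6, 2])
```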