Aditya Kusupati
2023
SHARCS: Efficient Transformers Through Routing with Dynamic Width Sub-networks
Mohammadreza Salehi | Sachin Mehta | Aditya Kusupati | Ali Farhadi | Hannaneh Hajishirzi
Findings of the Association for Computational Linguistics: EMNLP 2023
We introduce SHARCS for adaptive inference that takes into account the hardness of input samples. SHARCS can train a router on any transformer network, enabling the model to direct different samples to sub-networks with varying widths. Our experiments demonstrate that: (1) SHARCS outperforms or complements existing per-sample adaptive inference methods across various classification tasks in terms of accuracy vs. FLOPs; (2) SHARCS generalizes across different architectures and can even be applied to compressed and efficient transformer encoders to further improve their efficiency; (3) SHARCS can provide a 2× inference speed-up at an insignificant drop in accuracy.
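The abstract describes the core mechanism only at a high level: a router inspects each sample and dispatches it to a sub-network of a particular width. Below is a minimal, hypothetical PyTorch sketch of that idea for a single feed-forward block. All names (`SlimmableLinear`, `RoutedBlock`, the `widths` buckets) are illustrative assumptions, not the authors' released SHARCS implementation, and the hard argmax routing shown here is an inference-time simplification.

```python
# Hypothetical sketch of per-sample width routing, loosely inspired by the
# abstract above. Class and parameter names are illustrative assumptions,
# not the authors' actual SHARCS code.
import torch
import torch.nn as nn


class SlimmableLinear(nn.Linear):
    """Linear layer that can run at a reduced output width."""

    def forward(self, x: torch.Tensor, width: int) -> torch.Tensor:
        # Use only the first `width` output features (and their bias slice).
        return nn.functional.linear(x, self.weight[:width], self.bias[:width])


class RoutedBlock(nn.Module):
    """Feed-forward block whose hidden width is chosen per sample by a router."""

    def __init__(self, d_model: int = 512, widths=(128, 256, 512)):
        super().__init__()
        self.widths = widths
        self.up = SlimmableLinear(d_model, max(widths))
        self.down = nn.Linear(max(widths), d_model)
        # The router scores how "hard" an input is and picks a width bucket.
        self.router = nn.Linear(d_model, len(widths))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_model); route each sample to one width bucket.
        bucket = self.router(x).argmax(dim=-1)  # (batch,)
        out = torch.zeros_like(x)
        for i, w in enumerate(self.widths):
            mask = bucket == i
            if mask.any():
                h = torch.relu(self.up(x[mask], w))
                # Project back using the matching slice of the down-projection.
                out[mask] = nn.functional.linear(
                    h, self.down.weight[:, :w], self.down.bias
                )
        return out


if __name__ == "__main__":
    block = RoutedBlock()
    y = block(torch.randn(4, 512))
    print(y.shape)  # torch.Size([4, 512])
```

The intuition this sketch captures: "easy" samples take the narrow sub-network and cost fewer FLOPs, while "hard" samples use the full width, which is where the reported accuracy-vs-FLOPs gains would come from.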