Workshop on Simple and Efficient Natural Language Processing (2023)
Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)
Nafise Sadat Moosavi | Iryna Gurevych | Yufang Hou | Gyuwan Kim | Young Jin Kim | Tal Schuster | Ameeta Agrawal
KwikBucks: Correlation Clustering with Cheap-Weak and Expensive-Strong Signals
Sandeep Silwal | Sara Ahmadian | Andrew Nystrom | Andrew McCallum | Deepak Ramachandran | Mehran Kazemi
Semantic-Oriented Unlabeled Priming for Large-Scale Language Models
Yanchen Liu | Timo Schick | Hinrich Schütze
oBERTa: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes
Daniel Campos | Alexandre Marques | Mark Kurtz | Cheng Xiang Zhai
Quick Dense Retrievers Consume KALE: Post Training Kullback-Leibler Alignment of Embeddings for Asymmetrical dual encoders
Daniel Campos | Alessandro Magnani | Chengxiang Zhai
Lessons on Parameter Sharing across Layers in Transformers
Sho Takase | Shun Kiyono
To Asymmetry and Beyond: Structured Pruning of Sequence to Sequence Models for Improved Inference Efficiency
Daniel Campos | Chengxiang Zhai
Small is the New Big: Pre-finetuned compact models are better for Asynchronous Active Learning
Dantong Liu | Kaushik Pavani | Sunny Dasgupta
ADEPT: Adapter-based Efficient Prompt Tuning Approach for Language Models
Aditya Shah | Surendrabikram Thapa | Aneesh Jain | Lifu Huang
NLU on Data Diets: Dynamic Data Subset Selection for NLP Classification Tasks
Jean-Michel Attendu | Jean-Philippe Corbeil
On the Interactions of Structural Constraints and Data Resources for Structured Prediction
Zhisong Zhang | Emma Strubell | Eduard Hovy
Can we Pretrain a SotA Legal Language Model on a Budget From Scratch?
Joel Niklaus | Daniele Giofre
Is a Video worth n × n Images? A Highly Efficient Approach to Transformer-based Video Question Answering
Chenyang Lyu | Tianbo Ji | Yvette Graham | Jennifer Foster
How to Unleash the Power of Large Language Models for Few-shot Relation Extraction?
Xin Xu | Yuqi Zhu | Xiaohan Wang | Ningyu Zhang
Prompting language models improves performance in imbalanced setting
Jay Mohta
KGQA Without Retraining
Nick McKenna | Priyanka Sen
MANER: Mask Augmented Named Entity Recognition for Extreme Low-Resource Languages
Shashank Sonkar | Zichao Wang | Richard Baraniuk
Efficient and Interpretable Compressive Text Summarisation with Unsupervised Dual-Agent Reinforcement Learning
Peggy Tang | Junbin Gao | Lei Zhang | Zhiyong Wang
Exploring the Effect of Frequency Resolution in FNet
Gregory Szumel | Ghazal Khalighinejad | Rickard Stureborg | Sam Wiseman
Towards Adaptable and Interactive Image Captioning with Data Augmentation and Episodic Memory
Aliki Anagnostopoulou | Mareike Hartmann | Daniel Sonntag
Corpus Complexity Matters in Pretraining Language Models
Ameeta Agrawal | Suresh Singh
PersonaPKT: Building Personalized Dialogue Agents via Parameter-efficient Knowledge Transfer
Xu Han | Bin Guo | Yoon Jung | Benjamin Yao | Yu Zhang | Xiaohu Liu | Chenlei Guo
Small Character Models Match Large Word Models for Autocomplete Under Memory Constraints
Ganesh Jawahar | Subhabrata Mukherjee | Debadeepta Dey | Muhammad Abdul-Mageed | Laks Lakshmanan, V.S. | Caio Mendes | Gustavo De Rosa | Shital Shah
Query Encoder Distillation via Embedding Alignment is a Strong Baseline Method to Boost Dense Retriever Online Efficiency
Yuxuan Wang | Lyu Hong
Minimalist Entity Disambiguation for Mid-Resource Languages
Benno Kruit