Pranav Rai
2022
Federated Continual Learning for Text Classification via Selective Inter-client Transfer
Yatin Chaudhary | Pranav Rai | Matthias Schubert | Hinrich Schütze | Pankaj Gupta
Findings of the Association for Computational Linguistics: EMNLP 2022
In this work, we combine two paradigms, Federated Learning (FL) and Continual Learning (CL), for the text classification task in the cloud-edge continuum. The objective of Federated Continual Learning (FCL) is to improve deep learning models over their lifetime at each client through relevant and efficient knowledge transfer without sharing data. Here, we address the challenge of minimizing inter-client interference during knowledge sharing, which arises from heterogeneous tasks across clients in the FCL setup. To this end, we propose a novel framework, Federated Selective Inter-client Transfer (FedSeIT), which selectively combines model parameters of foreign clients. To further maximize knowledge transfer, we assess domain overlap and select informative tasks from the sequence of historical tasks at each foreign client while preserving privacy. Evaluating against baselines on five datasets from diverse domains, we show improved performance: an average gain of 12.4% in text classification over a sequence of tasks. To the best of our knowledge, this is the first work to apply FCL to NLP.
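The core mechanism is selective aggregation: each client scores the historical tasks of foreign clients by domain overlap and blends in only the most relevant foreign parameters. The following is a minimal sketch of that idea, not the paper's exact method; the cosine-similarity proxy for domain overlap, the top-k selection, and the mixing weight `alpha` are all illustrative assumptions.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity as a stand-in for the paper's domain-overlap measure.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def selective_transfer(local_params, local_emb, foreign, k=2, alpha=0.5):
    """Blend local parameters with the k most domain-relevant foreign tasks.

    foreign: list of (params, task_embedding) pairs, one per historical
    task at a foreign client. Only parameters and task embeddings cross
    clients (never raw data), in the privacy-preserving spirit of FedSeIT.
    """
    # Rank foreign tasks by similarity of their task embedding to ours.
    scored = sorted(foreign, key=lambda pe: cosine(local_emb, pe[1]), reverse=True)
    selected = [params for params, _ in scored[:k]]
    transferred = np.mean(selected, axis=0)  # aggregate the selected foreign parameters
    return alpha * local_params + (1 - alpha) * transferred

# Toy usage: three foreign tasks with 4-dim parameters and 8-dim embeddings.
rng = np.random.default_rng(0)
local_p, local_e = rng.normal(size=4), rng.normal(size=8)
foreign = [(rng.normal(size=4), rng.normal(size=8)) for _ in range(3)]
print(selective_transfer(local_p, local_e, foreign))
```

The selection step is what distinguishes this from vanilla federated averaging: irrelevant foreign tasks are excluded entirely rather than merely down-weighted, which limits inter-client interference.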
2018
Cross-topic Argument Mining from Heterogeneous Sources
Christian Stab | Tristan Miller | Benjamin Schiller | Pranav Rai | Iryna Gurevych
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Argument mining is a core technology for automating argument search in large document collections. Despite its usefulness for this task, most current approaches are designed for use only with specific text types and fall short when applied to heterogeneous texts. In this paper, we propose a new sentential annotation scheme that is reliably applicable by crowd workers to arbitrary Web texts. We source annotations for over 25,000 instances covering eight controversial topics. We show that integrating topic information into bidirectional long short-term memory networks outperforms vanilla BiLSTMs by more than 3 percentage points in F1 in two- and three-label cross-topic settings. We also show that these results can be further improved by leveraging additional data for topic relevance using multi-task learning.
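One plausible realization of "integrating topic information into bidirectional long short-term memory networks" is to concatenate a learned topic embedding to every token embedding before the recurrent layer. The sketch below follows that assumption; the layer sizes, pooling strategy, and class names are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class TopicAwareBiLSTM(nn.Module):
    """Sentence-level argument classifier conditioned on topic information.

    A learned topic embedding is repeated across the sequence and
    concatenated to each token embedding before the BiLSTM. Dimensions
    are illustrative defaults.
    """
    def __init__(self, vocab=10000, n_topics=8, emb=100, topic_emb=32,
                 hidden=128, n_labels=3):
        super().__init__()
        self.tok = nn.Embedding(vocab, emb)
        self.topic = nn.Embedding(n_topics, topic_emb)
        self.lstm = nn.LSTM(emb + topic_emb, hidden,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_labels)

    def forward(self, tokens, topic_ids):
        # tokens: (batch, seq_len); topic_ids: (batch,)
        t = self.topic(topic_ids).unsqueeze(1).expand(-1, tokens.size(1), -1)
        x = torch.cat([self.tok(tokens), t], dim=-1)  # topic-augmented inputs
        h, _ = self.lstm(x)
        return self.out(h.mean(dim=1))  # mean-pool over time, then classify

model = TopicAwareBiLSTM()
logits = model(torch.randint(0, 10000, (2, 12)), torch.tensor([0, 3]))
print(logits.shape)  # torch.Size([2, 3])
```

Conditioning on the topic lets a single model generalize across heterogeneous sources, since the classifier sees which controversy a sentence argues about rather than relying on topic-specific surface cues.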
2016
A New Feature Selection Technique Combined with ELM Feature Space for Text Classification
Rajendra Kumar Roul | Pranav Rai
Proceedings of the 13th International Conference on Natural Language Processing