Niels van der Heijden


2023

K-hop neighbourhood regularization for few-shot learning on graphs: A case study of text classification
Niels van der Heijden | Ekaterina Shutova | Helen Yannakoudakis
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

We present FewShotTextGCN, a novel method designed to effectively utilize the properties of word-document graphs for improved learning in low-resource settings. We introduce K-hop Neighbourhood Regularization, a regularizer for heterogeneous graphs, and show that it stabilizes and improves learning when only a few training samples are available. We furthermore propose a simplification of the graph-construction method, which results in a graph that is ∼7 times less dense and yields better performance in low-resource settings while performing on par with the state of the art in high-resource settings. Finally, we introduce a new variant of Adaptive Pseudo-Labeling tailored for word-document graphs. When using as few as 20 samples for training, we outperform a strong TextGCN baseline by 17% absolute accuracy on average over eight languages. We demonstrate that our method can be applied to document classification without any language model pretraining on a wide range of typologically diverse languages while performing on par with large pretrained language models.
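The abstract describes the regularizer only at a high level. As a rough illustration of the general idea, the sketch below implements one plausible form of a K-hop neighbourhood regularizer: it pulls each node's representation toward the mean representation of its K-hop neighbourhood on a dense, row-normalized word-document graph. The function name, the dense-adjacency formulation, and the squared-distance penalty are assumptions made for illustration, not the paper's exact formulation.

import torch

def k_hop_regularizer(adj: torch.Tensor, h: torch.Tensor, k: int = 2) -> torch.Tensor:
    """Hypothetical K-hop neighbourhood smoothness penalty (illustrative only).

    adj: (N, N) row-normalized adjacency of the word-document graph.
    h:   (N, d) node representations from the GCN.
    """
    # Reachability within K hops: multiply the normalized adjacency K times.
    reach = adj.clone()
    for _ in range(k - 1):
        reach = reach @ adj
    # Re-normalize rows so each node averages over its K-hop neighbourhood.
    reach = reach / reach.sum(dim=1, keepdim=True).clamp(min=1e-12)
    neighbourhood_mean = reach @ h  # (N, d)
    # Penalize the distance between each node and its K-hop neighbourhood mean.
    return ((h - neighbourhood_mean) ** 2).sum(dim=1).mean()

In training, a term like this would typically be added to the classification loss with a weighting coefficient, e.g. loss = task_loss + reg_weight * k_hop_regularizer(adj, hidden, k=2).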

2021

Multilingual and cross-lingual document classification: A meta-learning approach
Niels van der Heijden | Helen Yannakoudakis | Pushkar Mishra | Ekaterina Shutova
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

The great majority of languages in the world are considered under-resourced for the successful application of deep learning methods. In this work, we propose a meta-learning approach to document classification in low-resource languages and demonstrate its effectiveness in two different settings: few-shot, cross-lingual adaptation to previously unseen languages; and multilingual joint training when limited target-language data is available during training. We conduct a systematic comparison of several meta-learning methods, investigate multiple settings in terms of data availability, and show that meta-learning thrives in settings with a heterogeneous task distribution. We propose a simple yet effective adjustment to existing meta-learning methods which allows for better and more stable learning, and set a new state of the art on a number of languages while performing on par on others, using only a small amount of labeled data.
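For readers unfamiliar with the episodic training loop that MAML-style meta-learning uses (one of the families of methods such a comparison typically covers), the sketch below shows a first-order inner/outer update on a single task's support and query batches. All names (run_episode, support, query) and hyperparameters are placeholders for illustration; this is not the paper's specific method or its proposed adjustment.

import copy
import torch

def run_episode(model, support, query, loss_fn, inner_lr=1e-3, inner_steps=5):
    """One meta-learning episode on a single task (e.g. one language).

    support/query are (inputs, labels) batches. Gradients for the outer
    (meta) update are written into model.parameters()' .grad fields using
    a first-order approximation (no second-order derivatives).
    """
    learner = copy.deepcopy(model)                    # task-specific copy
    inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)

    for _ in range(inner_steps):                      # inner-loop adaptation on support data
        x, y = support
        inner_opt.zero_grad()
        loss_fn(learner(x), y).backward()
        inner_opt.step()

    x_q, y_q = query                                  # evaluate the adapted copy on query data
    inner_opt.zero_grad()
    query_loss = loss_fn(learner(x_q), y_q)
    query_loss.backward()

    # First-order meta-gradient: copy the adapted model's query gradients
    # onto the original parameters for the outer optimizer to apply.
    for p, p_adapted in zip(model.parameters(), learner.parameters()):
        p.grad = None if p_adapted.grad is None else p_adapted.grad.clone()
    return query_loss.item()

An outer loop would then call a meta-optimizer's step() after each episode, or accumulate gradients over a batch of tasks (languages) before stepping.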