Learning How to Active Learn by Dreaming

Thuy-Trang Vu, Ming Liu, Dinh Phung, Gholamreza Haffari


Abstract
Heuristic-based active learning (AL) methods are limited when the data distributions of the underlying learning problems vary. Recent data-driven AL policy learning methods are also restricted to learning from closely related domains. We introduce a new sample-efficient method that learns the AL policy directly on the target domain of interest using wake and dream cycles. Our approach alternates between querying annotations for the selected datapoints to update the underlying student learner and improving the AL policy in simulation, where the current student learner acts as an imperfect annotator. We evaluate our method on cross-domain and cross-lingual text classification and named entity recognition tasks. Experimental results show that our dream-based AL policy training strategy is more effective than applying a pretrained policy without further fine-tuning, and outperforms strong existing baselines based on heuristics or reinforcement learning.
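
The wake/dream cycle described above can be made concrete with a short sketch. The Python below is a minimal toy illustration only: the Student, Policy, oracle, and update interfaces are hypothetical placeholders, not the authors' implementation (see the trangvu/alil-dream repository for that). In the wake phase the policy queries a human oracle and the student is retrained; in the dream phase the policy is refined on simulated AL episodes in which the current student supplies imperfect pseudo-labels, so no extra human annotation is spent.

import random

class Student:
    """Toy stand-in for the task model; hypothetical, not the paper's code."""
    def __init__(self):
        self.examples = []
    def fit(self, labeled):
        self.examples = list(labeled)
    def predict(self, x):
        # Acts as the "imperfect annotator": returns the majority label seen so far.
        labels = [y for _, y in self.examples]
        return max(set(labels), key=labels.count) if labels else 0

class Policy:
    """Toy stand-in for the learned AL policy; here it selects uniformly at random."""
    def select(self, pool, student):
        return random.choice(pool)
    def update(self, episode, student):
        pass  # a real learned policy would take a gradient step on simulated reward

def wake_dream_al(policy, student, pool, oracle, budget, dream_episodes=5):
    labeled = []
    for _ in range(budget):
        # Wake: query the human oracle for one datapoint chosen by the policy,
        # then update the student on all labels gathered so far.
        x = policy.select(pool, student)
        pool.remove(x)
        labeled.append((x, oracle(x)))
        student.fit(labeled)
        # Dream: refine the policy on simulated AL episodes where the current
        # student, not a human, provides the labels.
        for _ in range(dream_episodes):
            sample = random.sample(pool, min(len(pool), 20))
            episode = [(x, student.predict(x)) for x in sample]
            policy.update(episode, student)
    return student, policy

if __name__ == "__main__":
    pool = list(range(100))             # unlabeled pool (toy data)
    oracle = lambda x: int(x % 2 == 0)  # human annotator (toy labels)
    wake_dream_al(Policy(), Student(), pool, oracle, budget=10)

Swapping in a real classifier for Student and an uncertainty- or reinforcement-learned selector for Policy recovers the general shape of the method; the paper's contribution lies in how the dream-phase simulations train the policy directly on the target domain.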
Anthology ID:
P19-1401
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
4091–4101
URL:
https://aclanthology.org/P19-1401
DOI:
10.18653/v1/P19-1401
Bibkey:
Cite (ACL):
Thuy-Trang Vu, Ming Liu, Dinh Phung, and Gholamreza Haffari. 2019. Learning How to Active Learn by Dreaming. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4091–4101, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Learning How to Active Learn by Dreaming (Vu et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1401.pdf
Code:
trangvu/alil-dream