Learning How to Actively Learn: A Deep Imitation Learning Approach

Ming Liu, Wray Buntine, Gholamreza Haffari


Abstract
Heuristic-based active learning (AL) methods are limited when the data distribution of the underlying learning problems varies. We introduce a method that learns an AL “policy” using “imitation learning” (IL). Our IL-based approach makes use of an efficient and effective “algorithmic expert”, which provides the policy learner with good actions in the encountered AL situations. The AL strategy is then learned with a feedforward network, mapping situations to the most informative query datapoints. We evaluate our method on two different tasks: text classification and named entity recognition. Experimental results show that our IL-based AL strategy is more effective than strong previous methods based on heuristics and reinforcement learning.
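The core loop the abstract describes can be sketched in a few lines: an algorithmic expert with access to held-out labels demonstrates which pool point to query, a policy is trained to imitate those demonstrations, and the learned policy then selects queries on its own. The sketch below is a simplified illustration on toy data, not the paper's implementation: the task model is a tiny logistic regression, the state features are made up for illustration, and demonstration states come from the expert's own trajectory rather than the policy roll-outs used in the paper's DAgger-style scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable task; all names and features here are
# illustrative stand-ins, not the paper's model or datasets.
X_all = rng.normal(size=(120, 2))
y_all = (X_all[:, 0] + X_all[:, 1] > 0).astype(float)
lab_X, lab_y = X_all[:5], y_all[:5]          # small labeled seed set
pool_X, pool_y = X_all[5:80], y_all[5:80]    # unlabeled pool (labels hidden from the policy)
val_X, val_y = X_all[80:], y_all[80:]        # held-out set only the expert may consult


def fit_logreg(X, y, epochs=100, lr=0.5):
    """Logistic regression by gradient descent; stands in for the task model."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w


def acc(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(np.mean((Xb @ w > 0).astype(float) == y))


def features(w, X):
    # Per-candidate state features for the policy: |margin| and input norm.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.stack([np.abs(Xb @ w), np.linalg.norm(X, axis=1)], axis=1)


def expert_action(lX, ly, pX, py):
    # One-step-lookahead "algorithmic expert": pick the pool point whose
    # revealed label most improves held-out accuracy.
    scores = [acc(fit_logreg(np.vstack([lX, pX[i:i + 1]]),
                             np.append(ly, py[i])), val_X, val_y)
              for i in range(len(pX))]
    return int(np.argmax(scores))


demos_X, demos_y = [], []          # aggregated (state features, expert choice) dataset
for t in range(5):                 # a few active-learning rounds
    w = fit_logreg(lab_X, lab_y)
    i_star = expert_action(lab_X, lab_y, pool_X, pool_y)
    lbl = np.zeros(len(pool_X))
    lbl[i_star] = 1.0              # 1 for the expert-chosen point, 0 otherwise
    demos_X.append(features(w, pool_X))
    demos_y.append(lbl)
    # Query the expert-chosen point: move it from the pool to the labeled set.
    lab_X = np.vstack([lab_X, pool_X[i_star:i_star + 1]])
    lab_y = np.append(lab_y, pool_y[i_star])
    pool_X = np.delete(pool_X, i_star, axis=0)
    pool_y = np.delete(pool_y, i_star)

# Train the policy (here a logistic scorer) to imitate the expert's choices.
P_X, P_y = np.vstack(demos_X), np.concatenate(demos_y)
w_pi = fit_logreg(P_X, P_y)

# Deploy: the learned policy ranks remaining pool points without peeking at labels.
w = fit_logreg(lab_X, lab_y)
pick = int(np.argmax(np.hstack([features(w, pool_X),
                                np.ones((len(pool_X), 1))]) @ w_pi))
print("policy pick:", pick, "task accuracy:", round(acc(w, val_X, val_y), 2))
```

The expensive part is the expert's one-step lookahead, which refits the model once per pool point; the point of learning the policy is that, once trained, it replaces this lookahead with a single cheap forward pass over the candidates.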
Anthology ID:
P18-1174
Volume:
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1874–1883
URL:
https://aclanthology.org/P18-1174
DOI:
10.18653/v1/P18-1174
Cite (ACL):
Ming Liu, Wray Buntine, and Gholamreza Haffari. 2018. Learning How to Actively Learn: A Deep Imitation Learning Approach. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1874–1883, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Learning How to Actively Learn: A Deep Imitation Learning Approach (Liu et al., ACL 2018)
PDF:
https://preview.aclanthology.org/auto-file-uploads/P18-1174.pdf
Presentation:
 P18-1174.Presentation.pdf
Video:
 https://vimeo.com/285804866
Code:
 Grayming/ALIL