Abstract
Heuristic-based active learning (AL) methods are limited when the data distributions of the underlying learning problems vary. We introduce a method that learns an AL “policy” using “imitation learning” (IL). Our IL-based approach makes use of an efficient and effective “algorithmic expert”, which provides the policy learner with good actions in the encountered AL situations. The AL strategy is then learned with a feedforward network, mapping situations to the most informative query datapoints. We evaluate our method on two different tasks: text classification and named entity recognition. Experimental results show that our IL-based AL strategy is more effective than strong previous methods using heuristics and reinforcement learning.
- Anthology ID:
- P18-1174
- Volume:
- Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month:
- July
- Year:
- 2018
- Address:
- Melbourne, Australia
- Venue:
- ACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 1874–1883
- URL:
- https://aclanthology.org/P18-1174
- DOI:
- 10.18653/v1/P18-1174
- Cite (ACL):
- Ming Liu, Wray Buntine, and Gholamreza Haffari. 2018. Learning How to Actively Learn: A Deep Imitation Learning Approach. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1874–1883, Melbourne, Australia. Association for Computational Linguistics.
- Cite (Informal):
- Learning How to Actively Learn: A Deep Imitation Learning Approach (Liu et al., ACL 2018)
- PDF:
- https://preview.aclanthology.org/ingestion-script-update/P18-1174.pdf
- Code:
- Grayming/ALIL
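
For a concrete picture of the approach described in the abstract, below is a minimal, hypothetical sketch of imitation-learning-based active learning: an algorithmic expert scores candidate queries by one-step look-ahead on a dev set, and a small feedforward network is trained to imitate its choices, then drives query selection on its own. This is a simplification for illustration only (behaviour cloning along the expert trajectory rather than the paper's DAgger-style mixing, a synthetic task instead of text classification/NER, and an assumed state featurisation); it is not the code in Grayming/ALIL.

```python
# Illustrative sketch of imitation-learning-based active learning.
# NOT the authors' ALIL implementation; task model, features, and expert
# criterion are simplifying assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic data standing in for a labelled simulation task.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_pool, y_pool = X[:400], y[:400]        # "unlabelled" pool (labels hidden from the learner)
X_dev, y_dev = X[400:500], y[400:500]    # dev set used by the algorithmic expert
X_init, y_init = X[500:510], y[500:510]  # small seed set

def state_features(x_cand, model):
    """Represent an AL 'situation' for one candidate: its features plus the
    current model's predictive distribution on it (an assumed featurisation)."""
    probs = model.predict_proba(x_cand.reshape(1, -1))[0]
    return np.concatenate([x_cand, probs])

def expert_pick(cands, X_lab, y_lab):
    """Algorithmic expert: choose the candidate whose (label-revealed) addition
    gives the best dev accuracy after retraining, i.e. one-step look-ahead."""
    scores = []
    for i in cands:
        m = LogisticRegression(max_iter=200).fit(
            np.vstack([X_lab, X_pool[i]]), np.append(y_lab, y_pool[i]))
        scores.append(m.score(X_dev, y_dev))
    return cands[int(np.argmax(scores))]

# ---- Collect expert demonstrations over one simulated AL episode ----
X_lab, y_lab = X_init.copy(), y_init.copy()
demo_states, demo_targets = [], []
for _ in range(15):                                   # query budget
    model = LogisticRegression(max_iter=200).fit(X_lab, y_lab)
    cands = rng.choice(len(X_pool), size=10, replace=False)
    best = expert_pick(cands, X_lab, y_lab)
    for i in cands:                                   # expert's choice -> 1, others -> 0
        demo_states.append(state_features(X_pool[i], model))
        demo_targets.append(1.0 if i == best else 0.0)
    X_lab = np.vstack([X_lab, X_pool[best]])
    y_lab = np.append(y_lab, y_pool[best])

# ---- Train a feedforward policy network to imitate the expert ----
policy = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
policy.fit(np.array(demo_states), np.array(demo_targets))

# ---- Run a new AL episode driven by the learned policy (no expert needed) ----
X_lab, y_lab = X_init.copy(), y_init.copy()
for _ in range(15):
    model = LogisticRegression(max_iter=200).fit(X_lab, y_lab)
    cands = rng.choice(len(X_pool), size=10, replace=False)
    scores = [policy.predict(state_features(X_pool[i], model).reshape(1, -1))[0]
              for i in cands]
    pick = cands[int(np.argmax(scores))]
    X_lab = np.vstack([X_lab, X_pool[pick]])
    y_lab = np.append(y_lab, y_pool[pick])
print("dev accuracy with policy-driven AL:",
      LogisticRegression(max_iter=200).fit(X_lab, y_lab).score(X_dev, y_dev))
```

The key design point the sketch tries to convey is that the expensive look-ahead expert is only needed while collecting demonstrations; at deployment time the cheap feedforward policy alone ranks candidates, which is what makes the learned strategy transferable to new AL problems.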