Approximating Probabilistic Models as Weighted Finite Automata

Ananda Theertha Suresh, Brian Roark, Michael Riley, Vlad Schogol


Abstract
Weighted finite automata (WFAs) are often used to represent probabilistic models, such as n-gram language models, because, among other things, they are efficient in both time and space for recognition tasks. The probabilistic source to be represented as a WFA, however, may come in many forms. Given a generic probabilistic model over sequences, we propose an algorithm to approximate it as a WFA such that the Kullback-Leibler divergence between the source model and the WFA target model is minimized. The proposed algorithm involves a counting step and a difference-of-convex optimization step, both of which can be performed efficiently. We demonstrate the usefulness of our approach on various tasks, including distilling n-gram models from neural models, building compact language models, and building open-vocabulary character models. The algorithms used for these experiments are available in an open-source software library.
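As a sketch of the objective stated in the abstract (the symbols p, q_A, and x below are our own notation, not drawn from the paper): letting p denote the distribution over sequences defined by the source model and q_A the distribution defined by a candidate WFA A, the approximation seeks the WFA minimizing the KL divergence

\min_A D(p \,\|\, q_A) = \min_A \sum_x p(x) \log \frac{p(x)}{q_A(x)},

where the sum ranges over sequences x. The counting step and the difference-of-convex optimization step mentioned in the abstract are the two components the paper uses to carry out this minimization efficiently.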
Anthology ID: 2021.cl-2.9
Volume: Computational Linguistics, Volume 47, Issue 2 - June 2021
Month: June
Year: 2021
Address: Cambridge, MA
Venue: CL
Publisher: MIT Press
Pages: 221–254
URL: https://aclanthology.org/2021.cl-2.9
DOI: 10.1162/coli_a_00401
Cite (ACL): Ananda Theertha Suresh, Brian Roark, Michael Riley, and Vlad Schogol. 2021. Approximating Probabilistic Models as Weighted Finite Automata. Computational Linguistics, 47(2):221–254.
Cite (Informal): Approximating Probabilistic Models as Weighted Finite Automata (Suresh et al., CL 2021)
PDF: https://preview.aclanthology.org/landing_page/2021.cl-2.9.pdf