2019
On the Compression of Lexicon Transducers
Marco Cognetta

Cyril Allauzen

Michael Riley
Proceedings of the 14th International Conference on Finite-State Methods and Natural Language Processing
In finite-state language processing pipelines, a lexicon is often a key component. It needs to be comprehensive to ensure accuracy, reducing out-of-vocabulary misses. However, in memory-constrained environments (e.g., mobile phones), the size of the component automata must be kept small. Indeed, a delicate balance between comprehensiveness, speed, and memory must be struck to conform to device requirements while providing a good user experience. In this paper, we describe a compression scheme for lexicons when represented as finite-state transducers. We efficiently encode the graph of the transducer while storing transition labels separately. The graph encoding scheme is based on the LOUDS (Level Order Unary Degree Sequence) tree representation, which supports constant-time tree traversal for queries while being information-theoretically optimal in space. We find that our encoding is near the theoretical lower bound for such graphs and substantially outperforms more traditional representations in space while remaining competitive in latency benchmarks.
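The LOUDS idea described in this abstract can be illustrated with a minimal sketch (illustrative only, not the paper's implementation): each node, visited in level order, contributes one 1-bit per child followed by a 0-bit, and navigation reduces to rank/select queries over the bit string. The naive linear-time rank/select below stand in for the constant-time succinct versions.

```python
def louds_encode(children, root=0):
    """Encode a tree (node -> list of children, BFS order) as a LOUDS
    bit list: a '10' super-root, then '1' * degree + '0' per node."""
    bits, order = [1, 0], [root]
    i = 0
    while i < len(order):
        kids = children.get(order[i], [])
        bits += [1] * len(kids) + [0]
        order += kids
        i += 1
    return bits, order  # order[k] is the node whose 1-bit is the (k+1)-th 1

def rank1(bits, p):           # number of 1s in bits[0..p]
    return sum(bits[:p + 1])

def select0(bits, j):         # position of the j-th 0 (1-indexed)
    seen = 0
    for p, b in enumerate(bits):
        seen += 1 - b
        if seen == j:
            return p

def first_child(bits, p):
    """Given the position p of a node's 1-bit, return the position of its
    first child's 1-bit, or None if the node is a leaf."""
    q = select0(bits, rank1(bits, p)) + 1
    return q if q < len(bits) and bits[q] == 1 else None

# Toy tree: 0 -> {1, 2}, 1 -> {3}
bits, order = louds_encode({0: [1, 2], 1: [3]})
# bits == [1,0, 1,1,0, 1,0, 0, 0]; the root's 1-bit is at position 0
assert first_child(bits, 0) == 2      # node 1's 1-bit
assert first_child(bits, 2) == 5      # node 3's 1-bit
assert first_child(bits, 5) is None   # node 3 is a leaf
```

With succinct rank/select structures these queries run in constant time, which is what makes the representation practical for traversal-heavy workloads like lexicon lookup.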
Online Infix Probability Computation for Probabilistic Finite Automata
Marco Cognetta

Yo-Sub Han

Soon Chan Kwon
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Probabilistic finite automata (PFAs) are a common statistical language model in natural language and speech processing. A typical task for PFAs is to compute the probability of all strings that match a query pattern. An important special case of this problem is computing the probability of a string appearing as a prefix, suffix, or infix. These problems find use in many natural language processing tasks such as word prediction and text error correction. Recently, we gave the first incremental algorithm to efficiently compute the infix probabilities of each prefix of a string (Cognetta et al., 2018). We develop an asymptotic improvement of that algorithm and solve the open problem of computing the infix probabilities of PFAs from streaming data, which is crucial when processing queries online and is the ultimate goal of the incremental approach.
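As a rough illustration of the streaming setting (not the paper's algorithm, which tackles the harder infix case), a related quantity, the prefix probability P(w Σ*), can be maintained online with a single forward vector updated once per arriving symbol. The PFA below is a toy example with illustrative numbers; summing the forward vector gives the prefix probability under the assumption that every state eventually terminates with probability 1.

```python
import numpy as np

class PrefixProbability:
    """Maintain P(w Sigma*) for a growing string w, one symbol at a time.
    Toy PFA; assumes every state's completion mass is 1, so the sum of
    the forward vector equals the prefix probability."""
    def __init__(self, init, matrices):
        self.v = np.asarray(init, dtype=float)  # forward state vector
        self.M = matrices                        # symbol -> transition matrix

    def feed(self, symbol):
        self.v = self.v @ self.M[symbol]         # O(|Q|^2) work per symbol
        return float(self.v.sum())

# Transition + termination mass per state sums to 1 (termination 0.2 each).
M = {'a': np.array([[0.3, 0.2], [0.0, 0.4]]),
     'b': np.array([[0.1, 0.2], [0.3, 0.1]])}
pp = PrefixProbability([1.0, 0.0], M)
print(pp.feed('a'))   # P(a  Sigma*) ~ 0.5
print(pp.feed('b'))   # P(ab Sigma*) ~ 0.17
```

The streaming infix computation in the paper is more involved because occurrences of the pattern can overlap, but the per-symbol matrix-vector update above is the basic cost model.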
SoftRegex: Generating Regex from Natural Language Descriptions using Softened Regex Equivalence
Jun-U Park

Sang-Ki Ko

Marco Cognetta

Yo-Sub Han
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
We continue the study of generating semantically correct regular expressions from natural language (NL) descriptions. The current state-of-the-art model, SemRegex, produces regular expressions from NL by rewarding reinforcement learning based on the semantic (rather than syntactic) equivalence between two regular expressions. Since the regular expression equivalence problem is PSPACE-complete, we introduce the EQ_Reg model for computing the similarity of two regular expressions using deep neural networks. Our EQ_Reg model essentially softens the equivalence of two regular expressions when used as a reward function. We then propose a new regex generation model, SoftRegex, using the EQ_Reg model, and empirically demonstrate that SoftRegex substantially reduces the training time (by a factor of at least 3.6) and produces state-of-the-art results on three benchmark datasets.
2018
Incremental Computation of Infix Probabilities for Probabilistic Finite Automata
Marco Cognetta

Yo-Sub Han

Soon Chan Kwon
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
In natural language processing, a common task is to compute the probability of a phrase appearing in a document or to calculate the probability of all phrases matching a given pattern. For instance, one computes affix (prefix, suffix, infix, etc.) probabilities of a string or a set of strings with respect to a probability distribution of patterns. The problem of computing infix probabilities of strings when the pattern distribution is given by a probabilistic context-free grammar or by a probabilistic finite automaton is already solved, yet it remained open to compute the infix probabilities in an incremental manner. The incremental computation is crucial when a new query is built from a previous query. We tackle this problem and suggest a method that computes infix probabilities incrementally for probabilistic finite automata by representing all the probabilities of matching strings as a series of transition matrix calculations. We show that the proposed approach is theoretically faster than the previous method and, using real-world data, demonstrate that our approach has vastly better performance in practice.
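The "series of transition matrix calculations" framing rests on a basic PFA identity: the probability of a string w is the initial distribution, times one transition matrix per symbol of w, times the vector of termination probabilities. The sketch below shows that identity on a toy PFA with illustrative numbers (the paper's contribution, computing infix probabilities incrementally, additionally has to account for overlapping pattern occurrences).

```python
import numpy as np
from functools import reduce

# Toy PFA over {a, b}; each state's transition + termination mass sums to 1.
init  = np.array([1.0, 0.0])                     # initial distribution pi
final = np.array([0.2, 0.2])                     # termination probabilities
M = {'a': np.array([[0.3, 0.2], [0.0, 0.4]]),
     'b': np.array([[0.1, 0.2], [0.3, 0.1]])}

def string_probability(w):
    """P(w) = pi * M(w_1) * ... * M(w_n) * final, one matrix per symbol."""
    prod = reduce(lambda v, c: v @ M[c], w, init)
    return float(prod @ final)

print(string_probability("ab"))   # pi M(a) M(b) final ~ 0.034
```

Keeping the running vector-matrix product (a vector, not a full matrix) is what makes the per-symbol update cheap and the computation naturally incremental.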