Efficient algorithms for modifying and sampling from a categorical distribution
Probabilistic programming languages and other machine learning applications often require samples to be generated from a categorical distribution in which the probability of each of n categories is specified as a parameter. If these parameters are themselves hyper-parameters, they need to be modified; however, current implementations of categorical distributions take O(n) time to modify a parameter. If n is large and the parameters are modified frequently, this cost can become prohibitive. Here we present the insight that a Huffman tree is an efficient data structure for representing a categorical distribution, and we give algorithms to generate samples and to add, delete, and modify categories in O(log(n)) time. We demonstrate that the time to sample from the distribution remains, in practice, within a few percent of the theoretical optimum. The same algorithm may also be useful in the context of adaptive Huffman coding, where computational efficiency is important.
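The core mechanism can be sketched in a few lines: keep the per-category weights at the leaves of a binary tree and cache subtree totals at the internal nodes, so that drawing a sample and changing a single weight both cost O(depth). The sketch below uses a simple balanced (segment-tree) layout rather than the paper's Huffman tree, so it gives O(log n) operations but not the near-entropy-optimal expected sampling depth the authors report; the class and method names (WeightedTree, sample, update) are illustrative and not taken from the paper.

```python
import random


class WeightedTree:
    """Balanced binary tree over n categories with cached subtree weights.

    Illustrative sketch only: the paper uses a Huffman tree, which places
    high-probability categories near the root; here a fixed balanced layout
    is used for brevity, which still gives O(log n) sample and update.
    """

    def __init__(self, weights):
        self.n = len(weights)
        # Implicit segment-tree layout: leaf i is stored at index size + i.
        self.size = 1
        while self.size < self.n:
            self.size *= 2
        self.tree = [0.0] * (2 * self.size)
        for i, w in enumerate(weights):
            self.tree[self.size + i] = w
        # Each internal node caches the total weight of its subtree.
        for node in range(self.size - 1, 0, -1):
            self.tree[node] = self.tree[2 * node] + self.tree[2 * node + 1]

    def update(self, category, new_weight):
        """Set one category's weight, refreshing the leaf-to-root path: O(log n)."""
        node = self.size + category
        self.tree[node] = new_weight
        node //= 2
        while node >= 1:
            self.tree[node] = self.tree[2 * node] + self.tree[2 * node + 1]
            node //= 2

    def sample(self):
        """Draw a category with probability proportional to its weight: O(log n)."""
        r = random.random() * self.tree[1]  # tree[1] holds the total weight
        node = 1
        while node < self.size:
            left = 2 * node
            if r < self.tree[left]:
                node = left
            else:
                r -= self.tree[left]
                node = left + 1
        return node - self.size


if __name__ == "__main__":
    dist = WeightedTree([0.5, 0.2, 0.2, 0.1])
    dist.update(3, 0.4)  # modify one parameter without rebuilding the table
    counts = [0, 0, 0, 0]
    for _ in range(100_000):
        counts[dist.sample()] += 1
    print(counts)  # counts roughly proportional to 0.5, 0.2, 0.2, 0.4
```

Replacing the balanced layout with a Huffman tree (and re-balancing it as weights change) is what lets the paper keep the expected number of branch decisions per sample close to the distribution's entropy, i.e. within a few percent of the theoretical optimum.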