Particle filters have become an indispensable tool for model selection, object tracking and statistical inference in high–dimensional problems [1, 2, 3, 4]. While particle filtering works well in many conventional settings, the method is less well-suited when the user faces severe memory restrictions.
Memory-restricted problems are more than just curiosities. In control problems in electrical engineering and experimental physics, for instance, it is common that the dynamics of a system can radically change over the time required to communicate between the system and the computer used to control its dynamics [5, 6]. This latency can be reduced to acceptable levels by allowing the inference to be performed inside the device itself, but this often places prohibitive restrictions on the processing power and memory of the embedded processors. These restrictions can preclude the use of traditional particle filter methods.
We present an approach that we call rejection filtering (RF) that efficiently samples from an approximation to the posterior by using rejection sampling and resampling together. This allows rejection filtering to estimate the first two moments of the posterior distribution while storing no more than a constant number of samples at a time in typical use cases. Rejection filtering therefore can require significantly less memory than traditional particle filter methods. Moreover, RF can be easily parallelized at a fine-grained level, such that it can be used with an array of small processing cores. Thus, rejection filtering is well suited for inference using hybrid computing and memory-restricted platforms. For example, rejection filtering allows for inference to be embedded in novel contexts such as very small cryogenic controllers, or for integration with existing memory-intensive digital signal processing systems.
We also show that the advantages of rejection filtering are retained in the active learning case, wherein samples correspond to experiments that can be selected and used to optimize the inference procedure. In particular, our approach uses well-motivated experiment design heuristics in conjunction with rejection filtering. This amounts to a form of selective sampling that is computationally inexpensive, easy to parallelize, simple to program and able to operate in a much more memory-restricted environment than existing approaches [10, 11].
2 Rejection Filtering
We start by discussing rejection sampling methods, as we will draw ideas from these methods to formulate our rejection filtering algorithm. For simplicity we take all variables to be discrete in the following; however, the generalization to the continuous case is trivial. Rejection sampling provides a simple and elegant approach to Bayesian inference. It samples from the posterior distribution by first drawing a sample $x$ from the prior $P(x)$. The sample is then accepted with probability $P(E|x)$. The probability distribution of the accepted samples is $P(x|E) = P(E|x)P(x)/P(E)$, and the probability of drawing a sample which is then accepted is $P(E) = \sum_x P(E|x)P(x)$.
This probability can be increased by instead accepting a sample with probability $P(E|x)/\kappa_E$, where $\kappa_E$ is a constant that depends on the evidence $E$ such that $\kappa_E \ge \max_x P(E|x)$. Rescaling the likelihood does not change the posterior probability. It does, however, increase the probability of acceptance to $P(E)/\kappa_E$, which can dramatically improve the performance when rare events are observed.
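For concreteness, the rescaled accept/reject step can be sketched as follows. This is a toy Python illustration of our own: the two-hypothesis prior and the likelihood values are hypothetical, chosen only to show the mechanics.

```python
import random

# Hypothetical discrete example: two hypotheses with a known likelihood
# of the observed evidence E under each.
prior = {"x0": 0.5, "x1": 0.5}
likelihood = {"x0": 0.02, "x1": 0.06}  # P(E | x); deliberately small

# Rescale by kappa_E >= max_x P(E|x) so acceptance is as likely as possible.
kappa_E = max(likelihood.values())

def rejection_sample(n_draws, rng):
    """Draw from the prior, accept with probability P(E|x) / kappa_E."""
    accepted = []
    for _ in range(n_draws):
        x = rng.choices(list(prior), weights=list(prior.values()))[0]
        if rng.random() < likelihood[x] / kappa_E:
            accepted.append(x)
    return accepted

rng = random.Random(1234)
samples = rejection_sample(200_000, rng)
# Exact posterior: P(x1|E) = 0.06*0.5 / (0.02*0.5 + 0.06*0.5) = 0.75
frac_x1 = samples.count("x1") / len(samples)
```

Without the rescaling, the raw acceptance probability would be at most 0.06; dividing by $\kappa_E$ raises the average acceptance rate to roughly two thirds in this example while leaving the distribution of accepted samples unchanged.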
Two major drawbacks to this approach have prevented its widescale adoption. The first is that the probability of successfully updating shrinks exponentially with the number of updates in online inference problems. The second is that the constant $\kappa_E$ may not be precisely known, in which case the likelihood cannot be appropriately rescaled to avoid exponentially small acceptance probabilities. Given that the dimension of the Hilbert space that the training vectors reside in scales exponentially with the number of features, exponentially small probabilities are the norm rather than the exception. Our aim is to address these problems by using approximate, rather than exact, rejection sampling, and by combining rejection sampling with ideas from sequential Monte Carlo methods [1, 2, 3, 4] to allow approximate inference to be performed while only storing summary statistics of the prior distribution.
We make rejection sampling efficient by combining it with particle filtering methods through resampling. Rejection filtering does not try to propagate samples through many rounds of rejection sampling, but instead uses these samples to inform a new model for the posterior distribution. For example, if we assume that our prior distribution is a Gaussian, then a Gaussian model for the posterior distribution can be found by computing the mean and the covariance matrix for the samples that are accepted by the rejection sampling algorithm. This approach is reminiscent of assumed density filtering, which uses an analogous strategy for modeling the prior distribution but is less memory efficient than our method.
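A minimal sketch of one such Gaussian-resampled update is given below, under the assumption of a 1-D Gaussian prior and a Gaussian likelihood (both our own toy choices, selected so the exact posterior is known in closed form). Only running sums are stored, never the accepted samples themselves.

```python
import math
import random

def rf_update(mu, sigma, accept_prob, n_samples, rng):
    """One rejection-filtering update: sample from a Gaussian prior,
    accept via rejection sampling, and refit a Gaussian to the accepted
    samples using only running sums (constant memory)."""
    n, s, s2 = 0, 0.0, 0.0
    for _ in range(n_samples):
        x = rng.gauss(mu, sigma)
        if rng.random() < accept_prob(x):
            n += 1
            s += x
            s2 += x * x
    mean = s / n
    var = s2 / n - mean * mean
    return mean, var

# Toy model: prior N(0, 1); observation y = 1 with unit-variance Gaussian
# likelihood.  Taking kappa_E = max_x P(y|x) cancels the normalization,
# so the acceptance probability is exp(-(y - x)^2 / 2).
rng = random.Random(7)
y = 1.0
mean, var = rf_update(0.0, 1.0,
                      lambda x: math.exp(-((y - x) ** 2) / 2),
                      100_000, rng)
# The exact conjugate posterior here is N(0.5, 0.5).
```

The refit Gaussian then serves as the prior for the next datum, which is the resampling step that distinguishes rejection filtering from plain rejection sampling.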
Our method is described in detail in Algorithm 1. We discuss the efficiency of the algorithm in the following theorem. We consider an algorithm to be efficient if it runs in time polynomial in $d$, where $d$ is the number of features in the training vectors.
Theorem 1. Assume that $P(E|x)$ can be computed efficiently for all hypotheses $x$, that $P(E)$ is at most polynomially small for all evidences $E$, that the prior $P(x)$ can be efficiently sampled, and that an efficient sampling algorithm for the resampling distribution is provided. Then Algorithm 1 can efficiently compute the mean and covariance of $P(x|E)$ within error $\epsilon$ in the max–norm using $O(d^2 \log(d/\epsilon))$ memory.
A formal proof is given in Appendix D. The intuition is that a sample can be non–deterministically drawn from the posterior distribution by drawing a sample $x$ from the prior distribution and rejecting it with probability $1 - P(E|x)/\kappa_E$. Incremental formulas are used in Algorithm 1 to estimate the mean and the covariance using such samples, which obviates the need to store samples in memory in order to estimate the moments of the posterior distribution within error $\epsilon$. In practice, one can use the Welford algorithm to accumulate means and variances more precisely, but doing so does not change the asymptotic scaling with $d$ of the memory required by Algorithm 1.
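A Welford-style accumulator of this kind might look as follows. This is a sketch of our own: it keeps only O(d^2) running quantities (count, mean vector, scatter matrix) rather than the samples themselves.

```python
class MomentAccumulator:
    """Accumulate the mean and covariance of d-dimensional samples one
    at a time, Welford-style, without storing the samples."""

    def __init__(self, d):
        self.n = 0
        self.mean = [0.0] * d
        # m2[i][j] accumulates the co-moment sum
        # sum_k (x_k[i] - mean[i]) * (x_k[j] - mean[j]).
        self.m2 = [[0.0] * d for _ in range(d)]

    def push(self, x):
        self.n += 1
        delta = [xi - mi for xi, mi in zip(x, self.mean)]   # vs old mean
        for i in range(len(x)):
            self.mean[i] += delta[i] / self.n
        delta2 = [xi - mi for xi, mi in zip(x, self.mean)]  # vs new mean
        for i in range(len(x)):
            for j in range(len(x)):
                self.m2[i][j] += delta[i] * delta2[j]

    def covariance(self):
        # Population covariance; divide by n - 1 for the sample version.
        return [[v / self.n for v in row] for row in self.m2]

acc = MomentAccumulator(2)
for x in [[1.0, 2.0], [3.0, 0.0], [5.0, 4.0]]:
    acc.push(x)
```

Using one old-mean delta and one new-mean delta per update is what makes the accumulated scatter exact; naive sums of $x$ and $xx^T$ suffer the cancellation issues mentioned above.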
Width can be traded for depth by batching the data and processing each of these pieces of evidence separately. We use a computational model wherein processing nodes send a stream of the incremental means and covariance sums to a server that combines them to produce the model used in the next step of the inference procedure. Pseudocode for this version is given in Appendix G.
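The server-side combination step can be sketched as follows. This is our own 1-D illustration: it merges per-node counts, means, and scatter sums using the standard pairwise-combination identity for variances; the worker summaries shown are hypothetical.

```python
def merge_summaries(summaries):
    """Combine (n, mean, scatter) triples reported by worker nodes into
    one global count, mean, and scatter sum (1-D parameters)."""
    n_tot, mean_tot, m2_tot = 0, 0.0, 0.0
    for n, mean, m2 in summaries:
        if n == 0:
            continue
        delta = mean - mean_tot
        # Pairwise-combination identity for the scatter sums.
        m2_tot += m2 + delta * delta * n_tot * n / (n_tot + n)
        mean_tot = (n_tot * mean_tot + n * mean) / (n_tot + n)
        n_tot += n
    return n_tot, mean_tot, m2_tot

# Two hypothetical workers, each reporting the samples it accepted:
# worker A saw [1, 3]; worker B saw [5].
a = (2, 2.0, 2.0)  # n=2, mean=2, scatter = (1-2)^2 + (3-2)^2 = 2
b = (1, 5.0, 0.0)
n, mean, m2 = merge_summaries([a, b])
```

Because only these fixed-size triples cross the network, the server's memory cost is independent of how many samples each node drew.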
2.1 Bayesian Inference using Approximate Rejection Sampling
Having used resampling to unify rejection sampling and particle filtering, we can significantly improve the complexity of the resulting rejection filtering algorithm by relaxing from exact rejection sampling. Approximate rejection sampling is similar to rejection sampling except that it does not require that $\kappa_E \ge \max_x P(E|x)$. This means that the rescaled likelihood $P(E|x)/\kappa_E$ is greater than $1$ for some configurations. This inevitably results in errors in the posterior distribution but can make the inference process much more efficient in cases where a tight bound on $\max_x P(E|x)$ is unknown or when the prior has little support over the region where the rescaled likelihood exceeds $1$.
The main question remaining is how the errors incurred from taking $\kappa_E < \max_x P(E|x)$ impact the posterior distribution. To understand this, let us define the set of bad configurations
$\mathrm{bad} := \{x : P(E|x) > \kappa_E\}.$
If the set of bad configurations is non–empty then it naturally leads to errors in the posterior and can degrade the success probability in rejection filtering. Bounds on these effects are provided below.
Corollary 1. Suppose the assumptions of Theorem 1 are met, except for the requirement that $\kappa_E \ge \max_x P(E|x)$, and that the excess likelihood mass
$\delta := \sum_{x \in \mathrm{bad}} \left(P(E|x) - \kappa_E\right)P(x)/P(E)$
is small. Then approximate rejection sampling is efficient and samples from a distribution $\tilde P(x|E)$ such that $\|\tilde P(\cdot|E) - P(\cdot|E)\|_1 \in O(\delta)$. The probability of accepting a sample is at least $(1-\delta)P(E)/\kappa_E$.
Corollary 1 shows that taking a value of $\kappa_E$ that is too small for $P(E|x)/\kappa_E$ to be a valid acceptance probability does not necessarily result in substantial errors in the posterior distribution. We further elaborate in Appendix E with a numerical example. This leads to an efficient method for approximate sampling from the posterior distribution assuming that $\kappa_E$ is constant and $P(E)$ is at most polynomially small. Furthermore, it remains incredibly space efficient since the posterior distribution does not have to be explicitly stored to use rejection filtering.
3 Numerical Experiments
Our first numerical experiment is a proof of principle involving inferring an unknown, time–dependent frequency of an oscillator. This application is of central importance in quantum mechanics, wherein the precise estimation of such frequencies involves substantial computational and experimental effort using traditional techniques [15, 16]. In this case, measurements have two outcomes and the likelihood function takes the form $P(0 \mid \omega; \omega_-, t) = \cos^2\!\left((\omega - \omega_-)t/2\right)$, where $\omega$ is the unknown frequency at the time of the experiment and $\omega_-$ and $t$ are experimental settings that we optimize over to find a near–optimal experiment given the current knowledge of the system. We take $\omega$ to follow a normal random walk in the update index, with a fixed standard deviation.
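To make the update concrete, the following sketch assumes a two-outcome likelihood of the form P(0 | omega; omega_ref, t) = cos^2((omega - omega_ref) t / 2); the function names and the specific numbers are ours, chosen only for illustration.

```python
import math
import random

def likelihood(outcome, omega, omega_ref, t):
    """Assumed two-outcome model: P(0) = cos^2((omega - omega_ref) t / 2)."""
    p0 = math.cos((omega - omega_ref) * t / 2) ** 2
    return p0 if outcome == 0 else 1.0 - p0

def rf_frequency_update(mu, sigma, outcome, omega_ref, t, n_samples, rng):
    """Refit a Gaussian to the prior samples accepted against the
    likelihood above, tracking only running sums."""
    n, s, s2 = 0, 0.0, 0.0
    for _ in range(n_samples):
        omega = rng.gauss(mu, sigma)
        if rng.random() < likelihood(outcome, omega, omega_ref, t):
            n += 1
            s += omega
            s2 += omega * omega
    mean = s / n
    return mean, math.sqrt(max(s2 / n - mean * mean, 0.0))

rng = random.Random(42)
# True frequency near 0.6; simulate an outcome-0 measurement with
# omega_ref = 0.6 (so outcome 0 is likely) and update a prior N(0.5, 0.2).
mu, sigma = rf_frequency_update(0.5, 0.2, 0, 0.6, 3.0, 50_000, rng)
```

After the update the Gaussian model shifts toward the reference frequency and its width shrinks, exactly the behavior the tracking experiment relies on at each step.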
Figure 2 shows that rejection filtering is capable of accurately tracking this unknown frequency using a small number of samples drawn from the prior distribution. Furthermore, rejection filtering is capable of achieving this using less than a kilobit of memory. In contrast, sequential Monte–Carlo methods using Liu–West resampling typically require many times the memory that rejection filtering requires because of their need to store an entire particle cloud rather than a single sample.
We provide further details of our experimental design heuristics and apply RF to digit classification in an active learning setting in the appendix. We see, in Figure 2, that some low variance features are frequently used to classify digits whereas many high variance features are not. We show in the appendix that RF can also outperform kNN for an appropriate choice of resampler.
4 Conclusion
We introduce a method, rejection filtering, that combines rejection sampling with particle filtering. Rejection filtering retains many of the benefits of each, while using substantially less memory than conventional methods for Bayesian inference in typical use cases. In particular, if a Gaussian resampling algorithm is used then our method only requires remembering a single sample at a time, making it ideal for memory-constrained and active learning applications. We further illustrate the viability of our rejection filtering approach through numerical experiments involving tracking the time-dependent drift of an unknown frequency and also in handwriting recognition.
While our work has shown that rejection sampling can be a viable method for performing Bayesian inference, there are many avenues for future work that remain open. One such avenue involves investigating whether ideas borrowed from the particle filter literature, such as the unscented transformation or genetic mutation-selection algorithms [2, 17], can be adapted to fit our setting. An even more exciting direction is the application of these ideas to online data acquisition in science and engineering. Rejection filtering provides the ability to perform adaptive experiments using embedded hardware, which may lead to a host of applications within object tracking, robotics, and experimental physics that are impractical with existing technology.
Appendix A Supplemental Analysis
A.1 Filtering Distributions for Time-Dependent Models
Bayesian inference has been combined with time-dependent models to perform object tracking and acquisition in many particle filter applications [18, 19]. Here, we show that rejection filtering naturally encompasses these applications by convolving posterior distributions with Gaussian update kernels.
In time-dependent applications the true model is not stationary, but rather changes as observations are made. This poses a challenge for naïve applications of Bayesian inference because drift in the true model can cause it to move outside of the support of the prior distribution. This drift results in the online inference algorithm failing to track an object that moves suddenly and unexpectedly.
To see how this can occur in problems where the true parameters are time-dependent, consider a likelihood function $P(E|x)$ for a Bernoulli experiment and a family of prior distributions $\pi(x)$ with mean $\mu$ and variance $\sigma^2$, such that the overlap between the likelihood and the prior is given by the inner product $\langle P(E|\cdot), \pi \rangle$. If $\sigma$ is small then the small deviations of the true model away from $\mu$, introduced by neglecting its time-dependence, can cause the inner product to become exponentially small. This in turn causes the complexity of resampling to be exponentially large, thereby removing guarantees of efficient learning.
Such failures in rejection filtering are heralded by tracking the total number of accepted particles in each update. This is because the acceptance rate estimates $P(E)/\kappa_E$. Alternatively, we can do better by instead incorporating a prediction step that diffuses the model parameters of each particle. In particular, by convolving the prior with a filter function such as a Gaussian, the width of the resultant distribution can be increased without affecting the prior mean. In a similar way, rejection filtering can be extended to include diffusion by using a resampling kernel that has a broader variance than that of the accepted posterior samples. Doing so allows rejection filtering to track stochastic processes in a similar way to SMC, as described in Section C.
Formally, we model our posterior distribution as
$P'(x|E) = (P(\cdot|E) * \eta)(x),$
where $\eta$ is a distribution with zero mean and variance $\sigma_\eta^2$ and $*$ denotes a convolution over $x$. Convolution is in general an expensive operation, but for cases where rejection filtering uses a Gaussian model for the posterior distribution, the resulting distribution remains Gaussian under the convolution if $\eta$ is also a Gaussian. If the variance of the prior distribution is $\sigma^2$, then it is easy to see from the properties of the Fourier transform that the variance of $P'(x|E)$ is $\sigma^2 + \sigma_\eta^2$ and the mean remains unchanged.
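In code, this prediction step amounts to resampling from a kernel whose variance is the posterior variance plus the diffusion variance. The sketch below is our own illustration; `var_eta` is our name for the diffusion scale.

```python
import random

def resample_with_diffusion(mu, var, var_eta, n, rng):
    """Resample from the Gaussian posterior model convolved with a
    zero-mean Gaussian diffusion kernel: the mean is unchanged and the
    variances add, so we can draw from N(mu, var + var_eta) directly."""
    sigma = (var + var_eta) ** 0.5
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(0)
xs = resample_with_diffusion(1.0, 0.04, 0.05, 200_000, rng)
m = sum(xs) / len(xs)
v = sum((x - m) ** 2 for x in xs) / len(xs)
```

No explicit convolution is ever performed; the Gaussian-times-Gaussian closure is what keeps the prediction step O(1) in memory.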
A.2 Model Selection
The ability of rejection filtering to include time-dependence is especially useful when combined with Bayesian model selection. Since the acceptance frequency observed at each step estimates the total likelihood, we can use rejection filtering to estimate Bayes factors between two different likelihood functions. In particular, the probability that a hypothesis $x$ will be accepted by Algorithm 1 is $P(E|x)/\kappa_E$, so that marginalizing over the prior gives the desired $P(E)/\kappa_E$. Thus, the number of accepted samples $N_a$ at each step is drawn from a binomial distribution with mean $m P(E)/\kappa_E$, where $m$ is the number of attempted samples. Using hedged maximum likelihood estimation, we can then estimate $P(E)$ given $N_a$, even in the cases that $N_a = 0$ or $N_a = m$.
Concretely, consider running rejection filtering in parallel for two distinct models $A$ and $B$, such that all likelihoods are conditioned on the model $M \in \{A, B\}$. The estimated total likelihoods for each rejection filtering run then give an estimate of the Bayes factor $K = P(E|A)/P(E|B)$. If $K > 1$, then model $A$ is to be preferred as an explanation of the evidence seen thus far. In particular, the expectation over model parameters penalizes overfitting, such that a model preferred by $K$ must justify the dimensionality of its parameter space. This is made concrete by noting that $\ln K$ is well-approximated by the Bayesian information criterion when the prior is a multivariate normal distribution [22, 23].
Using rejection filtering to perform model selection, then, consists of accumulating subsequent values of $N_a$ in a log-likelihood register $L$,
$L^{(k+1)} = L^{(k)} + \ln\!\left(\frac{N_a^{(k)} + \beta}{m + 2\beta}\right),$
where superscripts are used to indicate the number of Bayes updates performed, and $\beta > 0$ is a hedging parameter used to prevent divergences that occur when $N_a = 0$. Since this accumulation procedure estimates the total likelihood from a two-outcome event (acceptance/rejection of a sample), the hedging deals with the zero-likelihood case $N_a = 0$. Let $L_A$ and $L_B$ be the hedged log-likelihood registers for models $A$ and $B$, respectively. Then the estimator $\hat K = \exp(L_A - L_B)$ resulting from this hedging procedure is an asymptotically-unbiased estimator for $K$ that has well-defined confidence intervals. The factor of $\kappa_E$ can be factored out in cases where $\kappa_E$ is constant across models and evidence.
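The accumulation can be sketched as follows. The register names, the per-update acceptance counts, and the default β = 1/2 are our assumptions for illustration; the hedged per-update estimate used is (N_a + β)/(m + 2β).

```python
import math

def hedged_log_likelihood(accept_counts, m, beta=0.5):
    """Accumulate log((N_a + beta) / (m + 2*beta)) over updates, so that
    N_a = 0 or N_a = m never produces log(0)."""
    reg = 0.0
    for n_a in accept_counts:
        reg += math.log((n_a + beta) / (m + 2 * beta))
    return reg

# Hypothetical acceptance counts for two models, m = 100 attempts each.
log_a = hedged_log_likelihood([60, 55, 0], m=100)
log_b = hedged_log_likelihood([30, 20, 10], m=100)
bayes_factor = math.exp(log_a - log_b)
```

Note that the update with zero acceptances contributes a large but finite penalty to model A rather than driving its register to negative infinity.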
Model selection of this form has been used, for instance, to decide if a diffusive model is appropriate for predicting future evidence. Given the aggressiveness of the rejection filtering resampling step, streaming model selection will be especially important in assessing whether a diffusive inference model has “lost” the true value.
Appendix B Error Analysis
Since our algorithms are only approximate, an important remaining issue is that of error propagation in the estimates of the posterior mean and covariance matrix. We provide bounds on how these errors can spread and provide asymptotic criteria for stability below. For notational convenience, we take $\langle f, g \rangle := \sum_x f(x)g(x)$ to be the inner product between two distributions and $\|f\| := \sqrt{\langle f, f \rangle}$ to be the induced 2–norm.
Lemma 1. Let $P(x)$ be the prior distribution and $\tilde P(x)$ be an approximation to the prior such that $\|P - \tilde P\| \le \epsilon$, and let $P(x|E)$ and $\tilde P(x|E)$ be the corresponding posterior distributions after event $E$ is observed, for $x \in X$ where $X$ is compact and $\|x\| \le x_{\max}$ for all $x \in X$. If $P(E) > 0$ then the error in the posterior mean satisfies
$\left\| \sum_x x\, P(x|E) - \sum_x x\, \tilde P(x|E) \right\| \le \frac{2 x_{\max} \langle P(E|\cdot), |P - \tilde P| \rangle}{P(E)},$
and similarly the error in the expectation of $xx^T$ is at most $2 x_{\max}^2 \langle P(E|\cdot), |P - \tilde P| \rangle / P(E)$.
Lemma 1 shows that the error in the posterior mean using an approximate prior is small given that the inner product of the likelihood function with the errors is small relative to $P(E)$.
Theorem 2. If the assumptions of Lemma 1 are met and the rejection sampling algorithm uses $m$ samples from the approximate posterior distribution to infer the posterior mean and covariance, then the error in the posterior mean scales as
$O\!\left(\frac{x_{\max} \langle P(E|\cdot), |P - \tilde P| \rangle}{P(E)} + \frac{x_{\max}}{\sqrt{m}}\right),$
and the error in the estimate of $xx^T$ obeys the same bound with $x_{\max}$ replaced by $x_{\max}^2$.
This theorem addresses the question of when we can reasonably expect the update process discussed in Algorithm 1 to be stable. By stable, we mean that small initial errors do not exponentially magnify throughout the update process. Theorem 2 shows that small errors in the prior distribution do not propagate into large errors in the estimates of the mean and posterior covariance matrix given that $m$ is sufficiently large. In particular, Theorem 2 and an application of the Cauchy–Schwarz inequality show that such errors are small if $\|P - \tilde P\|$ is small, $P(E)$ is not too small, and $m$ is sufficiently large.
However, this does not fully address the question of stability because it does not consider the errors that are incurred from the resampling step.
We can assess the effect of these errors by assuming that, in the domain of interest, the updated model after an experiment satisfies a Lipschitz condition
$\|\mathcal{U}(P) - \mathcal{U}(\tilde P)\| \le L \|P - \tilde P\|$
for some constant $L > 0$, where $\mathcal{U}$ maps a prior to the resampled model of the posterior. This implies that the error $\epsilon_k$ in the approximation to the posterior distribution after $k$ updates obeys a recurrence of the form $\epsilon_{k+1} \le L \epsilon_k + \delta_k$, where $\delta_k$ is the sampling error incurred at step $k$. Stability is therefore expected if $L \le 1$, $P(E)$ is not too small, and $m$ is sufficiently large.
Thus we expect stability if (a) low likelihood events are rare, and (b) the Lipschitz constant for the model is small. In practice, both of these potential failures can couple together to lead to rapid growth of errors. It is quite common, for example, for errors in the procedure to lead to unrealistically low estimates of the variance of the distribution, which causes the Lipschitz constant to become large. This in turn can coincide with an unexpected outcome to destabilize the learning algorithm. We overcome this problem by invoking random restarts, as outlined in the next section, though other strategies exist.
Appendix C Numerical Experiments
In this section, we demonstrate rejection filtering both in the context of learning simple functions in a memory-restricted and time-dependent fashion, and in the context of learning more involved models such as handwriting recognition. In both cases, we see that rejection filtering provides significant advantages.
C.1 Multimodal Frequency Estimation
In the main body, we demonstrate the effectiveness of rejection filtering using as an example strongly multimodal and periodic likelihood functions, such as arise in frequency estimation problems drawn from the study of quantum mechanical systems [15, 16]. These likelihood functions serve as useful test cases for Bayesian inference algorithms more generally, as the multimodality of these likelihoods forces a tradeoff between informative experiments and multimodality in the posteriors. Thus, rejection filtering succeeds in these cases only if our method correctly models intermediate distributions so that appropriate experiments can be designed.
Concretely, we consider an inference problem with a single observed variable $E \in \{0, 1\}$, decision variables $\omega_-$ and $t$, and a single hidden variable $\omega$ that we allow to vary with time. Our aim is to infer the current value of $\omega$ given values of $E$ obtained for different values of $(\omega_-, t)$. The likelihood of measuring $E = 0$ for these experiments is
$P(0 \mid \omega_k; \omega_-, t) = \cos^2\!\left((\omega_k - \omega_-)t/2\right),$
where $k$ is the index of the current update. The true model $\omega_k$ is taken to follow a random walk, and the distribution of $\omega_{k+1}$ given $\omega_k$ is normal with mean $\omega_k$.
The goal in such cases is to identify such drifts and perform active feedback to calibrate against the drift.
We design experiments using a heuristic that picks $\omega_-$ to be a random sample drawn from the prior and $t$ to be inversely proportional to the current uncertainty in $\omega$. We use this heuristic because it is known to saturate the Bayesian Cramér–Rao bound for this problem [26, 27] and is much faster than adaptively choosing $(\omega_-, t)$ to minimize the Bayes risk, which we take to be the expected quadratic loss after performing an experiment given the current prior.
The performance of rejection filtering applied to this case is shown in Figure 2. In particular, the median error incurred by rejection filtering approaches the optimal achievable error using only a small number of sampling attempts per update. Thus, our rejection filtering algorithm continues to be useful in the case of time-dependent and other state-space models. Although this demonstration is quite simple, it is important to emphasize that the minuscule memory requirements for this task mean that this tracking problem can be solved using a memory-limited processor in a different device that is physically close to the system in question. Close proximity is necessary in applications, such as control problems or head tracking in virtual reality [7, 28], to make the latency low relative to the dynamical timescale of the object or system that is being tracked.
C.2 Handwriting Recognition
A more common task is handwriting recognition. Our goal in this task is to use Bayesian inference to classify an unknown digit taken from the MNIST repository into one of two classes. We consider two cases: the restricted case of classifying only the digits 0 and 1 (zero vs one) and classifying digits as either even or odd (even vs odd).
We cast the problem in the language of Bayesian inference by assuming the likelihood function
$P(v_j = v \mid y) \propto e^{-(v - y_j)^2/2\sigma^2},$
which gives the probability that pixel $j$ takes the value $v$, given that the training image $y$ is the true model from which the test digit was drawn.
We pick this likelihood function because if we imagine measuring every pixel in the image then the posterior probability distribution, given a uniform initial distribution, will typically be sharply peaked around the training vector that is closest to the observed vector of pixel intensities. Indeed, taking the product over all pixels in an image produces the radial basis function familiar from kernel learning.
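A sketch of a per-pixel Gaussian likelihood of this kind and its product over pixels is given below; the functional form, the bandwidth σ, and the toy images are our assumptions for illustration.

```python
import math

def pixel_likelihood(v, y_j, sigma):
    """Likelihood of observing pixel value v when the training image has
    intensity y_j at that pixel: a Gaussian in the intensity difference."""
    return math.exp(-((v - y_j) ** 2) / (2 * sigma ** 2))

def image_likelihood(test_pixels, train_pixels, sigma):
    """Product over pixels: an RBF kernel between the two images, maximized
    by the training image nearest the test image in intensity space."""
    like = 1.0
    for v, y_j in zip(test_pixels, train_pixels):
        like *= pixel_likelihood(v, y_j, sigma)
    return like

# Tiny hypothetical 3-pixel images.
test = [0.9, 0.1, 0.8]
near = [1.0, 0.0, 0.8]
far = [0.0, 1.0, 0.1]
```

Querying pixels one at a time corresponds to multiplying in one factor of this product per query, which is what makes an active, few-query classification strategy possible.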
Unlike the previous experiment, we do not implement this in a memory–restricted setting because of the size of the MNIST training set. Our goal instead is to view the image classification problem through the lens of active feature learning. We assume that the user has access to the entire training set, but wishes to classify a digit by comparing as few of the test vector's features (pixels) to the training images as possible. Although such a lookup is not typically expensive for MNIST, in more general tasks features may be very expensive to compute, and our experiment shows that our approach enables identification of important features. This reduces the computational requirements of the classification task, and also serves to identify salient features for future classification problems.
Such advantages are especially relevant to search, wherein features, such as term occurrences and phrase matches across terms, can be expensive to compute on the fly. The same issue arises in the experimental sciences, where each data point may take minutes or hours to measure or process. In these cases it is vital to minimize the number of queries made to the training data. We will show that our Bayesian inference approach allows this problem to be solved using far fewer queries than other methods, such as kNN, would require. We also show that our method can be used to extract the relevant important features (i.e. pixels) from the data set. This is examined further in Appendix F.
We perform these experiments using an adaptive guess heuristic, similar to that employed in the frequency estimation example. The heuristic chooses the pixel label, $j$, that has the largest variance of intensity over the training vectors that compose the particle cloud. We then pick $\sigma$ to be the standard deviation of the intensity of that pixel. As learning progresses the diversity in the set of particles considered shrinks, which causes the variance to decrease. Allowing $\sigma$ to shrink proportionally accelerates the inference process by amplifying the effect of small differences in pixel intensity.
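The heuristic can be sketched as follows; this is our own illustration, where `cloud` stands for the current set of candidate training images.

```python
def choose_pixel(cloud):
    """Pick the pixel index with the largest intensity variance over the
    particle cloud, and return that index together with the standard
    deviation of the chosen pixel (used as the likelihood bandwidth)."""
    n = len(cloud)
    d = len(cloud[0])
    best_j, best_var = 0, -1.0
    for j in range(d):
        mean = sum(img[j] for img in cloud) / n
        var = sum((img[j] - mean) ** 2 for img in cloud) / n
        if var > best_var:
            best_j, best_var = j, var
    return best_j, best_var ** 0.5

# Three hypothetical 3-pixel "images": pixel 1 varies the most.
cloud = [[0.0, 1.0, 0.2],
         [0.0, 0.0, 0.8],
         [0.0, 1.0, 0.5]]
j, sigma = choose_pixel(cloud)
```

As the cloud collapses onto a single class, every per-pixel variance shrinks, so the returned bandwidth shrinks with it, which is the adaptive sharpening described above.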
We repeat this process until the sample probability distribution has converged, assigning all but a small residual probability to one of the two classes in the problem. This process is restarted a fixed number of times, subject to the constraint that a fixed total budget of queries is made (divided equally over the restarts). The label assigned to the test vector is then the most frequently appearing label out of these tests. This ensures that our method incurs at most the same cost as naïve kNN, but is pessimistic as we do not allow the algorithm to store the results of prior queries, which could reduce the complexity of the inference.
The resampling step as described in Algorithm 1 is unnatural here because there are only two classes rather than a continuum. Instead, we use a method similar to the bootstrap filter. Specifically, when resampling we draw a number of particles from each class proportional to the sample frequency in the posterior distribution. For each of these classes we then replicate the surviving particles until a fixed fraction of the total population is replenished by copies from the posterior cloud of samples. The remaining particles are drawn uniformly from the training vectors in the corresponding class.
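One version of this resampling step is sketched below; the replenishment fraction `f`, the class names, and the toy data are placeholders of ours.

```python
import random

def resample_classes(survivors, training, n_total, f, rng):
    """Rebuild a particle cloud of size n_total.  Per-class counts are
    proportional to the surviving (accepted) sample frequencies; a
    fraction f of each class is replicated from its survivors and the
    rest is drawn uniformly from that class's training vectors."""
    n_surv = sum(len(v) for v in survivors.values())
    cloud = []
    for c in survivors:
        n_c = round(n_total * len(survivors[c]) / n_surv)
        n_rep = int(f * n_c)
        cloud += [(c, rng.choice(survivors[c])) for _ in range(n_rep)]
        cloud += [(c, rng.choice(training[c])) for _ in range(n_c - n_rep)]
    return cloud

rng = random.Random(3)
survivors = {"even": [[0.1], [0.2], [0.3]], "odd": [[0.9]]}
training = {"even": [[0.0], [0.5]], "odd": [[1.0]]}
cloud = resample_classes(survivors, training, 100, 0.8, rng)
```

Drawing the remainder uniformly from the training set keeps diversity in the cloud, so a class is never irreversibly eliminated by one aggressive update.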
Figure 2 illustrates the differences between maximum variance experiment design and the adaptive method we use in rejection filtering. These differences are perhaps most clearly seen in the problem of classifying one vs zero digits from MNIST. Our adaptive approach most frequently queries the middle pixel, which is the highest variance feature over the training set. This is unsurprising, but the interesting fact is that the second most frequently queried pixel is one that has relatively low variance over the training set. In contrast, many of the high-variance pixels near the maximum variance pixel are never queried despite the fact that they have large variance over the set. This illustrates that they carry redundant information and can be removed from the training set. Thus this adaptive learning algorithm can also be used to provide a type of model compression, similar to feature selection or PCA.
The case for odd vs even is more complicated. This is because the handwritten examples have much less structure in them and so it is perhaps unsurprising that less dramatic compression is possible when examining that class. However, the data still reveals qualitatively that even here there is a disconnect between the variance of the features over the training set and their importance in classification. An interesting open question is how well rejection filtering can work to select features for other classification methods.
We now compare rejection filtering to kNN classification. kNN is a standard approach for digit classification that performs well, but it can be prohibitively slow for large training sets (indexing strategies can be used to combat this problem). In order to make the comparison as fair as possible between the two, we compare kNN to rejection filtering by truncating the MNIST examples by removing pixels that have low variance over the training set. This removes some of the advantage our method has by culling pixels near the boundary of the image that contain very little signal (see Figure 2) and yet substantially contribute to the cost of kNN in an active learning setting.
Feature extraction [33, 34, 35] or the use of deformation models can also be used to improve the performance of kNN. We ignore them here for simplicity because they would also likely improve rejection filtering.
Figure 3 shows that in certain parameter regimes, approximate Bayesian inference via rejection sampling can not only achieve higher classification accuracy on average for a smaller number of queries to the test vector, but can also achieve less error even if these constraints are removed. This result is somewhat surprising given that we chose our likelihood function to correspond to nearest neighbor classification when $\sigma$ is held constant. However, we do not hold $\sigma$ constant in our inference but rather choose it adaptively as the experiment proceeds. This changes the weight of the evidence provided by each pixel query and allows our algorithm to outperform kNN classification despite the apparent similarities between them.
Appendix D Proofs of Theorems
In this Appendix, we present proofs for the theorems presented in the main body.
Proof of Theorem 1.
There are two parts to our claim in the theorem. The first is that the rejection sampling algorithm is efficient given the theorem's assumptions, and the second is that it only requires $O(d^2 \log(d/\epsilon))$ memory to approximate the appropriate low–order moments of the posterior distribution.
For each sample drawn in the algorithm, the most costly operations are:
1. Sampling from $P(x)$.
2. Sampling from the uniform distribution.
3. The calculation of $P(E|x)$.
The first two of these are efficient by the assumptions of the theorem. Although it may be tempting to claim that efficient deterministic algorithms are known for sampling from the uniform distribution, the existence of such algorithms is unknown, since it is not known whether the complexity classes $\mathsf{P}$ and $\mathsf{BPP}$ coincide. The remaining operation can be computed using a polynomial number of arithmetic operations, each of which can be performed (to within bounded accuracy) efficiently on a Turing machine. Therefore the cost of the inner loop is polynomial in $d$ and the bits of precision, which is efficient.
The remaining operations require at most $O(d^2)$ arithmetic operations per sample and thus do not dominate the cost of the algorithm. The main question remaining is how large the number of samples needs to be and how many bits of precision are required for the arithmetic. Both the error in the mean and the elements of the covariance matrix scale as $O(1/\sqrt{N_a})$, where $N_a$ is the number of accepted samples that pass through the rejection filter. Thus if both are to be computed within error $\epsilon$ then $N_a \in O(1/\epsilon^2)$. However, in order to get a sample accepted, the Markov inequality implies that the number of attempted samples must scale like $O(N_a \kappa_E / P(E))$. We then see from Corollary 1 that the acceptance probability is proportional to $P(E)/\kappa_E$, which we assume is at most polynomially small. Ergo the sampling process is efficient given these assumptions and the fact that $\epsilon$ is taken to be a constant for the purposes of defining efficiency.
The dominant requirements for memory arise from the need to store the running estimates of the mean, the covariance matrix, and the incremental sums used to compute them. There are at most $O(d^2)$ elements in those matrices, and so if each is to be stored within error $\epsilon$ then at least $O(d^2 \log(1/\epsilon))$ bits are required. Note that the incremental formulas used in the algorithm are not very numerically stable and need $2n$-bit registers to provide an $n$-bit answer. This necessitates doubling the bits of precision, but does not change the asymptotic scaling of the algorithm. Similarly, the restarts of the algorithm do not change the asymptotic scaling of the memory, because each restart reuses the same registers.
What does change the scaling is the truncation error incurred in the matrix arithmetic. The computation of a row or column of the covariance matrix, for example, involves $d$ multiplications and additions. Thus if each such calculation were computed to within error $\delta$, the total error would be at most $d\delta$ by the triangle inequality. Therefore, in order to ensure a total error of $\epsilon$ in each component of the matrix we need to perform the arithmetic using $O(\log(d/\epsilon))$ bits of precision.
The incremental statistics, for both quantities, involve summing over all particles used in the rejection filtering algorithm. If fixed-point arithmetic is used to represent the numbers, then we run the risk of overflowing the register unless its length grows as $O(\log N)$. The result then follows. ∎
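The incremental statistics referred to above can be maintained with Welford-style updates (cf. the Welford reference in the bibliography). The sketch below shows the scalar case for brevity; the class name and variable names are ours, and the multivariate version replaces the squared deviation with an outer product.

```python
class RunningStats:
    """Welford-style incremental mean and variance.

    Only the count, the running mean, and the running sum of squared
    deviations are stored, so memory use is constant in the number of
    accepted samples pushed through the filter.
    """
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations from the mean

    def push(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.push(x)
# mean is 5.0 and the sample variance is 32/7
```

This two-pass-free update is what allows the moments of the posterior to be accumulated while storing only a constant number of registers, at the cost of the numerical-stability caveat noted in the proof.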
Proof of Lemma 1.
Using the definition of the approximate posterior and Bayes’ rule, it is easy to see that the error in the posterior mean is
Using the fact that the likelihood obeys $P(E|x) \le 1$ for all $x$, it follows from the assumptions of the theorem and the triangle inequality that (13) is bounded above by
Now, using the definition of the inner product, we find that (14) is bounded above by
The result then follows from a final application of the triangle inequality.
The analogous proof for the error in the posterior expectation of the second moment follows using the exact same argument after replacing the Euclidean norm with the induced 2–norm for matrices. Since both norms satisfy the triangle inequality, the proof follows using exactly the same steps after observing that the analogous bound holds for all $x$. ∎
Proof of Theorem 2.
Lemma 1 provides an upper bound on the error in the mean of the posterior distribution that propagates from errors in the components of our prior distribution. If we draw $N$ samples from this distribution, then the sample standard deviation of each of the components of the mean estimate is $O(1/\sqrt{N})$. Thus from the triangle inequality the sampling error in the estimate of the mean is at most
The triangle inequality shows that the sampling error and the error that propagates from having an incorrect prior are at most additive. Consequently, the total error in the mean is at most the sum of this sampling error and the error of Lemma 1. Thus the error in the mean is
The calculation for the error in the estimate of the covariance matrix is similar. First, note that the higher-order error terms can be asymptotically ignored. We then have from our error bounds on the estimate of the posterior mean that
where we substitute (17) for the error in the posterior mean.
Now let us focus on bounding the error in our calculation of the second moment. Using the triangle inequality, the error in the estimate of this expectation value is at most
The first term in (19) can be bounded by bounding the sample error in each of the components of the matrix. For any component, the Monte–Carlo error in its estimate is
The 2–norm of a $d \times d$ matrix is at most $d$ times its max–norm, which means that
Note that in the previous theorem we do not assume that the components of the error are iid. If such assumptions are made, then tighter bounds can be proven.
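The $O(1/\sqrt{N})$ sampling-error scaling invoked throughout these proofs is easy to verify numerically. The sketch below is generic Monte Carlo estimation (uniform variates, nothing specific to rejection filtering); the function name and parameters are ours.

```python
import random

def mean_error(n, trials=200, seed=7):
    """Average |sample mean - true mean| over many trials of n
    uniform(0, 1) draws; the true mean is 0.5."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += abs(sum(rng.random() for _ in range(n)) / n - 0.5)
    return total / trials

e_small = mean_error(100)
e_big = mean_error(10000)
# Increasing the sample count 100-fold should shrink the error roughly 10-fold.
```

The observed ratio of roughly $\sqrt{100} = 10$ between the two errors is exactly the square-root suppression used to conclude that $N \in O(1/\epsilon^2)$ samples suffice for error $\epsilon$.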
Appendix E Sensitivity to $\kappa_E$
Theorem 1 in the main body guarantees a bound on the error in a particular Bayes update due to $\kappa_E$ being chosen smaller than $\max_x P(E|x)$ for evidence $E$. However, this only holds for a single update, so it is not immediately clear how such errors propagate over an entire set of experiments. This is especially important for heuristic online experiment design, such as via the particle guess heuristic, or for active learning. To address this, here we show the effects of choosing an inappropriate $\kappa_E$ for the frequency estimation model considered in the main body.
In particular, we use the particle guess heuristic to design 100 measurements, then update using 100 rejection filter attempts per measurement together with a fixed recovery factor. We then report in Figure 4 the median loss of this procedure as a function of $\kappa_E$. Importantly, since this experimental guess heuristic tends to result in likelihoods with peaks of order unity, any $\kappa_E$ substantially less than one will generate bad samples with high probability over the particles.
We see that taking $\kappa_E < 1$ does not prevent the algorithm from learning the true model rapidly, although it certainly does degrade the quadratic loss after 100 measurements. On the other hand, as $\kappa_E$ approaches zero the evidence of learning disappears. This is to be expected since if $\kappa_E \le P(E|x)$ for all $x$ then all samples are accepted and the approximate posterior will always equal the prior. We similarly notice evidence of modest learning for small $\kappa_E$, and note that a plateau appears in the loss ratio over a range of $\kappa_E$ where the loss ratio does not grow monotonically with $\kappa_E$. This illustrates that learning can still take place if a value of $\kappa_E$ is used that is substantially less than the ideal value. The ability to tolerate small values of $\kappa_E$ is important in cases where $\max_x P(E|x)$ is not known a priori and empirical values, such as those obtained via Monte–Carlo sampling, must be used to make the rejection probability polynomially small.
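The two regimes discussed above can be reproduced in a toy model. The sketch below (our own illustration, not the frequency estimation experiment of Figure 4) uses a uniform prior and a triangular likelihood peaked at $0.8$: with $\kappa = 1$ the update shifts the mean toward the peak, while with $\kappa$ below the minimum likelihood every sample is accepted and the "posterior" simply reproduces the prior.

```python
import random

def accept_fraction_and_mean(kappa, n=5000, seed=1):
    """Rejection-update a uniform prior on [0, 1] against a likelihood
    peaked at x = 0.8, accepting x when rng() * kappa < lik(x)."""
    rng = random.Random(seed)
    lik = lambda x: 1.0 - abs(x - 0.8)   # peak value 1.0; minimum 0.2 at x = 0
    kept = [x for x in (rng.random() for _ in range(n))
            if rng.random() * kappa < lik(x)]
    return len(kept) / n, sum(kept) / len(kept)

frac_good, mean_good = accept_fraction_and_mean(kappa=1.0)
frac_tiny, mean_tiny = accept_fraction_and_mean(kappa=1e-6)
# With kappa far below min(lik), every sample is accepted and the
# "posterior" mean stays at the prior mean of 0.5; with kappa = 1 the
# mean moves toward the likelihood peak.
```

This mirrors the limiting behavior in the text: too small a $\kappa_E$ turns the update into the identity map on the prior, while an appropriately scaled $\kappa_E$ lets the likelihood reshape the distribution.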
Appendix F Feature Selection for Rejection Sampling
In the main body we alluded to the fact that the frequency with which a feature is selected in rejection filtering can be used as an estimate of the importance of that feature to the classification. We saw evidence that this application would be reasonable by noting, in MNIST classification problems, that some pixels were never, or only very infrequently, used, and so should be able to be culled from the feature vectors without substantially affecting the classification accuracy.
Here we investigate this further for the problem of classifying zero versus one digits within MNIST. We consider the case where several restarts, each consisting of a bounded number of samples, are performed. Our protocol begins by computing the frequency with which pixels are queried in the full case where all features in the MNIST images are used, and then computing a histogram of how many samples were drawn for each pixel. Then we divide the pixels into sets based on the relative frequency with which they were used. For example, we find all pixels whose query frequency falls below a given percentile and remove those pixels from the entire MNIST set. We then perform the classification using only the remaining pixels and compare to the original classification accuracy. The number of features as a function of the percentile considered for zero versus one is given below.
| Percentile | Number of Features |
This strongly suggests that truncating a substantial fraction of the data will not impact the classification whatsoever. This does not, however, mean that the pixels that only become significant at higher percentiles will substantially affect classification accuracy, since an acceptable classification might be reached using other, more significant, features in the set.
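The percentile-based culling step described above can be sketched as follows. The function name is ours and the query counts are synthetic stand-ins for the histogram of pixel queries accumulated during rejection filtering; the logic is just "drop every feature whose count falls below the chosen percentile."

```python
def cull_features(vectors, query_counts, percentile):
    """Drop every feature whose query count falls below the given
    percentile of the counts; return the reduced vectors and the
    indices of the features that were kept."""
    sorted_counts = sorted(query_counts)
    cutoff = sorted_counts[int(len(sorted_counts) * percentile / 100)]
    keep = [i for i, c in enumerate(query_counts) if c >= cutoff]
    reduced = [[v[i] for i in keep] for v in vectors]
    return reduced, keep

# Synthetic example: 6 features, two of which were never queried.
counts = [0, 14, 3, 0, 9, 22]
data = [[1, 2, 3, 4, 5, 6],
        [6, 5, 4, 3, 2, 1]]
reduced, keep = cull_features(data, counts, percentile=40)
```

In this toy example the two never-queried features (and nothing else) fall below the 40th percentile and are removed from every vector, which is precisely the operation applied to the MNIST pixel sets above.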
We examine the feature extraction ability of rejection filtering by looking at the mean classification accuracy over different random shufflings of the vectors in the two classes of MNIST digits, using a fixed train–test split. The data is shown in Figure 5. We see that if we use the frequency data provided by rejection sampling to remove all but the most significant portion of the training data, then the classification accuracy is not significantly reduced. Beyond this point, the impact of ignoring features begins to grow roughly linearly with the fraction of features ignored. This continues until the training examples are trimmed down to a small fraction of their original length (39 features), at which point the classification accuracy becomes strongly impacted by removing features from consideration. Nonetheless, for all the points considered the classification accuracy remains high.
This shows that we can use this approach to choose salient features from an ensemble: by following this criterion we can cull unimportant features from a large set based on the frequencies with which the data was needed while training rejection filtering experiments. While this process shows advantages in this simple case, more involved experiments are needed to compare its performance to existing feature extraction methods such as PCA, and to determine whether adaptive schemes, which iteratively find the most important features as ever larger portions of the original vectors are culled, will be needed to achieve good performance on complex data sets.
Appendix G Batched Updating
Although we focused in the main body on memory-restricted applications, it is also possible to exploit the fact that the rejection sampling procedure is inherently parallelizable. This comes at the price of increasing the overall memory usage. In Algorithm 2, we describe a batched form of our algorithm, assuming a model in which samples are sent by a server to individual processing nodes and the accepted samples are then returned to the server.
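The server/node model above can be sketched as follows. For portability the batches are processed sequentially here, but each `process_batch` call is independent (its own seed, no shared state), so the calls could equally be dispatched to separate processing nodes; the prior and likelihood are the same toy placeholders used earlier, not the algorithm of the main body.

```python
import random

def process_batch(seed, batch_size, kappa):
    """Work done by one node: draw batch_size prior samples and return
    only the accepted ones (uniform prior, likelihood peaked at 0.5)."""
    rng = random.Random(seed)
    lik = lambda x: 1.0 - abs(x - 0.5)
    return [x for x in (rng.random() for _ in range(batch_size))
            if rng.random() * kappa < lik(x)]

# The server farms out independent batches and merges only the
# accepted samples returned by each node.
accepted = []
for seed in range(8):
    accepted.extend(process_batch(seed, batch_size=500, kappa=1.0))
est_mean = sum(accepted) / len(accepted)
```

Because each node returns only its accepted samples (or, in a more memory-frugal variant, only its incremental statistics), the server's communication and storage costs scale with the number of accepted samples rather than with the total number of attempts.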
-  Arnaud Doucet, Simon Godsill, and Christophe Andrieu. On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and computing, 10(3):197–208, 2000.
-  Pierre Del Moral, Arnaud Doucet, and Ajay Jasra. An adaptive sequential Monte Carlo method for approximate Bayesian computation. Statistics and Computing, 22(5):1009–1020, 2012.
-  Rudolph Van Der Merwe, Arnaud Doucet, Nando De Freitas, and Eric Wan. The unscented particle filter. In NIPS, pages 584–590, 2000.
-  Jane Liu and Mike West. Combined parameter and state estimation in simulation-based filtering. In Sequential Monte Carlo methods in practice, pages 197–223. Springer, 2001.
-  Hubert Halloin, Pierre Prat, and Julien Brossard. Long term characterization of voltage references. arXiv:1312.5101 [astro-ph, physics:physics], December 2013. arXiv: 1312.5101.
-  M. D. Shulman, S. P. Harvey, J. M. Nichol, S. D. Bartlett, A. C. Doherty, V. Umansky, and A. Yacoby. Suppressing qubit dephasing using real-time Hamiltonian estimation. Nature Communications, 5, October 2014.
-  Steve LaValle. Sensor fusion: Keeping it simple, May 2013.
-  J. M. Hornibrook, J. I. Colless, I. D. Conway Lamb, S. J. Pauka, H. Lu, A. C. Gossard, J. D. Watson, G. C. Gardner, S. Fallahi, M. J. Manfra, and D. J. Reilly. Cryogenic control architecture for large-scale quantum computing. Physical Review Applied, 3(2):024010, February 2015.
-  Steven Casagrande. On design and testing of a spectrometer based on an FPGA development board for use with optimal control theory and high-Q resonators. February 2014.
-  Sayanan Sivaraman and Mohan Manubhai Trivedi. A general active-learning framework for on-road vehicle recognition and tracking. Intelligent Transportation Systems, IEEE Transactions on, 11(2):267–276, 2010.
-  Ashish Kapoor, Kristen Grauman, Raquel Urtasun, and Trevor Darrell. Active learning with gaussian processes for object categorization. In Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on, pages 1–8. IEEE, 2007.
-  Thomas P Minka. Expectation propagation for approximate Bayesian inference. In Proceedings of the Seventeenth conference on Uncertainty in artificial intelligence, pages 362–369. Morgan Kaufmann Publishers Inc., 2001.
-  B. P. Welford. Note on a method for calculating corrected sums of squares and products. Technometrics, 4(3):419–420, 1962.
-  Nathan Wiebe, Ashish Kapoor, Christopher Granade, and Krysta M Svore. Quantum inspired training for Boltzmann machines. arXiv preprint arXiv:1507.02642, 2015.
-  Nathan Wiebe and Christopher E. Granade. Efficient Bayesian phase estimation. arXiv:1508.00869 [quant-ph], August 2015. arXiv: 1508.00869.
-  Christopher Ferrie, Christopher E. Granade, and D. G. Cory. How to best sample a periodic probability distribution, or on the accuracy of Hamiltonian finding strategies. Quantum Information Processing, 12(1):611–623, January 2013.
-  Pierre Del Moral and Laurent Miclo. Branching and interacting particle systems approximations of Feynman-Kac formulae with applications to non-linear filtering. Springer, 2000.
-  Michael Isard and Andrew Blake. CONDENSATION—Conditional Density Propagation for visual tracking. International Journal of Computer Vision, 29(1):5–28, August 1998.
-  F. Gustafsson, F. Gunnarsson, Niclas Bergman, U. Forssell, J. Jansson, R. Karlsson, and P.-J. Nordlund. Particle filters for positioning, navigation, and tracking. IEEE Transactions on Signal Processing, 50(2):425–437, February 2002.
-  Christopher Ferrie and Robin Blume-Kohout. Estimating the bias of a noisy coin. In AIP Conference Proceedings, volume 1443, pages 14–21. AIP Publishing, May 2012.
-  Hirotugu Akaike. Likelihood of a model and information criteria. Journal of Econometrics, 16(1):3–14, May 1981.
-  Hirotugu Akaike. Likelihood and the Bayes procedure. Trabajos de Estadistica Y de Investigacion Operativa, 31(1):143–166, February 1980.
-  Adrian E Raftery. Bayes factors and BIC. Sociological Methods & Research, 27(3):411–417, 1999.
-  Hokwon Cho. Approximate confidence limits for the ratio of two binomial variates with unequal sample sizes. Communications for Statistical Applications and Methods, 20(5):347–356, September 2013.
-  Christopher E. Granade. Characterization, verification and control for large quantum systems, 2015.
-  J. Dauwels. Computing Bayesian Cramer-Rao bounds. In International Symposium on Information Theory, 2005. ISIT 2005. Proceedings, pages 425–429. IEEE, September 2005.
-  Nathan Wiebe, Christopher Granade, Christopher Ferrie, and D. G. Cory. Hamiltonian learning and certification using quantum resources. Physical Review Letters, 112(19):190501, May 2014.
-  Richard Yao, Tom Heath, Aaron Davies, Tom Forsyth, Nate Mitchell, and Perry Hoberman. Oculus vr best practices guide. Oculus VR, 2014.
-  Yann LeCun, Corinna Cortes, and Christopher JC Burges. The MNIST database of handwritten digits, 1998.
-  Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. The MIT Press, Cambridge, Mass, 1st edition, December 2001.
-  Cui Yu, Rui Zhang, Yaochun Huang, and Hui Xiong. High-dimensional kNN joins with incremental updates. Geoinformatica, 14(1):55–82, 2010.
-  Cui Yu, Beng Chin Ooi, Kian-Lee Tan, and HV Jagadish. Indexing the distance: An efficient method to kNN processing. In VLDB, volume 1, pages 421–430, 2001.
-  Hao Zhang, Alexander C Berg, Michael Maire, and Jitendra Malik. SVM-KNN: Discriminative nearest neighbor classification for visual category recognition. In Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, volume 2, pages 2126–2136. IEEE, 2006.
-  Kilian Q Weinberger and Lawrence K Saul. Fast solvers and efficient implementations for distance metric learning. In Proceedings of the 25th international conference on Machine learning, pages 1160–1167. ACM, 2008.
-  Renqiang Min, David Stanley, Zineng Yuan, Anthony Bonner, Zhaolei Zhang, et al. A deep non-linear feature mapping for large-margin kNN classification. In Data Mining, 2009. ICDM’09. Ninth IEEE International Conference on, pages 357–366. IEEE, 2009.
-  Daniel Keysers, Thomas Deselaers, Christian Gollan, and Hermann Ney. Deformation models for image recognition. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 29(8):1422–1435, 2007.