A Framework for Adaptive MCMC Targeting Multimodal Distributions
We propose a new Monte Carlo method for sampling from multimodal distributions. The technique is based on splitting the task into two parts: finding the modes of the target distribution π, and sampling given knowledge of their locations. The sampling algorithm relies on steps of two types: local moves, which stay within the current mode, and jumps to regions associated with other modes. Moreover, the method learns its optimal parameters as it runs, without requiring user intervention. Our technique should be viewed as a flexible framework in which the design of moves can follow various strategies known from the broad MCMC literature. To control the jumps, we introduce an auxiliary variable representing each mode and define a new target distribution π̃ on the augmented state space X × I, where X is the original state space of π and I is the set of modes. As the algorithm runs and updates its parameters, the target distribution π̃ is also modified. This motivates a new class of algorithms, Auxiliary Variable Adaptive MCMC. We prove general ergodicity results for the whole class before specialising to the case of our algorithm. Our results are analogous to ergodicity theorems for Adaptive MCMC. The performance of the algorithm is illustrated with several multimodal examples.
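To make the two-move structure concrete, here is a minimal, self-contained sketch of a sampler on the augmented space X × I. It is not the authors' implementation: it assumes a toy two-component Gaussian mixture target with known mode locations, Gaussian-kernel mode-allocation weights Q_i(x), a simple translation map for the jump proposal, and fixed (non-adaptive) step sizes, so the adaptive part of the framework is omitted. All names (mu, log_pi_tilde, p_jump) are illustrative choices, not taken from the paper.

# Minimal sketch of a mode-jumping sampler on the augmented space X x I.
# Assumptions (not from the paper): pi is a 2-component Gaussian mixture in 2D,
# the mode locations mu[i] are known, the allocation weights Q_i(x) are
# normalised Gaussian kernels centred at the modes, and jumps use the
# translation x' = mu[j] + (x - mu[i]).  Step sizes are fixed, not adapted.

import numpy as np

rng = np.random.default_rng(0)

# Known mode locations (index set I = {0, 1}).
mu = np.array([[-4.0, 0.0], [4.0, 0.0]])

def log_pi(x):
    # Equal-weight mixture of two unit-covariance Gaussians (unnormalised).
    d2 = np.sum((x - mu) ** 2, axis=1)          # squared distance to each mode
    return np.logaddexp(-0.5 * d2[0], -0.5 * d2[1])

def log_weight(x, i):
    # log Q_i(x): soft allocation of x to mode i, summing to 1 over i.
    d2 = np.sum((x - mu) ** 2, axis=1)
    return -0.5 * d2[i] - np.logaddexp(-0.5 * d2[0], -0.5 * d2[1])

def log_pi_tilde(x, i):
    # Augmented target pi_tilde(x, i) = pi(x) * Q_i(x); its X-marginal is pi.
    return log_pi(x) + log_weight(x, i)

def step(x, i, local_scale=0.5, p_jump=0.2):
    if rng.random() < p_jump:
        # Jump move: propose the other mode and translate x by mu[j] - mu[i]
        # (unit Jacobian, so the acceptance ratio is just the density ratio).
        j = 1 - i
        x_prop = x + mu[j] - mu[i]
        log_alpha = log_pi_tilde(x_prop, j) - log_pi_tilde(x, i)
        if np.log(rng.random()) < log_alpha:
            return x_prop, j
    else:
        # Local move: random-walk Metropolis in x with the mode index i fixed.
        x_prop = x + local_scale * rng.standard_normal(2)
        log_alpha = log_pi_tilde(x_prop, i) - log_pi_tilde(x, i)
        if np.log(rng.random()) < log_alpha:
            return x_prop, i
    return x, i

x, i = mu[0].copy(), 0
samples = []
for _ in range(20000):
    x, i = step(x, i)
    samples.append(x)
samples = np.asarray(samples)
print("sample mean:", samples.mean(axis=0))   # near (0, 0) once both modes are visited

Both move types are standard Metropolis-Hastings updates that leave π̃ invariant, so their mixture does as well; in the full framework the local and jump proposal parameters would additionally be tuned on the fly, which is what the adaptive analysis in the paper addresses.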