Sampling Permutations for Shapley Value Estimation

by Rory Mitchell, et al.

Game-theoretic attribution techniques based on Shapley values are used extensively to interpret black-box machine learning models, but their exact calculation is generally NP-hard, requiring approximation methods for non-trivial models. As the computation of Shapley values can be expressed as a summation over a set of permutations, a common approach is to sample a subset of these permutations for approximation. Unfortunately, standard Monte Carlo sampling methods can exhibit slow convergence, and more sophisticated quasi-Monte Carlo methods are not well defined on the space of permutations. To address this, we investigate new approaches based on two classes of approximation methods and compare them empirically. First, we demonstrate quadrature techniques in a reproducing kernel Hilbert space (RKHS) containing functions of permutations, using the Mallows kernel to obtain an explicit convergence rate of O(1/n), improving on the O(1/√n) rate of plain Monte Carlo. The RKHS perspective also leads to quasi-Monte Carlo type error bounds, with a tractable discrepancy measure defined on permutations. Second, we exploit connections between the hypersphere 𝕊^{d−2} and permutations to create practical algorithms for generating permutation samples with good properties. Experiments show that the above techniques provide significant improvements over existing methods for Shapley value estimates, converging to a smaller RMSE in the same number of model evaluations.
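The permutation-sampling baseline that the paper improves upon can be sketched as follows: sample uniform random permutations of the players, and average each player's marginal contribution as it is added to the growing coalition. This is a minimal illustrative sketch (the function names and the toy additive game are our own, not from the paper); the paper's contribution is choosing *better-than-uniform* permutation samples via RKHS quadrature and spherical constructions.

```python
import random

def shapley_monte_carlo(value_fn, n_players, n_permutations, seed=0):
    """Plain Monte Carlo Shapley estimate: average marginal contributions
    over uniformly sampled permutations. Converges at rate O(1/sqrt(n))."""
    rng = random.Random(seed)
    players = list(range(n_players))
    estimates = [0.0] * n_players
    for _ in range(n_permutations):
        rng.shuffle(players)          # one uniform random permutation
        coalition = set()
        prev = value_fn(coalition)    # v(empty set)
        for p in players:
            coalition.add(p)
            curr = value_fn(coalition)
            estimates[p] += curr - prev  # marginal contribution of p
            prev = curr
    return [e / n_permutations for e in estimates]

# Toy additive game v(S) = sum of per-player weights; its exact Shapley
# values are the weights themselves, so the estimate is exact here.
weights = [1.0, 2.0, 3.0]
phi = shapley_monte_carlo(lambda S: sum(weights[i] for i in S), 3, 200)
print(phi)  # → [1.0, 2.0, 3.0]
```

For a non-additive game (e.g. a trained model's prediction as the value function), each permutation costs one model evaluation per player, which is why sampling schemes with faster convergence per permutation matter.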


A Weighted Discrepancy Bound of Quasi-Monte Carlo Importance Sampling

Importance sampling Monte-Carlo methods are widely used for the approxim...

Monte Carlo Techniques for Approximating the Myerson Value – Theoretical and Empirical Analysis

Myerson first introduced graph-restricted games in order to model the in...

Discrepancy-based Inference for Intractable Generative Models using Quasi-Monte Carlo

Intractable generative models are models for which the likelihood is una...

Doubly Robust Stein-Kernelized Monte Carlo Estimator: Simultaneous Bias-Variance Reduction and Supercanonical Convergence

Standard Monte Carlo computation is widely known to exhibit a canonical ...

Quasi-Monte Carlo Feature Maps for Shift-Invariant Kernels

We consider the problem of improving the efficiency of randomized Fourie...

An evaluation of estimation techniques for probabilistic reachability

We evaluate numerically-precise Monte Carlo (MC), Quasi-Monte Carlo (QMC...

On the error rate of importance sampling with randomized quasi-Monte Carlo

Importance sampling (IS) is valuable in reducing the variance of Monte C...