Interpretable Model Summaries Using the Wasserstein Distance

12/18/2020
by Eric Dunipace, et al.

In the current computing age, models can have hundreds or even thousands of parameters; however, such large models make it difficult to interpret and communicate individual parameters. Reducing the dimensionality of the parameter space during estimation is a common technique, but less work has focused on selecting subsets of parameters to interpret, especially in settings such as Bayesian inference or bootstrapped frequentist inference that consider a distribution of estimates. Moreover, many models are not themselves easily interpretable, adding another layer of obfuscation. To address this gap, we introduce a new method that uses the Wasserstein distance to generate a low-dimensional interpretable model. After the main model is estimated, users specify a budget for how many parameters they wish to interpret, and our method estimates an interpretable model of that dimension which minimizes the distance to the full model. We provide simulation results demonstrating the effectiveness of the proposed method and apply it to cancer data.
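For intuition only, the following is a minimal, hypothetical sketch (not the authors' algorithm) of the general idea: given predictions from a full linear model, exhaustively search over parameter subsets of a fixed budget k and keep the sparse summary whose predictions are closest to the full model's in one-dimensional Wasserstein distance, computed here with scipy.stats.wasserstein_distance. All data and model choices below are made up for illustration.

```python
import numpy as np
from itertools import combinations
from scipy.stats import wasserstein_distance

# Toy illustration (assumed setup, not the paper's method): compare the
# distribution of full-model predictions with that of a k-parameter summary.
rng = np.random.default_rng(0)
n, p, k = 200, 10, 2                      # observations, covariates, interpretation budget

X = rng.normal(size=(n, p))
beta_full = rng.normal(size=p)            # stands in for a fitted full model
y_full = X @ beta_full                    # full-model predictions

best_dist, best_subset = np.inf, None
for subset in combinations(range(p), k):  # exhaustive search over size-k subsets
    Xs = X[:, subset]
    beta_s, *_ = np.linalg.lstsq(Xs, y_full, rcond=None)  # project onto the subset
    dist = wasserstein_distance(y_full, Xs @ beta_s)       # 1-D Wasserstein between prediction distributions
    if dist < best_dist:
        best_dist, best_subset = dist, subset

print(f"selected parameters {best_subset} with Wasserstein distance {best_dist:.4f}")
```

The paper's setting additionally considers a distribution of estimates (e.g., posterior or bootstrap draws) rather than a single point fit, so this toy search over point predictions conveys only the flavor of the distance-minimizing objective.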
