A Weighted Solution to SVM Actionability and Interpretability
Research in machine learning has successfully developed algorithms that build accurate classification models. In many real-world applications, however, such as healthcare, customer satisfaction, and environmental protection, we also want to use these models to decide what actions to take. We investigate the concept of actionability in the context of Support Vector Machines (SVMs). Actionability is as important as the interpretability or explainability of machine learning models, both ongoing and important research topics: it is the task of finding ways to act upon a model and its predictions. This paper presents a solution to the actionability problem for both linear and non-linear SVM models. Additionally, we introduce a way to account for weighted actions, which allow more change in some features than in others. We propose a gradient descent solution for the linear, RBF, and polynomial kernels, and we test the effectiveness of our models on both synthetic and real datasets. We also explore the models' interpretability through the lens of actionability.
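As a rough illustration of the weighted-actionability idea described in the abstract (not the authors' actual method), the sketch below takes an instance classified negatively by a linear SVM and applies gradient steps that push the decision value across the boundary, while per-feature weights make some features costlier to change than others. The dataset, the `action_weights` values, the step size, and the stopping rule are all illustrative assumptions.

```python
# Minimal sketch, assuming a linear SVM: the decision-function gradient is the
# weight vector, and dividing the step by per-feature action weights makes
# "expensive" features move less.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
clf = SVC(kernel="linear").fit(X, y)

w = clf.coef_.ravel()                              # gradient of the linear decision function
action_weights = np.array([1.0, 5.0, 1.0, 10.0])   # higher = harder to change (assumed)

x_orig = X[y == 0][0]                              # an instance currently in the negative class
x = x_orig.copy()
lr = 0.05
for _ in range(500):
    if clf.decision_function(x.reshape(1, -1))[0] > 0:
        break                                      # boundary crossed: a feasible action was found
    # Move toward the boundary, scaling each feature's step by its action cost.
    x += lr * w / action_weights

print("suggested feature changes:", x - x_orig)
```

For non-linear kernels such as RBF or polynomial, the same loop would instead follow the gradient of the kernelized decision function, which is where a more careful gradient descent formulation becomes necessary.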