Techniques for Interpretable Machine Learning

07/31/2018
by Mengnan Du, et al.

Interpretable machine learning addresses the problem that humans cannot understand the behavior of complex machine learning models or how these models arrive at a particular decision. Although many approaches have been proposed, a comprehensive understanding of the achievements and remaining challenges is still lacking. This paper surveys existing techniques for increasing the interpretability of machine learning models, and discusses crucial issues for future work, such as interpretation design principles and evaluation metrics, in order to push the area of interpretable machine learning forward.
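To make the survey's subject concrete: one widely used family of techniques it covers is model-agnostic, post-hoc interpretation, which probes a trained model from the outside rather than inspecting its internals. A minimal sketch of one such method, permutation feature importance, is shown below; the toy model, data, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Model-agnostic importance: how much does the metric drop when
    one feature's column is shuffled, breaking its link to the target?"""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy feature j's information only
            drops.append(baseline - metric(y, model(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy "black box": the target depends on feature 0 but not feature 1.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=500)
model = lambda X: 3.0 * X[:, 0]  # stand-in for any trained predictor
r2 = lambda y, p: 1 - np.sum((y - p) ** 2) / np.sum((y - y.mean()) ** 2)

imp = permutation_importance(model, X, y, r2)
```

Because the procedure only needs a prediction function and a metric, it applies equally to linear models, ensembles, or deep networks; here the importance of the informative feature comes out large while the unused feature's importance is near zero.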

