How to improve the interpretability of kernel learning
In recent years, machine learning researchers have focused on methods for constructing prediction models that are both flexible and interpretable. Three questions are central to this effort: how to evaluate interpretability quantitatively, how interpretability relates to generalization performance, and how interpretability can be improved. In this paper, a quantitative index of interpretability is proposed and its rationale is justified, and the relationship between interpretability and generalization performance is analyzed. For the traditional supervised kernel learning problem, a universal learning framework is put forward to balance the two objectives. The solution of this framework is shown to be unique, and the condition under which uniqueness holds is derived. Finally, a probabilistic upper bound on the sum of the two performance measures is established.
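The abstract does not define the paper's quantitative interpretability index, so the following is only an illustrative sketch under an assumed reading: interpretability is proxied by sparsity of a kernel machine's dual coefficients, and the interpretability/generalization trade-off is expressed as an l1-penalized kernel ridge objective, min_a 0.5·||Ka − y||² + 0.5·λ₂·aᵀKa + λ₁·||a||₁, solved by proximal gradient descent (ISTA). All names and penalty choices here are assumptions, not the paper's actual framework.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def sparse_kernel_fit(K, y, lam1=0.1, lam2=0.1, n_iter=2000):
    # ISTA on: 0.5*||K a - y||^2 + 0.5*lam2 * a^T K a + lam1 * ||a||_1
    n = K.shape[0]
    a = np.zeros(n)
    # Step size from the Lipschitz constant of the smooth part's gradient.
    L = np.linalg.norm(K @ K + lam2 * K, 2)
    step = 1.0 / L
    for _ in range(n_iter):
        grad = K @ (K @ a - y) + lam2 * (K @ a)
        a = a - step * grad
        # Soft-thresholding (prox of the l1 term) drives coefficients to zero,
        # which is the assumed "interpretability" mechanism in this sketch.
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam1, 0.0)
    return a

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)

K = rbf_kernel(X, X, gamma=0.5)
a = sparse_kernel_fit(K, y, lam1=0.05, lam2=0.01)

sparsity = np.mean(a == 0.0)      # one possible interpretability proxy
mse = np.mean((K @ a - y) ** 2)   # fit-quality proxy (training MSE here)
print(f"active coefficients: {np.sum(a != 0)}/60, "
      f"sparsity: {sparsity:.2f}, train MSE: {mse:.4f}")
```

Raising `lam1` trades fit quality for fewer active kernel centers; sweeping it traces the equilibrium between the two performances that the paper's framework is said to formalize.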