Natively Interpretable Machine Learning and Artificial Intelligence: Preliminary Results and Future Directions

01/02/2019
by   Christopher J. Hazard, et al.

Machine learning models have become increasingly complex in order to better approximate complex functions. Although fruitful in many domains, this added complexity has come at the cost of model interpretability. The once-popular k-nearest neighbors (kNN) approach, which finds and uses the most similar data for reasoning, has received much less attention in recent decades due to numerous problems when compared to other techniques. We show that many of these historical problems with kNN can be overcome, and that our contribution has applications not only in machine learning but also in online learning, data synthesis, anomaly detection, model compression, and reinforcement learning, all without sacrificing interpretability. We introduce a synthesis of kNN and information theory that we hope will provide a clear path towards models that are innately interpretable and auditable. Through this work we hope to generate interest in combining kNN with information theory as a promising path to fully auditable machine learning and artificial intelligence.
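To make the kNN idea referenced in the abstract concrete, here is a minimal textbook sketch: prediction works by finding the k stored examples most similar to a query and voting among their labels, so the retrieved neighbors themselves serve as the explanation. This is a generic illustration, not the authors' information-theoretic variant.

```python
# Minimal kNN classifier sketch (illustrative only; standard Euclidean
# distance and majority vote, not the paper's proposed method).
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of (features, label) pairs; query: feature tuple.
    Returns the majority label among the k nearest training examples."""
    # Rank stored examples by Euclidean distance to the query.
    ranked = sorted(train, key=lambda ex: math.dist(ex[0], query))
    # Majority vote among the k nearest neighbors.
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(knn_predict(train, (0.2, 0.1)))  # prints "a"
```

Because the model is just the data, an audit can inspect exactly which neighbors produced a given prediction, which is the interpretability property the abstract highlights.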


