Meaningful Models: Utilizing Conceptual Structure to Improve Machine Learning Interpretability

07/01/2016
by Nick Condry, et al.

The last decade has seen huge progress in the development of advanced machine learning models; however, those models are powerless unless human users can interpret them. Here we show how the mind's construction of concepts and meaning can be used to create more interpretable machine learning models. By proposing a novel method of classifying concepts, in terms of 'form' and 'function', we elucidate the nature of meaning and offer proposals to improve model understandability. As machine learning begins to permeate daily life, interpretable models may serve as a bridge between domain-expert authors and non-expert users.
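To make the 'form' and 'function' distinction concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of how model features might be annotated with concept labels so that explanations are phrased in domain terms a non-expert user recognizes. The feature names, concept fields, and toy weights are all assumptions for illustration.

```python
# Hypothetical sketch: tagging model features with 'form' (what the concept
# looks like) and 'function' (what it does) to produce concept-level explanations.
from dataclasses import dataclass

@dataclass
class Concept:
    name: str       # domain term the user recognizes
    form: str       # structural description of the concept
    function: str   # the role the concept plays in the prediction

# Assumed toy feature-to-concept mapping for a house-price model.
concepts = {
    "sqft":     Concept("floor area", form="continuous size measure",
                        function="drives price upward"),
    "has_pool": Concept("swimming pool", form="binary amenity flag",
                        function="adds luxury value"),
}

def explain(feature_weights: dict) -> list:
    """Turn raw feature weights into concept-level statements."""
    lines = []
    for feat, weight in feature_weights.items():
        c = concepts.get(feat)
        if c is None:
            continue  # skip features with no concept annotation
        lines.append(f"{c.name}: {c.function} (weight {weight:+.2f}; {c.form})")
    return lines

if __name__ == "__main__":
    for line in explain({"sqft": 0.82, "has_pool": 0.31}):
        print(line)
```

The point of the sketch is only that a concept-level vocabulary sits between the model's raw features and the user-facing explanation; the paper's actual classification scheme should be consulted for how 'form' and 'function' are defined and applied.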


