Understanding Information Processing in Human Brain by Interpreting Machine Learning Models

10/17/2020
by Ilya Kuzovkin, et al.

This thesis explores the role that machine learning methods play in creating intuitive computational models of neural processing. Combined with interpretability techniques, machine learning could replace the human modeler and shift the focus of human effort to extracting knowledge from ready-made models and articulating that knowledge into intuitive descriptions of reality. This perspective makes the case for a larger role for the exploratory, data-driven approach to computational neuroscience, coexisting alongside the traditional hypothesis-driven approach. We exemplify the proposed approach, in the context of a knowledge representation taxonomy, with three research projects that apply interpretability techniques on top of machine learning methods at three different levels of neural organization. The first study (Chapter 3) uses feature importance analysis of a random forest decoder trained on intracerebral recordings from 100 human subjects to identify spectrotemporal signatures that characterize local neural activity during a visual categorization task. The second study (Chapter 4) employs representational similarity analysis to compare the neural responses of areas along the ventral stream with the activations of the layers of a deep convolutional neural network. The third study (Chapter 5) proposes a method that allows test subjects to visually explore the state representation of their neural signal in real time. This is achieved with a topology-preserving dimensionality reduction technique that transforms the neural data from the multidimensional representation used by the computer into a two-dimensional representation a human can grasp. The approach, the taxonomy, and the examples present a strong case for the applicability of machine learning methods to automatic knowledge discovery in neuroscience.
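The two decoding-based studies lend themselves to compact illustrations. Below is a minimal sketch of decoder feature importance in the spirit of the first study; the feature matrix, label vector, trial counts, and parameter settings are illustrative assumptions, not the thesis's actual pipeline.

```python
# Hypothetical spectrotemporal feature matrix X (trials x frequency-time bins)
# and visual category labels y; random data stands in for real recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 300))   # 200 trials, 300 spectrotemporal features
y = rng.integers(0, 8, size=200)      # 8 visual categories

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Impurity-based importances rank which frequency-time bins the decoder
# relied on; the highest-ranked bins form a spectrotemporal "signature".
top_bins = np.argsort(clf.feature_importances_)[::-1][:10]
print("Most informative spectrotemporal bins:", top_bins)
```

A similarly minimal sketch of representational similarity analysis, as used in the second study, compares two representational dissimilarity matrices; again, the array shapes, variable names, and the choice of correlation distance with Spearman rank correlation are assumptions for illustration.

```python
# Compare a brain area's responses with one DCNN layer's activations
# over the same stimulus set (random placeholders for real data).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
neural = rng.standard_normal((50, 128))   # 50 stimuli x 128 recording channels
dcnn = rng.standard_normal((50, 4096))    # 50 stimuli x 4096 layer units

# 1. Build a representational dissimilarity matrix (RDM) per system:
#    pairwise correlation distance between stimulus representations.
neural_rdm = pdist(neural, metric="correlation")
dcnn_rdm = pdist(dcnn, metric="correlation")

# 2. Correlate the RDMs; a high rank correlation means the layer and the
#    brain area order stimulus similarities in the same way.
rho, p = spearmanr(neural_rdm, dcnn_rdm)
print(f"RSA score (Spearman rho): {rho:.3f}, p = {p:.3f}")
```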
