COmic: Convolutional Kernel Networks for Interpretable End-to-End Learning on (Multi-)Omics Data

12/02/2022
by Jonas C. Ditz, et al.

Motivation: The size of available omics datasets has steadily increased with recent technological advances. While larger sample sizes can improve the performance of relevant prediction tasks in healthcare, models optimized for large datasets usually operate as black boxes. In high-stakes scenarios such as healthcare, using a black-box model poses safety and security issues. Without an explanation of the molecular factors and phenotypes that influenced a prediction, healthcare providers have no choice but to trust the model blindly. We propose a new type of artificial neural network, named Convolutional Omics Kernel Networks (COmic). By combining convolutional kernel networks with pathway-induced kernels, our method enables robust and interpretable end-to-end learning on omics datasets ranging in size from a few hundred to several hundred thousand samples. Furthermore, COmic can easily be adapted to utilize multi-omics data.

Results: We evaluate the performance of COmic on six different breast cancer cohorts. Additionally, we train COmic models on multi-omics data using the METABRIC cohort. Our models perform either better than or comparably to competitors on both tasks. We show how the use of pathway-induced Laplacian kernels opens up the black-box nature of neural networks and yields intrinsically interpretable models, eliminating the need for post-hoc explanation models.
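To make the idea of a pathway-induced kernel concrete, here is a minimal sketch in the spirit of the PIMKL formulation, where gene-gene interactions within a pathway define a graph whose Laplacian induces a bilinear kernel on expression profiles restricted to that pathway's genes. The toy pathway, function names, and data below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np


def graph_laplacian(adj):
    """Unnormalized Laplacian L = D - A of an undirected pathway graph."""
    deg = np.diag(adj.sum(axis=1))
    return deg - adj


def pathway_induced_kernel(x1, x2, laplacian):
    """Kernel induced by the pathway Laplacian: k(x1, x2) = x1^T L x2."""
    return float(x1 @ laplacian @ x2)


# Hypothetical toy pathway with 3 genes connected in a chain:
# gene0 - gene1 - gene2.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
L = graph_laplacian(adj)

x = np.array([1.0, 0.5, -0.5])   # expression profile, sample 1
y = np.array([0.8, 0.2, 0.1])    # expression profile, sample 2
print(pathway_induced_kernel(x, y, L))
```

Because the Laplacian encodes only within-pathway edges, the kernel value depends on how expression differences align with the pathway's interaction structure, which is what makes models built on such kernels interpretable at the pathway level.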

