Discovering Invariances in Healthcare Neural Networks

11/08/2019
by Mohammad Taha Bahadori, et al.

We study the invariance characteristics of pre-trained predictive models by empirically learning transformations on the input that leave the prediction function approximately unchanged. To learn invariance transformations, we minimize the Wasserstein distance between the predictive distribution conditioned on the data instances and the predictive distribution conditioned on the transformed data instances. To avoid finding degenerate or perturbative transformations, we further regularize by adding a similarity term between the data and its transformed values. Applying the proposed technique to clinical time series data, we discover variables that commonly-used LSTM models do not rely on for their prediction, especially when the LSTM is trained to be adversarially robust.
