Stress and Adaptation: Applying Anna Karenina Principle in Deep Learning for Image Classification

02/22/2023
by Nesma Mahmoud, et al.

Image classification with deep neural networks has reached state-of-the-art accuracy. This success is attributed to good internal representations that bypass the difficulties of non-convex optimization. Yet we have little understanding of these internal representations, let alone a way to quantify them, and recent research efforts have therefore focused on alternative theories and explanations of the generalizability of deep networks. We propose an alternative: perturbing deep models during their training induces changes that drive them into different families. The result is an Anna Karenina Principle (AKP) for deep learning, in which less generalizable models ("unhappy families") vary more in their representations than more generalizable models ("happy families"), paralleling Leo Tolstoy's dictum that "all happy families look alike; each unhappy family is unhappy in its own way." The Anna Karenina Principle has been found in a wide range of systems, from the surfaces of endangered corals exposed to harsh weather to the lungs of patients suffering from fatal diseases such as AIDS. In our paper, we generate artificial perturbations by hot-swapping the activation and loss functions during training, and we build a model to classify cancer cells from non-cancer ones. We give a theoretical proof that the internal representations of generalizable ("happy") models are similar in the asymptotic limit, and our experiments verify that generalizable models indeed learn similar representations.
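The abstract describes the hot-swapping mechanism only at this level of detail. As a rough illustration, the sketch below shows what replacing the activation and loss function mid-training could look like in a PyTorch-style loop; SwappableNet, the synthetic data, and the swap_schedule epochs are our own illustrative assumptions, not the paper's code.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    class SwappableNet(nn.Module):
        """Small classifier whose activation can be replaced mid-training."""
        def __init__(self, in_dim=128, hidden=64, n_classes=2):
            super().__init__()
            self.fc1 = nn.Linear(in_dim, hidden)
            self.fc2 = nn.Linear(hidden, n_classes)
            self.act = nn.ReLU()  # current activation; swapped below

        def forward(self, x):
            return self.fc2(self.act(self.fc1(x)))

    # Synthetic stand-in data; the paper's actual task is classifying
    # cancer vs. non-cancer cell images.
    X, y = torch.randn(512, 128), torch.randint(0, 2, (512,))
    loader = DataLoader(TensorDataset(X, y), batch_size=64)

    model = SwappableNet()
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()  # current loss; swapped below

    # Hypothetical schedule: at these epochs the activation and loss are
    # replaced in place ("hot-swapped") to perturb the model mid-training.
    swap_schedule = {10: (nn.Tanh(), nn.MultiMarginLoss()),
                     20: (nn.ReLU(), nn.CrossEntropyLoss())}

    for epoch in range(30):
        if epoch in swap_schedule:
            model.act, loss_fn = swap_schedule[epoch]
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()

Assigning a new module to model.act mid-loop is what makes the swap "hot": the optimizer and the learned weights are untouched, so only the nonlinearity and the training objective change.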

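The claim that "happy" models share similar internal representations presupposes a similarity measure on representations, which the abstract does not name. A common choice in the literature is linear centered kernel alignment (CKA); the sketch below is one hypothetical way to test the claim, and linear_cka with random stand-in activations is our illustration, not the paper's stated method.

    import numpy as np

    def linear_cka(X, Y):
        """Linear CKA between two activation matrices of shape
        (n_samples, n_features): 1.0 means identical up to rotation
        and scaling; values near 0 mean dissimilar."""
        X = X - X.mean(axis=0)  # center each feature
        Y = Y - Y.mean(axis=0)
        hsic = np.linalg.norm(X.T @ Y, 'fro') ** 2
        return hsic / (np.linalg.norm(X.T @ X, 'fro')
                       * np.linalg.norm(Y.T @ Y, 'fro'))

    # Stand-in hidden activations from two independently trained models;
    # under the AKP, two generalizable ("happy") models should score high,
    # while poorly generalizing ("unhappy") models should vary more.
    reps_a = np.random.randn(200, 64)
    reps_b = np.random.randn(200, 64)
    print(linear_cka(reps_a, reps_b))
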
Related research

- On the Ineffectiveness of Variance Reduced Optimization for Deep Learning (12/11/2018)
  The application of stochastic variance reduction to optimization has sho...

- On The Convergence of First Order Methods for Quasar-Convex Optimization (10/10/2020)
  In recent years, the success of deep learning has inspired many research...

- InterpNET: Neural Introspection for Interpretable Deep Learning (10/26/2017)
  Humans are able to explain their reasoning. On the contrary, deep neural...

- Applying Deutsch's concept of good explanations to artificial intelligence and neuroscience – an initial exploration (12/16/2020)
  Artificial intelligence has made great strides since the deep learning r...

- On the Symmetries of Deep Learning Models and their Internal Representations (05/27/2022)
  Symmetry has been a fundamental tool in the exploration of a broad range...

- Learning explanations that are hard to vary (09/01/2020)
  In this paper, we investigate the principle that `good explanations are ...
