Fooling the classifier: Ligand antagonism and adversarial examples

07/10/2018
by Thomas J. Rademaker, et al.

Machine learning algorithms are sensitive to so-called adversarial perturbations. This is reminiscent of cellular decision-making, where antagonist ligands may prevent correct signaling, as during the early immune response. We draw a formal analogy between neural networks used in machine learning and the general class of adaptive proofreading networks. We then apply simple adversarial strategies from machine learning to models of ligand discrimination. We show how kinetic proofreading leads to "boundary tilting" and identify three types of perturbation (adversarial, non-adversarial, and ambiguous). We then use a gradient-descent approach to compare different adaptive proofreading models, revealing the existence of two qualitatively different regimes characterized by the presence or absence of a critical point. These regimes are reminiscent of the "feature-to-prototype" transition identified in machine learning and correspond to two strategies in ligand antagonism (broad vs. specialized). Overall, our work connects evolved cellular decision-making to classification in machine learning, showing that behaviours close to the decision boundary can be understood through the same mechanisms.
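The "simple adversarial strategies" mentioned above are gradient-based input perturbations of the kind popularized by the fast gradient sign method (FGSM). As a minimal, hedged sketch of the idea, the code below builds a toy kinetic-proofreading classifier (an output summed over ligand types, each weighted by its probability of surviving N proofreading steps) and nudges the ligand binding times along the sign of the output gradient to push the decision across its threshold. The functional form, the step count N, the threshold, and all numbers are illustrative assumptions, not the paper's actual model.

```python
# Toy sketch (illustrative assumptions, not the paper's model):
# a kinetic-proofreading "classifier" attacked FGSM-style.
import numpy as np

N = 4            # number of proofreading steps (assumed)
THRESHOLD = 1.0  # decision threshold on the output (assumed)

def kpr_output(conc, tau):
    """Idealized kinetic proofreading: each ligand type contributes
    conc * f(tau)**N, with f(tau) = tau / (1 + tau) the per-step
    completion probability for binding time tau."""
    f = tau / (1.0 + tau)
    return np.sum(conc * f**N)

def decide(conc, tau):
    """Classify the mixture: 1 = 'agonist detected', 0 = otherwise."""
    return int(kpr_output(conc, tau) > THRESHOLD)

def fgsm_like_perturb(conc, tau, eps):
    """FGSM-style attack on the binding times: move each tau by
    eps * sign(d output / d tau) in whichever direction pushes the
    output back across the decision threshold."""
    f = tau / (1.0 + tau)
    grad = conc * N * f**(N - 1) / (1.0 + tau)**2  # d(output)/d(tau)
    direction = -1.0 if decide(conc, tau) == 1 else 1.0
    return np.clip(tau + direction * eps * np.sign(grad), 0.0, None)

# A weak agonist (long binding time) mixed with a non-agonist ligand.
conc = np.array([2.0, 5.0])  # concentrations (assumed units)
tau = np.array([6.0, 1.0])   # binding times (agonist binds longer)
print(decide(conc, tau))                        # 1: output above threshold
tau_adv = fgsm_like_perturb(conc, tau, eps=1.0)
print(decide(conc, tau_adv))                    # 0: perturbation flips the decision
```

Because this toy output is monotone in every binding time, the gradient step simply shortens all binding times until the response drops below threshold; the paper's "boundary tilting" and its adversarial vs. non-adversarial distinction concern how such decision boundaries are oriented in the full ligand space of adaptive proofreading models.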


Related research

04/12/2019 · Generating Minimal Adversarial Perturbations with Integrated Adaptive Gradients
We focus our attention on the problem of generating adversarial perturba...

12/12/2017 · Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
Many machine learning algorithms are vulnerable to almost imperceptible ...

12/01/2016 · Towards Robust Deep Neural Networks with BANG
Machine learning models, including state-of-the-art deep neural networks...

10/29/2020 · Robustifying Binary Classification to Adversarial Perturbation
Despite the enormous success of machine learning models in various appli...

02/05/2020 · Understanding the Decision Boundary of Deep Neural Networks: An Empirical Study
Despite achieving remarkable performance on many image classification ta...

09/22/2021 · Exploring Adversarial Examples for Efficient Active Learning in Machine Learning Classifiers
Machine learning researchers have long noticed the phenomenon that the m...

04/05/2023 · Physics-Inspired Interpretability Of Machine Learning Models
The ability to explain decisions made by machine learning models remains...
