
A Modified Drake Equation for Assessing Adversarial Risk to Machine Learning Models

by   Josh Kalin, et al.

Each machine learning model deployed into production has a risk of adversarial attack. Quantifying the contributing factors and uncertainties using empirical measures could assist the industry with assessing the risk of downloading and deploying common machine learning model types. The Drake Equation is famously used for parameterizing uncertainties and estimating the number of radio-capable extra-terrestrial civilizations. This work proposes modifying the traditional Drake Equation's formalism to estimate the number of potentially successful adversarial attacks on a deployed model. While previous work has outlined methods for discovering vulnerabilities in public model architectures, the proposed equation seeks to provide a semi-quantitative benchmark for evaluating the potential risk factors of adversarial attacks.
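The classic Drake Equation estimates a count by multiplying a rate with a chain of fractions and expected values (N = R* · fp · ne · fl · fi · fc · L). The modified equation proposed here keeps that multiplicative structure but swaps in adversarial-risk factors. As a minimal illustrative sketch only, the factor names and values below are assumptions, not the paper's actual terms:

```python
def modified_drake_risk(deployments_per_year: float,
                        frac_public_architecture: float,
                        frac_known_vulnerabilities: float,
                        frac_attacker_access: float,
                        attacks_per_accessed_model: float) -> float:
    """Drake-style estimate: number of potentially successful adversarial
    attacks per year, as a product of contributing rates and fractions.
    All factor names here are hypothetical placeholders."""
    return (deployments_per_year
            * frac_public_architecture
            * frac_known_vulnerabilities
            * frac_attacker_access
            * attacks_per_accessed_model)

# Illustrative assumptions: 100 deployments/yr, 80% on public
# architectures, 50% with known vulnerabilities, 10% reachable by an
# attacker, 2 successful attacks per reachable model on average.
estimate = modified_drake_risk(100, 0.8, 0.5, 0.1, 2.0)
print(estimate)  # 8.0
```

The value of such a semi-quantitative product is less the point estimate itself than the way it isolates each uncertain factor so it can be bounded or measured empirically on its own.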

