Machine Learning needs its own Randomness Standard: Randomised Smoothing and PRNG-based attacks

06/24/2023
by Pranav Dahiya, et al.

Randomness supports many critical functions in machine learning (ML), including optimisation, data selection, privacy, and security. ML systems outsource the task of generating or harvesting randomness to the compiler, the cloud service provider, or another component of the toolchain. Yet there is a long history of attackers exploiting poor randomness, or even creating it, as when the NSA put backdoors in random number generators to break cryptography. In this paper we consider whether attackers can compromise an ML system using only the randomness on which it commonly relies. We focus our effort on Randomised Smoothing, a popular approach to training certifiably robust models and to certifying specific input datapoints of an arbitrary model. We choose Randomised Smoothing because it is used for both security and safety: to counteract adversarial examples and to quantify uncertainty, respectively. Under the hood, it relies on sampling Gaussian noise to explore the volume around a data point in order to certify that a model is not vulnerable to adversarial examples. We demonstrate an entirely novel attack against it, in which an attacker backdoors the supplied randomness to falsely certify either an overestimate or an underestimate of robustness. We demonstrate that such attacks are possible, that they require very small changes to the randomness to succeed, and that they can be hard to detect. As an example, we hide an attack in the random number generator and show that the randomness tests suggested by NIST fail to detect it. We advocate updating the NIST guidelines on random number testing to make them more appropriate for safety-critical and security-critical machine-learning applications.
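The certification loop the abstract describes can be sketched as follows. This is an illustrative, Cohen-et-al.-style sketch, not the paper's implementation: `certify`, `BiasedGaussian`, and every parameter value are assumptions, and the bias mechanism shown (a small constant mean shift in the Gaussian sampler) is just one way a backdoored noise source could skew a certificate.

```python
import numpy as np
from scipy.stats import binomtest, norm

def certify(model, x, sigma=0.25, n=1000, alpha=0.001, rng=None):
    """Sketch of Randomised Smoothing certification: sample Gaussian noise
    around x, take the majority predicted class, and convert a lower
    confidence bound on its probability into a certified L2 radius.
    All names and defaults are illustrative, not from the paper."""
    rng = rng if rng is not None else np.random.default_rng(0)
    noise = rng.normal(scale=sigma, size=(n,) + x.shape)
    preds = np.array([model(x + eps) for eps in noise])
    classes, counts = np.unique(preds, return_counts=True)
    top = classes[np.argmax(counts)]
    # Exact (Clopper-Pearson) two-sided CI at level 1 - 2*alpha yields a
    # one-sided lower bound at level alpha on P(model(x + noise) == top).
    p_lower = binomtest(int(counts.max()), n).proportion_ci(
        confidence_level=1 - 2 * alpha, method="exact").low
    if p_lower <= 0.5:
        return top, 0.0                      # abstain: nothing certified
    return top, sigma * norm.ppf(p_lower)    # certified L2 radius

class BiasedGaussian:
    """Hypothetical backdoored noise source: a small constant mean shift,
    hard for marginal statistics alone to flag, yet enough to inflate
    (or, with the opposite sign, deflate) the certified radius."""
    def __init__(self, shift, seed=0):
        self._rng = np.random.default_rng(seed)
        self._shift = shift
    def normal(self, scale, size):
        return self._rng.normal(loc=self._shift, scale=scale, size=size)
```

For a toy classifier near its decision boundary, swapping the honest sampler for a `BiasedGaussian` with a small positive shift noticeably enlarges the certified radius, which mirrors the paper's point: the certificate is only as trustworthy as the randomness fed into it.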

