Provable Robustness for Streaming Models with a Sliding Window

03/28/2023
by Aounon Kumar, et al.

The literature on provable robustness in machine learning has primarily focused on static prediction problems, such as image classification, in which input samples are assumed to be independent and model performance is measured as an expectation over the input distribution. Robustness certificates are derived for individual input instances with the assumption that the model is evaluated on each instance separately. However, in many deep learning applications such as online content recommendation and stock market analysis, models use historical data to make predictions. Robustness certificates based on the assumption of independent input samples are not directly applicable in such scenarios. In this work, we focus on the provable robustness of machine learning models in the context of data streams, where inputs are presented as a sequence of potentially correlated items. We derive robustness certificates for models that use a fixed-size sliding window over the input stream. Our guarantees hold for the average model performance across the entire stream and are independent of stream size, making them suitable for large data streams. We perform experiments on speech detection and human activity recognition tasks and show that our certificates can produce meaningful performance guarantees against adversarial perturbations.
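To make the setting concrete, below is a minimal sketch (not the authors' code) of how a model with a fixed-size sliding window might be evaluated over a data stream, with performance averaged across the entire stream, which is the quantity the paper's certificates bound under adversarial perturbation. The window size, prediction function, and accuracy metric are illustrative assumptions.

```python
# Hypothetical illustration of sliding-window streaming evaluation.
# This only computes the clean stream-averaged accuracy; it does not
# implement the paper's robustness certification procedure.
from collections import deque
from typing import Callable, Iterable, Sequence


def evaluate_streaming_model(
    stream: Iterable,        # sequence of (possibly correlated) stream items
    labels: Sequence,        # ground-truth label at each stream position
    predict: Callable,       # model applied to the current window of items
    window_size: int = 16,   # fixed sliding-window length (assumed value)
) -> float:
    """Average accuracy of a sliding-window model over the whole stream."""
    window = deque(maxlen=window_size)  # oldest item drops out automatically
    correct, total = 0, 0
    for item, label in zip(stream, labels):
        window.append(item)
        if len(window) < window_size:
            continue  # wait until the first full window is available
        prediction = predict(list(window))
        correct += int(prediction == label)
        total += 1
    return correct / max(total, 1)
```

Because the guarantee in the paper is stated for this stream-averaged performance rather than for any single window, it remains meaningful regardless of how long the stream is.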


