Adversarial attacks against Bayesian forecasting dynamic models

10/20/2021
by Roi Naveiro, et al.

The last decade has seen the rise of Adversarial Machine Learning (AML). This discipline studies how to manipulate data to fool inference engines, and how to protect those systems against such manipulation attacks. Extensive work is available on attacks against regression and classification systems, whereas little attention has been paid to attacks on time series forecasting systems. In this paper, we propose a decision analysis-based attack strategy that could be used against Bayesian forecasting dynamic models.
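The abstract does not detail how the attack is constructed, so the sketch below only illustrates the general setting rather than the authors' decision-analysis-based strategy: a Bayesian dynamic linear model (a hypothetical local level model filtered with a Kalman recursion) and an L-infinity bounded adversary who perturbs past observations to shift the one-step-ahead forecast. All parameter values, the budget eps, and the simulated data are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's method): perturbing observed data
# to push the one-step-ahead forecast of a local level dynamic linear model.
import numpy as np

def one_step_forecast(y, m0=0.0, C0=1.0, W=0.5, V=1.0):
    """Kalman-filter a local level DLM and return the one-step-ahead forecast mean."""
    m, C = m0, C0
    for obs in y:
        R = C + W                # prior variance of the state
        Q = R + V                # one-step forecast variance
        A = R / Q                # adaptive coefficient (Kalman gain)
        m = m + A * (obs - m)    # posterior mean after seeing obs
        C = R - A * R            # posterior variance
    return m                     # forecast mean for the next time point

# Hypothetical data: a slowly drifting level observed with noise.
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0, 0.7, size=50)) + rng.normal(0, 1.0, size=50)

# The forecast mean is linear in the observations, so under an L-infinity
# budget the attacker simply shifts each observation by +/- eps in the
# direction of its sensitivity (estimated here by finite differences).
eps = 0.3
base = one_step_forecast(y)
grad = np.zeros_like(y)
for i in range(len(y)):
    y_pert = y.copy()
    y_pert[i] += 1e-4
    grad[i] = (one_step_forecast(y_pert) - base) / 1e-4

y_attacked = y + eps * np.sign(grad)
print("clean forecast:   ", round(base, 3))
print("attacked forecast:", round(one_step_forecast(y_attacked), 3))
```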


Related research

01/27/2023  Targeted Attacks on Timeseries Forecasting
Real-world deep learning models developed for Time Series Forecasting ar...

07/19/2022  Towards Robust Multivariate Time-Series Forecasting: Adversarial Attacks and Defense Mechanisms
As deep learning models have gradually become the main workhorse of time...

03/06/2020  Automatic Generation of Adversarial Examples for Interpreting Malware Classifiers
Recent advances in adversarial attacks have shown that machine learning ...

03/29/2023  Targeted Adversarial Attacks on Wind Power Forecasts
In recent years, researchers proposed a variety of deep learning models ...

12/08/2020  Predicting seasonal influenza using supermarket retail records
Increased availability of epidemiological data, novel digital data strea...

03/08/2020  Adversarial Attacks on Probabilistic Autoregressive Forecasting Models
We develop an effective generation of adversarial attacks on neural mode...

06/22/2020  Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
Data poisoning and backdoor attacks manipulate training data in order to...
