Simple Transparent Adversarial Examples

05/20/2021
by Jaydeep Borkar, et al.
There has been a rise in the use of Machine Learning as a Service (MLaaS) Vision APIs, as they offer multiple services, including pre-built models and algorithms that would otherwise require substantial resources to build from scratch. As these APIs are deployed in high-stakes applications, it is critical that they be robust to different manipulations. Recent work has focused only on typical adversarial attacks when evaluating the robustness of vision APIs. We propose two new aspects of adversarial image generation and use them to evaluate the robustness of Google Cloud Vision API's optical character recognition service and of object detection APIs deployed in real-world settings such as sightengine.com, picpurify.com, Google Cloud Vision API, and Microsoft Azure's Computer Vision API. Specifically, we go beyond conventional small-noise adversarial attacks and introduce secret embedding and transparent adversarial examples as simpler ways to evaluate robustness. These methods are so straightforward that even non-specialists can craft such attacks; as a result, they pose a serious threat where APIs are used for high-stakes applications. Our transparent adversarial examples successfully evade state-of-the-art object detection APIs such as Azure Cloud Vision (attack success rates of 52% and 90%), while our secret embedding evades the vision of time-limited humans but is detected by Google Cloud Vision API's optical character recognition. Complementing current research, our results provide simple but unconventional methods for robustness evaluation.
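To illustrate the kind of "secret embedding" attack the abstract describes, the sketch below overlays text at a very low alpha value onto an image, so it is nearly invisible to a human viewer but may still be picked up by an OCR service. This is a minimal illustration under our own assumptions (the function name, opacity value, and text placement are hypothetical, not the paper's actual method):

```python
from PIL import Image, ImageDraw

def embed_transparent_text(img, text, opacity=8, position=(10, 10)):
    """Overlay near-transparent text onto an image (hypothetical
    'secret embedding' sketch; opacity is the text alpha, 0-255)."""
    base = img.convert("RGBA")
    # Draw the text on a fully transparent layer, then alpha-composite
    # it over the original image.
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    draw.text(position, text, fill=(255, 255, 255, opacity))
    return Image.alpha_composite(base, overlay).convert("RGB")

# Example: embed a barely visible word into a solid dark image.
img = Image.new("RGB", (128, 128), (30, 30, 30))
out = embed_transparent_text(img, "secret", opacity=8)
```

With an alpha of 8 out of 255, the embedded text shifts pixel values only by a few intensity levels, which is the point of the attack: the change is imperceptible to a time-limited human but remains machine-readable in principle.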


Related research

01/04/2019
Adversarial Examples versus Cloud-based Detectors: A Black-box Empirical Study
Deep learning has been broadly leveraged by major cloud providers such a...

11/17/2019
Countering Inconsistent Labelling by Google's Vision API for Rotated Images
Google's Vision API analyses images and provides a variety of output pre...

05/08/2019
Enhancing Cross-task Transferability of Adversarial Examples with Dispersion Reduction
Neural networks are known to be vulnerable to carefully crafted adversar...

06/19/2019
Cloud-based Image Classification Service Is Not Robust To Simple Transformations: A Forgotten Battlefield
Many recent works demonstrated that Deep Learning models are vulnerable ...

02/08/2020
Attacking Optical Character Recognition (OCR) Systems with Adversarial Watermarks
Optical character recognition (OCR) is widely applied in real applicatio...

01/07/2023
REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service
Encoder as a service is an emerging cloud service. Specifically, a servi...

03/26/2017
Deceiving Google's Cloud Video Intelligence API Built for Summarizing Videos
Despite the rapid progress of the techniques for image classification, v...
