
The Human Visual System and Adversarial AI

by Yaoshiang Ho, et al.

This paper brings existing research on the Human Visual System (HVS) into Adversarial AI. To date, Adversarial AI has modeled the difference between clean and adversarial images using the L0, L1, L2, and L-infinity norms. These norms have the benefit of simple mathematical definitions, and each produces a distinctive visual signature when applied to images in the context of Adversarial AI. However, over the past decades, other areas of image processing have moved beyond simple mathematical models such as Mean Squared Error (MSE) toward models that incorporate a deeper understanding of the HVS. We demonstrate a proof of concept for incorporating HVS models into Adversarial AI, and we hope to spark further research in this direction.
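As a minimal sketch (not the paper's code), the norms the abstract mentions can be computed directly on the perturbation between a clean image `x` and an adversarial image `x_adv`; the function name and the FGSM-style example perturbation below are illustrative assumptions:

```python
import numpy as np

def perturbation_norms(x, x_adv):
    """Return the L0, L1, L2, L-infinity norms and MSE of the perturbation."""
    delta = (x_adv - x).ravel()
    return {
        "L0": np.count_nonzero(delta),      # number of changed pixels
        "L1": np.abs(delta).sum(),          # total absolute change
        "L2": np.sqrt((delta ** 2).sum()),  # Euclidean distance
        "Linf": np.abs(delta).max(),        # largest single-pixel change
        "MSE": (delta ** 2).mean(),         # mean squared error
    }

# Example: an L-infinity-bounded perturbation with budget eps = 0.03,
# as produced by FGSM-style attacks (random signs used here for brevity).
rng = np.random.default_rng(0)
x = rng.random((32, 32, 3)).astype(np.float32)
x_adv = np.clip(x + 0.03 * np.sign(rng.standard_normal(x.shape)), 0.0, 1.0)
print(perturbation_norms(x, x_adv))
```

Each norm rewards a different notion of imperceptibility (few pixels changed, small total change, small per-pixel change), which is precisely the modeling choice the paper argues the HVS literature can improve upon.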



