Adversarial attacks hidden in plain sight

02/25/2019
by Jan Philip Göpfert, et al.

Convolutional neural networks have achieved a string of successes in recent years, but their lack of interpretability remains a serious issue. Adversarial examples are designed to deliberately fool neural networks into making any desired incorrect classification, potentially with very high confidence. We underline the severity of the issue by presenting a technique that hides such adversarial attacks in image regions of high complexity, so that they are imperceptible even to an astute observer.
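
To make the idea concrete, here is a minimal sketch of how a perturbation can be steered into high-complexity regions. It assumes an FGSM-style one-step attack (a standard baseline, not necessarily the authors' exact method) and uses local standard deviation as a stand-in for "complexity"; the function names `local_complexity` and `masked_fgsm` are hypothetical, introduced only for illustration.

```python
# Illustrative sketch, not the paper's exact method: a one-step
# gradient-sign (FGSM-style) perturbation, attenuated by a per-pixel
# "complexity" mask so the change concentrates in textured regions
# where human observers are less likely to notice it.
import torch
import torch.nn.functional as F

def local_complexity(image, kernel_size=7):
    """Local standard deviation as a rough per-pixel complexity proxy."""
    pad = kernel_size // 2
    mean = F.avg_pool2d(image, kernel_size, stride=1, padding=pad)
    sq_mean = F.avg_pool2d(image ** 2, kernel_size, stride=1, padding=pad)
    std = (sq_mean - mean ** 2).clamp(min=0).sqrt()
    # Normalize to [0, 1] per image so the mask only reweights the attack.
    return std / (std.amax(dim=(1, 2, 3), keepdim=True) + 1e-8)

def masked_fgsm(model, image, label, epsilon=8 / 255):
    """One gradient-sign step, suppressed in smooth (low-complexity) regions.

    Assumes `image` is an (N, C, H, W) batch with values in [0, 1].
    """
    image = image.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbation = epsilon * image.grad.sign() * local_complexity(image.detach())
    return (image.detach() + perturbation).clamp(0, 1)
```

The design choice mirrors the claim in the abstract: in smooth regions the mask drives the perturbation toward zero, while in highly textured regions the attack keeps (nearly) its full budget, exploiting the fact that human vision is far less sensitive to small changes amid fine detail.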
