Should Adversarial Attacks Use Pixel p-Norm?

06/06/2019 ∙ by Ayon Sen, et al. ∙ University of Wisconsin-Madison

Adversarial attacks aim to confound machine learning systems, while remaining virtually imperceptible to humans. Attacks on image classification systems are typically gauged in terms of p-norm distortions in the pixel feature space. We perform a behavioral study, demonstrating that the pixel p-norm for any 0< p <∞, and several alternative measures including earth mover's distance, structural similarity index, and deep net embedding, do not fit human perception. Our result has the potential to improve the understanding of adversarial attack and defense strategies.


1 Introduction

Adversarial (test-time) attacks perturb an input item x slightly, forming x̃ such that (1) x̃ is classified differently than x; (2) the change from x to x̃ is small. The oft-quoted reason for (2) is to make the attack hard to detect szegedy2013intriguing ; goodfellow6572explaining ; moosavi2017universal ; carlini2017adversarial . This assumes an "inspector", who detects suspicious items before sending them to the classifier salamati2019perception . Our paper focuses on (2) in the context of image classification attacks where the inspector is a human. We ask the question: are current measures of "small change" adequate to characterize visual detection by a human inspector? Answering this question is directly relevant to the efficacy of adversarial learning research.

Figure 1: Schematic diagram of mismatch between human perception and the pixel p-norm ball
Figure 2: Variability of fit to human data, lower is better. See section 6 for details and discussion

We start with a popular measure: the pixel p-norm. Perturbations lying inside the p-norm ball (see Figure 1) are assumed to be imperceptible. However, even with the optimal norm p and radius ε, there can be a mismatch between what an average human perceives as a small change to x (schematic green area, which may not belong to the p-norm ball family) and the pixel p-norm ball (gray). Without knowledge of human perceptual behavior, adversarial machine learning cannot accurately attack or defend: adversaries cannot predict which attacks will succeed, and defenders do not know what is most important to look for. For an adversary, a false acceptance (a perturbation inside the ball that humans nonetheless perceive) produces a futile attack, because the human inspector readily detects it; a false rejection (a perturbation outside the ball that humans do not perceive) leads to a false sense of security: one may presume such an attack can never escape detection due to its large p-norm, while in reality it will pass the human inspector. The need for human perception knowledge is urgent: out of 32 papers we surveyed, 27 highly cited papers used pixel p-norms in attacks. Among these 27, 20% assumed p-norms are a good match to human perception without providing evidence, 50% used them because other papers did, and the rest used them without justification.

Such knowledge requires multiple interdisciplinary studies in adversarial machine learning and cognitive science. The seminal work by Sharif et al. performed a behavioral study on adversarial attack and human perception Sharif2018Bright . They showed that humans may categorize two perturbed thumbnails – of the same pixel p-norm distance to the original thumbnail – differently. While valuable, their conclusions are limited by the study design: they tested only a few specific pixel p-norms, not other p-norms or measures. Their test also required knowledge of the radius ε, and depended on humans (mis)categorizing a low-resolution thumbnail (MNIST lecun1998mnist , CIFAR10 krizhevsky2009learning ), which does not reflect humans' ability to notice small changes in a normal-sized image well before their categorization of that image changes.

Our work significantly extends and complements Sharif2018Bright , and addresses all these issues: our design enables us to test all pixel p-norms, earth mover's distance, structural similarity (SSIM), and deep neural network representation. It is also agnostic to the true value of ε by using the notion of human just-noticeable difference. We test humans in small image-change regimes that better match what a human inspector typically faces in an adversarial setting. Our main results caution against the use of pixel p-norms, earth mover's distance, structural similarity, or deep neural network representation to define "small changes" in adversarial attacks. In addition, we quantify the quality of approximation of different measures. For instance, we experimentally determined (see Figure 2) that the pixel 3-norm is the best approximation to human data among the measures studied; see section 6 for details. Our results have the potential to improve the understanding of adversarial attack and defense strategies.

We also mention some limitations of our work. We cannot directly answer "what is the correct measure", because computationally modeling human visual perception is still an open question in psychology simons2005change ; rensink2002change ; wolfe2010visual ; itti2001computational . We used a "show then perturb" experiment paradigm, while in real applications the human inspector may not have access to the original image x. We also limit ourselves to the visual domain. These topics remain future work.

2 The pixel p-norm central hypothesis and its implications

Let the pixel feature space be {0, 1, …, 255}^d, where d equals the number of pixels times the number of color channels (it is straightforward to generalize to color depths other than 255). Consider a natural image x and another image x̃ in this space. The pixel p-norm for any 0 < p < ∞ measures the amount of perturbation by ‖x̃ − x‖_p = (Σ_i |x̃_i − x_i|^p)^{1/p}. We define the 0-norm to be the number of nonzero elements of x̃ − x, and the ∞-norm to be max_i |x̃_i − x_i|. To facilitate mathematical exposition in this section, we posit an "ideal observer" who has population-median human perception. Natural variations in real human observers will be handled in section 4. The central hypothesis of pixel p-norms is the following.
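For concreteness, here is a small helper (our illustration, not part of the paper's released code) that computes the pixel p-norm between two images stored as uint8 arrays; the 0-norm and ∞-norm cases are handled explicitly.

```python
import numpy as np

def pixel_p_norm(x, x_tilde, p):
    """||x_tilde - x||_p over all pixels and color channels.

    p = 0 counts nonzero entries; p = np.inf is the maximum absolute difference."""
    d = np.abs(x_tilde.astype(np.float64) - x.astype(np.float64)).ravel()
    if p == 0:
        return float(np.count_nonzero(d))
    if p == np.inf:
        return float(d.max())
    return float((d ** p).sum() ** (1.0 / p))
```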

Definition 1 (The Central Hypothesis).

There exist p ∈ [0, ∞] and a threshold ε > 0 such that the ideal observer perceives any x̃ the same as x if ‖x̃ − x‖_p < ε, and the ideal observer notices the difference if ‖x̃ − x‖_p > ε.

The threshold ε is known as the "Just Noticeable Difference" (JND) in experimental psychology fechner1966elements ; zhang2008just . We further define the set of Just-Noticeably-Different images with respect to x under the central hypothesis: JND(x) := {x̃ : ‖x̃ − x‖_p = ε}. In other words, JND(x) is the shell of the p-norm ball centered at x with radius ε. A main task of the present paper is to test the central hypothesis. To this end, we derive a number of testable implications of the central hypothesis. These implications will be tested through human behavioral experiments in later sections. The first implication follows trivially from the definition of JND(x). It states that all Just-Noticeably-Different images of an x have the same p-norm distance to x (note: testing this does not require knowledge of ε):

Implication 1.

Suppose p is the correct norm for the central hypothesis. Then for any x and any x̃, x̃′ ∈ JND(x), we have ‖x̃ − x‖_p = ‖x̃′ − x‖_p.

The second implication is more powerful in the sense that it can be tested without knowing the true parameters p or ε. To do so, we utilize specially perturbed images x̃ constructed as follows. As illustrated schematically in Figure 1, we consider x̃ generated along a ray defined by a perturbation direction z and a perturbation scale δ ≥ 0: x̃ = P(x + δz). Here P is the projection onto the feature space; namely, clipping values to [0, 255] and rounding to integers. Note that as δ increases, the perturbation becomes stronger. The perturbation direction z is important: in our experiments some directions are generated by popular adversarial attacks in the literature, while others are designed to facilitate statistical tests. Specifically, we define s-perturbation directions as any z with the following two properties: (i) its support (the set of nonzero elements) has cardinality s; in many cases z will be sparse with s ≪ d; (ii) the nonzero elements are either 1 or −1, with the sign chosen according to the value of the corresponding element of x so that adding the perturbation never pushes a pixel value outside [0, 255]. For s-perturbations and integer δ in the allowed range, it is easy to see that the projection is not needed: x̃ = x + δz. This allows for convenient experiment design. More importantly, for such s-perturbed images any pixel p-norm has a simple form: ‖x̃ − x‖_p = s^{1/p} δ for 0 < p < ∞, with ‖x̃ − x‖_0 = s and ‖x̃ − x‖_∞ = δ. Implication 2 states that two just-noticeable perturbed images with the same perturbation sparsity s should have the same perturbation scale δ. Importantly, it can be tested without knowing p or ε. If it fails, then no pixel p-norm is appropriate to model human perception of just-noticeable difference.

Implication 2.

For any x and any two s-perturbation directions z, z′ with the same sparsity s, suppose there exist δ, δ′ such that P(x + δz) ∈ JND(x) and P(x + δ′z′) ∈ JND(x). Then δ = δ′.
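To make the s-perturbation construction above concrete, the following sketch builds a direction z supported on s coordinates, applies x̃ = P(x + δz), and checks the closed form ‖x̃ − x‖_p = s^{1/p} δ. The sign rule used here (+1 when the pixel has room to increase by the full δ, else −1) is an illustrative assumption; the paper only requires that the clipping in P never activates over the allowed δ range.

```python
import numpy as np

def s_perturbation(x, support_indices, delta):
    """Return (x_tilde, z) for an s-perturbation of scale delta (illustrative sign rule)."""
    xf = x.astype(np.float64).ravel()
    z = np.zeros_like(xf)
    # +1 where the pixel value leaves room for the full delta, -1 otherwise (assumed rule).
    z[support_indices] = np.where(xf[support_indices] + delta <= 255, 1.0, -1.0)
    x_tilde = np.clip(xf + delta * z, 0, 255)   # projection P; a no-op by construction
    return x_tilde.reshape(x.shape), z.reshape(x.shape)

# Sanity check of the closed form ||x_tilde - x||_p = s^(1/p) * delta:
rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(299, 299, 3), dtype=np.uint8)
idx = rng.choice(x.size, size=288, replace=False)   # s = 288, an "M"-sized support
delta, p = 30, 3
x_tilde, z = s_perturbation(x, idx, delta)
lhs = (np.abs(x_tilde - x.astype(np.float64)) ** p).sum() ** (1 / p)
assert np.isclose(lhs, 288 ** (1 / p) * delta)
```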

3 Behavioral experiment design

We conducted a human behavioral experiment under Institutional Review Board (IRB) approval. We release all behavioral data, and the code that produces the plots and statistical tests in this paper, to the public for reproducibility and further research at http://www.cs.wisc.edu/~jerryzhu/pub/advMLnorm/. The figures below are best viewed by zooming in to replicate the participant experience.

Center images and perturbation directions: We chose three natural images (from the ImageNet dataset deng2009imagenet ) popular in adversarial research: a panda goodfellow6572explaining , a macaw moosavi2016deepfool and a cat athalye2018obfuscated as the x's in our experiment. We resized the images to 299 × 299 to match the input dimension of the Inception V3 image classification network szegedy2016rethinking . For each natural image we considered 10 perturbation directions z, shown in Figure 3. Eight are specially crafted s-perturbation directions varying in three attributes, further explained in the caption of Figure 3:

# Dimensions Changed (s): S = 1, M = 288, L = 30603, X = 268203 (mnemonic: garment sizes)
Color Channels Affected: Red = only the red channel of a pixel; RGB = all three channels of a pixel
Shape of Perturbed Pixels: Box = a centered rectangle; Dot = scattered random dots; Eye = on the eye of the animal

The remaining two perturbation directions are adversarial directions. We used the Fast Gradient Sign Method (FGSM) goodfellow6572explaining and Projected Gradient Descent (PGD) madry2017towards to generate two adversarial images x_FGSM and x_PGD for each x, with Inception V3 as the victim network. All attack parameters are set as suggested in the methods' respective papers. PGD is a directed attack and requires a target label; we choose gibbon (on panda) and guacamole (on cat) following the papers, and cleaver (on macaw) arbitrarily. We then define the adversarial perturbation directions z_FGSM and z_PGD as scaled versions of x_FGSM − x and x_PGD − x, respectively. The scaling factor was chosen based on a pilot study to ensure that changes between consecutive images along the adversarial perturbation directions are neither too small nor too large.
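The sketch below illustrates how an FGSM perturbation direction against Inception V3 could be generated with TensorFlow/Keras. It is not the authors' attack code; the step size eps and leaving z_FGSM unnormalized are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.applications.InceptionV3(weights="imagenet")

def fgsm_direction(x_uint8, label_index, eps=2.0):
    """Return an FGSM image and the (unnormalized) perturbation direction z = x_fgsm - x."""
    x_pre = tf.keras.applications.inception_v3.preprocess_input(
        x_uint8.astype(np.float32)[None])          # maps pixel values to [-1, 1]
    x_in = tf.convert_to_tensor(x_pre)
    with tf.GradientTape() as tape:
        tape.watch(x_in)
        probs = model(x_in)
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            np.array([label_index]), probs)
    grad = tape.gradient(loss, x_in)[0].numpy()
    # The gradient sign is the same in preprocessed and raw pixel space because the
    # preprocessing is a per-pixel increasing affine map.
    x_fgsm = np.clip(x_uint8.astype(np.float32) + eps * np.sign(grad), 0, 255)
    z_fgsm = x_fgsm - x_uint8.astype(np.float32)
    return x_fgsm.astype(np.uint8), z_fgsm
```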

Figure 3: All 10 perturbation directions, shown at a severe perturbation scale for visibility. (a) S_Red_Box: the red channel of the center pixel. (b) S_Red_Dot: a randomly selected red channel. (c) M_Red_Dot: 288 randomly selected red channels. (d) M_RGB_Dot: all three color channels of 96 randomly selected pixels (96 × 3 = 288). (e) M_Red_Eye: 288 red channels around the eyes of the animals. (f) M_RGB_Box: all colors of a small centered rectangle (288 dimensions). (g) L_RGB_Box: all colors of a larger centered rectangle (30603 dimensions). (h) X_RGB_Box: all 268203 dimensions. (i) FGSM. (j) PGD.
Figure 4: Experiment procedure. The green, red and blue cells denote s-perturbation, adversarial, and guard trials, respectively. The letters P, M and C denote the panda, macaw and cat images, respectively.

Experimental procedure: See Figure 4. Each participant was first presented with instructions and then completed a sequence of 34 trials, of which 30 were s-perturbation or adversarial trials and 4 were guard trials. The order of these trials was randomized and then fixed (see figure). During each trial the participants were presented with an image. They were instructed to increase (decrease) the perturbation to this image using the right / left arrow keys or buttons. Moving right (left) incremented (decremented) δ by 1, and the participant was then presented with the new perturbed image x̃ = P(x + δz). We did not divulge the nature of the perturbations beforehand, nor the current perturbation scale δ the participant had applied to x at any step of the trial. The participants were instructed to submit the perturbed image when they thought it had become just noticeably different from the original image x. The participants had to hold x in memory, though they could also go all the way left (back to δ = 0) to see x again. We hosted the experiment using the NEXT platform jamieson2015next ; sievert2017next .

In an s-perturbation trial, the perturbation direction is one of the eight s-perturbations. We allowed the participants to vary δ within a fixed range chosen to avoid value cropping. If a participant was not able to detect any change even at the maximum δ, they were encouraged to "give up" (see figure).

In an adversarial trial, the perturbation direction is z_FGSM or z_PGD. We allowed the participants to increment δ indefinitely, though in practice no one went beyond the range shown in Figure 5.

The guard trials were designed to filter out participants who clicked through the experiment without performing the task. In a guard trial, we showed a novel fixed natural image (not the panda, macaw or cat) for small δ. Then, from δ = 20 onward, a highly noisy version of that image was displayed. An attentive participant should readily notice this sudden change and submit immediately. In our main analyses, we disregarded the guard trials.

Participants and data inclusion criterion: We enrolled 68 participants using Amazon Mechanical Turk buhrmester2011amazon master workers. A master worker is a person who has consistently displayed a high degree of success in performing a wide range of tasks. All participants used a desktop, laptop or tablet device; none used a mobile device, where the screen would be too small. On average the participants took minutes to finish the experiment. Each participant was paid . As mentioned before, we use the guard trials to identify inattentive participants. While the change happens at exactly δ = 20 in a guard trial, our data indicate a natural spread of participant submissions around 20 with sharp decays. We speculate that the spread was due to keyboard / mouse auto-repeat. We deemed a guard trial acceptable if the participant's submitted δ fell within a small window around 20, and deemed a participant inattentive if any one of their four guard trials fell outside the acceptable range. Only 42 out of 68 participants survived this stringent inclusion condition. All our analyses below are on these 42 participants.
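A minimal sketch of this inclusion rule: keep a participant only if all four guard-trial submissions fall inside an acceptance window around the change point. The window bounds and participant data below are placeholders, since the paper's exact bounds are not reproduced here.

```python
import numpy as np

WINDOW = (18, 30)  # assumed acceptance window around the guard change point of 20

def attentive(guard_submissions):
    """True iff all four guard-trial scales fall inside the acceptance window."""
    g = np.asarray(guard_submissions)
    return bool(np.all((g >= WINDOW[0]) & (g <= WINDOW[1])))

participants = {"worker_1": [20, 21, 22, 20], "worker_2": [20, 20, 55, 21]}  # toy data
kept = {pid: g for pid, g in participants.items() if attentive(g)}           # keeps worker_1
```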

To summarize the data: on each combination of natural image x and perturbation direction z, participant i gave us their individual perturbation scale δ_i. That is, the image x̃_i = P(x + δ_i z) is the one participant i thinks has a just-noticeable difference from x. We will call these the human JND images. We present box plots of the data in Figure 5. The perturbation directions z are indicated on the x-axis. The box plots (left y-axis) show the median, quartiles, and outliers of the participants' perturbation scales δ_i.

Because our participants could sometimes choose to "give up" if they did not notice a change, we have right-censored data on δ. All we know from a give-up trial is that the participant's JND scale exceeds the largest δ they were shown, not what larger value would cause the participant to notice a difference. In Figure 5 the blue bars (right y-axis) show the number of participants who chose to "give up". Not surprisingly, many participants failed to notice a difference along the S_Red_Box and S_Red_Dot perturbation directions. Because of the presence of censored data, in later sections we often employ the Kolmogorov-Smirnov test, a nonparametric test of distributional equality that can incorporate the censored data. There are 9 tests in total, including those in the appendix. To achieve a paper-wide significance level α, we perform Bonferroni correction for multiple tests, leading to an individual test level of α/9.
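The per-test decision rule can be sketched as follows with SciPy; the samples are toy placeholders, the paper-wide level α = 0.05 is an assumption, and the special handling of right-censored "give up" trials described above is omitted.

```python
from scipy.stats import ks_2samp

ALPHA, N_TESTS = 0.05, 9               # ALPHA is an assumed paper-wide level
per_test_level = ALPHA / N_TESTS       # Bonferroni-corrected level

delta_dir1 = [12, 15, 9, 20, 14, 11]   # JND scales along one direction (toy data)
delta_dir2 = [33, 40, 29, 35, 31, 38]  # JND scales along another direction (toy data)
stat, p_value = ks_2samp(delta_dir1, delta_dir2)
reject_null = p_value < per_test_level
```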

Figure 5: Summary of data for panda, macaw, cat, respectively

4 Pixel p-norms do not match human perception

4.1 Humans probably do not use pixel 0-norm, 1-norm, 2-norm, or ∞-norm

Let us start by assuming humans use the pixel 1-norm, i.e. p = 1. Implication 1 suggests the following procedure: for an original image x and any pair of perturbation directions z, z′, perturb x along the two directions separately until the images each become just noticeable to the ideal observer. Denote the resulting 1-norms by a = ‖x̃ − x‖_1 and a′ = ‖x̃′ − x‖_1 for the two resulting images x̃, x̃′. Then we must have a = a′. Conversely, if the equality fails on even one triple (x, z, z′), then implication 1 with p = 1, and consequently the central hypothesis with p = 1, is refuted.

Of course, we do not have the ideal observer. Instead, we have participants from the population. Starting from x along perturbation direction z, the i-th participant identifies their own just-noticeably-different image x̃_i. Under p = 1 this produces a number ‖x̃_i − x‖_1. The numbers from all participants form a sample (there can be identical values). Similarly, we obtain a sample for the other direction z′. Figure 6 (left) shows box plots of these 1-norm samples. If implication 1 with p = 1 were true, the medians (orange lines) would be at about the same height within the plot. Qualitatively this is not the case: the medians differ markedly across perturbation directions. We perform a statistical test.

Hypothesis test 1.

The null hypothesis H0 is: the two samples of JND 1-norms, obtained from two different perturbation directions on the same natural image, have the same distribution. A two-sample Kolmogorov-Smirnov (KS) test on our data yields a p-value below the Bonferroni-corrected level, rejecting H0.

Exactly the same reasoning applies if we assume humans use the pixel 2-norm or ∞-norm. Figure 6 (center) shows 2-norms of participant JND images for two perturbation directions, and Figure 6 (right) shows ∞-norms for two directions; in both cases the medians differ substantially. The full plots are in appendix Figure 11.

Hypothesis test 2.

H0: the two samples of JND 2-norms, obtained from two different perturbation directions on the same natural image, have the same distribution. A KS test yields a p-value below the corrected level, rejecting H0.

Hypothesis test 3.

H0: the two samples of JND ∞-norms, obtained from two different perturbation directions on the same natural image, have the same distribution. A KS test yields a p-value below the corrected level, rejecting H0.

Finally, p = 0 is refuted by noticing in Figure 5 that the M, L and X directions have vastly different 0-norms (sparsities s), yet each direction has its own nonzero scale δ that induces human JND. This contradicts implication 1 with p = 0, which predicts that changes are never noticeable below a 0-norm threshold and always noticeable above it, regardless of δ. Taken together, we have rejected implication 1 for p = 0, 1, 2 and ∞. This suggests that humans probably do not use the pixel 0-, 1-, 2-, or ∞-norm when they judge whether a perturbed image is different from its original.

Figure 6: Participant JND images' pixel norms (left: 1-norm; center: 2-norm; right: ∞-norm). If the central hypothesis were true, one expects each plot to have similar medians (orange lines).

4.2 Humans probably do not use any pixel p-norm

But what if humans use some other p-norm with 0 < p < ∞? Implication 1 requires a specific p to test, which is not convenient. Instead, we now test implication 2, whose failure refutes every p at once. We take the cat image x and look at the two perturbation directions z = M_Red_Dot and z′ = M_Red_Eye. These two perturbations have the same sparsity s = 288. Therefore, implication 2 predicts that the scales δ, δ′ at which they reach just-noticeable difference should be the same. However, the perturbation directions differ in their "shape of support": M_Red_Dot changes randomly scattered pixels, while M_Red_Eye changes pixels in the eye region, which presumably humans pay attention to and thus detect earlier. On perturbation direction z our participants produced one sample of scales; for the other direction z′ they produced another. See Figure 5 (right) for the human behaviors: the median scales are 62 and 37, respectively, as we suspected.

Hypothesis test 4.

H0: the human JND scales δ and δ′ have the same distribution, where x is the cat, z = M_Red_Dot, and z′ = M_Red_Eye. A KS test yields a p-value below the corrected level, rejecting H0.

We report more statistical tests in the appendix that further refute this and other implications. Taken together, these results indicate that pixel p-norms are not a good fit for human behavior regardless of p. There are probably other perceptual attributes, unaccounted for by pixel p-norms, that matter to humans.

5 Some measures other than pixel p-norms

We seek an alternative distance function D(x, x̃) (it does not need to be a metric) that matches human perception. That is, for human JND images x̃ ∈ JND(x), D should ideally satisfy D(x, x̃) = ε for some constant ε.

Figure 7: Box plots of different measures on human JND images (left: 1 − SSIM; center: DNN representation distance; right: EMD).

Earth mover's distance (EMD), also known as Wasserstein distance, is a distance function defined between two probability distributions on a given metric space. It computes the minimum cost of converting one distribution into the other. EMD has also been used as a distance in image space, e.g. for image retrieval rubner2000earth . Given two images x and x̃, EMD is calculated as EMD(x, x̃) = inf_{γ ∈ Γ(x, x̃)} E_{(u,v)∼γ} ‖u − v‖. Here, Γ(x, x̃) is the set of joint distributions whose marginals are x and x̃ (treated as histograms), respectively. We test D = EMD, which assumes that the same amount of earth moving in images corresponds to the same detectability by human perception. Figure 7 (right) shows the box plots of EMD for human JND images along different perturbation directions on one of the natural images (the full plots are in appendix Figure 12). It is immediately clear that on perturbation direction X_RGB_Box humans need to move a lot more earth, as measured by EMD, before they perceive the image difference. This should not happen: ideally, human JND would occur at the same EMD value across directions. The following test implies that EMD probably should not be used to define adversarial attack detectability.

Hypothesis test 5.

H0: the EMD values of human JND images along two different perturbation directions, on the same natural image, have the same distribution. A KS test yields a p-value below the corrected level, rejecting H0.
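One simple way to approximate EMD between two images, assuming each is reduced to its 1-D pixel-intensity histogram, is SciPy's Wasserstein distance; the paper does not state its exact EMD implementation, so this is only an illustrative simplification that ignores spatial structure.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def image_emd(x, x_tilde, bins=256):
    """EMD between the intensity histograms of two images (an illustrative simplification)."""
    hx, edges = np.histogram(x.ravel(), bins=bins, range=(0, 255))
    ht, _ = np.histogram(x_tilde.ravel(), bins=bins, range=(0, 255))
    centers = 0.5 * (edges[:-1] + edges[1:])
    return wasserstein_distance(centers, centers, u_weights=hx, v_weights=ht)
```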

Structural Similarity (SSIM) is intended to be a perceptual similarity measure that quantifies image quality loss due to compression wang2004image , and has been used as a signal fidelity measure with respect to humans in multiple works wang2009mean ; sheikh2006statistical . SSIM has three elements: luminance, contrast, and similarity of local structure. Given two images x and x̃, SSIM is defined by SSIM(x, x̃) = (2 μ_x μ_x̃ + c1)(2 σ_{x x̃} + c2) / ((μ_x² + μ_x̃² + c1)(σ_x² + σ_x̃² + c2)), where μ_x, μ_x̃ are the sample means; σ_x, σ_x̃ and σ_{x x̃} are the standard deviations and sample cross-correlation of x and x̃ (after subtracting the means), respectively; and c1, c2 are small stabilizing constants. To compute SSIM we use a fixed window size without Gaussian weights. Since SSIM is a similarity score, we define D = 1 − SSIM. Figure 7 (left) shows the box plots of 1 − SSIM of our participant data on one of the natural images (the full plots are in appendix Figure 13). The following test implies that 1 − SSIM probably should not be used to define adversarial attack detectability.

Hypothesis test 6.

H0: the 1 − SSIM values of human JND images along two different perturbation directions, on the same natural image, have the same distribution. A KS test yields a p-value below the corrected level, rejecting H0.
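A sketch of the 1 − SSIM measure using scikit-image follows the text's choices (fixed window, no Gaussian weighting); the specific window size of 7 is an assumption, since the paper's value is not given here.

```python
from skimage.metrics import structural_similarity

def one_minus_ssim(x, x_tilde):
    """1 - SSIM between two uint8 RGB images (illustrative parameter choices)."""
    score = structural_similarity(
        x, x_tilde,
        win_size=7,              # assumed window size
        gaussian_weights=False,  # no Gaussian weighting, as in the text
        channel_axis=-1,         # RGB images (requires a recent scikit-image)
        data_range=255,
    )
    return 1.0 - score
```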

Deep neural network (DNN) representation. Even though DNNs are designed with engineering goals in mind, studies comparing their internal representations to primate brains have found similarities kriegeskorte2015deep . Let f(x) denote the last-hidden-layer representation of an input image x in a DNN. We may define D(x, x̃) = ‖f(x̃) − f(x)‖_p as a potential distance for our purpose. We use Inception V3 representations. Figure 7 (center) shows the box plots of human JND images' DNN 2-norm along different perturbation directions on one of the natural images. The full plot for DNN p-norms and all animals is in appendix Figure 14. Interestingly, the human JND images along the adversarial perturbation directions (FGSM and PGD) have much larger DNN p-norm than those along the s-perturbation directions. As an example, on one image the human JND images along an s-perturbation direction have median DNN 2-norm 1.8, while those along an adversarial direction have median 13.6. The following test implies that a p-norm on the DNN representation probably should not be used to define adversarial attack detectability.

Hypothesis test 7.

H0: the DNN 2-norms of human JND images along an s-perturbation direction and along an adversarial direction, on the same natural image, have the same distribution. A KS test yields a p-value below the corrected level, rejecting H0.
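A sketch of the DNN-representation distance: embed both images with Inception V3 and take a p-norm of the difference. Using the global-average-pooled 2048-dimensional output as the "last hidden layer" is our assumption.

```python
import numpy as np
import tensorflow as tf

# Inception V3 without the classification head; pooling="avg" yields a 2048-d vector.
embed = tf.keras.applications.InceptionV3(weights="imagenet",
                                           include_top=False, pooling="avg")

def dnn_distance(x_uint8, x_tilde_uint8, p=2):
    """p-norm between Inception V3 embeddings of two 299x299 RGB images."""
    pre = tf.keras.applications.inception_v3.preprocess_input
    batch = pre(np.stack([x_uint8, x_tilde_uint8]).astype(np.float32))
    f = embed.predict(batch, verbose=0)               # shape (2, 2048)
    return float(np.linalg.norm(f[1] - f[0], ord=p))
```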

6 But which measure is a better approximation?

We emphasize that our human experiments do not support pixel p-norms, EMD, 1 − SSIM, or the DNN representation as the correct measure D. Nonetheless, some of them may be useful as computational approximations to human perception. As such, does our data suggest which measure offers the best approximation? While none of the measures exactly satisfies D(x, x̃) = ε on human JND images, the equality inspires the following idea: the best measure should minimize the standard deviation of D(x, x̃) over all human JND images x̃, because D(x, x̃) would have been a constant if the equality were true. However, different measures have vastly different scales (e.g., among p-norms alone, the all-1 vector in d dimensions has 1-norm d, 2-norm √d, and ∞-norm 1), making a direct comparison difficult. Instead, we normalize by the measure between the center image x and the zero vector 0 (an all-black image), and compute the standard deviation of D(x, x̃) / D(x, 0). The standard deviation is taken over all our human experiment data for a particular center image x, pooling all participants and all perturbation directions together and excluding "give ups". Figure 2 shows this normalized standard deviation for the different measures. For pixel p-norms it is presented as a function of p; EMD and 1 − SSIM appear as constant lines; and the DNN representation has values larger than 0.12 for all p and is thus not shown (see appendix Figure 15 for a version that includes DNN).
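The selection criterion of this section can be sketched as follows: for each candidate measure D, normalize D(x, x̃) by D(x, 0) and take the standard deviation over JND images. The function names and the commented usage are illustrative only.

```python
import numpy as np

def normalized_std(D, x, jnd_images):
    """Std. dev. of D(x, x_tilde) / D(x, 0) over a list of human JND images for x."""
    baseline = D(x, np.zeros_like(x))                  # 0 = the all-black image
    values = np.array([D(x, xt) for xt in jnd_images]) / baseline
    return float(values.std())

# Example usage with the pixel p-norm helper sketched earlier (illustrative):
# score_p3 = normalized_std(lambda a, b: pixel_p_norm(a, b, 3), x, jnd_images)
```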

Interestingly, by this criterion the pixel 3-norm is the best approximation of human JND judgments among the tested measures. We plot the pixel 3-norms of the human JND images in Figure 8. Compared to the pixel 1-, 2-, and ∞-norms in Figure 6, to EMD, 1 − SSIM, and the DNN p-norm in Figure 7, and to their full plots in the appendix, the medians of the pixel 3-norm (orange lines) are closer to having the same height. This qualitatively supports the pixel 3-norm as a better approximation than the other measures.

Figure 8: Participants' JND pixel 3-norms for panda, macaw, and cat, respectively.

7 Conclusion

Our behavioral experiment suggests that pixel p-norms, EMD, 1 − SSIM, and DNN representation p-norms do not match how humans judge just-noticeably-different images. Even though the pixel 3-norm is the closest approximation we tested, Figure 8 still shows significant variability. Future research is needed to identify better measures of cognitive response to image distortion, and to generalize our work to other domains such as audio and text.

Acknowledgments

The authors would like to thank Po-Ling Loh and Tim Rogers for helpful discussions. This work is supported in part by NSF 1545481, 1561512, 1623605, 1704117, 1836978, the MADLab AF Center of Excellence FA9550-18-1-0166, American Family Insurance, and the University of Wisconsin.

References

  • (1) Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.
  • (2) Michael Buhrmester, Tracy Kwang, and Samuel D Gosling. Amazon’s mechanical turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6(1):3–5, 2011.
  • (3) Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 3–14. ACM, 2017.
  • (4) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
  • (5) Gustav Theodor Fechner, Edwin Garrigues Boring, Davis H Howes, and Helmut E Adler. Elements of Psychophysics. Translated by Helmut E. Adler. Edited by Davis H. Howes And Edwin G. Boring, With an Introd. by Edwin G. Boring. Holt, Rinehart and Winston, 1966.
  • (6) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
  • (7) Laurent Itti and Christof Koch. Computational modelling of visual attention. Nature Reviews: Neuroscience, 2(3):194, 2001.
  • (8) Kevin G Jamieson, Lalit Jain, Chris Fernandez, Nicholas J. Glattard, and Rob Nowak. NEXT: A system for real-world development, evaluation, and application of active learning. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2656–2664. Curran Associates, Inc., 2015.
  • (9) Nikolaus Kriegeskorte. Deep neural networks: a new framework for modeling biological vision and brain information processing. Annual Review of Vision Science, 1:417–446, 2015.
  • (10) Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
  • (11) Yann LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
  • (12) Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
  • (13) Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1765–1773, 2017.
  • (14) Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574–2582, 2016.
  • (15) Ronald A Rensink. Change detection. Annual Review of Psychology, 53(1):245–277, 2002.
  • (16) Yossi Rubner, Carlo Tomasi, and Leonidas J Guibas. The earth mover’s distance as a metric for image retrieval. International Journal of Computer Vision, 40(2):99–121, 2000.
  • (17) Mahmoud Salamati, Sadegh Soudjani, and Rupak Majumdar. Perception-in-the-loop adversarial examples. arXiv preprint arXiv:1901.06834, 2019.
  • (18) Mahmood Sharif, Lujo Bauer, and Michael K. Reiter. On the suitability of lp-norms for creating and preventing adversarial examples. In The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security (CVPR Workshop), 2018.
  • (19) Hamid R Sheikh, Muhammad F Sabir, and Alan C Bovik. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Transactions on Image Processing, 15(11):3440–3451, 2006.
  • (20) Scott Sievert, Daniel Ross, Lalit Jain, Kevin Jamieson, Rob Nowak, and Robert Mankoff. Next: A system to easily connect crowdsourcing and adaptive data collection. In Proceedings of the 16th Python in Science Conference, pages 113–119, 2017.
  • (21) Daniel J Simons and Michael S Ambinder. Change blindness: Theory and consequences. Current Directions in Psychological Science, 14(1):44–48, 2005.
  • (22) Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.
  • (23) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
  • (24) Zhou Wang and Alan C Bovik. Mean squared error: Love it or leave it? a new look at signal fidelity measures. IEEE Signal Processing Magazine, 26(1):98–117, 2009.
  • (25) Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on Image Processing, 13(4):600–612, 2004.
  • (26) Jeremy M Wolfe. Visual search. Current Biology, 20(8):R346–R349, 2010.
  • (27) Xiaohui Zhang, Weisi Lin, and Ping Xue. Just-noticeable difference estimation with pixels in images. Journal of Visual Communication and Image Representation, 19(1):30–41, 2008.

Appendix A Supplemental materials

A.1 Further statistical tests

We also take the panda image x and look at the two perturbation directions z = M_Red_Dot and z′ = M_RGB_Dot. The directions again have the same sparsity s = 288; this time they also share the same "shape of support": the nonzero elements of both are randomly scattered over the image. The difference is that M_Red_Dot changes only the red color channel of 288 random pixels, while M_RGB_Dot changes all three channels of only 96 random pixels. Implication 2 again predicts that humans should reach just-noticeable difference at the same scale. But Figure 5 (left) suggests that humans are more sensitive to simultaneous changes to all RGB channels: the scales δ, δ′ have medians 57.5 and 31, respectively.

Hypothesis test 8.

The null hypothesis H0 is: the sample of scales generated from the panda with z = M_Red_Dot and the sample generated from the panda with z′ = M_RGB_Dot come from the same distribution. A two-sample Kolmogorov-Smirnov test on our data yields a p-value below the corrected level, rejecting H0.

These two tests refute implication 2. They already indicate that no p can make the central hypothesis true.

While implication 2 focuses on perturbation directions of the same sparsity, the next implication states that if one perturbation changes more dimensions than the other, it should achieve just-noticeable difference at a smaller perturbation scale. Again, this holds for every finite p > 0.

Implication 3.

For any x and any two s-perturbation directions z, z′ with sparsities s > s′, suppose there exist δ, δ′ such that P(x + δz) ∈ JND(x) and P(x + δ′z′) ∈ JND(x). Then δ < δ′.

To further strengthen our case, we also test implication 3, which states that it is easier to notice changes if the perturbation has larger support. This is mostly true, as seen in Figure 5: the median of δ generally decreases as the support size increases in the order S, M, L, X. However, there is a curious inversion on the panda between L_RGB_Box and X_RGB_Box: implication 3 predicts that the JND scale for L_RGB_Box should be larger than that for X_RGB_Box, but human behavior yields means of 6.5 and 9.9 (and medians 6 and 9.5), respectively: the other way around. The human data for these perturbations are not censored; Figure 5 also suggests they are close to normal in distribution. We therefore perform a one-tailed two-sample t-test with unequal variances.

Hypothesis test 9.

The null hypothesis H0 is: human JND scales generated from the panda with L_RGB_Box have the same mean as those generated from the panda with X_RGB_Box. The left-tailed alternative hypothesis H1 is: the former has a smaller mean. A one-tailed two-sample t-test with unequal variances on our data yields a p-value small enough to reject H0 and retain H1.

The test suggests that the inversion is statistically significant, thus refuting implication 3. We speculate that the inversion is due to the black-and-white panda making the L box boundary more prominent, see Figure 3(g).
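For reference, the one-tailed Welch test of hypothesis test 9 can be carried out with SciPy as sketched below; the two samples are toy placeholders, not the actual participant scales.

```python
from scipy.stats import ttest_ind

delta_L = [6, 7, 5, 8, 6, 7]      # JND scales for L_RGB_Box on the panda (toy data)
delta_X = [9, 10, 11, 9, 10, 8]   # JND scales for X_RGB_Box on the panda (toy data)
# H1: mean(delta_L) < mean(delta_X); the `alternative` argument requires SciPy >= 1.6.
stat, p_value = ttest_ind(delta_L, delta_X, equal_var=False, alternative="less")
```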

A.2 Additional figures

Figure 9: The three natural images: (a) panda, (b) macaw, (c) cat.
Figure 10: Photos used for guard trials along with their distorted version
Figure 11: Participant JND images' pixel p-norms for p = 1 (top row), 2 (middle row), and ∞ (bottom row); columns: panda, macaw, cat. Within a plot, each vertical box is for a perturbation direction z. The box plot depicts the median, quartiles, and outliers. If the central hypothesis were true, one expects a plot to have similar medians (orange lines).
Figure 12: Box plots of Earth Mover's Distance on human JND images for (a) panda, (b) macaw, (c) cat. Recall that for each natural image x and each perturbation direction z, each participant i decided which image x̃_i is JND to them. We compute EMD(x, x̃_i) for all participants and show them as a box plot. Doing so for all perturbation directions and all natural images produces this figure.
Figure 13: Box plots of 1 − SSIM on human JND images for (a) panda, (b) macaw, (c) cat.
Figure 14: Box plots of DNN representation p-norms on human JND images. Columns: panda, macaw, cat; rows: different values of p.
Figure 15: Variability of fit to human data including DNN representations, lower is better

A.3 Amazon Mechanical Turk instructions

For reference, screenshots of the instructions displayed to participants are included in this appendix.

Figure 16: Instruction Page 1
Figure 17: Instruction Page 2
Figure 18: Instruction Page 2 (cont’d)
Figure 19: Instruction Page 2 (cont’d)
Figure 20: Instruction Page 3