Large image datasets: A pyrrhic win for computer vision?

06/24/2020 ∙ by Vinay Uday Prabhu, et al. ∙ UnifyID Inc

In this paper we investigate problematic practices and consequences of large scale vision datasets. We examine broad issues such as the question of consent and justice as well as specific concerns such as the inclusion of verifiably pornographic images in datasets. Taking the ImageNet-ILSVRC-2012 dataset as an example, we perform a cross-sectional model-based quantitative census covering factors such as age, gender, NSFW content scoring, class-wise accuracy, human-cardinality-analysis, and the semanticity of the image class information in order to statistically investigate the extent and subtleties of ethical transgressions. We then use the census to help hand-curate a look-up-table of images in the ImageNet-ILSVRC-2012 dataset that fall into the categories of verifiably pornographic: shot in a non-consensual setting (up-skirt), beach voyeuristic, and exposed private parts. We survey the landscape of harm and threats both society broadly and individuals face due to uncritical and ill-considered dataset curation practices. We then propose possible courses of correction and critique the pros and cons of these. We have duly open-sourced all of the code and the census meta-datasets generated in this endeavor for the computer vision community to build on. By unveiling the severity of the threats, our hope is to motivate the constitution of mandatory Institutional Review Boards (IRB) for large scale dataset curation processes.




1 Introduction

Born from World War II and the haunting and despicable practices of Nazi-era experimentation [4], the 1947 Nuremberg code [108] and the subsequent 1964 Helsinki declaration [34] helped establish the doctrine of Informed Consent, which builds on the fundamental notions of human dignity and agency to control dissemination of information about oneself. This doctrine has shepherded data collection endeavors in the medical and psychological sciences concerning human subjects, including photographic data [71, 8], for the past several decades. A less stringent version of informed consent, broad consent, proposed in 45 CFR 46.116(d) of the Revised Common Rule [27], has recently been introduced; it still affords the basic safeguards towards protecting one's identity in large scale databases. However, in the age of Big Data, the fundamentals of informed consent, privacy, and agency of the individual have gradually been eroded. Institutions, academia, and industry alike amass millions of images of people without consent, and often for unstated purposes, under the guise of anonymization. These claims are misleading given that there is weak anonymity and privacy in aggregate data in general [72] and, more crucially, images of faces are not the type of data that can be aggregated. As can be seen in Table 1, several tens of millions of images of people are found in peer-reviewed literature. These images were obtained without the consent or awareness of the individuals, and without IRB approval for collection. In Section 5-B of [103], for instance, the authors nonchalantly state: “As many images on the web contain pictures of people, a large fraction (23%) of the 79 million images in our dataset have people in them”. With this background, we now focus on one of the most celebrated and canonical large scale image datasets: the ImageNet dataset.
From the questionable ways images were sourced, to troublesome labeling of people in images, to the downstream effects of training AI models using such images, ImageNet and large scale vision datasets (LSVD) in general constitute a Pyrrhic win for computer vision. We argue that this win has come at the expense of harm to minoritized groups and has further aided the gradual erosion of privacy, consent, and agency of both the individual and the collective.

1.1 ImageNet: A brief overview

Dataset                        Images (millions)   Categories (thousands)   Consensual images
JFT-300M [54]                  300+                18                       0
Open Images [63]               9                   20                       0
Tiny-Images [103]              79                  76                       0
Tencent-ML [113]               18                  11                       0
ImageNet-(21k, 11k, 1k) [90]   (14, 12, 1)         (22, 11, 1)              0
Places [117]                   11                  0.4                      0

Table 1: Large scale image datasets containing people's images

The emergence of the ImageNet dataset [24] is widely considered a pivotal moment (“The data that transformed AI research—and possibly the world”) in the Deep Learning revolution that transformed Computer Vision (CV), and Artificial Intelligence (AI) in general. Prior to ImageNet, computer vision and image processing researchers trained image classification models on small datasets such as CalTech101 (9k images), PASCAL-VOC (30k images), LabelMe (37k images), and the SUN dataset (131k images) (see slide 37 in [64]). ImageNet, with over 14 million images spread across 21,841 synsets, replete with 1,034,908 bounding box annotations, brought in an aspect of scale that was previously missing. A subset of 1.2 million images across 1000 classes was carved out from this dataset to form the ImageNet-1k dataset (popularly called ILSVRC-2012), which formed the basis for the Task-1 classification challenge in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). This challenge soon became widely touted as the Computer Vision Olympics. The vastness of this dataset allowed a Convolutional Neural Network (CNN) with 60 million parameters [62], trained by the SuperVision team from the University of Toronto, to usher in the rebirth of the CNN era (see [2]), which is now widely dubbed the AlexNet moment in AI.

Although ImageNet was created over a decade ago, it remains one of the most influential and powerful image databases available today. Its power and magnitude are matched by its unprecedented societal impact. Although an a posteriori audit might seem redundant a decade after its creation, ImageNet's continued significance, and the culture it has fostered for other large scale datasets, warrants an ongoing critical dialogue.

The rest of this paper is structured as follows. In Section 2, we cover related work that has explored the ethical dimensions that arise with LSVD. In Section 3, we describe the landscape of both the immediate and long term threats individuals and society as a whole encounter due to ill-considered LSVD curation. In Section 4, we propose a set of solutions which might assuage some of the concerns raised in Section 3. In Section 5, we present a template quantitative auditing procedure using the ILSVRC2012 dataset as an example and describe the data assets we have curated for the computer vision community to build on. We conclude with broad reflections on LSVDs, society, ethics, and justice.

2 Background and related work

The very declaration of a taxonomy brings some things into existence while rendering others invisible [10]. A gender classification system that conforms to essentialist binaries, for example, operationalizes gender in a cis-centric way, resulting in exclusion of non-binary and transgender people [61]. Categories simplify and freeze nuanced and complex narratives, obscuring the political and moral reasoning behind a category. Over time, the messy and contingent histories hidden behind a category are forgotten and trivialized [97]. With the adoption of taxonomy sources, image datasets inherit seemingly invisible yet profoundly consequential shortcomings. The dataset creation process, its implications for ML systems, and subsequently, the societal impact of these systems have attracted a substantial body of critique. We categorize this body of work into two complementary groups: the first is concerned with broad downstream effects, while the other concentrates mainly on the dataset creation process itself.

2.1 Broad critiques

The absence of critical engagement with canonical datasets disproportionately negatively impacts women, racial and ethnic minorities, and vulnerable individuals and communities at the margins of society [7]. For example, image search results both exaggerate stereotypes and systematically under-represent women in results for occupations [60]; object detection systems designed to detect pedestrians display higher error rates for demographic groups with dark skin tones [111]; and gender classification systems show disparities in image classification accuracy, where lighter-skin males are classified with the highest accuracy while darker-skin females suffer the most misclassification [14]. Gender classification systems that lean on binary and cis-genderist constructs operationalize gender in a trans-exclusive way, resulting in tangible harm to trans people [61, 93]. With a persistent trend where minoritized and vulnerable individuals and communities often disproportionately suffer the negative outcomes of ML systems, D'Ignazio and Klein [28] have called for a shift towards rethinking ethics not just as a fairness metric to mitigate the narrow concept of bias, but as a practice that results in justice for the most negatively impacted. Similarly, Kasy and Abebe [59] contend that perspectives which acknowledge existing inequality and aim to redistribute power are pertinent, as opposed to fairness-based perspectives. Such an understanding of ethics as justice requires a focus beyond ‘bias’ and ‘fairness’ in LSVDs, and requires questioning how images are sourced and labelled, and what it means for models to be trained on them. One of the most thorough investigations in this regard can be found in [22]. In this recent work, Crawford and Paglen present an in-depth critical examination of ImageNet, including the dark and troubling results of classifying people as if they were objects. Offensive and derogatory labels that perpetuate historical and current prejudices are assigned to people's actual images. The authors emphasise that not only are images scraped across the web appropriated as data for computer vision tasks, but the very act of assigning labels to people based on physical features raises fundamental concerns around reviving long-discredited pseudo-scientific ideologies of physiognomy [114].

2.2 Critiques of the curation phase

Within the dataset creation process, taxonomy sources pass on their limitations and problematic underlying assumptions. The adoption of underlying structures presents a challenge where, without critical examination of the architecture, ethically dubious taxonomies are inherited. This has been one of the main challenges for ImageNet, given that the dataset is built on the backbone of WordNet's structure. Acknowledging some of the problems, the authors from the ImageNet team recently attempted to address [115] the stagnant concept vocabulary of WordNet. They admitted that only 158 out of the 2,832 existing synsets should remain in the person sub-tree. They also took into account the imageability of the synsets and the skewed representation in the images pertaining to the image retrieval phase. Nonetheless, some serious problems remain untouched, which motivates us to address in greater depth the overbearing presence of the WordNet effect on image datasets.

2.3 The WordNet Effect

ImageNet is not the only large scale vision dataset that has inherited the shortcomings of the WordNet taxonomy. The 80 Million Tiny Images dataset [103], which grandfathered the CIFAR-10/100 datasets, and the Tencent ML-Images dataset [113] followed the same path. Unlike ImageNet, these datasets have never been audited or scrutinized (in response to the mainstream media covering a pre-print of this work, we were informed that the curators have since withdrawn the Tiny Images dataset with an accompanying note), and some of the sordid results of the inclusion of ethnophaulisms in the Tiny Images dataset's label space are displayed in Figure 3. The figure shows both the number of images in a subset of the offensive classes (sub-figure (a)) and exemplar images (sub-figure (b)) from the noun-class labelled n****r (due to its offensiveness, we have censored this word, and other words throughout the paper; however, it remains uncensored on the website at the time of writing), a fact that serves as a stark reminder that a great deal of work remains to be done by the ML community at large.

(a) Class-wise counts of the offensive classes
(b) Samples from the class labelled n****r
Figure 3: Results from the 80 Million Tiny Images dataset exemplifying the toxicities of its label space

Similarly, we found that at least 315 of the 1,593 classes deemed non-imageable by the ImageNet curators in [115] are still retained in the Tencent-ML-Images dataset, including image classes such as [transvestite, bad person, fornicatress, orphan, mamma's boy, and enchantress].

Finally, the labeling and validation stages of the curation process also present ethical challenges. Recent work such as [44] has explored the intentionally hidden labour, termed Ghost Work, behind such tasks. Image labeling and validation require the use of crowd-sourced platforms such as MTurk, often contributing to the exploitation of underpaid and undervalued gig workers. Within the topic of image labeling, but with a different focus, recent work such as [104] and [6] has examined the shortcomings of the human-annotation procedures used during ImageNet dataset curation. These shortcomings, the authors point out, include the single-label-per-image procedure, which causes problems given that real-world images often contain multiple objects, and inaccuracies due to “overly restrictive label proposals”.

3 The threat landscape

In this section, we survey the landscape of harm and threats, both immediate and long term, that emerge with dataset curation practices in the absence of careful ethical considerations and anticipation for negative societal consequences. Our goal here is to bring awareness to the ML and AI community regarding the severity of the threats and to motivate a sense of urgency to act on them. We hope this will result in practices such as the mandatory constitution of Institutional Review Boards (IRB) for large scale dataset curation processes.

3.1 The rise of reverse image search engines, loss of privacy, and the blackmailing threat

Large image datasets, when built without careful consideration of societal implications, pose a threat to the welfare and well-being of individuals. Most often, vulnerable people and marginalised populations pay a disproportionately high price. Reverse image search engines that allow face search, such as [1], have become remarkably and worryingly efficient in the past year. For a small fee, anyone can use their portal or their API (please refer to the supplementary material in Appendix A for screenshots) to run an automated process that uncovers the “real-world” identities of the humans of the ImageNet dataset. In societies where sex work is socially condemned or legally criminalized, for example, re-identification of a sex worker through image search bears a real danger for the individual victim. Harmful practices such as revenge porn are part of a broader continuum of image-based sexual abuse [66]. To further emphasize this specific point, many of the images in classes such as maillot, brassiere, and bikini contain instances of beach voyeurism and other non-consensual cases of digital image gathering (covered in detail in Section 5). We were (unfortunately) easily able to map the victims, most of whom are women, in these pictures to “real-world” identities of people belonging to a myriad of backgrounds, including teachers, medical professionals, and academic professors, using reverse image search engines such as [80]. Paying heed to the possibility of the Streisand effect (a social phenomenon in which “an attempt to hide, remove, or censor information has the unintended consequence of further publicizing that information, often via the Internet” [110]), we took the decision not to divulge any further quantitative or qualitative details on the extent or the location of such images in the dataset, besides alerting the curators of the dataset(s) and making a passionate plea to the community not to underestimate the severity of this particular threat vector.

3.2 The emergence of even larger and more opaque datasets

The attempt to build computer vision has been gradual and can be traced as far back as 1966, to Papert's The Summer Vision Project [76], if not earlier. However, ImageNet, with its vast amounts of data, has not only erected a canonical landmark in the history of AI, it has also paved the way for even bigger, more powerful, and suspiciously opaque datasets. The lack of scrutiny of the ImageNet dataset by the wider computer vision community has only served to embolden institutions, both academic and commercial, to build far bigger datasets without scrutiny (see Table 1). Various highly cited and celebrated papers in recent years [54, 16, 11, 100], for example, have used the unspoken unicorn amongst large scale vision datasets: the JFT-300M dataset [?] (we have purposefully left the ‘?’ in place and plan to revisit it only after the dataset's creator(s) publish the details of its curation). This dataset is inscrutable and operates in the dark, to the extent that there has not even been official communication as to what JFT-300M stands for. All that the ML community knows is that it purportedly boasts more than 300M images spread across 18k categories. The open source variants of this, Open Images V4-5-6 [63], contain a subset of 30.1M images covering 20k categories (along with an extension dataset of 478k crowd-sourced images across more than 6,000 categories). While parsing through some of these images, we found verifiably non-consensual images of children (we performed verification with the uploader of the image via the shared Flickr link) that were siphoned off of Flickr, hinting at the prevalence of similar issues in JFT-300M, from which this was sourced.
Besides the other large datasets in Table 1, there are cases such as the CelebA-HQ dataset, a heavily processed dataset whose grey-box curation process appears only in Appendix C of [58], where no clarification is provided on the “frequency based visual quality metric” used to sort the images by quality. Benchmarking any downstream algorithm on such an opaque, biased, and (semi-)synthetic dataset will only result in controversial scenarios such as [68], where the authors had to hurriedly incorporate addendums admitting biased results. Hence, it is important to re-emphasize that the existence and use of such datasets bear direct and indirect impact on people, given that decision making on social outcomes increasingly leans on ubiquitously integrated AI systems trained and validated on such datasets. Yet, despite such profound consequences, critical questions such as where the data comes from or whether the images were obtained consensually are hardly considered part of the LSVD curation process.

The more nuanced and perhaps indirect impact of ImageNet is the culture it has cultivated within the broader AI community; a culture in which the appropriation of images of real people as raw material, free for the taking, has come to be perceived as the norm. This norm and lack of scrutiny have played a role in the creation of monstrous and secretive datasets without much resistance, prompting further questions such as: what other secretive datasets currently exist, hidden and guarded under the guise of proprietary assets? Current work that has sprung out of secretive datasets, such as Clearview AI [53] (a US-based privately owned technology company that provides facial recognition services to various customers, including North American law enforcement agencies; with more than 3 billion photos scraped from the web, the company operated in the dark until its services to law enforcement were reported in late 2019), points to a deeply worrying and insidious threat not only to vulnerable groups but also to the very meaning of privacy as we know it [57].

3.3 The Creative Commons fallacy

In May 2007, the iconic case of Chang versus Virgin Mobile (the school girl, the billboard, and Virgin) [19] unraveled in front of the world, leading to widespread debate on the uneasy relationship between personal privacy, consent, and image copyright, and initiating a substantial corpus of academic debate (see [20, 21, 52, 15]). A Creative Commons license addresses only copyright issues, not privacy rights or consent to use images for training. Yet many of the efforts beyond ImageNet, including the Open Images dataset [63], have been built on top of this Creative Commons loophole, which large scale dataset curation agencies interpret as a free-for-all, consent-included green flag. This, we argue, is fundamentally fallacious, as is evinced in the views presented in [69] by the Creative Commons organization: “CC licenses were designed to address a specific constraint, which they do very well: unlocking restrictive copyright. But copyright is not a good tool to protect individual privacy, to address research ethics in AI development, or to regulate the use of surveillance tools employed online.” Datasets culpable of this CC-BY heist, such as MegaFace and IBM's Diversity in Faces, have now been deleted in response to the ensuing investigations (see [31] for a survey), lending further support to the Creative Commons fallacy.

3.4 Blood diamond effect in models trained on this dataset

Akin to the nexus between ivory carving and illegal poaching, or between diamond jewelry art and blood diamonds, we posit that there is a similar moral conundrum at play here that affects all downstream applications entailing models trained on a tainted dataset. Often, these transgressions can be rather subtle. In this regard, we pick an exemplar field of application that on the surface appears to be low risk: neural generative art. Neural generative art created using tools such as BigGAN [11] and Artbreeder [95], which in turn use pre-trained deep-learning models trained on ethically dubious datasets, bears the downstream burden of the problematic residues of non-consensual image siphoning (please refer to Section B.5 of the appendix, where we demonstrate one such real-world experiment entailing unethically generated neural art, replete with responses obtained from human critics on what they felt about the imagery being displayed), thus running afoul of the Wittgensteinian edict that ethics and aesthetics are one and the same [33]. We also note that there is a privacy-leakage facet to this downstream burden. In the context of face recognition, works such as [96] have demonstrated that CNNs with high predictive power unwittingly accommodate accurate extraction of subsets of the facial images that they were trained on, thus abetting dataset leakage (we would especially like to highlight the ground-breaking work of the project [46] on datasets used to train such facial recognition systems).

3.5 Perpetuation of unjust and harmful stereotypes

Finally, zooming out and taking a broad perspective allows us to see that the very practice of embarking on classification, taxonomization, and labeling tasks endows the classifier with the power to decide what is a legitimate, normal, or correct way of being, acting, and behaving in the social world [10]. For any given society, what comes to be perceived as normal or acceptable is often dictated by dominant ideologies. Systems of classification, which operate within a power-asymmetrical social hierarchy, necessarily embed and amplify historical and cultural prejudices, injustices, and biases [97]. In western societies, “desirable”, “positive”, and “normal” characteristics and ways of being are constructed and maintained in ways that align with the dominant narrative, giving advantage to those who fit the status quo. Groups and individuals on the margins, on the other hand, are often perceived as the “outlier” and the “deviant”. Image classification and labelling practices, without the necessary precautions and awareness of these problematic histories, pick up these stereotypes and prejudices and perpetuate them [74, 73, 35]. AI systems trained on such data amplify and normalize these stereotypes, inflicting unprecedented harm on those who are already on the margins of society. While the ImageNet team did initiate strong efforts towards course correction [115], the Tiny Images dataset still contains harmful slurs and offensive labels. And worse, we remain in the dark regarding the secretive and opaque LSVDs.

Figure 4: Class-wise cross-categorical scatter-plots across the cardinality, age and gender scores
Figure 5: Statistics and locationing of the hand-labelled images
Figure 6: Known human co-occurrence based gender-bias analysis
Figure 7: Dataset audit card for the ImageNet dataset

4 Candidate solutions: The path ahead

Decades of work within the fields of Science and Technology Studies (STS) and the Social Sciences show that there is no single straightforward solution to most of the wider social and ethical challenges that we have discussed [99, 5, 28]. These challenges are deeply rooted in social and cultural structures and form part of the fundamental social fabric. Feeding AI systems on the world's beauty, ugliness, and cruelty, but expecting it to reflect only the beauty, is a fantasy [5]. These challenges and tensions will exist as long as humanity continues to operate. Given the breadth of the challenges we have outlined, any attempt at a quick fix risks concealing the problem and providing a false sense of solution. The idea of a complete removal of biases, for example, might in reality simply hide them out of sight [43]. Furthermore, many of the challenges (bias, discrimination, injustice) vary with context, history, and place; they are concepts that continually shift and change, constituting a moving target [7]. The pursuit of a panacea in this context, therefore, is not only unattainable but also misguided. Having said that, there are remedies that can be applied to overcome the specific harms that we have discussed in this paper, and these may eventually play constituent roles in improving the wider and bigger social and structural issues in the long run.

4.1 Remove, replace, and open strategy

In [115], the authors concluded that, within the person sub-tree of the ImageNet dataset, 1,593 of the 2,832 people categories were potentially offensive and planned to “remove all of these from ImageNet”. We strongly advocate a similar path for the offensive noun classes in the Tiny Images dataset that we identified in Section 2, as well as for the images in the ImageNet-ILSVRC-2012 dataset that fall into the categories of verifiably pornographic (we use the term verifiably to denote only those NSFW images that were hand-annotated by volunteers, indicating that they also contained textual context of pornographic phraseology; an example grid of these images is in the Appendix), shot in a non-consensual setting (up-skirt), beach voyeuristic, and exposed genitalia. In cases where the image category is retained but the images are not, the option arises of replacement with consensually shot, financially compensated images. It is possible that some of the people in these images might come forward to consent and contribute their images in exchange for fair financial compensation, credit, or out of sheer altruism [12]. We re-emphasize that our consternation focuses on the non-consensual aspect of the images, not on the category-class or the ensuing content of the images in it. This solution, however, brings forth further questions: does this make image datasets accessible only to those who can afford it? Will we end up with a pool of images drawn predominantly from financially disadvantaged participants?

Science is self-correcting so long as it is accessible and open to critical engagement. We have tried to engage critically and map actionable ways forward given what we know of these LSVDs. The secretive and opaque LSVDs, however, tread dangerous territory, given that they directly or indirectly impact society yet remain hidden and inaccessible. Although the net benefit of the open science movement remains controversial, we strongly contend that making LSVDs open and accessible allows audits of these datasets, which is a first step towards responsible scientific endeavour.

4.2 Automated downstream removal from reverse search engines that allow for image deletion requests

We found that some reverse image search engines allow users to remove a “particular image from our [sic] index” via their “Report abuse” portals. This allows dataset auditors to enlist images in their dataset(s) containing identifiable individuals and direct them through a guided image-removal process on the reverse image search engine(s), in order to mitigate some aspects of immediate harm.

4.3 Differentially private obfuscation of the faces

This path entails harnessing techniques such as DP-Blur [36], with quantifiable privacy guarantees, to obfuscate the identity of the humans in the image. The Inclusive Images challenge [94], for example, already incorporated blurring during dataset curation and addressed the downstream effects on the predictive power of models trained on the blurred versions of the curated dataset. We believe that replicating this template, with clear avenues for recourse in case an erroneously non-blurred image is sighted by a researcher, would be a step in the right direction for the community at large.
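As a rough illustration of what such obfuscation involves, the sketch below implements a simplified differentially private pixelation on a grayscale image represented as a list of lists: pixel values are block-averaged and Laplace noise, calibrated to the per-block sensitivity and a privacy budget epsilon, is added. This is an illustrative approximation in the spirit of the technique cited above, not the actual DP-Blur algorithm of [36]; the block size and epsilon values are arbitrary choices for demonstration.

```python
import math
import random

def dp_pixelate(image, block=4, epsilon=0.5, pixel_max=255):
    """Block-average a grayscale image (list of rows) and add Laplace
    noise scaled to the per-block sensitivity; a simplified sketch of
    differentially private pixelation, not the exact DP-Blur method."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    # Sensitivity: changing one pixel moves a block mean by at most
    # pixel_max / block^2, so the Laplace scale is sensitivity / epsilon.
    scale = pixel_max / (block * block * epsilon)
    for bi in range(0, h, block):
        for bj in range(0, w, block):
            cells = [(i, j) for i in range(bi, min(bi + block, h))
                            for j in range(bj, min(bj + block, w))]
            mean = sum(image[i][j] for i, j in cells) / len(cells)
            # Laplace noise via inverse-CDF sampling (clamped away from
            # the log-singularity at |u| = 0.5).
            u = min(max(random.random() - 0.5, -0.499999), 0.499999)
            noisy = mean - scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))
            val = min(pixel_max, max(0, noisy))
            for i, j in cells:
                out[i][j] = val
    return out
```

With a large epsilon the noise is negligible and the output reduces to plain pixelation; smaller budgets trade fidelity for stronger privacy.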

4.4 Synthetic-to-real and Dataset distillation

The basic idea here is to use (or augment with) synthetic images in lieu of real images during model training. Approaches include using hand-drawn sketch images (ImageNet-Sketch [106]), using GAN-generated images [29], and techniques such as dataset distillation [107], where a dataset or a subset of a dataset is distilled down to a few representative synthetic samples. This is a nascent field with promising results emerging in unsupervised domain adaptation across visual domains [78] and universal digit classification [83].

4.5 Ethics-reinforced filtering during the curation

The specific ethical transgressions that emerged during our longitudinal analysis of ImageNet could have been prevented had explicit instructions been provided to the MTurkers during the dataset curation phase to enable filtering of these images at the source (see Fig. 9 in [87] for an example). We hope ethics checks become an integral part of the user interface deployed during the humans-in-the-loop validation phase of future dataset curation endeavors.

4.6 Dataset audit cards

As emphasized above, context is crucial in determining whether a certain dataset is ethical or problematic, as it provides vital background information, and datasheets are an effective way of providing that context. Much along the lines of model cards [70] and datasheets for datasets [41], we propose the dissemination of dataset audit cards. These allow large scale image dataset curators to publish the goals, curation procedures, known shortcomings, and caveats alongside their dataset dissemination. In Figure 7, we present an example dataset audit card for the ImageNet dataset, curated using the quantitative analyses carried out in Section 5.
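To make the idea concrete, a dataset audit card could be represented as a small structured record alongside the dataset release. The sketch below is a hypothetical minimal schema, loosely modeled on model cards [70] and datasheets for datasets [41]; the field names and values are illustrative, not a proposed standard.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class DatasetAuditCard:
    """Hypothetical minimal audit-card schema; fields are illustrative."""
    name: str
    curation_goal: str
    sourcing_method: str
    consent_obtained: bool
    known_shortcomings: list = field(default_factory=list)
    audit_metrics: dict = field(default_factory=dict)

# Example card: values summarise findings reported in this paper.
card = DatasetAuditCard(
    name="ImageNet-ILSVRC-2012",
    curation_goal="Large scale object classification benchmark",
    sourcing_method="Web-scraped images, crowd-sourced labels",
    consent_obtained=False,
    known_shortcomings=["NSFW content", "non-consensual imagery",
                        "skewed age and gender representation"],
    audit_metrics={"n_classes": 1000, "n_images_with_persons": 83436},
)
```

Serialising the card (e.g. via `asdict`) makes it easy to publish alongside the dataset and to diff across versions as shortcomings are remediated.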

5 Quantitative dataset auditing: ImageNet as a template

file_name shape file_contents
df_insightface_stats.csv (1000, 30) 24 classwise statistical parameters obtained by running the InsightFace model ([45]) on the ImageNet dataset
df_audit_age_gender_dex.csv (1000, 12) 11 classwise (ordered by the wordnet-id) statistical parameters obtained from the json files (of the DEX paper) [89]
df_nsfw.csv (1000, 5) The mean and std of the NSFW scores of the train and val images arranged per-class. (Unnamed: 0: WordNetID of the class)
df_acc_classwise_resnet50.csv (1000, 7) Classwise accuracy metrics (& the image level preds) obtained by running the ResNet50 model on ImageNet train and Val sets
df_acc_classwise_NasNet_mobile.csv (1000, 7) Classwise accuracy metrics (& the image level preds) obtained by running the NasNet model on ImageNet train and Val sets
df_imagenet_names_umap.csv (1000, 5) Dataframe with 2D UMAP embeddings of the Glove vectors of the classes of the ImageNet dataset
df_census_imagenet_61.csv (1000, 61) The MAIN census dataframe covering class-wise metrics across 61 parameters, all of which are explained in df_census_columns_interpretation.csv
df_census_columns_interpretation.csv (61, 2) The interpretations of the 61 metrics of the census dataframe above
df_hand_survey.csv (61, 3) Dataframe containing the details of the 61 images unearthed via hand survey (do not pay heed to the recurrence of 61; it is a mere coincidence)
df_classes_tiny_images_3.csv (75846, 3) Dataframe containing the class_ind, class_name (wordnet noun) and n_images
df_dog_analysis.csv (7, 4) Dataframe containing breed, gender_ratio and survey results from the paper ‘Breed differences in canine aggression’
Table 2: Meta datasets curated during the audit processes

We performed a cross-categorical quantitative analysis of ImageNet to assess the extent of the ethical transgressions and the feasibility of model-annotation based approaches. This resulted in an ImageNet census, entailing both image-level and class-level analysis across the different metrics (see supplementary section) covering Count, Age and Gender (CAG), NSFW-scoring, semanticity of class labels, and accuracy of classification using pre-trained models. We have distilled the important revelations of this census into the dataset audit card presented in Figure 7. The audit also entailed a human-in-the-loop hybrid approach that used pre-trained-model annotations (along the lines of [30, 115]) to segment the large dataset into smaller sub-sets, which were then hand-labelled to generate two lists covering 62 misogynistic images and 30 image-classes with co-occurring children. We used the DEX [89] and InsightFace [45] pre-trained models to generate the cardinality, gender-skewness, and age-distribution results captured in Figure 4. (While harnessing these pre-trained gender classification models, we would like to strongly emphasize that the specific models, and the problems they were intended to solve, stand on ethically dubious grounds themselves when taken in isolation. In this regard, we strongly concur with previous work such as [109] that gender classification based on the appearance of a person in a digital image is both scientifically flawed and a technology that bears a high risk of systemic abuse.) This resulted in the discovery of 83,436 images with persons, encompassing 101,070 to 132,201 individuals, thus constituting a sizeable fraction of the dataset. Further, we munged together gender, age, class semanticity (obtained using GloVe embeddings [79] on the labels), and NSFW content-flagging information from the pre-trained NSFW-MobileNet-v2 model [40] to help perform a guided search for misogynistic, consent-violating transgressions.
This resulted in the discovery of five dozen plus images (listed in df_hand_survey.csv) across four categories: beach-voyeur photography, exposed private parts, verifiably pornographic, and upskirt, in the following classes: 445-bikini, 638-maillot, 639-tank suit, 655-miniskirt and 459-brassiere (see Figure 5). Lastly, we harnessed literature spanning from dog-ownership bias ([55, 86]) to the gendering of musical instruments ([112, 13]) to generate the analysis of subtler forms of human co-occurrence-based gender bias in Figure 6.
Captured in Table 2 are the details of the CSV-formatted data assets curated for the community to build on. The CAG statistics are covered in df_insightface_stats.csv and df_audit_age_gender_dex.csv. Similarly, we have also curated NSFW-scoring (df_nsfw.csv), accuracy (df_acc_classwise_resnet50/_NasNet_mobile.csv) and semanticity (df_imagenet_names_umap.csv) datasets. df_census_imagenet_61.csv contains the 61 cumulative parameters for each of the 1000 classes (with their column interpretations in df_census_columns_interpretation.csv). We have duly open-sourced these meta-datasets along with 14 tutorial-styled Jupyter notebooks (spanning both the ImageNet and Tiny-Images datasets) for community access.
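A minimal pandas sketch of how these per-class assets can be joined on the WordNet ID and filtered, here with a few mocked rows mirroring the schemas in Table 2 (the column names and the 0.5 threshold are our illustrative assumptions, not values from the audit):

```python
import pandas as pd

# Mocked rows standing in for df_nsfw.csv and df_audit_age_gender_dex.csv.
df_nsfw = pd.DataFrame({
    "wordnet_id": ["n02837789", "n03770439", "n02084071"],
    "mean_nsfw_train": [0.859, 0.619, 0.002],
})
df_age_gender = pd.DataFrame({
    "wordnet_id": ["n02837789", "n03770439", "n02084071"],
    "mean_gender_audit": [0.18, 0.19, 0.52],
    "mean_age_audit": [24.89, 29.95, 31.10],
})

# Join the per-class metrics on the WordNet ID, then shortlist classes whose
# mean NSFW score crosses an (arbitrary, illustrative) threshold.
census = df_nsfw.merge(df_age_gender, on="wordnet_id", how="inner")
flagged = census[census["mean_nsfw_train"] > 0.5]
print(sorted(flagged["wordnet_id"]))  # → ['n02837789', 'n03770439']
```

The same merge pattern extends to the full 61-column census in df_census_imagenet_61.csv.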

Metric Models used
Count, Age and Gender DEX [89], InsightFace [45], RetinaFace [26], ArcFace [25]
NSFW-scoring NSFW-MobileNet-V2-224 [40]
Semanticity GloVe [79], UMAP [67]
Classification Accuracy ResNet-50 [47], NasNet-mobile [118]
Table 3: Metrics considered and pre-trained models used
132,201 80,340 3,096 97,678 3,392 26,195 71,439 645 2,307
Table 4: Humans of the ImageNet dataset: how many? (Key: O: Overall, W: Women, M: Men)
class_number label mean_gender_audit mean_age_audit mean_nsfw_train
445 bikini, two-piece 0.18 24.89 0.859
638 maillot 0.18 25.91 0.802
639 maillot, tank suit 0.18 26.67 0.769
655 miniskirt, mini 0.19 29.95 0.62
459 brassiere, bra, bandeau 0.16 25.03 0.61
Table 5: Table of the 5 classes for further investigation that emerged from the NSFW analysis

6 Conclusion and discussion

We have sought to draw the attention of the machine learning community towards the societal and ethical implications of large scale datasets, such as the problem of non-consensual images and the oft-hidden problems of categorizing people. ImageNet has been championed as one of the most incredible breakthroughs in computer vision, and AI in general. We indeed celebrate ImageNet's achievement and recognize the creators' efforts to grapple with some ethical questions. Nonetheless, ImageNet as well as other large image datasets remain troublesome. In hindsight, perhaps the ideal time to have raised ethical concerns regarding LSVD curation would have been in 1966, at the birth of The Summer Vision Project [76]. The right time after that was when the creators of ImageNet embarked on the project to “map out the entire world of objects”. Nonetheless, these are crucial conversations that the computer vision community needs to engage with now, given the rapid democratization of image scraping tools ([92, 91, 105]) and dataset-zoos ([56, 102, 84]). Continued silence will only serve to cause more harm than good in the future. In this regard, we have outlined a few solutions, including audit cards, that can be considered to ameliorate some of the concerns raised. We have also curated meta-datasets and open-sourced the code to carry out quantitative auditing using the ILSVRC2012 dataset as a template. However, we posit that the deeper problems are rooted in the wider structural traditions, incentives, and discourse of a field that treats ethical issues as an afterthought; a field where in the wild is often a euphemism for without consent. We are up against a system that has veritably mastered ethics shopping, ethics bluewashing, ethics lobbying, ethics dumping, and ethics shirking [38].

Within such an ingrained tradition, even the most thoughtful scholar can find it challenging to pursue work outside the frame of the “tradition”. Consequently, radical ethics that challenge deeply ingrained traditions need to be incentivised and rewarded in order to bring about a shift in culture that centres justice and the welfare of disproportionately impacted communities. We urge the machine learning community to pay close attention to the direct and indirect impact of our work on society, especially on vulnerable groups. Awareness of the historical antecedents and the contextual and political dimensions of current work is imperative in this regard. We hope this work contributes to raising awareness regarding the need to cultivate a justice-centred practice and motivates the constitution of IRBs for large scale dataset curation processes.

7 Acknowledgements

This work was supported, in part, by Science Foundation Ireland grant 13/RC/2094 and co-funded under the European Regional Development Fund through the Southern & Eastern Regional Operational Programme to Lero, the Irish Software Research Centre.

The authors would like to thank Alex Hanna, Andrea E. Martin, Anthony Ventresque, Elayne Ruane, John Whaley, Mariya Vasileva, Nicolas Le Roux, Olivia Guest, Os Keyes, Reubs J. Walsh, Sang Han, and Thomas Laurent for their useful feedback on an earlier version of this manuscript.

Appendix A Risk of privacy loss via reverse search engines

As covered in the main paper, reverse image search engines that facilitate face search, such as [1], have become remarkably and worryingly efficient in the past year. For a small fee, anyone can use their portal or their API to run an automated process and uncover the “real-world” identities of the humans of the ImageNet dataset. While people of all genders in the ImageNet dataset are exposed to this risk, the risk is asymmetric, as the high-NSFW classes such as bra, bikini and maillot are often the ones with a higher female-to-male ratio (see Figure 15). Figure 8 showcases a snapshot image of one such reverse image search portal to demonstrate how easy it is for anyone to access the GUI and uncover the “real world” identities of people, which can lead to catastrophic downstream risks such as blackmail and other forms of online abuse.

Figure 8: Snapshot of a popular reverse image search website

Appendix B Quantitative auditing

In this section, we cover the details of performing the quantitative analysis on the ImageNet dataset, including the following metrics: person CAG (Count - Age - Gender), NSFW scoring of the images, semanticity, and classification accuracy. The pre-trained models used in this endeavor are covered in Table 3. All of these analyses and the generated meta-datasets have been open-sourced. Figure 9 covers the details of all the Jupyter notebooks authored to generate the datasets covered in Table 2.

Figure 9: Visualization of all the notebooks and dataset assets curated during the quantitative analysis

b.1 Count, Age and Gender

Figure 10: An example image with the output bounding boxes and the confidence scores of the humans detected in the image by the DEX model([89])

In order to perform a human-centric census covering metrics such as count, age, and gender, we used the InsightFace toolkit for face analysis [45], which provides implementations of ArcFace for deep face recognition [25] and RetinaFace for face localisation (bounding-box generation) [26]. We then combined the results of these models with the results obtained from [30], which used the DEX [89] model. The results are shown in Table 4, which captures the summary statistics for the ILSVRC2012 dataset. In this table, the lower-case entries denote the number of images with persons identified in them whereas the upper-case entries indicate the number of persons (the difference is simply on account of more than one person being identified by the model in a given image). The superscript indicates the algorithm used (DEX or InsightFace (if)), whereas the subscript has two fields: the train-or-validation subset indicator and the census gender-category. For example, an entry of 3,096 in the InsightFace validation column implies that there were 3,096 images in the ImageNet validation set (out of 50,000) where the InsightFace models were able to detect a person's face.

As shown, the InsightFace model identified 101,070 persons across 83,436 images (including the train and validation subsets), which puts the prevalence rate of persons whose presence in the dataset exists sans explicit consent lower than that predicted by the DEX model (which focussed on the training subset and has a higher identification false-positive rate). An example of this can be seen in Fig 10, which showcases an example image with the bounding boxes of the detected persons in the image.

Much akin to [30], we found a strong bias towards (relatively older) male presence (73,746 men with a mean age of 33.24, compared to 26,840 women with a mean age of 25.58). At this juncture, we would like to re-emphasize that these high-accuracy pre-trained models can indeed be highly error-prone, conditioned on the ethnicity of the person, as analyzed in [30, 14], and we would like to invite the community to re-audit these images with better and more ethically responsible tools (see Fig 11 for examples of errors we could spot during the inference stage).

Figure 14(a) presents the class-wise estimates of the number of persons in the dataset using the DEX and the InsightFace models. In Figure 14(b), we capture the variation in the estimates of count, gender and age between the DEX and the InsightFace models.

Before delving into the discussion of the results obtained, we define the parameters that were measured. To begin, we denote $d_i^{(a)} \in \{0,1\}$ to be the binary face-present indicator variable with regards to the image indexed $i$, $a$ (in the superscripts) to be the algorithm used ($a \in \{\text{dex}, \text{if}\}$), and $N_c$ to be the number of images in the class $c$. Now, we define the class-level mean person count ($C_c^{(a)}$), mean-gender-skewness score ($S_c^{(a)}$) and mean-age ($A_c^{(a)}$) to be,

$$C_c^{(a)} = \frac{1}{N_c}\sum_{i=1}^{N_c} d_i^{(a)}, \qquad S_c^{(a)} = \frac{1}{N_c}\sum_{i=1}^{N_c}\left(\frac{g_i^{(a)} - \mu_{g,c}^{(a)}}{\sigma_{g,c}^{(a)}}\right)^{3}, \qquad A_c^{(a)} = \frac{\sum_{i=1}^{N_c} d_i^{(a)}\, y_i^{(a)}}{\sum_{i=1}^{N_c} d_i^{(a)}}.$$

Here, $y_i^{(a)}$ is the age-estimate of the person generated by algorithm $a$ in the image $i$, $g_i^{(a)}$ is the corresponding gender-estimate, and $\mu_{g,c}^{(a)}$ and $\sigma_{g,c}^{(a)}$ represent the mean and standard-deviation of the gender-estimates of the images belonging to class $c$ as estimated by algorithm $a$, respectively.
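The three class-level statistics can be sketched in a few lines of numpy, run here on mocked per-image model outputs (the function and variable names are ours; the skewness is computed as the third standardized moment of the gender scores):

```python
import numpy as np

def class_metrics(face_present, genders, ages):
    """Class-level census metrics from per-image model outputs.

    face_present : 0/1 face-present flags, one per image in the class
    genders      : gender scores in [0, 1] for images with a detected face
    ages         : age estimates for images with a detected face
    """
    d = np.asarray(face_present, dtype=float)
    g = np.asarray(genders, dtype=float)
    a = np.asarray(ages, dtype=float)
    mean_count = d.mean()  # fraction of images in the class with a person
    mu, sigma = g.mean(), g.std()
    skewness = float(np.mean(((g - mu) / sigma) ** 3)) if sigma > 0 else 0.0
    mean_age = a.mean()
    return mean_count, skewness, mean_age

# Toy class of five images, three of which contain a detected face.
count, skew, age = class_metrics(
    face_present=[1, 1, 0, 1, 0],
    genders=[0.1, 0.2, 0.15],
    ages=[24.0, 26.0, 25.0],
)
print(round(count, 2), round(age, 1))  # → 0.6 25.0
```

Computing these per class and per algorithm yields exactly the quantities compared in the scatter-plots of Figure 14(b).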

Figure 11: An example image with the output bounding boxes and the estimated ages / (binarized) genders of the persons detected in the image by the InsightFace model. (Here 0: female and 1: male)
(a) Class-wise estimates of number of humans in the images
(b) Scatter-plots with correlations covering the cardinality, age and gender estimates
Figure 14: Juxtaposing the results from the DEX and the InsightFace models

With regards to the first scatter-plot in Figure 14(b), we observe that the estimated class-wise counts of persons detected by the DEX and InsightFace models were in strong agreement, which helps to further corroborate the global person-prevalence rate in the images. These scatter-plots constitute Figure 4 of the dataset audit card (Figure 7).
Now, we would like to draw the reader's attention towards the weaker correlation in the gender-skewness and mean age-estimate scatter-plots of Figure 14(b). Given that the algorithms used are state-of-the-art with regards to the datasets they were trained on (see [89] and [45]), the high disagreement on a “neutral” dataset like ImageNet exposes the frailties of these algorithmic pipelines upon experiencing population shifts in the test dataset. This, we believe, lends further credence to the studies that have demonstrated poor reliability of these so-termed accurate models upon change of the underlying demographics (see [30] and [14]) and further supports the need to move away from gender classification, on account of not just the inherent moral and ethical repugnance of the task itself but also its lack of scientific validity [109].

b.2 NSFW scoring aided misogynistic imagery hand-labeling

Previous journalistic efforts (see [85]) had revealed the presence of strongly misogynistic content in the ImageNet dataset, specifically in the categories of beach-voyeur photography, upskirt images, verifiably pornographic content, and exposed private parts. These four categories have been well researched in digital criminology and intersectional feminism (see [49, 66, 82, 81]) and have formed the backbone of several legislations worldwide (see [65, 42]). In order to help generate a hand-labelled dataset of these images from amongst more than 1.3 million images, we used a hybrid human-in-the-loop approach where we first formed a smaller subset of images from image classes filtered using a model-annotated NSFW-average score as a proxy. For this, we used the NSFW-MobileNet-v2 model [40], an image-classification model whose output classes are [drawings, hentai, neutral, porn, sexy]. We defined the NSFW score of an image by summing up the softmax values of the [hentai, porn, sexy] subset of classes and estimated the mean NSFW score of all of the images of a class to obtain the results portrayed in Figure 16. On the left-hand side of Figure 16, we see the scatter-plot of the mean-NSFW scores plotted against the mean-gender scores (obtained from the DEX model estimates) for the 1000 ImageNet classes. We then found five natural clusters upon using the Affinity Propagation algorithm [39]. Given the 0:FEMALE | 1:MALE gender assignments in the model we used (see [30]), classes with lower mean-gender scores allude towards women-majority classes. The specific details of the highlighted cluster in the scatter-plot in Figure 16 are displayed in Table 5. Further introducing the age dimension (by way of utilising the mean-age metric for each class), we see on the right-hand side of Figure 16 that the classes with the highest NSFW scores were those where the dominant demographic was young women.
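The per-image NSFW score described above is a simple sum over the unsafe portion of the softmax output; a minimal sketch on mocked softmax vectors (the probability values are invented for illustration):

```python
import numpy as np

# Output classes of the NSFW-MobileNet-v2 model, in the order given in the text.
CLASSES = ["drawings", "hentai", "neutral", "porn", "sexy"]
UNSAFE = {"hentai", "porn", "sexy"}

def nsfw_score(softmax_probs):
    """NSFW score of one image: total softmax mass on the unsafe classes."""
    p = np.asarray(softmax_probs, dtype=float)
    assert p.shape == (len(CLASSES),) and abs(p.sum() - 1.0) < 1e-6
    return float(sum(p[i] for i, c in enumerate(CLASSES) if c in UNSAFE))

# Mocked softmax vectors for three images of one hypothetical class; the
# per-class metric reported in the audit is the mean of the per-image scores.
probs = [
    [0.01, 0.04, 0.15, 0.30, 0.50],  # mostly porn/sexy mass -> high score
    [0.05, 0.00, 0.90, 0.01, 0.04],  # mostly neutral -> low score
    [0.02, 0.08, 0.40, 0.20, 0.30],
]
scores = [nsfw_score(p) for p in probs]
print(round(float(np.mean(scores)), 2))  # → 0.49
```

Ranking classes by this mean score is what surfaces the cluster of classes listed in Table 5.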
With this shortlisting methodology, we were left with approximately 7,000 images, which were then hand-labelled by a team of five volunteers (three male, two female, all aged between 23-45) to curate a list of images where there was complete agreement on the four-category assignment. We have open-sourced the hand-curated list (see Table 6), and the summary results are showcased in Figure 19. In Figure 19(a), we see the cross-tabulated class-wise counts of the four categories of images across the ImageNet classes (this constitutes Figure 5 of the dataset audit card), and in Figure 19(b), we present the histogram-plots of these 61 hand-labelled images across the ImageNet classes. As seen, the bikini, two-piece class, with a mean NSFW score of 0.859, was the main image class, with 24 confirmed beach-voyeur pictures.

Here, we would like to strongly re-emphasise that we are disseminating this list as a community resource so as to facilitate further scholarly engagement and also, if need be, to allow scholars in countries where incriminating laws may exist (see [32]) to deal with the content in whatever manner is deemed appropriate. We certainly admit to the primacy of the context in which the objectionable content appears. For example, the image n03617480_6206.jpeg in the class n03617480 - kimono, which contained genital exposure, turned out to be a photographic bondage art piece shot by Nobuyoshi Araki [75] that straddles the fine line between scopophilic eroticism and pornography. But, as explored in [32], the mere possession of a digital copy of this picture would be punishable by law in some nations, and we believe that such factors have to be considered contextually while disseminating a large scale image dataset and should be detailed as caveats in the dissemination document.

b.2.1 NSFW and semanticity of classes

We also analyzed the relationship between the semanticity of classes and their NSFW scores. Firstly, we obtained a representative word for each of the 1000 class labels in ILSVRC2012 and used [79] to generate dense 300-D GloVe word-vector embeddings. Further, in order to generate the 2D/3D scatter-plots in Figure 15, we used the UMAP [67] algorithm to perform dimensionality reduction. df_imagenet_names_umap.csv contains the 2D UMAP embeddings of the resultant GloVe vectors of the classes, which are visualized in Figure 15 (a). In Figure 15 (b), we see the 3D surface plot of the 2D UMAP semantic dimensions versus the NSFW scores. As seen, it is peaked at specific points of the semantic space of the label categories, mapping to classes such as brassiere, bikini and maillot.
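A rough, self-contained sketch of this pipeline, with random vectors standing in for the 300-D GloVe embeddings and PCA as a lightweight stand-in for UMAP (both map the label vectors down to 2-D so that NSFW scores can be overlaid on semantic space); the two synthetic clusters emulate semantically related labels sitting close together:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Mocked stand-ins for 300-D GloVe vectors of a handful of class labels.
labels = ["bikini", "brassiere", "maillot", "banjo", "trombone", "trumpet"]
base_a, base_b = rng.normal(size=300), rng.normal(size=300)
vecs = np.stack([base_a + 0.1 * rng.normal(size=300) for _ in range(3)] +
                [base_b + 0.1 * rng.normal(size=300) for _ in range(3)])

# Reduce 300-D -> 2-D; these coordinates play the role of the UMAP embeddings
# stored in df_imagenet_names_umap.csv.
xy = PCA(n_components=2).fit_transform(vecs)

# Related labels should land nearer each other than unrelated ones.
d_within = np.linalg.norm(xy[0] - xy[1])
d_across = np.linalg.norm(xy[0] - xy[3])
print(d_within < d_across)  # → True
```

With real GloVe vectors, peaks of the NSFW surface over these 2-D coordinates land on the semantic neighbourhood of the clothing classes named above.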

Figure 15: Figure showcasing the relationship between the semanticity of classes and the class-wise mean NSFW scores

b.3 Dogs to musical instruments: Co-occurrence based gender biases

Social, historical, and cultural biases prevalent in society feed into datasets and the statistical models trained on them. In the context of Natural Language Processing (NLP), the framework of lexical co-occurrence has been harnessed to tease out these biases, especially gender biases. In [101], the authors analyzed occupation words stereotypically perceived as male (which they termed M-biased words) as well as occupation words stereotypically perceived as female (F-biased words) in large text corpora, and the ensuing downstream effects when these corpora are used to generate contextual word representations in SoTA models such as BERT and GPT-2. Further, in [88], direct normalized co-occurrence associations between a word and representative concept words were proposed as a novel corpus bias measurement method, and its efficacy was demonstrated with regards to the actual gender-bias statistics of the U.S. job market and their estimates measured via the text corpora. In the context of the ImageNet dataset, we investigated whether such co-occurrence biases exist with regards to human co-occurrence in the images. Previously, in [98], the authors had explored the biased representation learning of an ImageNet-trained model by considering the class basketball, where images containing black persons were deemed prototypical. Here, we tried to investigate whether the gender of the person co-occurring in the background alongside the non-person class was skewed along the lines purported in related academic work. We performed these investigations in the context of person-occurrence with regards to dog-breeds as well as musical instruments. Presented in Figure 22 (a) are the conditional violin plots relating the dog-breed group of the image classes of a subset of the ImageNet dataset to the mean gender score obtained from the DEX model analyses. We obtained these measurements in two phases. In the first phase, we grouped the ImageNet dog-breed classes into the following seven groups: [Toy, Hound, Sporting, Terrier, Non-Sporting, Working, Herding], following the formal American Kennel Club (AKC) groupings (see [18]; the AKC assigns registered breeds to one of seven groups representing the characteristics and functions the breeds were originally bred for). The remaining breeds not in the AKC list were placed into the Unknown group. Once grouped, we computed the gender-conditioned population spreads of person co-occurrence using the mean-gender value of the constituent image classes estimated from [30].
Prior literature (see [55, 86]) has explored the nexus between the perceived manliness of dog groups and owner gender. These stereotypical associations were indeed reflected in the person co-occurrence gender distributions in Figure 22(a), where we see that the dog groups perceived as masculine, belonging to the set [Non-Sporting, Working, Herding], had a stronger male-gender co-occurrence bias.
In a similar vein, in Figure 22(b) we present the gender-skewness variation amongst the co-occurring persons across the 17 ImageNet musical instrument classes. Works such as [23], [116] and [13] have explored in depth the gender biases that exist in musical instrument selection. As stated in [112], instruments such as the cello, oboe, flute and violin have been stereotypically tagged as feminine, whereas instruments such as the drum, banjo, trombone, trumpet and saxophone are the so-termed masculine instruments in the western context. While these stereotypes represent current and historical norms, the west-centric bias of the search engine used to curate the dataset has resulted in the mirroring of these topical real-world association biases. As seen in Figure 22(b), harp, cello, oboe, flute and violin indeed had the strongest pro-women bias, whereas drum, banjo, trombone, trumpet and saxophone were the classes with the strongest male-leaning skewness scores.
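The per-class skewness comparison above can be sketched on mocked per-image gender scores (the values below are invented purely to illustrate the direction of the skew; 0: female, 1: male, matching the convention of the model used in the audit):

```python
import numpy as np

def gender_skew(scores):
    """Third-standardized-moment skewness of per-image gender scores
    for the persons co-occurring in one image class."""
    g = np.asarray(scores, dtype=float)
    mu, sigma = g.mean(), g.std()
    return float(np.mean(((g - mu) / sigma) ** 3)) if sigma > 0 else 0.0

# Hypothetical co-occurring person gender scores for two instrument classes.
cooccurrence = {
    "harp":     [0.1, 0.15, 0.2, 0.1, 0.9],  # mostly women, one male outlier
    "trombone": [0.9, 0.85, 0.8, 0.9, 0.1],  # mostly men, one female outlier
}
skews = {k: gender_skew(v) for k, v in cooccurrence.items()}
print(skews["harp"] > 0, skews["trombone"] < 0)  # → True True
```

A positive skew marks a women-majority class with a male-leaning tail, and vice versa, which is the sign convention read off Figure 22(b).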

b.4 Classes containing pictures of infants

We found this category to be particularly pertinent both in the wake of strong legislation protecting the privacy of children's digital images and on account of its sheer extent. We found pictures of infants and children across the following 30 image classes (and possibly more): [’bassinet’, ’cradle’, ’crib’, ’bib’, ’diaper’, ’bubble’, ’sunscreen’, ’plastic bag’, ’hamper’, ’seat belt’, ’bath towel’, ’mask’, ’bow-tie’, ’tub’, ’bucket’, ’umbrella’, ’punching bag’, ’maillot - tank suit’, ’swing’, ’pajama’, ’horizontal bar’, ’computer keyboard’, ’shoe-shop’, ’soccer ball’, ’croquet ball’, ’sunglasses’, ’ladles’, ’tricycle - trike - velocipede’, ’screwdriver’, ’carousel’]. What was particularly unsettling was the prevalence of entire classes, such as ’bassinet’, ’cradle’, ’crib’ and ’bib’, with a very high density of images of infants. We believe this might have legal ramifications as well. For example, Article 8 of the European Union General Data Protection Regulation (GDPR) specifically deals with the conditions applicable to a child’s consent in relation to information society services [77]. The associated Recital 38 states verbatim that Children merit specific protection with regard to their personal data, as they may be less aware of the risks, consequences and safeguards concerned and their rights in relation to the processing of personal data. Such specific protection should, in particular, apply to the use of personal data of children for the purposes of marketing or creating personality or user profiles and the collection of personal data with regard to children when using services offered directly to a child. Further, Article 14 of the GDPR explicitly covers the Information to be provided where personal data have not been obtained from the data subject. We advocate allying with the legal community to address the concerns raised above.

b.5 Blood diamond effect in models trained on this dataset

Akin to the ivory carving-illegal poaching and diamond jewelry art-blood diamond nexuses, we posit there is a similar moral conundrum at play here and would like to instigate a conversation amongst the neural artists in the community. The emergence of tools such as BigGAN [11] and GAN-breeder [95] has ushered in an exciting new flavor of generative digital art [9], generated using deep neural networks (see [51] for a survey). A cursory search on Twitter reveals hundreds of interesting art-works created using BigGANs, and there are many detailed blog-posts on generating neural art by beginning with seed images and performing nifty experiments in the latent space of BigGANs. At the point of writing the final version of this paper (6/26/2020, 10:34 PM PST), users on the ArtBreeder app had generated 64,683,549 images. Further, Christie's, the British auction house behemoth, recently hailed the sale of the neural-network-generated Portrait of Edmond Belamy for an incredible $432,500 as signalling the arrival of AI art on the world auction stage [17]. Given the rapid growth of this field, we believe this is the right time to have a critical conversation about a particularly dark ethical consequence of using such frameworks, which entail models trained on the ImageNet dataset: a dataset containing many images that are pornographic, non-consensual, voyeuristic and that also entail underage nudity. We argue that the use of ill-considered seed images to train the models trickles down to the final art-form in a way similar to the blood-diamond syndrome in jewelry art [37].

An example: Consider the neural art image in Figure 23 that we generated using the GanBreeder app. On first appearance, it is not at all evident what the constituent seed classes are that went into the creation of this neural artwork. When we solicited volunteers online to critique the artwork (see the collection of responses in Table 7), none had an inkling of the rather sinister trickle-down effect at play here. As it turns out, we craftily generated this image using hand-picked instances of children's images emanating from two seed image classes that we have shown to be problematic: bikini and brassiere. More specifically, for this particular image, we set the Gene weights to be: [Bikini: 42.35, Brassiere: 31.66, Comic Book - 84.84]. We would like to strongly emphasize at this juncture that the problem does not emanate from a visual patriarchal mindset [3], whereby we associate female undergarment imagery with being somehow unethical; rather, the root cause lies in the fact that many of the images curated into the dataset (at least with regards to the two above-mentioned classes) were voyeuristic, pornographic, non-consensual and also entailed underage nudity.

Figure 16: Class-wise cross-categorical scatter-plots across the age, gender and NSFW score estimates
(a) Cross-tabulated grid-plot of the co-occurrence of the imagenet classes with the hand-labelled categories
(b) Histogram-plots of the hand-labelled images
Figure 19: Plots showcasing the statistics of the hand-survey across the beach-voyeur, exposed-private-parts, upskirt, verifiably-pornographic image categories
(a) Categorized violin plot demonstrating the class-wise mean gender scores across the dog-breed image groups
(b) Gender skewness scores across the different musical instrument image classes
Figure 22: Plots showcasing the human co-occurrence based gender-bias analysis

wordnet_id label mean_nsfw_train category file_names
n02837789 bikini, two-piece 0.859369 beach_voyeur n02837789_11383.JPEG, n02837789_12451.JPEG, n02837789_13794.JPEG, n02837789_14133.JPEG, n02837789_15158.JPEG, n02837789_15170.JPEG, n02837789_15864.JPEG, n02837789_17.JPEG, n02837789_17291.JPEG, n02837789_17410.JPEG, n02837789_18107.JPEG, n02837789_18124.JPEG, n02837789_18260.JPEG, n02837789_20096.JPEG, n02837789_22044.JPEG, n02837789_283.JPEG, n02837789_3414.JPEG, n02837789_3536.JPEG, n02837789_4.JPEG, n02837789_5713.JPEG, n02837789_9181.JPEG, n02837789_9859.JPEG
n02837789 bikini, two-piece 0.859369 exposed_private_parts n02837789_17069.JPEG, n02837789_19619.JPEG
n02892767 brassiere, bra, bandeau 0.610233 exposed_private_parts n02892767_19629.JPEG, n02892767_3235.JPEG
n02892767 brassiere, bra, bandeau 0.610233 upskirt n02892767_17717.JPEG
n02892767 brassiere, bra, bandeau 0.610233 verifiably_pornographic n02892767_5914.JPEG
n03527444 holster 0.058000 exposed_private_parts n03527444_12661.JPEG
n03617480 kimono 0.091925 exposed_private_parts n03617480_6206.JPEG
n03710637 maillot 0.801976 beach_voyeur ILSVRC2012_val_00021081.JPEG, n03710637_15836.JPEG, n03710637_272.JPEG, n03710637_3832.JPEG, n03710637_5095.JPEG, n03710637_5373.JPEG, n03710637_5386.JPEG, n03710637_66.JPEG, n03710637_7074.JPEG
n03710637 maillot 0.801976 exposed_private_parts n03710637_6756.JPEG
n03710721 maillot, tank suit 0.768278 beach_voyeur n03710721_1812.JPEG, n03710721_3040.JPEG, n03710721_3488.JPEG, n03710721_7542.JPEG, n03710721_8122.JPEG
n03770439 miniskirt, mini 0.619425 upskirt n03770439_10283.JPEG, n03770439_18237.JPEG, n03770439_2462.JPEG, n03770439_2920.JPEG, n03770439_3615.JPEG, n03770439_4096.JPEG, n03770439_4203.JPEG, n03770439_6214.JPEG, n03770439_8550.JPEG, n03770439_9676.JPEG
n03770439 miniskirt, mini 0.619425 verifiably_pornographic n03770439_12003.JPEG, n03770439_1347.JPEG
n04209133 shower cap 0.130216 exposed_private_parts n04209133_10606.JPEG, n04209133_206.JPEG, n04209133_716.JPEG

Table 6: Results of the hand-surveyed images
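Mechanically, a lookup table such as Table 6 amounts to filtering and grouping the census metadata by class and transgression category. The following is a minimal pandas sketch; the file name `handsurvey_meta.csv` and its column schema (`wordnet_id`, `label`, `mean_nsfw_train`, `category`, `file_name`) are assumptions for illustration and may differ from the open-sourced census files:

```python
# Sketch: building a class-wise lookup table like Table 6 from a hypothetical
# metadata CSV. Column names are illustrative assumptions, not the released schema.
import pandas as pd

FLAGGED = {"beach_voyeur", "upskirt", "exposed_private_parts",
           "verifiably_pornographic"}

def build_lut(csv_path: str, nsfw_threshold: float = 0.0) -> pd.DataFrame:
    df = pd.read_csv(csv_path)
    # Keep only images hand-labeled with a flagged category, optionally
    # restricted to classes above a mean-NSFW threshold.
    df = df[df["category"].isin(FLAGGED)]
    df = df[df["mean_nsfw_train"] >= nsfw_threshold]
    # One row per (class, category), with member file names aggregated.
    lut = (df.groupby(["wordnet_id", "label", "mean_nsfw_train", "category"])
             ["file_name"].apply(sorted).reset_index())
    return lut.sort_values(["wordnet_id", "category"]).reset_index(drop=True)
```

Running such a script over the full census metadata would reproduce the grouped rows shown above.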
Figure 23: An example neural art image generated by the authors using the ArtBreeder app [Gene weights: Bikini: 42.35, Brassiere: 31.66, Comic Book: 84.84]
Reviewer-ID Review
A- Grad student, CMU SCS
This one reminds me of a mix between graffiti and paper mache using
newspaper with color images or magazines. My attention is immediately
drawn to near the top of the image which, at first glance, appears to be a
red halo of sorts, but upon further consideration, looks to be long black
branching horns on a glowing red background.
My attention then went to the center top portion, where the "horns" were
coming from, which appeared to be the head or skull of a moose or
something similar. The body of the creature appears to be of human-like
form in a crucifix position, of sorts. The image appears more and more
chaotic the further down one looks.
B- Grad student, Stanford CS
Antisymmetric: left side is very artistic, rich in flavor and shades;
right is more monotonic but has more texture.
Reminds me of the two different sides of the brain through the anti-symmetry
C- Data Scientist, Facebook Inc
Futurism
D- CS undergrad, U-Michigan
It’s visually confusing in the sense that I couldn’t tell if I was
looking at a 3D object with a colorful background or a painting.
It’s not just abstract, but also mysteriously detailed
in areas to the point that I doubt that a human created these
E- Senior software engineer, Mt View
The symmetry implies a sort of intentionally.
I get a sense of Picasso mixed with Frieda Callo [sic] here.
F- Data Scientist, SF
Reminds me of a bee and very colorful flowers, but with some
nightmarish masks hidden in some places. Very tropical
Table 7: Responses received for the neural art image in Fig. 23

b.6 Error analysis

Figure 24: On accuracy variations and human delta

Given how besotted the computer vision community is with classification accuracy metrics, we decided to indulge in devil's advocacy by delving into how class-wise top-5 accuracies vary in those classes where humans co-occur asymmetrically between the training and validation sets. For this, we performed inference using the ResNet50 [47] and NasNet [118] models, sorted all 1000 classes by these ratios (termed human-delta in the figure), and compared their accuracies with those of the general population of 1000 classes. As gathered from Figure 24, we saw a statistically significant drop in top-5 accuracies for the top-25 human-delta classes, demonstrating that even for the purveyors of scientism-fuelled pragmatism, there is motivation to pay heed to the problem of humans in images. We would like to re-emphasize that we are most certainly not advocating this as the prima causa for instigating a cultural change in the computer vision community, but are sharing these resources and nuances for further investigation.
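The comparison above can be sketched as follows. This is a minimal NumPy version, assuming we already have per-image top-5 predictions from a model and a per-class human-delta score; all variable and function names here are illustrative, not taken from the released code:

```python
# Sketch of the class-wise top-5 accuracy vs. human-delta comparison.
# Assumes precomputed true labels, top-5 predictions, and human-delta scores.
import numpy as np

def top5_accuracy_by_class(y_true, y_top5, n_classes):
    """y_true: (N,) int labels; y_top5: (N, 5) int predicted labels per image."""
    # An image is a top-5 hit if its true label appears among the 5 predictions.
    hits = (y_top5 == y_true[:, None]).any(axis=1)
    acc = np.full(n_classes, np.nan)
    for c in range(n_classes):
        mask = y_true == c
        if mask.any():
            acc[c] = hits[mask].mean()
    return acc

def compare_human_delta(acc, human_delta, k=25):
    """Mean top-5 accuracy of the k highest human-delta classes vs. the rest."""
    order = np.argsort(human_delta)[::-1]  # classes sorted by descending delta
    top_k, rest = order[:k], order[k:]
    return np.nanmean(acc[top_k]), np.nanmean(acc[rest])
```

With k=25 and the per-class ratios computed from the human-census metadata, this is the kind of comparison summarized in Figure 24.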

Appendix C Broader impact statement and a wish list

We embarked on this project with the aspiration to illustrate how problematic large scale image dataset curation is, both in academia and industry, and the need for fundamental change. Through the course of this work, we solicited and incorporated feedback from scholars in the field, who pointed us towards three valid critiques that we would like to address first. To begin with, we solemnly acknowledge the moral paradox in our use of pre-trained gender classification models for auditing the dataset, and duly address this in the previous section. Secondly, as covered in Section 3 on the threat landscape, we also considered the risks of a possible Streisand effect with regards to deanonymization of the persons in the dataset, which ultimately led us not to dive further into the quantitative or qualitative aspects of our findings in this regard, beyond conveying a specific example via email to the curator of the dataset from which the deanonymization arose. Thirdly, we would like to acknowledge the continued efforts of the ImageNet curators to improve the dataset. Although much work remains to be done, in the grand scheme of things, and compared to secretive and opaque datasets, the ImageNet dataset at least allows examination. Having said that, curating large datasets comes with responsibility (especially given that such datasets directly or indirectly impact individual lives and the social world), and all curators need to be held accountable for what they create. With these caveats firmly in tow, we now conclude with the following wish list of the impact we hope this work may bring about.

c.1 Proactive approach over reactive course corrections

We aspire to see the institutions and individuals curating these large scale datasets be proactive in establishing the primacy of ethics in the dataset curation process, rather than merely reacting to exposés and pursuing post-hoc course corrections as an afterthought. We would be well served to remind ourselves that it took the community 11 years to go from the first peer-reviewed dissemination [24] of the ImageNet dataset to the first meaningful course correction in [115], whereas the number of floating-point operations required to train a classifier to AlexNet-level performance on ImageNet decreased by a factor of 44x between 2012 and 2019 [50]. This, we believe, demonstrates where the priorities lie, and this is precisely where we seek to see the most impact.

c.2 Bluewashing of AI ethics and revisiting the enterprise of Big data

At the outset, we question whether Big Data can ever operate in a manner that caters to the needs and welfare of marginalized communities - those disproportionately impacted by algorithmic injustice. Automated large scale data harvesting forays, by their very nature, tend to be BIG, in the sense that they are inherently prone to Bias, are Imperceptive to the lessons of the human condition and the recorded history of vulnerable people, and are Guileful in exploiting the loopholes of legal frameworks that allow the siphoning off of the lived experiences of disenfranchised individuals who have little to no agency or recourse to contest Big Data practices. Both collective silence and empty lip service, i.e. caricatured appropriations of ethical transgressions entailing ethics shopping, ethics bluewashing, ethics lobbying, ethics dumping and ethics shirking [38], cause harm and damage. Given that these datasets emerged from institutions such as Google, Stanford, NYU and MIT, all with a substantial number of staff researching AI ethics and policy, we cannot help but feel that this hints not just at compartmentalization and fetishization of ethics as a hot topic, but also at shrewd use of ethicists as agents of activism outsourcing.

c.3 Arresting the creative commons loot

As covered in the main paper, we would like to see an end to the trend of using the creative commons loophole as an excuse for circumventing the difficult terrain of informed consent. We should, as a field, aspire to treat consent with the same rigor as researchers and practitioners in fields such as anthropology or medicine. In this work, we have sought to draw the attention of the Machine Learning community towards the societal and ethical implications of large scale datasets, such as the problem of non-consensual images and the oft-hidden problems of categorizing people. We were inspired by the adage Secrecy begets tyranny (from Robert A. Heinlein's 1961 science fiction novel Stranger in a Strange Land [48]) and wanted to issue this as a call to the Machine Learning community to pay close attention to the direct and indirect impact of our work on society, especially on vulnerable groups. We hope this work contributes to raising awareness and adds to a continued discussion of ethics in Machine Learning, alongside the many other scholars who have been elucidating algorithmic bias, injustice, and harm.


  • [1] Face search • pimeyes., May 2020. (Accessed on 05/04/2020).
  • [2] Md Zahangir Alom, Tarek M Taha, Christopher Yakopcic, Stefan Westberg, Paheding Sidike, Mst Shamima Nasrin, Brian C Van Esesn, Abdul A S Awwal, and Vijayan K Asari. The history began from alexnet: A comprehensive survey on deep learning approaches. arXiv preprint arXiv:1803.01164, 2018.
  • [3] Stephanie Baran. Visual patriarchy: Peta advertising and the commodification of sexualized bodies. In Women and Nature?, pages 43–56. Routledge, 2017.
  • [4] Emily Bazelon. Nazi anatomy history: The origins of conservatives’ anti-abortion claims that rape can’t cause pregnancy., Nov 2013. (Accessed on 06/16/2020).
  • [5] Ruha Benjamin. Race after technology: Abolitionist tools for the new jim code. John Wiley & Sons, 2019.
  • [6] Lucas Beyer, Olivier J. Hénaff, Alexander Kolesnikov, Xiaohua Zhai, and Aäron van den Oord. Are we done with imagenet?, 2020.
  • [7] Abeba Birhane and Fred Cummins. Algorithmic injustices: Towards a relational ethics. arXiv preprint arXiv:1912.07376, 2019.
  • [8] Colin Blain, Margaret Mackay, and Judith Tanner. Informed consent the global picture. British Journal of Perioperative Nursing (United Kingdom), 12(11):402–407, 2002.
  • [9] Margaret A Boden and Ernest A Edmonds. What is generative art? Digital Creativity, 20(1-2):21–46, 2009.
  • [10] Geoffrey C Bowker and Susan Leigh Star. Sorting things out: Classification and its consequences. MIT press, 2000.
  • [11] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.
  • [12] Alexander L Brown, Jonathan Meer, and J Forrest Williams. Why do people volunteer? an experimental analysis of preferences for time donations. Management Science, 65(4):1455–1468, 2019.
  • [13] Claudia Bullerjahn, Katharina Heller, and Jan Hoffmann. How masculine is a flute? a replication study on gender stereotypes and preferences for musical instruments among young children. In Proceedings of the 14th International Conference on Music Perception and Cognition, pages 5–9, 2016.
  • [14] Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency, pages 77–91, 2018.
  • [15] Emma Carroll and Jessica Coates. The school girl, the billboard, and virgin: The virgin mobile case and the use of creative commons licensed photographs by commercial entities. Knowledge policy for the 21st century. A legal perspective, pages 181–204, 2011.
  • [16] François Chollet. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1251–1258, 2017.
  • [17] Christies. Is artificial intelligence set to become art’s next medium?, 2019. [Online; accessed 9-8-2019].
  • [18] American Kennel Club. List of breeds by group – american kennel club., Jan 2019. (Accessed on 05/31/2020).
  • [19] Creative Commons. Chang v. virgin mobile - creative commons., Jun 2013. (Accessed on 06/03/2020).
  • [20] Susan Corbett. Creative commons licences: A symptom or a cause? Available at SSRN 2028726, 2009.
  • [21] Susan Corbett. Creative commons licences, the copyright regime and the online community: Is there a fatal disconnect? The Modern Law Review, 74(4):503–531, 2011.
  • [22] Kate Crawford and Trevor Paglen. Excavating ai., Sep 2019. (Accessed on 04/30/2020).
  • [23] Judith K Delzell and David A Leppla. Gender association of musical instruments and preferences of fourth-grade students for selected instruments. Journal of research in music education, 40(2):93–103, 1992.
  • [24] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. IEEE, 2009.
  • [25] Jiankang Deng, Jia Guo, Xue Niannan, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In CVPR, 2019.
  • [26] Jiankang Deng, Jia Guo, Zhou Yuxiang, Jinke Yu, Irene Kotsia, and Stefanos Zafeiriou. Retinaface: Single-stage dense face localisation in the wild. In arxiv, 2019.
  • [27] Executive departments and agencies of the federal government of the United States. ecfr — code of federal regulations., Jun 2020. (Accessed on 06/02/2020).
  • [28] Catherine D’Ignazio and Lauren F Klein. Data feminism. MIT Press, 2020.
  • [29] Fabio Henrique Kiyoiti dos Santos Tanaka and Claus Aranha. Data augmentation using gans. Proceedings of Machine Learning Research XXX, 1:16, 2019.
  • [30] Chris Dulhanty and Alexander Wong. Auditing imagenet: Towards a model-driven framework for annotating demographic attributes of large-scale image datasets. arXiv preprint arXiv:1905.01347, 2019.
  • [31] Chris Dulhanty and Alexander Wong. Investigating the impact of inclusion in face recognition training data on individual face identification, 2020.
  • [32] S. Durham. Opposing Pornography: A look at the Anti-Pornography Movement., 2015.
  • [33] Robert Eaglestone. One and the same? ethics, aesthetics, and truth. Poetics Today, 25(4):595–608, 2004.
  • [34] Editorial. Time to discuss consent in digital-data studies., July 2019. (Accessed on 06/02/2020).
  • [35] Virginia Eubanks. Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press, 2018.
  • [36] Liyue Fan. Image pixelization with differential privacy. In IFIP Annual Conference on Data and Applications Security and Privacy, pages 148–162. Springer, 2018.
  • [37] Julie L Fishman. Is diamond smuggling forever-the kimberley process certification scheme: The first step down the long road to solving the blood diamond trade problem. U. Miami Bus. L. Rev., 13:217, 2004.
  • [38] Luciano Floridi. Translating principles into practices of digital ethics: five risks of being unethical. Philosophy & Technology, 32(2):185–193, 2019.
  • [39] Brendan J Frey and Delbert Dueck. Clustering by passing messages between data points. science, 315(5814):972–976, 2007.
  • [40] Bedapudi Praneeth Gant Laborde. Nsfw detection machine learning model., Jan 2019. (Accessed on 05/31/2020).
  • [41] Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. Datasheets for datasets. arXiv preprint arXiv:1803.09010, 2018.
  • [42] Alisdair A Gillespie. Tackling voyeurism: Is the voyeurism (offences) act 2019 a wasted opportunity? The Modern Law Review, 82(6):1107–1131, 2019.
  • [43] Hila Gonen and Yoav Goldberg. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609–614, 2019.
  • [44] Mary L Gray and Siddharth Suri. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Eamon Dolan Books, 2019.
  • [45] Jia Guo and Jiankang Deng. deepinsight/insightface: Face analysis project on mxnet., May 2020. (Accessed on 05/31/2020).
  • [46] Adam Harvey and Jules LaPlace. Megapixels: Origins, ethics, and privacy implications of publicly available face recognition image datasets, 2019.
  • [47] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • [48] Robert A Heinlein. Stranger in a strange land. Hachette UK, 2014.
  • [49] Nicola Henry, Anastasia Powell, and Asher Flynn. Not just ‘revenge pornography’: Australians’ experiences of image-based abuse. A Summary Report, RMIT University, May, 2017.
  • [50] Danny Hernandez and Tom B. Brown. Measuring the algorithmic efficiency of neural networks, 2020.
  • [51] Aaron Hertzmann. Aesthetics of neural network art. arXiv preprint arXiv:1903.05696, 2019.
  • [52] Herkko Hietanen. Creative commons olympics: How big media is learning to license from amateur authors. J. Intell. Prop. Info. Tech. & Elec. Com. L., 2:50, 2011.
  • [53] Kashmir Hill. The Secretive Company That Might End Privacy as We Know It, 2020.
  • [54] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
  • [55] Elizabeth C Hirschman. Consumers and their animal companions. Journal of consumer research, 20(4):616–632, 1994.
  • [56] Google Inc. Dataset search., Sep 2018. (Accessed on 06/17/2020).
  • [57] Khari Johnson. Aclu sues facial recognition startup clearview ai for privacy and safety violations | venturebeat, May 2020. (Accessed on 06/02/2020).
  • [58] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.
  • [59] Maximilian Kasy and Rediet Abebe. Fairness, equality, and power in algorithmic decision making. Technical report, Working paper, 2020.
  • [60] Matthew Kay, Cynthia Matuszek, and Sean A Munson. Unequal representation and gender stereotypes in image search results for occupations. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pages 3819–3828, 2015.
  • [61] Os Keyes. The misgendering machines: Trans/hci implications of automatic gender recognition. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW):1–22, 2018.
  • [62] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
  • [63] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Tom Duerig, et al. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. arXiv preprint arXiv:1811.00982, 2018.
  • [64] Fei Fei Li and Jia Deng. Where have we been? where are we going?, Sep 2017. (Accessed on 05/01/2020).
  • [65] Clare McGlynn and Erika Rackley. More than revenge porn: image-based sexual abuse and the reform of irish law. Irish probation journal., 14:38–51, 2017.
  • [66] Clare McGlynn, Erika Rackley, and Ruth Houghton. Beyond revenge porn: The continuum of image-based sexual abuse. Feminist Legal Studies, 25(1):25–46, 2017.
  • [67] Leland McInnes, John Healy, and James Melville. Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426, 2018.
  • [68] Sachit Menon, Alexandru Damian, Shijia Hu, Nikhil Ravi, and Cynthia Rudin. Pulse: Self-supervised photo upsampling via latent space exploration of generative models, 2020.
  • [69] Ryan Merkley. Use and fair use: Statement on shared images in facial recognition ai - creative commons, Mar 2019. (Accessed on 06/03/2020).
  • [70] Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. Model cards for model reporting. arXiv preprint arXiv:1810.03993, 2018.
  • [71] S Naidoo. Informed consent for photography in dental practice: communication. South African Dental Journal, 64(9):404–406, 2009.
  • [72] Arvind Narayanan and Vitaly Shmatikov. Robust de-anonymization of large sparse datasets. In 2008 IEEE Symposium on Security and Privacy (sp 2008), pages 111–125. IEEE, 2008.
  • [73] Safiya Umoja Noble. Algorithms of oppression: How search engines reinforce racism. nyu Press, 2018.
  • [74] Cathy O’neil. Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books, 2016.
  • [75] Manamai Ozaki et al. Shashinjinsei: Nobuyoshi araki’s photo journey art and not or pornography. Art Monthly Australia, (211):17, 2008.
  • [76] Seymour A Papert. The summer vision project. AIM-100, 1966.
  • [77] European Parliament and of the Council. Eur-lex - 32016r0679 - en - eur-lex., Apr 2016. (Accessed on 04/30/2020).
  • [78] Xingchao Peng, Ben Usman, Neela Kaushik, Dequan Wang, Judy Hoffman, and Kate Saenko. Visda: A synthetic-to-real benchmark for visual domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 2021–2026, 2018.
  • [79] Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543, 2014.
  • [80] PimEyes. Face search • pimeyes., Jun 2020. (Accessed on 06/03/2020).
  • [81] Anastasia Powell. Configuring consent: Emerging technologies, unauthorized sexual images and sexual assault. Australian & New Zealand journal of criminology, 43(1):76–90, 2010.
  • [82] Anastasia Powell, Nicola Henry, and Asher Flynn. Image-based sexual abuse. In Routledge handbook of critical criminology, pages 305–315. Routledge, 2018.
  • [83] Vinay Uday Prabhu, Sanghyun Han, Dian Ang Yap, Mihail Douhaniaris, Preethi Seshadri, and John Whaley. Fonts-2-handwriting: A seed-augment-train framework for universal digit classification. arXiv preprint arXiv:1905.08633, 2019.
  • [84] PyTorch. torchvision.datasets — pytorch 1.5.0 documentation., Jun 2020. (Accessed on 06/17/2020).
  • [85] Katyanna Quach. Inside the 1tb imagenet data set used to train the world’s ai: Naked kids, drunken frat parties, porno stars, and more • the register., Oct 2019. (Accessed on 05/01/2020).
  • [86] Michael Ramirez. “my dog’s just like me”: Dog ownership as a gender display. Symbolic Interaction, 29(3):373–391, 2006.
  • [87] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do imagenet classifiers generalize to imagenet? arXiv preprint arXiv:1902.10811, 2019.
  • [88] Navid Rekabsaz, James Henderson, Robert West, and Allan Hanbury. Measuring societal biases in text corpora via first-order co-occurrence. arXiv:1812.10424 [cs, stat], Apr 2020. arXiv: 1812.10424.
  • [89] Rasmus Rothe, Radu Timofte, and Luc Van Gool. Deep expectation of real and apparent age from a single image without facial landmarks. International Journal of Computer Vision, 126(2-4):144–157, 2018.
  • [90] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252, 2015.
  • [91] Anantha Natarajan S. Imagescraper · pypi., May 2015. (Accessed on 06/17/2020).
  • [92] Anubhav Sachan. bingscraper · pypi., July 2018. (Accessed on 06/17/2020).
  • [93] Morgan Klaus Scheuerman, Jacob M Paul, and Jed R Brubaker. How computers see gender: An evaluation of gender classification in commercial facial analysis services. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW):1–33, 2019.
  • [94] Shreya Shankar, Yoni Halpern, Eric Breck, James Atwood, Jimbo Wilson, and D Sculley. No classification without representation: Assessing geodiversity issues in open data sets for the developing world. arXiv preprint arXiv:1711.08536, 2017.
  • [95] Joel Simon. Artbreeder., Jun 2020. (Accessed on 07/06/2020).
  • [96] Congzheng Song, Thomas Ristenpart, and Vitaly Shmatikov. Machine learning models that remember too much. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 587–601, 2017.
  • [97] Susan Leigh Star and Geoffrey C Bowker. Enacting silence: Residual categories as a challenge for ethics, information systems, and communication. Ethics and Information Technology, 9(4):273–280, 2007.
  • [98] Pierre Stock and Moustapha Cisse. Convnets and imagenet beyond accuracy: Understanding mistakes and uncovering biases. In Proceedings of the European Conference on Computer Vision (ECCV), pages 498–512, 2018.
  • [99] Lucy Suchman. Human-machine reconfigurations: Plans and situated actions. Cambridge university press, 2007.
  • [100] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In Proceedings of the IEEE international conference on computer vision, pages 843–852, 2017.
  • [101] Yi Chern Tan and L Elisa Celis. Assessing social and intersectional biases in contextualized word representations. In Advances in Neural Information Processing Systems, pages 13209–13220, 2019.
  • [102] TensorFlow. Tensorflow datasets., Jun 2020. (Accessed on 06/17/2020).
  • [103] Antonio Torralba, Rob Fergus, and William T Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE transactions on pattern analysis and machine intelligence, 30(11):1958–1970, 2008.
  • [104] Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Andrew Ilyas, and Aleksander Madry. From imagenet to image classification: Contextualizing progress on benchmarks, 2020.
  • [105] Amol Umrale. imagebot · pypi., July 2015. (Accessed on 06/17/2020).
  • [106] Haohan Wang, Songwei Ge, Eric P. Xing, and Zachary C. Lipton. Learning robust global representations by penalizing local predictive power, 2019.
  • [107] Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, and Alexei A Efros. Dataset distillation. arXiv preprint arXiv:1811.10959, 2018.
  • [108] Paul Weindling. The origins of informed consent: the international scientific commission on medical war crimes, and the nuremberg code. Bulletin of the History of Medicine, pages 37–71, 2001.
  • [109] Sarah Myers West, Meredith Whittaker, and Kate Crawford. Discriminating systems., 2019.
  • [110] Wikipedia. Streisand effect - wikipedia., April 2020. (Accessed on 04/29/2020).
  • [111] Benjamin Wilson, Judy Hoffman, and Jamie Morgenstern. Predictive inequity in object detection. arXiv preprint arXiv:1902.11097, 2019.
  • [112] Elizabeth R Wrape, Alexandra L Dittloff, and Jennifer L Callahan. Gender and musical instrument stereotypes in middle school children: Have trends changed? Update: Applications of Research in Music Education, 34(3):40–47, 2016.
  • [113] Baoyuan Wu, Weidong Chen, Yanbo Fan, Yong Zhang, Jinlong Hou, Jie Liu, and Tong Zhang. Tencent ml-images: A large-scale multi-label image database for visual representation learning. IEEE Access, 7:172683–172693, 2019.
  • [114] Blaise Aguera y Arcas, Margaret Mitchell, and Alexander Todorov. Physiognomy’s new clothes. Medium (6 May 2017), online: https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59fdd6a, 2017.
  • [115] Kaiyu Yang, Klint Qinami, Li Fei-Fei, Jia Deng, and Olga Russakovsky. Towards fairer datasets: Filtering and balancing the distribution of the people subtree in the imagenet hierarchy. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 547–558, 2020.
  • [116] Jason Zervoudakes and Judith M Tanur. Gender and musical instruments: Winds of change? Journal of Research in Music Education, pages 58–67, 1994.
  • [117] Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. IEEE transactions on pattern analysis and machine intelligence, 40(6):1452–1464, 2017.
  • [118] Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8697–8710, 2018.