In recent years computer vision researchers have made substantial progress towards automated visual recognition across a wide variety of visual domains [66, 31, 60, 77, 56, 76, 19]. However, applications are hampered by the fact that in the real world the distribution of visual classes is long-tailed, and state-of-the-art recognition algorithms struggle to learn classes with limited data. In some cases (such as recognition of rare endangered species) classifying rare occurrences correctly is crucial. Simulated data, which is plentiful and comes with annotation “for free”, has been shown to be useful for various computer vision tasks [79, 59, 42, 62, 35, 70, 64, 58, 38]. However, an exploration of this approach in a long-tailed setting is still missing (see Section 2.4).
As a testbed, we focus on the effect of simulated data augmentation on the real-world application of recognizing animal species in camera trap images. Camera traps are heat- or motion-activated cameras placed in the wild to monitor animal populations and behavior. The processing of camera trap images is currently limited by human review capacity; consequently, automated detection and classification of animals is a necessity for scalable biodiversity assessment. A single sighting of a rare species is of immense importance; however, training data of rare species is, by definition, scarce. This makes the domain ideal for studying methods for training detection and classification algorithms with few training examples. We utilize an evaluation technique that tests performance at camera locations both seen (cis) and unseen (trans) during training in order to explicitly study generalization (see Section 3.1 for a more detailed explanation).
We investigate the use of simulated data as augmentation during training, and how to best combine real data for common classes with simulated data for rare classes to achieve optimal performance across the class set at test time. We consider four different data simulation methods (see Fig.11) and compare the effects of each on classification performance. Finally, we analyze the effect of both increasing the number of simulated images and controlling for axes of variation to provide best practices for leveraging simulated data for real-world performance gain on rare classes.
2 Related work
2.1 Visual Categorization Datasets
Large and well-annotated public datasets allow scientists to train, analyze, and compare the performance of different methods, and have provided large performance improvements over traditional vision approaches [73, 44, 41]. The most popular datasets used for this purpose are ImageNet, COCO, PascalVOC, and OpenImages, all of which are human-curated from images scraped from the web [28, 53, 32, 48]. These datasets cover a wide set of classes across both the manufactured and natural world, and are usually designed to provide “enough” data per class to avoid the low-data regime. More recently, researchers have proposed datasets that focus specifically on the natural world, which has a long-tailed distribution [77, 19, 50]. The iNaturalist 2018 dataset encourages a focus on the long tail by including classes with little training data. Caltech Camera Traps additionally introduced the challenge of learning from few locations, against constant backgrounds, and generalizing to new locations.
2.2 Handling Imbalanced Datasets
Imbalanced datasets often lead to bias in algorithm performance toward well-represented classes. Algorithmic solutions often use a non-uniform cost per misclassification (called cost-sensitive learning [30, 40, 39]), which encourages models to ‘focus’ on training examples from rare classes. The simplest version of this uses a weighted loss, where each incorrect example incurs a loss inversely proportional to the number of training examples of its class. Focal loss, for example, was recently proposed as a method for dealing with the large class imbalance innate to detection, where the majority of examples come from the background class.
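As a concrete sketch of the weighted-loss variant described above (a minimal illustration rather than the exact loss used in the cited work; the function names and the choice to normalize weights to mean 1 are ours):

```python
import numpy as np

def inverse_frequency_weights(class_counts):
    """Weight each class inversely to its number of training examples,
    normalized so the weights average to 1 (our illustrative convention)."""
    counts = np.asarray(class_counts, dtype=float)
    w = 1.0 / counts
    return w * len(counts) / w.sum()

def weighted_cross_entropy(probs, labels, weights):
    """Cost-sensitive cross-entropy: each example's loss is scaled by the
    weight of its true class, so mistakes on rare classes cost more."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    per_example = -np.log(probs[np.arange(len(labels)), labels])
    return float(np.mean(weights[labels] * per_example))

# A common class with 1000 examples and a rare class with 10:
w = inverse_frequency_weights([1000, 10])
```

The rare class receives a weight two orders of magnitude larger than the common one, mirroring its relative scarcity.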
Data solutions employ data augmentation, either by 1) over-sampling the minority classes, 2) under-sampling the majority classes, or 3) generating new examples for the minority classes. When using mini-batch gradient descent, over-sampling the minority classes is similar to weighting these classes higher than the majority classes, as in cost-sensitive learning. Under-sampling the majority classes is less than ideal, as it may discard information about the common classes. Our paper falls into the third category: augmenting the training data for rare classes. Data augmentation via pre-processing, using affine and photometric transformations, is a well-established tool for improving generalization [49, 43]. Data generation and simulation have begun to be explored as data augmentation methods; see Section 2.4.
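The similarity between minority over-sampling and class weighting can be made concrete by constructing per-example sampling probabilities that draw every class equally often (a hypothetical helper; the uniform-per-class target is our illustrative choice):

```python
import numpy as np

def balanced_sampling_probs(labels, num_classes):
    """Per-example sampling probabilities under which each class is drawn
    equally often. Rare-class examples get proportionally higher mass,
    which mirrors up-weighting their loss in cost-sensitive learning."""
    labels = np.asarray(labels)
    counts = np.bincount(labels, minlength=num_classes)
    # Each class receives total mass 1/num_classes, spread over its examples.
    return 1.0 / (num_classes * counts[labels])

# Four examples of class 0 and one of class 1:
p = balanced_sampling_probs([0, 0, 0, 0, 1], num_classes=2)
```

Here the single minority example is sampled as often as all four majority examples combined.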
Algorithmic and data solutions for imbalanced data are complementary: algorithmic advances can be used in conjunction with augmented training data.
2.3 Low-shot Learning
Low-shot learning attempts to learn categories from few examples. Wang and Hebert learn to classify with small amounts of training data by regressing from small-dataset classifiers to large-dataset classifiers. Hariharan and Girshick look specifically at ImageNet, using classes that are unbalanced: some with large amounts of training data and some with little. Their proposed solution was beneficial for low-capacity models, but only matched the off-the-shelf performance of high-capacity models. Metric learning learns a representation space in which distance corresponds to similarity, and uses this as a basis for low-shot solutions. We consider the low-shot regime with regard to real data for our rare target class, but investigate the use of added synthetic data based on a human-generated articulated model of the unseen class during training, instead of additional class-specific attribute labels at training and test time. This takes us outside the traditional low-shot framework into the realm of domain transfer from simulated to real data.
2.4 Data Augmentation via Style Transfer, Generation, and Simulation
Image generation via generative adversarial networks (GANs) and recurrent neural networks (RNNs), as well as style transfer and image-to-image translation, have all been considered as sources of data augmentation [22, 36, 45, 61, 75, 54, 83]. These techniques are valuable, but they require large amounts of data to generate realistic images, making them poorly suited to low-data regimes. Though conditional generation allows for class-specific output, the results can be difficult to interpret or control.
Graphics engines such as Unreal [16, 81] and Unity leverage the expertise of human artists and complex physics models to generate photorealistic simulated images, which can be used for data augmentation. Because ground truth is known at generation time, simulated data has proved particularly useful for tasks requiring detailed and expensive annotation, such as keypoints, semantic segmentation, or depth information [79, 59, 42, 62, 35, 70, 64, 58, 38]. Varol et al. use synthetically generated humans placed on top of real image backgrounds as pretraining for human pose estimation, and find they get the best results when fine-tuning a synthetically-trained model on limited real data. Another approach uses a combination of unlabeled real data and labeled simulated data of the same class to improve real-world performance on an eye-tracking task, employing GANs to improve the “realness” of the synthetic data; this method requires a large number of unlabeled examples from the target class. [59, 42, 62] find that simulated data improves detection performance, and that the degree of realism and variability of the simulation affects the amount of improvement; however, they consider only small sets of non-deformable man-made objects. Richter et al. showed that a segmentation model for city scenes trained with a subset of their real dataset plus a synthetic set outperforms a model trained with the full real dataset. Other work proposes a dataset and benchmark for evaluating unsupervised domain transfer from synthetic to real data with all-simulated training data, as opposed to simulated data only for classes with little representation. While this literature is encouraging, a number of questions are left unexplored. The first is a careful analysis of when simulated data is useful and, in particular, whether it helps generalization to new scenarios. Second, whether simulated data can be useful in highly complex and relatively unpredictable scenes such as natural scenes, as opposed to indoor and urban scenes. Third, whether it is just the synthetic objects, or also the synthetic environments, that contribute to learning.
2.5 Simulated Datasets
Bondi et al. previously released the AirSim-w data simulator within the domain of wildlife conservation, but it is focused on creating aerial infrared imagery. The resolution and quality of the assets are sufficient to replicate imagery captured from the air, but are not realistic close-up. We contribute the first image data generators specifically for the natural world, with the ability to recreate natural environments and generate near-photorealistic images of animals with real-world nuisance factors such as challenging pose, lighting, and occlusion within the scene. Our generators use high-quality 3D animated models to create realistic natural scenes at a depth of as little as one meter.
3 Data and Simulation
3.1 Real Data
Our real-world training and test data comes from the Caltech Camera Traps (CCT) dataset. CCT contains camera trap images covering many locations and animal classes, curated from data provided by the United States Geological Survey and the National Park Service. We follow the CCT-20 data split laid out in prior work, which was explicitly designed for in-depth generalization analysis. The split uses a subset of CCT images and locations to simultaneously investigate performance at locations seen during training and generalization to new locations. Bounding-box annotations are provided for all images in CCT-20, whereas the rest of CCT has only class labels. In the CCT-20 data split, cis-locations are defined as locations seen during training and trans-locations as locations not seen during training (see Fig.19). Nine locations are used for trans-test data, one location for trans-validation data, and data from the remaining locations is split between odd and even days, with odd days used as cis-test data and even days split between training and cis-validation data.
In order to study the effect of simulated data on rare species, we focus on deer, which are rare in CCT-20, with only 44 deer examples in the training set (see Fig.19). In order to focus on the performance of a single rare class, we remove the other two rare classes in CCT-20: badgers and foxes. We noted that there were no deer images in the established CCT-20 trans sets. In reality, deer are far from uncommon: unlike for a truly rare species, there exist sufficient images of deer in the CCT dataset outside of the CCT-20 locations to rigorously evaluate performance. To facilitate deeper investigation of generalization, we collected bounding-box annotations for additional CCT images at new locations, which we add to the trans-validation and trans-test sets to cover a wider variety of locations and classes (including deer). We call this augmented trans set trans+ (see Fig.19) and will release the annotations at publication. To further analyze generalization, we also test on data containing deer from the iNaturalist 2017 dataset, which represents a domain shift to human-captured and human-selected photographs. We consider Odocoileus hemionus (mule deer) and Odocoileus virginianus (white-tailed deer) images from iNaturalist, the two species of deer seen in the CCT data.
3.2 Synthetic Data
To assess generality we leverage multiple collections of woodland and animal models to create two simulation environments, which we call TrapCam-Unity and TrapCam-AirSim. Both simulation environments and source code to generate images will be provided publicly, along with the data generated for this paper. To synthesize daytime images we varied the orientation of the simulated sun in both azimuth and elevation. To create images taken at night we used a spotlight attached to the simulated camera to simulate a white-light or IR flash and qualitatively match the low color saturation of the nighttime images. To simulate animals’ eyeshine (a result of the reflection of camera flash from the back of the eye), we placed small reflective balls on top of the eyes of model animals.
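The lighting randomization described above might be driven by a sampler along these lines (a minimal sketch; the night fraction and angle ranges are illustrative assumptions, not the generators' actual settings):

```python
import random

def sample_lighting(night_fraction=0.5, rng=random):
    """Draw a random lighting configuration for one rendered frame.
    Day frames vary the sun in azimuth and elevation; night frames
    disable the sun and enable a camera-mounted spotlight that stands
    in for a white-light or IR flash."""
    if rng.random() < night_fraction:
        return {"mode": "night", "spotlight": True,
                "sun_azimuth": None, "sun_elevation": None}
    return {"mode": "day", "spotlight": False,
            "sun_azimuth": rng.uniform(0.0, 360.0),
            "sun_elevation": rng.uniform(10.0, 80.0)}
```

Each rendered image would then apply one sampled configuration to the simulated scene before capture.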
3.2.1 TrapCam-AirSim
In this generation method we create a modular natural environment within Microsoft AirSim that can be randomly populated with flora and fauna. The distribution and types of trees, bushes, rocks, and logs can be varied and randomly seeded to create a diverse set of landscapes, from an open plain to a dense forest. We used various off-the-shelf components, such as an animal pack from Epic Studios (Animals Vol 01: Forest Animals by GiM), background terrain from the Unreal Marketplace, vegetation from SpeedTree, and rocks/obstructions from Megascans. The actual area of the environment is small, but the modularity allows many possible scenes to be constructed.
3.2.2 TrapCam-Unity
In this generation method we take advantage of the “Book of the Dead” environment, a near-photorealistic, open-source forest environment published by Unity to demonstrate its high-definition rendering pipeline. We move throughout this larger, fixed environment to collect data with varied background scenes. We include animated deer models from five model sets, including the GiM models used in TrapCam-AirSim.
3.2.3 Simulated animals on empty images
Similar to prior work, we generate synthetic images of deer by rendering deer on top of real camera trap images containing no animals; we call this set Sim on Empty (see Fig.11). We first generate animal foreground images by randomizing the location, orientation in azimuth, pose, and illumination of the deer, then paste the foreground images on top of the real empty images. A limitation is that the deer are not in realistic spatial or occlusion relationships with the environment around them. We also note that the empty images used to construct this data come from both cis and trans locations, so Sim on Empty contains information about test-set backgrounds unavailable to the purely simulated sets. This choice is based on current camera trap literature, which first detects the presence of any animal and then determines the species [56, 19]. After the initial animal detection step, the empty images are known and can be utilized.
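The paste step above amounts to straightforward alpha compositing (a simplified stand-in for the actual pipeline; the function name and top-left placement convention are our own):

```python
import numpy as np

def paste_foreground(background, foreground, alpha, top, left):
    """Alpha-composite a rendered animal crop onto a real empty frame.
    `alpha` is a [0, 1] matte of the animal; the crop is placed with its
    top-left corner at (top, left), mimicking the randomized placement
    used for Sim on Empty."""
    out = background.astype(float).copy()
    h, w = alpha.shape
    region = out[top:top + h, left:left + w]
    a = alpha[..., None]  # broadcast the matte over the RGB channels
    out[top:top + h, left:left + w] = a * foreground + (1 - a) * region
    return out.astype(background.dtype)
```

A real pipeline would also handle out-of-frame placements and soft matte edges, which this sketch omits.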
3.2.4 Segmented animals on empty images
We manually segmented the 44 examples of deer from the training set and pasted them at random on top of real empty camera trap images, which we call Real on Empty (see Fig.11). This allows us to analyze whether the generalization challenge is related to memorizing the training deer+background or memorizing the training deer regardless of background. Similar to the Sim on Empty set, these images do not have realistic foreground/background relationships and the empty images come from both cis and trans locations.
4 Experiments
Previous work showed that detecting and localizing the presence of an “animal” (where all animals are grouped into a single class) both generalizes well to new locations and improves classification performance. We focus on classification of cropped ground-truth bounding boxes, as opposed to training multi-class detectors, in order to disambiguate classification and detection errors. We are specifically investigating how added synthetic training data for rare classes affects model performance on both rare and common classes.
We determined that the Inception-ResNet-V2 architecture worked best for the cropped-box classification task by comparing performance across architectures (see Supplementary Material). Most classification systems are pretrained on ImageNet, which contains animal classes. To ensure that our “rare” class was truly unfamiliar to the model, as opposed to something it had seen during pretraining, we pretrained our classifiers on no-animal ImageNet, a dataset we define by removing the “animal” subtree (all classes under the animal synset node) from ImageNet. We train with RMSprop and employ random cropping, horizontal flipping, color distortion, and blur as data augmentation. Model selection is performed using early stopping based on trans+ validation set performance.
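Two of these augmentations can be sketched as follows (a minimal illustration; the crop fraction is a placeholder, since the exact value is not given here, and color distortion and blur are omitted):

```python
import random

import numpy as np

def augment(image, crop_frac=0.8, rng=random):
    """Random crop plus random horizontal flip, two of the augmentations
    applied during training (the 0.8 crop fraction is illustrative)."""
    h, w = image.shape[:2]
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    top = rng.randrange(h - ch + 1)
    left = rng.randrange(w - cw + 1)
    crop = image[top:top + ch, left:left + cw]
    if rng.random() < 0.5:
        crop = crop[:, ::-1]  # horizontal flip
    return crop
```

In practice each training example would pass through this transform freshly on every epoch, so the model rarely sees the exact same pixels twice.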
4.1 Effect of increase in simulated data
We explore the trade-off in performance as we increase the number of simulated images, from none to millions (see Fig.20). Very little simulated data is needed to see a trans+ performance boost: with only a small number of simulated images we see a decrease in per-class error on trans+ deer, with only a slight increase in average per-class error on the other trans+ classes. As we increase the number of simulated images, trans+ performance continues to improve. However, there exists a threshold beyond which additional simulated data noticeably biases the classifier towards the deer class (see Fig.24): with millions of simulated images, trans+ deer error decreases further, but at the cost of an increase in average per-class error across the other classes. At this point there is an overwhelming class prior towards deer: the next-largest class at training time, opossums, has orders of magnitude fewer images.
Surprisingly, cis deer performance decreases with added simulated data. Although the images were taken on different days (training from even days, cis-test from odd days), the animals captured are to some extent creatures of habit. This results in training and test images that are nearly identical within the same locations (see Fig.16): almost all cis-test deer images have at least one visually similar training image. As simulated data is added at training time, the model is forced to learn a more complex, varied representation of deer, and as a result cis deer performance decreases. To quantify robustness, we ran this experiment three times and found only small standard deviations in error for trans+ deer, cis deer, and the average over the other classes in both regimes.
We also investigate performance on deer images from iNaturalist, which are individually collected by humans and are usually relatively centered and well-focused (and therefore easier to classify), but represent a domain shift (see Fig.16). Adding simulated data improves performance on the iNaturalist deer images (see Fig.20), demonstrating the robustness and generality of the learned representation.
4.2 Effect of variation in simulation
In order to understand which aspects of the simulated data are most beneficial, we considered three dimensions of variation during simulation: pose, lighting, and animal model. Using the TrapCam-Unity simulator, 100K daytime simulated images were generated for each of these experiments. As a control, we created a set of data in which the pose, lighting, and animal model were all fixed. We then created sets with varied pose, varied lighting, and varied animal model, each with the other variables held fixed. An additional set was generated varying all of the above. Unsurprisingly, the widest variation results in the best trans+ deer performance. The individual axes of variation do have an effect on performance, and some are more “valuable” than others (see Fig.25). There are many more dimensions of variation that could be explored, such as simulated motion blur or variation in camera perspective. For CCT data, we find that adding simulated nighttime images has the largest effect on performance. A substantial fraction of the deer images in the training, cis-test, and trans+ test sets were captured at night, using either IR or white flash. Simulating only daytime images injects a prior towards deer being seen during the day; by training on half day and half night images we match the day/night prior for deer in the data. Not all species occur equally during the day and night; some are strictly nocturnal. Our results suggest that a good strategy is to determine the appropriate ratio of day to night images from the training set and match that ratio when adding simulated data.
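The ratio-matching strategy above is simple to implement (a hypothetical helper; the names are ours):

```python
def simulated_day_night_split(is_night_flags, total_simulated):
    """Allocate a budget of simulated images between day and night
    renders so that the day/night ratio matches the real training
    images of the target class. `is_night_flags` holds one boolean per
    real training image of that class."""
    night_fraction = sum(is_night_flags) / len(is_night_flags)
    n_night = round(total_simulated * night_fraction)
    return total_simulated - n_night, n_night  # (n_day, n_night)
```

For a class whose real training images are half nighttime captures, a budget of 100 simulated images would be split 50/50 between day and night renders.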
4.3 Comparing simulated data generation methods
We compared the performance gain from four methods of data synthesis, using the same number of added deer images for each (see Fig.26). The animal model is controlled (each simulated set uses the same GiM deer model in these experiments) for a fair comparison of the efficacy of each generation method. As a control, we consider oversampling of the rare class, which creates the same sampling prior towards deer without introducing any new information. Oversampling performs worse than simply training on the unbalanced training set, causing the model to overfit the deer class to the training images. By manually segmenting the deer in the training images and randomly pasting them onto empty backgrounds, we see a large improvement in performance: cis error drops markedly with this method of data augmentation, which makes sense given the strong similarities between the training and cis-test data (see Fig.16).
Real on Empty and Sim on Empty are able to approximate both “day” and “night” imagery: a deer pasted onto a nighttime empty image is actually a reasonable approximation of an animal illuminated by a flash at night (see Fig.11). They also have the additional benefit of using backgrounds from both cis and trans sets, giving them trans information not available to the simulated datasets. TrapCam-Unity with all variability enabled is our best-performing method that requires no additional segmentation annotations. If segmentation information is available, Real on Empty combined with TrapCam-Unity (equal amounts of each) improves both cis and trans deer performance: trans deer error decreases further compared to training on CCT only, with only a small increase in error on the other trans classes.
4.4 Visualizing the representation of data
In order to visualize how the network represents simulated vs. real data, we used PCA and tSNE to cluster the activations of the final pre-logit layer of the network. These visualizations can be seen in Fig.29. Interestingly, the model learns “deer” bimodally: simulated deer are clustered almost entirely separately from real deer, with a few data points of each ending up in the opposite cluster. Even though these clusters overlap only slightly, the network is nevertheless able to classify more deer images correctly.
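The first stage of this visualization can be sketched as a plain PCA projection (the component count is illustrative; a tSNE step, e.g. sklearn.manifold.TSNE, would then map the PCA scores down to 2-D for plotting):

```python
import numpy as np

def pca_project(activations, n_components=50):
    """Project pre-logit activations onto their top principal components
    via SVD of the centered feature matrix. Rows are examples, columns
    are features; the result has one row of PCA scores per example."""
    X = np.asarray(activations, dtype=float)
    X = X - X.mean(axis=0)             # center each feature
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T     # scores in the PCA basis
```

Running tSNE on the reduced scores rather than the raw activations keeps the embedding computation tractable for large test sets.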
5 Conclusions and Future Work
We explored using simulated data to augment rare classes during training. Towards this goal, we developed and compared multiple sources of natural-world data simulation, explicitly measured generalization via the cis-vs-trans paradigm, examined trade-offs in performance as the number of simulated images seen during training is increased, and analyzed the effect of controlling for different axes of variation and data generation methods.
From our experiments we draw three main lessons. First, using synthetic data can considerably reduce error rates for rare classes, and with segmentation annotations we can reduce error rates even further by additionally pasting segmented images of rare classes at random onto empty background images. Second, as the amount of simulated data is increased, accuracy on the target class improves; however, with far more simulated data than the common classes have real data, we see negative effects on the performance of the other classes due to the resulting class imbalance. Third, the variation of the generated simulated data is very important, and maximum variation provides the maximum performance gain.
While an increase in simulated data corresponds to an increase in target-class performance, the representation of simulated data overlaps only rarely with that of the real data (see Fig.29). It remains to be studied whether embedding techniques, domain adaptation techniques [34, 84], or style transfer [35, 70] could be used to encourage a higher overlap in representation between the synthetic and real data, and whether that overlap would in fact lead to an increase in categorization accuracy. Additionally, the bias induced by adding large amounts of simulated data could be addressed with algorithmic solutions such as those in [25, 30, 40, 39]. We did not discuss the drawbacks of model training with large quantities of synthetic data (increased epoch time, data storage, etc.). Another line of future work could explore merging the dataset simulator and the classifier so that highly variable synthetic data could be requested “online”, without storing the raw frames.
Simulation is a fast, interpretable and controllable method of data generation that is easy to use and easy to adapt to new classes. This allows for an integrated and evolving training pipeline: simulated data can be generated iteratively based on needs or gaps in performance. Our analysis suggests a general methodology when using simulated data to improve rare-class performance: 1) generate small, variable sets of simulated data (even small sets can drive improvement), 2) add these sets to training and analyze performance to determine ideal ratios and dimensions of variation, 3) take advantage of ease and speed of generation to create an abundance of data based on this ideal distribution, and determine an operating point of number of added simulated images to optimize performance between rare target class and other classes based on the project goal.
Further, the performance gains we have demonstrated, along with the data generation tools we contribute to the community, will allow biodiversity researchers focused on endangered species to improve classification performance on their target species. Adding each new species to the simulation tools currently requires the assistance of a graphics artist; however, automated 3D modeling techniques, such as those proposed in [46, 63, 24, 57], might eventually become an inexpensive and practical source of data to improve few-shot learning.
The improvement we have found in rare-class categorization is encouraging, and the release of our data generation tools and the data we have generated will provide a good starting point for other researchers studying imbalanced data, simulated data augmentation, or natural-world domains.
We would like to thank the USGS and NPS for providing data. This work was supported by NSF GRFP Grant No. 1745301; the views expressed are those of the authors and do not necessarily reflect the views of the NSF. Compute was provided by Microsoft AI for Earth and AWS.
-  4toon studio. https://assetstore.unity.com/publishers/3695. Accessed: 2019-03-27.
-  Blender. https://www.blender.org/. Accessed: 2019-03-28.
-  Book of the dead environment. https://assetstore.unity.com/packages/essentials/tutorial-projects/book-of-the-dead-environment-121175. Accessed: 2019-03-27.
-  Coyote in a camera trap. https://www.inaturalist.org/photos/7738216. Accessed: 2019-03-28.
-  Epic studios. http://epicstudios.com/. Accessed: 2019-03-21.
-  Forest animals by GiM. https://www.unrealengine.com/marketplace/en-US/animals-vol-01-forest-animals. Accessed: 2019-03-21.
-  GiM studio. https://assetstore.unity.com/publishers/18347. Accessed: 2019-03-27.
-  Janpec. https://assetstore.unity.com/publishers/1066. Accessed: 2019-03-27.
-  Maya. https://www.autodesk.com/products/maya/overview. Accessed: 2019-03-28.
-  Protofactor inc. https://assetstore.unity.com/publishers/265. Accessed: 2019-03-27.
-  Quixel megascans library. https://quixel.com/megascans. Accessed: 2019-03-21.
-  Red deer studio. https://assetstore.unity.com/publishers/12623. Accessed: 2019-03-27.
-  Speedtree. https://store.speedtree.com/. Accessed: 2019-03-21.
-  Unity book of the dead. https://unity3d.com/book-of-the-dead. Accessed: 2019-03-21.
-  Unity game engine. https://unity3d.com/. Accessed: 2019-02-05.
-  Unreal game engine. https://www.unrealengine.com/en-US/what-is-unreal-engine-4. Accessed: 2019-02-05.
-  Wdallgraphics studio. https://assetstore.unity.com/publishers/5060. Accessed: 2019-03-28.
-  Wolf in a camera trap. https://3c1703fe8d.site.internapcdn.net/newman/csz/news/800/2018/cameratrapst.jpg. Accessed: 2019-03-28.
-  S. Beery, G. Van Horn, and P. Perona. Recognition in terra incognita. In The European Conference on Computer Vision (ECCV), September 2018.
-  Y. Bengio. Practical recommendations for gradient-based training of deep architectures. In Neural networks: Tricks of the trade, pages 437–478. Springer, 2012.
-  E. Bondi, D. Dey, A. Kapoor, J. Piavis, S. Shah, F. Fang, B. Dilkina, R. Hannaford, A. Iyer, L. Joppa, et al. Airsim-w: A simulation environment for wildlife conservation with uavs. In Proceedings of the 1st ACM SIGCAS Conference on Computing and Sustainable Societies, page 40. ACM, 2018.
-  K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3722–3731, 2017.
-  M. Buda, A. Maki, and M. A. Mazurowski. A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks, 106:249–259, 2018.
-  T. J. Cashman and A. W. Fitzgibbon. What shape are dolphins? building 3d morphable models from 2d images. IEEE transactions on pattern analysis and machine intelligence, 35(1):232–244, 2013.
-  Y. Cui, M. Jia, T.-Y. Lin, Y. Song, and S. Belongie. Class-balanced loss based on effective number of samples. arXiv preprint arXiv:1901.05555, 2019.
-  Y. Cui, F. Zhou, Y. Lin, and S. Belongie. Fine-grained categorization and dataset bootstrapping using deep metric learning with humans in the loop. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1153–1162, 2016.
-  C. R. de Souza, A. Gaidon, Y. Cabon, and A. M. López. Procedural generation of videos to train deep action recognition networks. 2017.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
-  A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun. CARLA: An open urban driving simulator. In Proceedings of the 1st Annual Conference on Robot Learning, pages 1–16, 2017.
-  C. Elkan. The foundations of cost-sensitive learning. In International joint conference on artificial intelligence, volume 17, pages 973–978. Lawrence Erlbaum Associates Ltd, 2001.
-  A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639):115, 2017.
-  M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2):303–338, 2010.
-  A. Gaidon, Q. Wang, Y. Cabon, and E. Vig. Virtual worlds as proxy for multi-object tracking analysis. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4340–4349, 2016.
-  Y. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning, pages 1180–1189, 2015.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
-  K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
-  B. Hariharan and R. Girshick. Low-shot visual recognition by shrinking and hallucinating features. In Proc. of IEEE Int. Conf. on Computer Vision (ICCV), Venice, Italy, 2017.
-  H. Hattori, V. N. Boddeti, K. Kitani, and T. Kanade. Learning scene-specific pedestrian detectors without real data. In Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on, pages 3819–3827. IEEE, 2015.
-  H. He, Y. Bai, E. A. Garcia, and S. Li. Adasyn: Adaptive synthetic sampling approach for imbalanced learning. In 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), pages 1322–1328. IEEE, 2008.
-  H. He and E. A. Garcia. Learning from imbalanced data. IEEE Transactions on Knowledge & Data Engineering, (9):1263–1284, 2008.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
-  S. Hinterstoisser, O. Pauly, H. Heibel, M. Marek, and M. Bokeloh. An annotation saved is an annotation earned: Using fully synthetic training for object instance detection, 2019.
-  A. G. Howard. Some improvements on deep convolutional neural network based image classification. arXiv preprint arXiv:1312.5402, 2013.
-  J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, et al. Speed/accuracy trade-offs for modern convolutional object detectors. In IEEE CVPR, 2017.
-  D. J. Im, C. D. Kim, H. Jiang, and R. Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.
-  A. Kanazawa, S. Tulsiani, A. A. Efros, and J. Malik. Learning category-specific mesh reconstruction from image collections. In Proceedings of the European Conference on Computer Vision (ECCV), pages 371–386, 2018.
-  E. Kolve, R. Mottaghi, D. Gordon, Y. Zhu, A. Gupta, and A. Farhadi. AI2-THOR: an interactive 3d environment for visual AI. CoRR, abs/1712.05474, 2017.
-  I. Krasin, T. Duerig, N. Alldrin, V. Ferrari, S. Abu-El-Haija, A. Kuznetsova, H. Rom, J. Uijlings, S. Popov, A. Veit, S. Belongie, V. Gomes, A. Gupta, C. Sun, G. Chechik, D. Cai, Z. Feng, D. Narayanan, and K. Murphy. Openimages: A public dataset for large-scale multi-label and multi-class image classification. Dataset available from https://github.com/openimages, 2017.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
-  N. Kumar, P. N. Belhumeur, A. Biswas, D. W. Jacobs, W. J. Kress, I. Lopez, and J. V. B. Soares. Leafsnap: A computer vision system for automatic plant species identification. In The 12th European Conference on Computer Vision (ECCV), October 2012.
-  F.-F. Li, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE transactions on pattern analysis and machine intelligence, 28(4):594–611, 2006.
-  T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár. Focal loss for dense object detection. IEEE transactions on pattern analysis and machine intelligence, 2018.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
-  F. Luan, S. Paris, E. Shechtman, and K. Bala. Deep photo style transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4990–4998, 2017.
-  L. v. d. Maaten and G. Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605, 2008.
-  M. S. Norouzzadeh, A. Nguyen, M. Kosmala, A. Swanson, C. Packer, and J. Clune. Automatically identifying wild animals in camera trap images with deep learning. arXiv preprint arXiv:1703.05830, 2017.
-  F. Pahde, M. Puscas, J. Wolff, T. Klein, N. Sebe, and M. Nabi. Low-shot learning from imaginary 3d model. arXiv preprint arXiv:1901.01868, 2019.
-  X. Peng, B. Usman, K. Saito, N. Kaushik, J. Hoffman, and K. Saenko. Syn2real: A new benchmark for synthetic-to-real visual domain adaptation. arXiv preprint arXiv:1806.09755, 2018.
-  B. Pepik, R. Benenson, T. Ritschel, and B. Schiele. What is holding back convnets for detection? In German Conference on Pattern Recognition, pages 517–528. Springer, 2015.
-  R. Poplin, A. V. Varadarajan, K. Blumer, Y. Liu, M. V. McConnell, G. S. Corrado, L. Peng, and D. R. Webster. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nature Biomedical Engineering, page 1, 2018.
-  A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
-  P. S. Rajpura, H. Bojinov, and R. S. Hegde. Object detection using deep cnns trained on synthetic images, 2017.
-  B. Reinert, T. Ritschel, and H.-P. Seidel. Animated 3d creatures from single-view video by skeletal sketching. In Graphics Interface, pages 133–141, 2016.
-  S. R. Richter, V. Vineet, S. Roth, and V. Koltun. Playing for data: Ground truth from computer games. Lecture Notes in Computer Science, page 102–118, 2016.
-  G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. M. Lopez. The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3234–3243, 2016.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
-  M. Savva, A. X. Chang, A. Dosovitskiy, T. Funkhouser, and V. Koltun. MINOS: Multimodal indoor simulator for navigation in complex environments. arXiv:1712.03931, 2017.
-  F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 815–823, 2015.
-  S. Shah, D. Dey, C. Lovett, and A. Kapoor. Airsim: High-fidelity visual and physical simulation for autonomous vehicles. In Field and service robotics, pages 621–635. Springer, 2018.
-  A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb. Learning from simulated and unsupervised images through adversarial training. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
-  S. Song, F. Yu, A. Zeng, A. X. Chang, M. Savva, and T. Funkhouser. Semantic scene completion from a single depth image. IEEE Conference on Computer Vision and Pattern Recognition, 2017.
-  C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.
-  T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2):26–31, 2012.
-  T. Tran, T. Pham, G. Carneiro, L. Palmer, and I. Reid. A bayesian data augmentation approach for learning deep models. In Advances in Neural Information Processing Systems, pages 2797–2806, 2017.
-  G. van Horn, J. Barry, S. Belongie, and P. Perona. The Merlin Bird ID smartphone app (http://merlin.allaboutbirds.org/download/).
-  G. Van Horn, O. Mac Aodha, Y. Song, A. Shepard, H. Adam, P. Perona, and S. Belongie. The inaturalist challenge 2017 dataset. arXiv preprint arXiv:1707.06642, 2017.
-  G. Van Horn and P. Perona. The devil is in the tails: Fine-grained classification in the wild. arXiv preprint arXiv:1709.01450, 2017.
-  G. Varol, J. Romero, X. Martin, N. Mahmood, M. J. Black, I. Laptev, and C. Schmid. Learning from synthetic humans. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
-  Y.-X. Wang and M. Hebert. Learning to learn: Model regression networks for easy small sample learning. In European Conference on Computer Vision, pages 616–634. Springer, 2016.
-  W. Qiu, F. Zhong, Y. Zhang, S. Qiao, Z. Xiao, T. S. Kim, Y. Wang, and A. Yuille. Unrealcv: Virtual worlds for computer vision. ACM Multimedia Open Source Software Competition, 2017.
-  Y. Wu, Y. Wu, G. Gkioxari, and Y. Tian. Building generalizable agents with a realistic and rich 3d environment. arXiv preprint arXiv:1801.02209, 2018.
-  J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 2223–2232, 2017.
-  Y. Zou, Z. Yu, B. Vijaya Kumar, and J. Wang. Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In Proceedings of the European Conference on Computer Vision (ECCV), pages 289–305, 2018.
7 Architecture Selection
To select a single classification architecture to use across our experiments, we trained three classifiers: ResNet-101 V2, Inception V3, and Inception-ResNet V2. All three were pretrained on no-animal ImageNet, then trained on the Caltech Camera Traps (CCT) training set (described in the main paper, Section 3.1) with no added simulated images. Inception-ResNet V2 performed best on deer in both the cis and trans scenarios (see Table 1), so we use it as the base architecture for all further experiments.
| Architecture | Cis Test (Deer) | Cis Test (Other) | Trans+ Test (Deer) | Trans+ Test (Other) |
|---|---|---|---|---|
| ResNet-101 V2 | 47.86 | 11.18 | 88.63 | 29.76 |
| Inception-ResNet V2 | 29.28 | 10.17 | 77.69 | 31.07 |
8 Additional Analysis
8.1 Analyzing the value of real images
We find that our simulated data is sufficient to learn to recognize some deer even without real examples, though the real examples give a large boost in performance; the breakdown can be seen in Table 2. These results are promising both for researchers studying zero-shot learning and for biologists studying highly endangered species: it is possible to learn a species with no real training data. This avenue remains open for further study.
8.2 Comparing night and day performance
We further analyze the effect of day and night simulation by comparing three experiments: one trained with only simulated daytime images, one with only simulated nighttime images, and one with half day and half night (see Fig. 30). The models trained on only day and only night perform similarly on trans deer, and the 50/50 split performs best on trans deer (highlighted region in Fig. 30). Training on day or night alone gives a 20% performance boost on trans deer, while training on both gives a 40% boost. This suggests that the day and night simulated images help the classifier in complementary ways: day images help with day test images and night images help with night test images. Performance on other classes is not strongly affected. Cis performance is quite noisy and is best with no added simulated data; see Fig. 2 in the main paper for further analysis.
| Real Training Data | Cis Test (Deer) | Cis Test (Other) | Trans+ Test (Deer) | Trans+ Test (Other) |
|---|---|---|---|---|
| CCT train w/o deer | 94.29 | 18.64 | 68.56 | 34.42 |
| CCT train w/ deer | 52.14 | 10.91 | 44.05 | 30.47 |
| % decrease from real deer | 44.7 | 41.5 | 35.7 | 11.5 |
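The "% decrease" row of Table 2 can be reproduced directly from the two data rows. A minimal sketch (variable names are ours, and we assume the reported values are per-class error rates in percent):

```python
# Per-class values from Table 2 (assumed to be error rates, %).
without_deer = {"cis_deer": 94.29, "cis_other": 18.64,
                "trans_deer": 68.56, "trans_other": 34.42}
with_deer = {"cis_deer": 52.14, "cis_other": 10.91,
             "trans_deer": 44.05, "trans_other": 30.47}

def pct_decrease(before, after):
    """Relative reduction from `before` to `after`, in percent."""
    return round(100.0 * (before - after) / before, 1)

# Recomputes the "% decrease from real deer" row of the table.
decrease = {k: pct_decrease(without_deer[k], with_deer[k]) for k in without_deer}
```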
8.3 Investigating the effect of adding simulated data for a common class
To investigate how added simulated data might affect a common class, as opposed to a rare one, we created "coyote" simulated data with TrapCam-Unity, using rendered models of wolves as a proxy for coyotes: off-the-shelf, high-quality wolf models were more widely available, and wolves and coyotes are visually very similar (see Fig. 34). This is a coarse-grained experiment, and it remains to be seen what would happen if simulated data from two visually similar classes (wolves and coyotes) were added at the same time.
We find that adding simulated “coyote” data improves trans+ coyote performance slightly, while cis coyote performance remains the same. Unsurprisingly, for the deer class (which has few training examples) adding a large amount of simulated coyote data harms both cis and trans+ deer performance.
9 Creating Sim and Real on Empty Data
As an alternative to the fully synthetic data generation methods using AirSim and Unity, we generated synthetic images by overlaying either simulated deer or real cropped deer onto real empty background images from the CCT dataset (see Fig. 41).
For the Sim on Empty dataset generation, we posed either a stag or a doe deer from the GiM model set in front of a simulated camera in Unity. We randomized the animation, orientation in azimuth (0–360 degrees), and position of the deer, as well as the light's orientation in azimuth (0–360 degrees) and elevation (20–90 degrees).
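The randomization above amounts to sampling one parameter vector per rendered image. A sketch of such a sampler, using the azimuth and elevation ranges stated in the text (the clip names, position bounds, and all other specifics are illustrative placeholders, not our actual generation code):

```python
import random

def sample_sim_on_empty_params(seed=None):
    """Sample one randomized deer placement for a Sim on Empty render.
    Azimuth/elevation ranges follow the text; everything else is assumed."""
    rng = random.Random(seed)
    return {
        "model": rng.choice(["stag", "doe"]),                   # GiM model set
        "animation": rng.choice(["idle", "walk", "eat"]),       # placeholder clips
        "animal_azimuth_deg": rng.uniform(0, 360),
        "position_xy_m": (rng.uniform(-2, 2), rng.uniform(1, 6)),  # assumed bounds
        "light_azimuth_deg": rng.uniform(0, 360),
        "light_elevation_deg": rng.uniform(20, 90),
    }

params = sample_sim_on_empty_params(seed=0)
```

Seeding the sampler makes each rendered configuration reproducible.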
For the Real on Empty dataset, we manually segmented and cropped out the 44 instances of deer from the CCT training set, then pasted the cropped deer foregrounds onto empty camera trap images at random locations.
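The cut-and-paste step is simple masked compositing. A dependency-free sketch (images here are nested lists of RGB tuples rather than real image files, so this is an illustration of the logic, not our actual tooling):

```python
import random

def paste_on_empty(fg, fg_mask, bg, rng):
    """Paste a segmented animal crop onto an empty frame at a random spot.
    fg_mask is 1 on the animal and 0 elsewhere (from manual segmentation)."""
    fh, fw = len(fg), len(fg[0])
    bh, bw = len(bg), len(bg[0])
    y0 = rng.randint(0, bh - fh)          # random paste location
    x0 = rng.randint(0, bw - fw)
    out = [row[:] for row in bg]          # leave the background untouched
    for dy in range(fh):
        for dx in range(fw):
            if fg_mask[dy][dx]:
                out[y0 + dy][x0 + dx] = fg[dy][dx]
    return out

# Toy example: a 2x2 crop with 3 animal pixels onto a 4x4 "empty" frame.
crop = [[(200, 180, 150), (0, 0, 0)], [(190, 170, 140), (180, 160, 130)]]
mask = [[1, 0], [1, 1]]
empty = [[(30, 30, 30)] * 4 for _ in range(4)]
composite = paste_on_empty(crop, mask, empty, random.Random(0))
```

In practice the same logic runs over full-resolution frames with an alpha channel standing in for `fg_mask`.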
10 TrapCam-AirSim Details
Deriving the overall requirements for the AirSim TrapCam environment took time and thought. With a sizable number of potential biomes globally, we narrowed the scope to a Southwestern United States environment similar to what is seen in the CCT data, eventually settling on a sub-alpine woodland scene readily found across most of the Western and Southwestern US. A major requirement, and a key challenge, was getting the most data out of a relatively small but detailed area without expanding the area of interest. The overall intent was to leverage Microsoft AirSim's computer vision mode to move a pre-configured camera around the scene, providing varied backgrounds.
We used various off-the-shelf components such as an animal pack from Epic Studios (Animals Vol 01: Forest Animals by GiM), background terrain from the Unreal Marketplace, vegetation from SpeedTree, and rocks/obstructions from Megascans. In other AirSim environments, the general scenery is fairly static, with the exception of particle effects (snow, rain, dust, etc.). For this effort we wanted a method to vary the background, replicating a variety of terrains within a single environment (see Fig. 48). The actual area of the environment is small, but its modularity allows many possible scenes to be constructed. The randomization was designed so that artists can supply a list of objects to randomize; objects are prioritized by their order in the list. The BiomeTerrain class generates them by tracing random areas across the field based on a global seed, spawning the desired object if space is available. TrapCam-AirSim supports a number of object types (animals, rocks, logs, grasses, shrubs, and trees), and each type can be varied in density and distribution. Additionally, we provide 9 GiM animal models: deer (doe and stag), wolf, fox, rat, spider, bear, raccoon, and buffalo. The doe model was created by removing the antlers from the stag model in Maya, a common modeling tool. All animal objects were assigned segmentation IDs for efficient ground truth extraction.
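The seeded, priority-ordered placement that BiomeTerrain performs can be sketched in a few lines. This is a simplified Python rendition of the idea (the real class is C++/Unreal; all names, counts, and distances below are illustrative):

```python
import random

def populate_biome(object_priority, counts, field_size, min_gap, seed):
    """Walk a prioritized object list, pick random spots from a global seed,
    and spawn an object only if the spot is free of earlier objects."""
    rng = random.Random(seed)            # global seed -> reproducible scenes
    placed = []                          # (object_type, x, y)
    for obj in object_priority:          # earlier entries get first pick of space
        for _ in range(counts[obj]):
            for _attempt in range(20):   # a few tries per object, then give up
                x = rng.uniform(0, field_size)
                y = rng.uniform(0, field_size)
                if all((x - px) ** 2 + (y - py) ** 2 >= min_gap ** 2
                       for _, px, py in placed):
                    placed.append((obj, x, y))
                    break
    return placed

scene = populate_biome(["deer", "rocks", "logs", "grass"],
                       {"deer": 2, "rocks": 5, "logs": 3, "grass": 10},
                       field_size=50.0, min_gap=1.5, seed=42)
```

Because placement is driven by a single seed, the same scene can be regenerated exactly, which is useful when re-rendering with different cameras or lighting.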
We created a simple UI to vary parameters, along with a command-line API for parameter configuration. The UI was constructed with Unreal Motion Graphics (UMG) Widgets and allows future flexibility across modifications, DPI resolutions, and platforms. The core functionality was written in C++ for better performance, as a parent class for data-only blueprints, which allows technical artists to easily swap assets for different environments without recompiling the C++ code.
We began requirements and scoping in mid-August 2018, received a go-ahead on approximately 6 September, and produced a working prototype two weeks later, with continued development and refinement through mid-October. A second phase late in the year added flash capability to the camera system, updated the animals to provide eye-shine, and modified the UI to include variability for that eye-shine.
11 TrapCam-Unity Details
The “Book of The Dead” environment we use is published for free by Unity. As shown in Fig. 58, the near-photorealistic environment simulates a large patch of forest in a valley, with volumetric grass, a variety of high-definition trees, logs, and bushes, as well as rocks and terrain. The environment covers an irregular area of roughly 20,000 square meters. It runs in real time on a desktop PC, enabling us to generate large amounts of images efficiently.
To create daytime images we varied the orientation of the simulated sun in both azimuth and elevation. To create nighttime images we attached a spotlight to the simulated camera to simulate a white-light or IR flash and to qualitatively match the low color saturation of real nighttime images. To simulate animals' eyeshine (the reflection of camera flash from the tapetum lucidum), we placed small reflective balls on top of the eyes of the animal models (see Fig. 59).
For deer simulation, we used 17 animated deer models from 5 publishers on Unity (GiM, 4toon, Protofactor, Red Deer, Janpec). For coyote simulation, we used 5 models from 5 publishers (GiM, 4toon, Protofactor, Janpec, WDallgraphics). We created the GiM doe model by removing the antlers of the GiM stag model with Blender. Each animated model includes an animation controller containing several animation clips, ranging from commonly seen behaviors like walking and eating to rare occurrences like attacking and sleeping. During dataset generation, we randomly picked a clip for each animal instance and froze it at a random time point, then moved the cameras around to sample a static scene containing animals and environment.
We had 300 seed locations and randomly placed animals in the vicinity of a subset of these seed locations. This process was repeated multiple times to simulate animals at random locations within the environment; a similar random placement process determined the camera locations. All generated images are in full HD resolution (1920 x 1080).
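The seed-location placement described above can be sketched as sampling a subset of seeds and jittering animal positions around each one. This is an illustrative reconstruction (the site count, animals per site, and jitter radius are assumed values, not those used in the actual environment):

```python
import random

def place_instances(seed_locations, n_sites, per_site, jitter, rng):
    """Place animals near a random subset of seed locations.
    Coordinates are in scene meters; jitter is the max offset per axis."""
    sites = rng.sample(seed_locations, n_sites)   # pick which seeds get animals
    placements = []
    for (sx, sy) in sites:
        for _ in range(per_site):                 # scatter animals around the seed
            placements.append((sx + rng.uniform(-jitter, jitter),
                               sy + rng.uniform(-jitter, jitter)))
    return placements

rng = random.Random(7)
# 300 seed locations scattered over an assumed ~140 m x 140 m extent.
seeds = [(rng.uniform(0, 140), rng.uniform(0, 140)) for _ in range(300)]
animals = place_instances(seeds, n_sites=5, per_site=3, jitter=4.0, rng=rng)
```

Re-running the same sampler with fresh random draws yields the repeated placements mentioned above; cameras can be positioned with the same routine.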
For ground truth generation, we turned off the lighting and rendered each animal instance in a unique color by replacing the original animal shader with an unlit shader. We then used custom Python scripts to extract animal bounding boxes by locating the pixels with these unique colors.
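The box-extraction step reduces to scanning the flat-colored render for each instance's pixels. A minimal sketch of the idea (not our production script): here the render is modeled as a 2D grid of per-pixel instance IDs, with one ID per unique color and 0 for background.

```python
def boxes_from_instance_colors(id_image):
    """Extract per-instance bounding boxes [xmin, ymin, xmax, ymax] from an
    instance-ID render produced by the unlit, unique-color shader pass."""
    boxes = {}
    for y, row in enumerate(id_image):
        for x, pid in enumerate(row):
            if pid == 0:                      # background pixel
                continue
            if pid not in boxes:
                boxes[pid] = [x, y, x, y]     # first pixel of this instance
            else:
                b = boxes[pid]                # grow the box to cover this pixel
                b[0], b[1] = min(b[0], x), min(b[1], y)
                b[2], b[3] = max(b[2], x), max(b[3], y)
    return boxes

# Tiny example: two instances (IDs 1 and 2) in a 4x3 render.
img = [[0, 0, 1, 1],
       [0, 2, 1, 0],
       [2, 2, 0, 0]]
boxes = boxes_from_instance_colors(img)
```

On real renders the unique RGB colors are first mapped to integer IDs, after which the scan is identical.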