Face Recognition: Primates in the Wild

04/24/2018 ∙ by Debayan Deb, et al. ∙ University of Chester ∙ Michigan State University

We present a new method of primate face recognition and evaluate it on several endangered primates, including golden monkeys, lemurs, and chimpanzees. The three datasets contain a total of 11,637 images of 280 individual primates from 14 species. Primate face recognition performance is evaluated using four systems: (i) FaceNet and (ii) SphereFace, two existing state-of-the-art open-source systems; (iii) a lemur face recognition system from the literature; and (iv) our new convolutional neural network (CNN) architecture, called PrimNet. Three recognition scenarios are considered: verification (1:1 comparison), and both open-set and closed-set identification (1:N search). We demonstrate that PrimNet outperforms all of the other systems in all three scenarios for all primate species tested. Finally, we implement an Android application of this recognition system to assist primate researchers and conservationists in the wild with individual recognition of primates.


1 Introduction

In 2008, the IUCN released a detailed report, the Red List of Threatened Species, which concluded that global biodiversity is severely threatened [2]. The IUCN found that 22% of all mammal species are ‘critically endangered’, ‘endangered’, or ‘vulnerable.’ Primates, as an order of mammals, are particularly threatened, with around 60% of all primate species and around 91% of all lemur species threatened by extinction [3][4]. Lemurs are native only to the island of Madagascar, where their forest habitat is being destroyed to make room for crops and to feed the illegal hardwood trade [5]. Lemurs also fall prey to over-hunting, as their meat is highly desired [2]. Similarly, the endangered golden monkey has endured extensive habitat loss and is now found only in a few national parks in Africa [6].

Intervention is necessary to halt and reverse these population declines of endangered primates, and one such intervention lies in the individualization of these animals through automated facial recognition. Improved recognition and tracking will benefit the long-term health and stability of these species in a number of ways: (i) enabling more efficient longitudinal study, (ii) eliminating the harmful effects of traditional tracking methods, and (iii) combating illegal trafficking and trade. This study proposes a non-invasive method of automatic facial recognition for primates, which we show to be effective for lemurs, golden monkeys, and chimpanzees, and which we believe generalizes to any primate.

(a)
(b)
Figure 1: Endangered Primates. (a) A lemur tagged and collared for tracking at Duke University Lemur Center [7]. (b) A female savannah baboon wearing a GPS collar used for mammal tracking study [8].

Recognition of animals in the wild is critical for understanding the evolutionary processes that guide biodiversity. Researchers must reliably recognize each individual animal in order to observe that animal’s variation within a population. Unique appearance-based cues, such as body size, presence of scars and marks, and coloring, are often used for interim studies [9][10], but these attributes are subjective and vary over time. Therefore, they are unreliable in longitudinal studies, which are necessary for the study of long-term population health and behavior, group dynamics, and the heritability and effects of traits [11].

Biologists and anthropologists have started to adopt more objective and rigorous tracking methods, such as collars or tags (Figure 1). While these approaches have been used successfully in several long-term, in-the-wild primate studies [12][13][14], they are problematic in a number of ways. First, the devices can be expensive ($400-$4,000 per animal [15]) and time-consuming to apply. Second, tagging requires capture of the animal, which has demonstrably negative effects: it can disrupt social behavior [16] and cause intense stress [17], injury [18], and even death [19]. For these reasons, the ethics of these methods have come under question [20][21]. In contrast, automatic facial recognition is a promising method to accurately identify individuals with minimal risk to these already threatened species.

(a)
(b)
Figure 2: Trafficking in primates. (a) A caged capuchin monkey in Peru. [22]. (b) Two chimpanzees rescued from a smuggling operation in Kathmandu, Nepal [23].
Figure 3: Group of chimpanzees partying in the wild [24].

A third opportunity to safeguard these endangered primate species lies in the growing problem of trafficking. Primate trafficking is a booming business in which these animals are captured from the wild for shipment around the globe (Figure 2). In the case of great apes, for example, it is estimated that 22,218 individuals were lost between 2005 and 2011 due to illegal trade [25]. In contrast, only 27 arrests were made in connection with such trade, indicating that little has been done to solve the problem [25]. There is evidence that this illegal trade of great apes has been increasing since 2011 [26]. If a captured individual can be identified, this will provide information about the animal’s origin and may provide insight into the circumstances of their capture.

There is an urgent need for a non-invasive, reliable method of identifying individuals that can be easily employed in the field. Kühl et al. proposed animal biometrics as a potential solution [27]. Computer-aided identification of individuals has shown promise for wild animal populations such as cheetahs [28], tigers [29], giraffes [30], zebras [31], and penguins [32]. Primates are particularly promising targets for facial recognition because humans themselves are primates; great apes, in particular, are grouped together with humans in one of the major branches of the primate evolutionary tree. Since primate facial structure is similar to that of humans (forward-facing eye sockets, small or absent snout), we expect that established human facial recognition techniques will generalize well to primate faces. Indeed, Freytag et al. worked on automatic individualization of chimpanzees in the wild [33] using convolutional neural networks (CNNs) and achieved 92% identification accuracy on a dataset containing 2,109 face images of 24 chimpanzees. Crouse et al. proposed a face recognition system for lemurs (LemurFaceID) using simple LBP features [34]. LemurFaceID focused on the individual identification of 80 red-bellied lemurs from the Duke Lemur Center and reported Rank-1 identification accuracy using 2-query image fusion. LemurFaceID focused solely on the identification scenario (1:N search). However, for automatic primate face recognition, validating whether a set of photographs belongs to the same individual (1:1 comparison) is equally important.

The aforementioned studies have not been implemented in a manner that lets a human operator quickly perform identification in the wild using, say, a mobile app. To that end, researchers at the Cornell Lab of Ornithology developed an application, Merlin Bird ID [35], that can identify bird species, though it does not support individual identification. In this paper, we propose a non-invasive, rapid, and robust method of automatic primate individual identification, implemented as an Android smartphone application for rapid deployment and use.

Concisely, the contributions of the paper are as follows:

  1. Evaluated the lemur individual identification performance of two state-of-the-art, open-source human face recognition systems, FaceNet (https://github.com/davidsandberg/facenet) [36] and SphereFace (https://github.com/wy1iu/sphereface) [37], on a dataset of 3,000 face images of 129 lemurs. (FaceNet and SphereFace achieve 99.65% and 99.42% accuracy, respectively, on the LFW dataset under the standard LFW protocol [1].) SphereFace achieved an identification accuracy of 92.45% at Rank-1.

  2. Proposed a new CNN architecture (PrimNet), suited to the small datasets available for primate faces, that is implemented on a mobile phone. PrimNet achieves 93.76% lemur individual identification accuracy at Rank-1.

  3. Demonstrated the generalization of PrimNet to other primates, such as chimpanzees and golden monkeys. PrimNet achieves Rank-1 accuracies of 90.36% and 75.82% for golden monkeys and chimpanzees, respectively.

  4. Implemented an Android app that can be used by primate researchers and conservationists in the wild for recognition (both 1:1 and 1:N) and tracking of primates.

  5. We plan to publicly release both the LemurFace and GoldenMonkeyFace datasets so that other researchers can advance the state-of-the-art in primate face recognition. In addition, the software for PrimNet, along with the mobile app, will also be open-sourced.

(a) Eulemur coronatus
(b) Propithecus coquereli
(c) Lemur catta
(d) Varecia variegata
(e) Eulemur collaris
(f) Eulemur mongoz
(g) Varecia rubra
(h) Eulemur rubriventer
(i) Eulemur flavifrons
Figure 4: Images of 9 out of 12 different lemur species in our dataset.
(a) Adam
(b) Dave
(c) Duncan
(d) Ella
Figure 5: Images of four different golden monkeys in our dataset: Adam, Dave, Duncan, and Ella.
(a) Coco
(b) Fredy
(c) Oscar
Figure 6: Images of three different chimpanzees in our dataset: Coco, Fredy, and Oscar.

2 Dataset

For our experiments, we acquired datasets of three different primates in the wild: lemurs, golden monkeys, and chimpanzees. In this paper, we refer to these datasets as LemurFace, GoldenMonkeyFace (both available for download at https://github.com/ronny3050/PrimateFaceRecognition), and ChimpFace (https://github.com/cvjena/chimpanzee_faces) [38][33], respectively.

2.1 LemurFace Dataset

The LemurFace dataset contains 3,000 face images of 129 lemur individuals from 12 different species (Figure 4), photographed by one of the authors at the Duke Lemur Center in North Carolina. Images were acquired using the 8-megapixel camera of a mid-range smartphone, the LG Nexus 5 (https://www.gsmarena.com/lg_nexus_5-5705.php). Lemurs were labeled according to the names given to them by the Duke Lemur Center (e.g., Alena, Ma’at, West). We manually annotated the eye and chin locations of the lemurs, and any image in which both of the lemur’s eyes were not clearly visible was removed from the dataset, resulting in a total of 3,000 images. In addition, to account for variation in environmental conditions, we acquired images of each lemur on two consecutive days, and each individual was photographed both indoors and outdoors. A histogram of the number of images per lemur individual is shown in Figure 7(a).

(a) Lemurs
(b) Golden Monkeys
(c) Chimpanzees
Figure 7: Histograms of the number of face images per (a) lemur, (b) golden monkey, and (c) chimpanzee. The total number of distinct lemurs, golden monkeys, and chimpanzees in LemurFace, GoldenMonkeyFace, and ChimpFace datasets are 129, 49, and 90, respectively.

 

LemurFace GoldenMonkeyFace ChimpFace
Number of Images 3,000 1,450 5,559
Number of Individuals 129 49 90
Number of images/individual [7, 42] [2, 120] [3, 315]
Average number of images/individual 23 30 63

 

Table 1: Summary of LemurFace, GoldenMonkeyFace, and ChimpFace datasets.

2.2 GoldenMonkeyFace Dataset

Our GoldenMonkeyFace dataset consists of 1,450 face images of 49 golden monkeys (Cercopithecus mitis kandti). A total of 241 short video clips (average duration of 6 seconds) were shot by one of the authors using a Nikon Coolpix B700 (https://www.nikonusa.com/en/nikon-products/product/compact-digital-cameras/coolpix-b700.html) at Volcanoes National Park in Rwanda. Image frames were extracted from each of the video clips and were then cropped and aligned as described in Section 3.1. Figures 5 and 7(b) show example golden monkey face images and a histogram of the number of face images per golden monkey, respectively.

2.3 ChimpFace Dataset

Loos and Ernst provided two chimpanzee face datasets, C-Zoo and C-Tai, which were extended by Freytag et al. [38][33]. The C-Zoo dataset comprises 2,109 face images of 24 chimpanzees, and the C-Tai dataset contains 5,078 face images of 78 individuals. Eye and mouth center locations were manually annotated for all the images by domain experts.

Due to the small number of individuals in the C-Zoo dataset, we merged the C-Zoo and C-Tai datasets to form ChimpFace and removed all individuals with fewer than 3 face images. In total, we have 5,559 images of 90 chimpanzees. Figures 6 and 7(c) show face images of a few chimpanzees from the ChimpFace dataset and a histogram of the number of face images per chimpanzee, respectively.

3 Methodology

In this section, we introduce the proposed system for aligning and matching the primate face photos. Then, we report experiments to evaluate our system and compare it with existing methods in Section 4.

(a) Original
(b) Aligned
(c) Original
(d) Aligned
(e) Original
(f) Aligned
Figure 8: Primate face images are aligned using a similarity transform.
Figure 9: Proposed PrimNet Architecture. A heat map of the intermediate representation of the input lemur’s face is shown below each intermediate layer of the network.

3.1 Face Alignment

The primary challenge in designing a face recognition system for primates is to first detect and then align the face images. Due to the lack of large face datasets for the three endangered species considered here, training a face detector specifically for them is not feasible. Face detection also poses additional challenges due to variations in hair and fur, low contrast between the eyes and background, and variation in eye color across individuals. For these reasons, all the face images in our experiments are manually annotated with three landmarks, namely the left eye, right eye, and mouth center. These landmarks are used to construct a “landmark template” using the following procedure.

Let $\mathbf{l}^{(i)}$ denote the landmark locations for the $i$-th image in the dataset, where the left eye, right eye, and mouth center coordinates are denoted as $(e^{(i)}_{l,x}, e^{(i)}_{l,y})$, $(e^{(i)}_{r,x}, e^{(i)}_{r,y})$, and $(m^{(i)}_{x}, m^{(i)}_{y})$, respectively. We generate a 6-element vector

$$\mathbf{x}^{(i)} = \left(e^{(i)}_{l,x},\; e^{(i)}_{l,y},\; e^{(i)}_{r,x},\; e^{(i)}_{r,y},\; m^{(i)}_{x},\; m^{(i)}_{y}\right).$$

Then, we compute the “landmark template” for a dataset of $N$ images by

$$\bar{\mathbf{x}} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{x}^{(i)}.$$

We represent a similarity transform by

$$T = \begin{bmatrix} s\cos\theta & -s\sin\theta & t_x \\ s\sin\theta & s\cos\theta & t_y \end{bmatrix},$$

where $s$, $\theta$, and $(t_x, t_y)$ are the scale, rotation, and translation parameters, respectively. To solve for the parameters, we rewrite the above as a system of linear equations, $A\mathbf{p} = \mathbf{b}$, where $\mathbf{p} = (s\cos\theta,\; s\sin\theta,\; t_x,\; t_y)^{\top}$, and obtain a least squares estimate $\hat{\mathbf{p}} = (A^{\top}A)^{-1}A^{\top}\mathbf{b}$. Figure 8 outlines the methodology for aligning primate face images. In a real-life setting, the user is expected only to manually annotate the three landmarks before submitting the image to PrimNet for recognition.
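The alignment procedure above can be sketched in a few lines of NumPy (an illustrative re-implementation, not the authors' code; the landmark coordinates and template values below are made-up examples):

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares fit of a similarity transform mapping src landmarks
    onto dst landmarks. Parameters p = [a, b, tx, ty], where
    a = s*cos(theta) and b = s*sin(theta)."""
    A_rows, b_rows = [], []
    for (x, y), (u, v) in zip(src, dst):
        A_rows.append([x, -y, 1, 0]); b_rows.append(u)  # u = a*x - b*y + tx
        A_rows.append([y,  x, 0, 1]); b_rows.append(v)  # v = b*x + a*y + ty
    A, b = np.array(A_rows, float), np.array(b_rows, float)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

def apply_similarity(p, pts):
    """Apply the fitted transform to an array of (x, y) points."""
    a, b, tx, ty = p
    return pts @ np.array([[a, b], [-b, a]]) + np.array([tx, ty])

# Three manually annotated landmarks (left eye, right eye, mouth center)
# and a hypothetical mean landmark template.
src = np.array([[30.0, 40.0], [70.0, 40.0], [50.0, 75.0]])
template = np.array([[33.0, 44.0], [72.0, 43.0], [52.0, 78.0]])
p = fit_similarity(src, template)
aligned = apply_similarity(p, src)  # lands close to the template
```

Applying the same transform to the whole image (e.g., with a warp function from an image library) produces aligned face crops like those shown in Figure 8.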

3.2 PrimNet

To learn robust face representations for primates, we developed a new CNN architecture, which we call PrimNet. Deep neural networks require sufficiently large datasets to learn their numerous parameters. For human faces, data at this scale is easy to obtain; for other primates, especially endangered ones, the availability of face datasets is limited. We found that SphereFace-4 [37], one of the smaller face recognition networks, suffers from overfitting when trained on our primate datasets. Hence, we introduced two modifications to the SphereFace-4 architecture in designing PrimNet:

  • Reduced the number of parameters by making the network sparser, applying the group convolution strategy to all the layers [39], followed by channel shuffling [40].

  • Enhanced the discriminative power of the hidden layers by making the network wider, via an increased number of channels.

In a traditional CNN architecture, each convolution filter is applied across all the channels of the input feature map. In group convolution, as in ShuffleNet [40], each convolution filter is applied to only a subset of the input channels, significantly reducing the number of parameters. Note that if all the layers adopt group convolution, the information in each group is isolated and never exchanged; the channel shuffle operation after each group convolution was proposed to address this [40]. Through grouping and shuffling in its four convolution layers, PrimNet becomes a sparse network with a small fraction of the parameters of SphereFace-4 and ShuffleNet. Reducing the number of filters limits the dimensionality of the intermediate layers; however, increasing the sparsity does not inhibit their representational power. Figure 9 illustrates the proposed network architecture. PrimNet is trained using the AM-Softmax loss function, which has been shown to be effective in learning human face representations [41].
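The two ShuffleNet ingredients can be illustrated with a small NumPy sketch; the channel counts and group sizes below are illustrative choices, not PrimNet's actual configuration:

```python
import numpy as np

def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a k x k convolution layer; group convolution splits
    the channels into `groups` independent subsets, dividing the weights."""
    return (c_in // groups) * (c_out // groups) * k * k * groups

def channel_shuffle(x, groups):
    """Interleave channels across groups so that information mixes between
    consecutive group convolutions (as in ShuffleNet [40]).
    x: (channels, h, w) feature map."""
    c, h, w = x.shape
    return x.reshape(groups, c // groups, h, w) \
            .transpose(1, 0, 2, 3) \
            .reshape(c, h, w)

# A 3x3 conv with 64 -> 128 channels: 8 groups cut the weights 8x.
dense = conv_params(64, 128, 3)              # 73,728 weights
grouped = conv_params(64, 128, 3, groups=8)  # 9,216 weights

# Channel shuffle on a toy 8-channel feature map: the channel order
# becomes interleaved across the two groups.
x = np.arange(8).reshape(8, 1, 1) * np.ones((8, 2, 2))
order = channel_shuffle(x, groups=2)[:, 0, 0]
```

With 8 groups, the 64-to-128-channel layer needs 8x fewer weights, while the shuffle interleaves channels (0, 4, 1, 5, ...) so the next grouped layer sees information from every group.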

4 Experiments

We evaluate the performance of PrimNet on three tasks: (i) verification, (ii) closed-set identification, and (iii) open-set identification. For each experiment, we evaluate the performance of primate individualization models using 5-fold cross-validation.

In our study, genuine comparisons are formed by choosing one face image from each primate individual’s imagery as the query image and comparing it to the same individual’s template, i.e., all remaining face images of that individual. We repeat the query/template split until every image of every individual has been used as a query image. In a similar fashion, we form impostor comparisons by comparing a query image of a primate individual to all other individuals’ templates. For both genuine and impostor comparisons, a similarity score is obtained by computing the cosine similarity between the corresponding feature vectors. The highest similarity score within a template serves as the individual’s overall similarity score. In practical usage, the verification scenario is invaluable for gathering evidence of live primate trafficking. Suppose a photograph of a certain primate appears illegally for sale on a social media account, and a similar photo appears on a different account. The verification task can assist in confirming whether the two photographs show the same individual. Confirming an individual’s identity through verification can greatly aid in closing the loop on online primate trafficking by illuminating the network of smugglers and traders involved.
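The max-over-template scoring described above can be sketched as follows (a toy NumPy example in which random vectors stand in for CNN face embeddings):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def template_score(query, template):
    """Max cosine similarity between a query embedding and every
    embedding in an individual's template (set of enrolled images)."""
    return max(cosine(query, t) for t in template)

rng = np.random.default_rng(0)
enrolled = rng.normal(size=(5, 128))                     # one individual's template
query_genuine = enrolled[2] + 0.05 * rng.normal(size=128)  # noisy view of same animal
query_impostor = rng.normal(size=128)                    # a different animal

s_gen = template_score(query_genuine, enrolled)
s_imp = template_score(query_impostor, enrolled)
```

In this toy setup, the genuine query, being a perturbed copy of an enrolled image, scores near 1, while the impostor's best match against the template stays low.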

Verification accuracy is reported as the mean and standard deviation of the True Accept Rate (TAR) at 1% and 0.1% False Accept Rate (FAR) across the 5 folds.
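Given lists of genuine and impostor scores, TAR at a fixed FAR can be computed as below (a generic evaluation sketch on synthetic score distributions, not the paper's code):

```python
import numpy as np

def tar_at_far(genuine, impostor, far=0.01):
    """True Accept Rate at a given False Accept Rate: set the accept
    threshold at the (1 - far) quantile of impostor scores, then measure
    the fraction of genuine scores that clear it."""
    thr = np.quantile(impostor, 1.0 - far)
    return float(np.mean(np.asarray(genuine) >= thr))

rng = np.random.default_rng(1)
genuine = rng.normal(0.7, 0.1, 1000)    # synthetic genuine scores
impostor = rng.normal(0.2, 0.1, 10000)  # synthetic impostor scores
tar = tar_at_far(genuine, impostor, far=0.01)
```

With well-separated score distributions like these, the TAR at 1% FAR is close to 1; overlapping distributions drive it down.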

Identification (1:N search) searches a dataset (gallery) to determine the identity of an individual from a given probe (query) image. In closed-set identification, the probe individual is assumed to be enrolled in the gallery. Through closed-set identification, missing individuals can be identified and returned to their colony. In our experiments, closed-set identification is conducted by randomly choosing a face image from each primate individual as the probe image, with the rest of the individual’s imagery kept in the gallery. As in verification, the probe image is compared to each image within an individual’s template, and the highest similarity score from these comparisons is the individual’s overall similarity score. The individual with the highest overall similarity score is then considered the probe’s true mate in the gallery. The cumulative accuracy is computed as the fraction of correctly identified (retrieved) individuals at Rank-1.

In open-set identification, the individual in the query image may not have been previously enrolled in the gallery, and thus the recognition system must be capable of indicating that the individual in the probe is not in the dataset. For open-set experiments, we extend the probe set with primate face images of individuals not present in the gallery, and report the Detection and Identification Rate (DIR) at 1% FAR and Rank-1. In both closed-set and open-set identification scenarios, for each of the 5 folds, we run 100 trials of randomly splitting the test set into probe and gallery sets.

4.1 Baseline

To obtain a baseline performance, we evaluate the individualization accuracy of LemurFaceID [34], which is based on Local Binary Pattern (LBP) features [42]. Using a training set of 104 lemurs and a testing set of 25 lemurs, we achieve a baseline verification performance of 81.90% ± 3.69% TAR at 1% FAR and 90.82% ± 1.80% closed-set identification accuracy at Rank-1 across the five folds. Table 2 summarizes the results.
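For reference, the plain 3x3 LBP operator underlying such features can be sketched in NumPy (a simplified single-scale version; LemurFaceID itself uses a more elaborate multi-scale, patch-wise LBP pipeline):

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 Local Binary Pattern: threshold each pixel's 8 neighbours
    against the centre pixel and pack the bits into a code (0-255).
    img: 2-D grayscale array."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= centre).astype(np.uint8) << bit
    return out

def lbp_histogram(img):
    """256-bin normalised LBP histogram, a simple texture descriptor."""
    hist = np.bincount(lbp_codes(img).ravel(), minlength=256)
    return hist / hist.sum()

img = np.random.default_rng(3).integers(0, 256, (32, 32)).astype(np.uint8)
feat = lbp_histogram(img)
```

Concatenating such histograms over a grid of face patches yields the kind of descriptor that LBP-based matchers compare.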

 

Lemurs Golden Monkeys Chimpanzees
Method Verification Closed-set Open-set Verification Closed-set Open-set Verification Closed-set Open-set

 

TAR @ 1% FAR Rank-1 DIR @ 1% FAR TAR @ 1% FAR Rank-1 DIR @ 1% FAR TAR @ 1% FAR Rank-1 DIR @ 1% FAR

 

Baseline [34] 81.90 ± 3.69 90.82 ± 1.80 N/A 74.88 ± 6.75 89.33 ± 7.68 N/A 44.62 ± 4.38 70.16 ± 3.36 N/A
SphereFace-20 [37] 79.40 ± 5.82 92.45 ± 1.67 80.83 ± 4.48 65.18 ± 12.28 87.32 ± 4.57 61.15 ± 12.80 48.62 ± 6.23 75.49 ± 3.80 30.75 ± 12.41
SphereFace-4 [37] 73.60 ± 5.81 90.18 ± 1.37 72.29 ± 9.49 72.53 ± 6.57 87.49 ± 3.77 69.43 ± 9.27 53.92 ± 2.57 74.19 ± 3.74 35.85 ± 8.22
FaceNet [36] 55.52 ± 7.88 87.06 ± 9.63 56.12 ± 1.93 50.12 ± 15.31 73.47 ± 8.81 49.69 ± 9.54 17.89 ± 7.93 59.75 ± 8.64 4.86 ± 3.38
PrimNet 83.11 ± 5.31 93.76 ± 0.90 81.73 ± 2.36 78.72 ± 5.80 90.36 ± 0.92 66.11 ± 7.99 59.87 ± 3.34 75.82 ± 1.25 37.08 ± 11.22

 

Table 2: Performance on three different primates: Lemurs, Golden Monkeys, and Chimpanzees.

 

Method Inference Speed (ms / img) Model Size (MB)

 

SphereFace-20 [37] 17.26 87
SphereFace-4 [37] 13.05 48
FaceNet [36] 40.42 90
PrimNet 23.58 3.9

 

Table 3: Inference speed and model size of different networks.
Figure 10: Verification accuracy with respect to varying number of images per template. Performance increases with an increased number of images in a template.

4.2 Human FR to Primate FR

Since we have related the primate face recognition problem to human face recognition, one might wonder whether CNNs trained on human faces are also suited to primate faces. We evaluate the performance of SphereFace and FaceNet on the LemurFace, GoldenMonkeyFace, and ChimpFace datasets by fine-tuning the two state-of-the-art human face recognition networks. We use pre-trained network parameters for SphereFace and FaceNet (SphereFace is trained on 494,414 face images of 10,575 subjects in CASIA-WebFace [43]; FaceNet is trained on 3.31 million face images of 9,131 subjects in VGGFace2 [44]) as initialization for the lemur individualization task. For SphereFace, we use the 20- and 4-hidden-layer models, denoted SphereFace-20 and SphereFace-4, respectively.

To illustrate this idea, we show the performance of the fine-tuned SphereFace and FaceNet models on lemur face data. For each of the five folds, 104 lemurs are used for training and the remaining 25 are kept for testing. In the verification scenario, there are 625 genuine comparison scores and 15,625 impostor comparison scores in each fold. For open-set identification, we extend the probe set with 953 images of 449 lemur individuals downloaded from the internet. Table 2 reports the evaluation results on LemurFace. We conclude that even though human face recognition systems can be fine-tuned for use with lemurs, achieving acceptable face recognition performance for primates in the wild requires further enhancement.

(a) Lemurs (b) Golden Monkeys (c) Chimpanzees
Figure 11: Example cases where PrimNet fails to verify primate individuals. Top row: two distinct individuals falsely accepted at 1% FAR. Bottom row: same individuals falsely rejected at 1% FAR. The green box denotes the probe image, and two images from the template are shown within red boxes. These errors are caused primarily by poor quality of the query image and by changes in expression and viewpoint.
(a) Species Selection
(b) Verification
(c) Identification
Figure 12: Screenshots from the PrimNet Android application.

4.3 PrimNet: Lemurs

The PrimNet architecture is trained on the LemurFace dataset from scratch. Table 2 summarizes the results. The performance of PrimNet is superior to that of the baselines: LemurFaceID, SphereFace, and FaceNet.

To understand the variation in verification performance across the five folds, we plot the TAR at 1% FAR with respect to the number of images in the template. As expected, Figure 10 shows that as the number of images in the template increases, the verification accuracy improves. For reliable verification, we recommend keeping at least 15 images in a lemur’s template. Note that we currently use a single probe image during verification; increasing the number of probe images can further enhance verification performance.

4.4 PrimNet: Golden Monkeys

We used 39 individuals for training and the remaining 10 for testing. In each fold, we have approximately 280 genuine and 2,520 impostor comparison scores. For closed-set identification, each of the 100 trials uses 10 probe images and approximately 270 gallery images, across the five folds. See Table 2 for the performance of PrimNet and the other networks on the GoldenMonkeyFace dataset.

4.5 PrimNet: Chimpanzees

Using 5-fold cross-validation, the training and testing splits for ChimpFace consist of 72 and 18 chimpanzees, respectively. For each fold, we compute 1,259 genuine and 21,403 impostor scores. For closed-set identification, we have 18 chimpanzee face images in the probe set and around 1,241 images in the gallery, across the five folds. As Table 2 shows, PrimNet outperforms the other networks.

Figure 11 shows examples of failure cases, which are primarily caused by poor-quality probe images; extreme variations in an individual’s pose can also adversely affect verification performance. From Table 3, we find that PrimNet achieves inference speed comparable to the other state-of-the-art networks (24 ms per image) while maintaining high accuracy. An even greater advantage of PrimNet is its size: requiring a mere 3.9 MB of storage, it is well suited for deployment on embedded systems such as smartphones.

5 Mobile App

We developed an Android mobile application which can be used for primate individualization in the wild (the application source code is available at https://github.com/ronny3050/PrimateFaceRecognitionAndroid). We trained the PrimNet architecture on the entire LemurFace, GoldenMonkeyFace, and ChimpFace datasets. Currently, the app offers the user a choice to individualize one of the three primates (see Figure 12(a)). On choosing the primate of interest, the app loads the gallery of individuals currently in the dataset. The user may then either (i) verify whether a set of images belongs to the same individual, or (ii) identify the individual in a given probe image by searching the gallery. In identification mode, the top three ranked candidates are displayed to the user with their associated similarity scores. In verification mode, the result is the similarity score between the query and the template. Screenshots of the verification and identification modes are shown in Figures 12(b) and 12(c).

6 Conclusion

We have designed a new primate face recognition network, PrimNet, using a convolutional neural network (CNN) architecture. We compared the performance of PrimNet to a benchmark primate recognition system, LemurFaceID, as well as two open-source human face recognition systems, SphereFace [37] and FaceNet [36]. We evaluated the systems on three primate datasets: LemurFace, GoldenMonkeyFace, and ChimpFace. The performance of PrimNet was superior to the other networks in both verification (1:1 comparison) and identification (1:N search) scenarios.

As primate species are threatened by habitat loss, hunting, and trafficking, it is imperative that primate researchers and conservationists have efficient and effective tools to reliably and safely monitor these animals. We believe the PrimNet primate face recognition system can greatly aid these efforts to ensure that these endangered animals are protected. Through our collaborations with domain experts and field researchers, we plan to enlarge our primate datasets to further improve recognition accuracy, and even to develop a primate face detector. In addition, we plan to evaluate PrimNet on datasets comprising other endangered primate species.

7 Acknowledgement

The authors would like to express their gratitude to the Duke Lemur Center (http://lemur.duke.edu/) for their assistance with LemurFace dataset acquisition. We also acknowledge Daniel Stiles (https://freetheapes.org/) and Dr. Alison Fletcher for their guidance and support. In addition, we thank the Dian Fossey Gorilla Fund International (https://gorillafund.org/) and the Rwanda Development Board (http://rdb.rw/) for their support of the work with golden monkeys.

References

  • [1] Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled Faces in the Wild: A database for studying face recognition in unconstrained environments. Technical report.
  • [2] Jean-Christopher Vié, Craig Hilton-Taylor, and Simon N. Stuart. Wildlife in a Changing World: An Analysis of the 2008 IUCN Red List of threatened species. IUCN, 2009.
  • [3] Alejandro Estrada et al. Impending extinction crisis of the world’s primates: Why primates matter. Science Advances, 3(1), 2017.
  • [4] Live Science Staff. Lemurs named world’s most endangered mammals. https://www.livescience.com/21592-madagascar-lemurs-endangered.html, 2012.
  • [5] Live Science Staff. Crisis in madagascar: 90 percent of lemur species are threatened with extinction. https://blogs.scientificamerican.com/extinction-countdown/crisis-in-madagascar-90-percent-of-lemur-species-are-threatened-with-extinction/, 2012.
  • [6] Colin A Chapman, Michael J Lawes, and Harriet AC Eeley. What hope for african primate diversity? African Journal of Ecology, 44(2):116–133, 2006.
  • [7] Kenneth E. Glander and Andrea Novicki. Visualizing an animal’s movement in real-time. https://learninginnovation.duke.edu/blog/2007/05/visualizing-movement/, 2007.
  • [8] Stony Brook University. Mammals Moving Less in Human Landscapes May Upset Ecosystems. http://www.stonybrook.edu/happenings/research/mammals-moving-less-in-human-landscapes-may-upset-ecosystems/, 2018.
  • [9] Carson M Murray, Margaret A Stanton, Kaitlin R Wellens, Rachel M Santymire, Matthew R Heintz, and Elizabeth V Lonsdorf. Maternal effects on offspring stress physiology in wild chimpanzees. American Journal of Primatology.
  • [10] Serge A Wich, S Suci Utami-Atmoko, T Mitra Setia, Herman D Rijksen, C Schürmann, J.A. Van Hooff, and Carel P van Schaik. Life history of wild sumatran orangutans (pongo abelii). Journal of Human Evolution, 47(6):385–398, 2004.
  • [11] Tim Clutton-Brock and Ben C Sheldon. Individuals and populations: the role of long-term, individual-based studies of animals in ecology and evolutionary biology. Trends in Ecology & Evolution, 25(10):562–573, 2010.
  • [12] Sarie Van Belle, Eduardo Fernandez-Duque, and Anthony Di Fiore. Demography and life history of wild red titi monkeys (callicebus discolor) and equatorial sakis (pithecia aequatorialis) in amazonian ecuador: A 12-year study. American Journal of Primatology, 78(2):204–215, 2016.
  • [13] Patricia C Wright. Demography and life history of free-ranging propithecus diadema edwardsi in ranomafana national park, madagascar. International Journal of Primatology, 16(5):835, 1995.
  • [14] Kara G Leimberger and Rebecca J Lewis. Patterns of male dispersal in verreaux’s sifaka (propithecus verreauxi) at kirindy mitea national park. American Journal of Primatology, 2015.
  • [15] Wildlife ACT. GPS and VHF tracking collars used for wildlife monitoring. https://wildlifeact.com/blog/gps-and-vhf-tracking-collars-used-for-wildlife-monitoring/, 2014.
  • [16] Steeve D Côté, Marco Festa-Bianchet, and François Fournier. Life-history effects of chemical immobilization and radiocollars on mountain goats. The Journal of Wildlife Management, pages 745–752, 1998.
  • [17] Michael D Wasserman, Colin A Chapman, Katharine Milton, Tony L Goldberg, and Toni E Ziegler. Physiological and behavioral effects of capture darting on red colobus monkeys (Procolobus rufomitratus) with a comparison to chimpanzee (Pan troglodytes) predation. International Journal of Primatology, 34(5):1020–1031, 2013.
  • [18] Elena P Cunningham, Steve Unwin, and Joanna M Setchell. Darting primates in the field: a review of reporting trends and a survey of practices and their effect on the primates involved. International Journal of Primatology, 36(5):911–932, 2015.
  • [19] Tom P Moorhouse and David W MacDonald. Indirect negative impacts of radio-collaring: sex ratio variation in water voles. Journal of Applied Ecology, 42(1):91–98, 2005.
  • [20] Steven J. Cooke, Vivian M. Nguyen, Karen J. Murchie, Jason D. Thiem, Michael R. Donaldson, Scott G. Hinch, Richard S. Brown, and Aaron Fisk. To tag or not to tag: Animal welfare, conservation, and stakeholder considerations in fish tracking studies that use electronic tags. Journal of International Wildlife Law & Policy, 16(4):352–374, 2013.
  • [22] Shreya Dasgupta. Conservationists use social media to take on Peru’s booming illegal wildlife trade. https://news.mongabay.com/2014/09/conservationists-use-social-media-to-take-on-perus-booming-illegal-wildlife-trade/, 2014.
  • [23] Bhadra Sharma and Kai Schultz. Rescued chimpanzees face an uncertain future in Nepal. New York Times, Dec 2017.
  • [24] National Geographic. Family time at gombe. https://www.nationalgeographic.com/photography/photo-of-the-day/2014/7/chimpanzee-goodall-gombe-tanzania/, 2014.
  • [25] Daniel Stiles, Ian Redmond, Doug Cress, Christian Nellemann, and Rannveig Knutsdatter Formo. Stolen apes: The illicit trade in chimpanzees, gorillas, bonobos, and orangutans: A rapid response assessment. https://www.occrp.org/images/stories/food/RRAapes_screen.pdf, 2013.
  • [26] Daniel Stiles. Great ape trafficking: an expanding extractive industry. https://news.mongabay.com/2016/05/great-ape-trafficking-expanding-extractive-industry/, 2016.
  • [27] Hjalmar S Kühl and Tilo Burghardt. Animal biometrics: quantifying and detecting phenotypic appearance. Trends in Ecology & Evolution, 28(7):432–441, 2013.
  • [28] Marcella J Kelly. Computer-aided photograph matching in studies using individual identification: an example from Serengeti cheetahs. Journal of Mammalogy, 82(2):440–449, 2001.
  • [29] Lex Hiby, Phil Lovell, Narendra Patil, N Samba Kumar, Arjun M Gopalaswamy, and K Ullas Karanth. A tiger cannot change its stripes: using a three-dimensional model to match images of living tigers and tiger skins. Biology Letters, 2009.
  • [30] Douglas T Bolger, Thomas A Morrison, Bennet Vance, Derek Lee, and Hany Farid. A computer-assisted system for photographic mark–recapture analysis. Methods in Ecology and Evolution, 3(5):813–822, 2012.
  • [31] Mayank Lahiri, Chayant Tantipathananandh, Rosemary Warungu, Daniel I Rubenstein, and Tanya Y Berger-Wolf. Biometric animal databases from field photographs: identification of individual zebra in the wild. In Proceedings of the 1st ACM International Conference on Multimedia Retrieval, page 6. ACM, 2011.
  • [32] T Burghardt and NW Campbell. Animal identification using visual biometrics on deformable coat patterns. In International Conference on Computer Vision Systems, 2007.
  • [33] Alexander Freytag, Erik Rodner, Marcel Simon, Alexander Loos, Hjalmar S Kühl, and Joachim Denzler. Chimpanzee faces in the wild: Log-euclidean cnns for predicting identities and attributes of primates. In German Conference on Pattern Recognition, pages 51–63. Springer, 2016.
  • [34] David Crouse, Rachel L Jacobs, Zach Richardson, Scott Klum, Anil Jain, Andrea L Baden, and Stacey R Tecot. LemurFaceID: a face recognition system to facilitate individual identification of lemurs. BMC Zoology, 2(1):2, 2017.
  • [35] Merlin Bird ID App. http://merlin.allaboutbirds.org/, 2014.
  • [36] Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A unified embedding for face recognition and clustering. In CVPR, pages 815–823, 2015.
  • [37] Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, and Le Song. SphereFace: Deep hypersphere embedding for face recognition. In CVPR, 2017.
  • [38] Alexander Loos and Andreas Ernst. An automated chimpanzee identification system using face detection and recognition. EURASIP Journal on Image and Video Processing, 2013(1):49, 2013.
  • [39] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In CVPR, 2017.
  • [40] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. arXiv preprint arXiv:1707.01083, 2017.
  • [41] Feng Wang, Weiyang Liu, Haijun Liu, and Jian Cheng. Additive margin softmax for face verification. arXiv preprint arXiv:1801.05599, 2018.
  • [42] Timo Ojala, Matti Pietikäinen, and Topi Mäenpää. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7):971–987, 2002.
  • [43] Dong Yi, Zhen Lei, Shengcai Liao, and Stan Z Li. Learning face representation from scratch. arXiv preprint arXiv:1411.7923, 2014.
  • [44] Qiong Cao, Li Shen, Weidi Xie, Omkar M Parkhi, and Andrew Zisserman. Vggface2: A dataset for recognising faces across pose and age. arXiv preprint arXiv:1710.08092, 2017.