Deep learning tools for the measurement of animal behavior in neuroscience

09/30/2019 ∙ by Mackenzie W. Mathis, et al. ∙ Harvard University

Recent advances in computer vision have made accurate, fast and robust measurement of animal behavior a reality. In the past years new tools specifically designed to aid the measurement of behavior in laboratories have come to fruition. Here we discuss how capturing the postures of animals over time is a key step in transforming videos into lower dimensional representations of behavior. We envision that the fast-paced development of new deep learning tools will rapidly change the landscape of realizable real-world neuroscience experiments.


Introduction

Behavior is the most important output of the underlying neural computations in the brain. Behavior is complex, often multi-faceted, and highly context dependent, both in how conspecifics or other observers understand it and in how it is emitted. The study of animal behavior - ethology - has a rich history rooted in the understanding that behavior gives an observer a unique look into an animal's umwelt (1, 2, 3); what are the motivations, instincts, and needs of an animal? What survival value do they provide? In order to understand the brain, we need to measure behavior in all its beauty and depth, and distill it down into meaningful metrics. Observing and efficiently describing behavior is a core tenet of modern ethology, neuroscience, medicine, and technology.

In 1973 Tinbergen, Lorenz, and von Frisch were the first ethologists awarded the Nobel Prize in Physiology or Medicine for their pioneering work on the patterns of individual and social group behavior (4). The award heralded a coming-of-age for behavior, and how rigorously documenting behavior can influence how we study the brain (4). Manual methods are powerful, but also highly labor intensive and subject to the limits of our senses. Matching (and extending) the capabilities of biologists with technology is a highly non-trivial problem (5, 6), yet harbors tremendous potential. How does one compress an animal’s behavior over long time periods into meaningful metrics? How does one use behavioral quantification to build a better understanding of the brain and an animal’s umwelt (1)?

In this review, we discuss ongoing challenges and advances in animal pose estimation, the method for measuring posture. Posture refers to the geometrical configuration of body parts. While there are many ways to record behavior (7, 8, 9), videography is a non-invasive way to observe the posture of animals. Estimated poses across time can then, depending on the application, be transformed into kinematics, dynamics, and actions (10, 5, 7, 6). Due to the low-dimensional nature of posture, all these applications are computationally tractable.
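To illustrate this last point (our own toy example, not code from any of the toolboxes discussed below), a tracked key point is just a low-dimensional time series from which kinematic quantities such as speed follow directly:

```python
import numpy as np

# Hypothetical 2D trajectory of one tracked body part (x, y per frame),
# e.g. a paw key point estimated from video recorded at 100 Hz.
fps = 100.0
t = np.arange(0, 1, 1 / fps)
trajectory = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)

# Frame-to-frame displacement -> instantaneous velocity and speed (units/s).
velocity = np.diff(trajectory, axis=0) * fps
speed = np.linalg.norm(velocity, axis=1)

print(speed.mean())  # for this unit circle, close to 2*pi per second
```

Once poses are reduced to such time series, kinematics, dynamics, and downstream action classification all operate on a few numbers per frame rather than on raw pixels.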

A very brief history of pose estimation

The postures and actions of humans and animals have been documented as far back as cave paintings, illustrating the human desire to distill the essence of an animal for conveying information. As soon as it was possible to store data on a computer, researchers have built systems for automated analysis (10, 7, 6). Over time, these systems reflected all flavors of artificial intelligence, from rule-based approaches, via expert systems, to machine learning (11, 12). Traditionally, posture was measured by placing markers on the subject (3), or markerlessly by using body models (e.g. cylinder-based models with edge features (13)). Other computer vision techniques, such as using texture or color to segment the person from the background to create silhouettes (14, 15), or using so-called hand-crafted features with decoders (7, 11, 12, 16), were also popular before deep learning flourished.

The deep learning revolution for posture

Pose estimation is a challenging problem, but it has been tremendously advanced in the last half-a-decade due to advances in deep learning. Deep neural networks (DNNs) are computational algorithms that consist of simple units, which are organized in layers and then serially stacked to form "deep networks". The connections between the units are trained on data and therefore learn to extract information from raw data in order to solve tasks. The current deep learning revolution started with achieving human-level accuracy for object recognition on the ImageNet challenge, a popular benchmark with many categories and millions of images (17, 16). A combination of large annotated data sets, sophisticated network architectures, and advances in hardware made this possible and quickly impacted many problems in computer vision (see reviews (18, 12, 16)).

Figure 1: 2D pose estimation, 3D pose estimation & dense representations of humans and other animals: a: Example 2D multi-human pose estimation from OpenPose (19). b: Example 3D human pose estimation from (20). c: Dense representations of humans with DensePose, adapted from Guler et al. (21). d: Animals have diverse bodies, and experimenters are often interested in specific key points, making tailored networks attractive solutions. DNNs open a realm of possibilities: from mice to cuttlefish. e: 3D pose estimation requires multiple camera views, or 2D-to-3D "lifting". f: The new SMALST model, which fits full 3D models to images, from Zuffi et al. (22), applied to zebras.

2D and 3D (human) pose estimation

In 2014, "DeepPose" was the first paper to apply deep learning to human 2D pose estimation (23), and immediately new networks were proposed that improved accuracy by introducing a translation-invariant model (24), and convolutional networks plus geometric constraints (25, 26). In the few years since, numerous human pose estimation papers (approx. 4,000 on Google Scholar) and new benchmarks with standardized datasets and evaluation metrics have appeared, which allow better comparisons of "state-of-the-art" performance. This culture has driven rapid and remarkable increases in performance: from 44% of body parts correctly labeled to nearly 94%, with the top 15 entries within a few percentage points of each other (Figure 1a) (27, 28, 29, 19, 30, 31). The history and many advances in 2D human pose estimation are comprehensively reviewed in (11, 32).
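The "percentage of body parts correctly labeled" numbers above correspond to threshold-based metrics such as PCK (percentage of correct keypoints). A minimal sketch of such a metric (our simplification, not any benchmark's exact protocol):

```python
import numpy as np

def pck(pred, truth, threshold):
    """Fraction of predicted key points that fall within `threshold`
    pixels of the ground-truth annotation (simplified PCK-style metric)."""
    dists = np.linalg.norm(pred - truth, axis=-1)
    return float((dists < threshold).mean())

# Hypothetical predictions vs. human labels for 4 key points (in pixels).
truth = np.array([[10.0, 10.0], [50.0, 40.0], [80.0, 90.0], [20.0, 70.0]])
pred = truth + np.array([[1.0, 0.0], [0.0, 2.0], [8.0, 6.0], [0.5, 0.5]])

print(pck(pred, truth, threshold=5))  # 3 of 4 key points within 5 px -> 0.75
```

Benchmarks differ in how the threshold is normalized (e.g. by head or torso size), but the underlying idea is this simple count.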

3D human pose estimation is a more challenging task, and 3D labeled data is more difficult to acquire. There have been massive improvements in networks; see the review (33). Currently, the highest accuracy is achieved by using multiple 2D views to reconstruct a 3D estimate (Figure 1b; (34, 20)), but other ways of "lifting" 2D into 3D are being actively explored.

Challenges still remain for using deep learning for 2D and 3D human pose estimation (35, 36, 32, 37, 16, 38). Occlusions, accurate 3D-from-2D estimation, and robustness are all areas of active development across a multitude of network types. For example, robustness and generalization are some of the largest challenges: it is hard to train networks to generalize to out-of-domain scenarios, i.e., to data that differ from the training set (39, 40, 41). Thus, even though very large datasets have been amassed to build robust DNNs, they can fail in sufficiently different scenarios. Moreover, while code is typically shared, which has tremendously fueled progress, it is usually built around benchmark datasets rather than allowing easy adaptation to user-specific tracking problems (i.e., tools to label, create, and curate datasets).

Dense-representations of bodies

Other video-based approaches for capturing the posture and soft tissue of humans (and other animals) also exist. Depth cameras such as the Microsoft Kinect have been used in humans (42, 43) and rodents (44, 45). Recently, dense-pose representations, i.e. 3D point clouds or meshes (Figure 1c), have become a popular and elegant way to capture the soft tissue and shape of bodies, which are highly important features for person identification, fashion (e.g. clothing sales), and medicine (21, 46, 47). However, state-of-the-art performance currently requires body-scanning of many subjects to make body models. Typically, large datasets are collected to enable the creation of robust algorithms for inference on diverse humans (for animals, scanning toy models has been fruitful (48)). Recently, outstanding improvements have been made in capturing the shapes of animals from images (49, 50, 22). However, there are no animal-specific toolboxes geared towards neuroscience applications, although we believe that this will change in the near future, as for many applications measuring soft tissue will be highly important, e.g. in obesity or pregnancy research.

Animal pose estimation in the laboratory

The remarkable performance when using deep learning for human 2D & 3D pose estimation plus dense-representations made this large body of work ripe for exploring its utility in neuroscience (Figure 1d-f). In the past two years, deep learning tools for laboratory experiments have arrived (Figure 2a-d).

Many of the properties of DNNs were extremely appealing: remarkable and robust performance, relatively fast inference due to GPU hardware, and efficient code due to modern packages like TensorFlow and PyTorch (as reviewed in (51)). Furthermore, unlike for many previous algorithms, neither body models nor tedious manual tuning of parameters is required. Given the algorithms, the crucial ingredient for human pose estimation success was large-scale annotated datasets of humans with the locations of the body parts.

Here, we identify 5 key areas that were important for making DNN-based pose estimation tools useful for neuroscience laboratories, and review the progress in the last two years:

  1. Can DNNs be harnessed with small training datasets? Due to the nature of "small-scale" laboratory experiments, labeling frames at the scale of typical human benchmark datasets is not a feasible approach.

  2. The end result must be as accurate as human manually-applied labels (i.e. the gold standard), and computationally tractable (fast).

  3. The resulting networks should be robust to changes in experimental setups and, for long-term storage and re-analysis of video data, to video compression.

  4. Animals move in 3D, thus having efficient solutions for 3D pose estimation would be highly valuable, especially in the context of studying motor learning and control.

  5. Tracking multiple subjects and objects is important for many experiments studying social behaviors as well as for animal-object interactions.
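Point 2 above is typically assessed by comparing network predictions against human-applied labels, e.g. via the root-mean-square error (RMSE) in pixels. A minimal sketch (our own illustration of the metric, not any toolbox's evaluation code):

```python
import numpy as np

def rmse(pred, labels):
    """Root-mean-square error (in pixels) between predicted and
    human-annotated key point positions."""
    return float(np.sqrt(np.mean(np.sum((pred - labels) ** 2, axis=-1))))

# Hypothetical human labels and network predictions for 3 key points.
human = np.array([[12.0, 30.0], [44.0, 18.0], [70.0, 55.0]])
network = np.array([[13.0, 30.0], [44.0, 20.0], [71.0, 54.0]])

print(rmse(network, human))  # average pixel error across key points
```

An RMSE at or below the variability between two human annotators is the practical criterion for "human-level" accuracy used in this literature.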

1. Small training sets for lab-sized experiments

While the challenges discussed above for human pose estimation also apply to other animals, one important challenge for applying these methods to neuroscience was annotated data: could DNNs be harnessed for much smaller datasets, at sizes reasonable for typical labs? While it was clear that, given enough annotated frames, the same algorithms would be able to learn to track the body parts of any animal, there were feasibility concerns.

Human networks are typically trained on thousands of images, and nearly all the current state-of-the-art networks provide tailored solutions that utilize the skeleton structure during inference (19, 29). Thus, applying these tools to new datasets was not immediately straightforward, and to create animal-specific networks one would potentially need to curate large datasets of the animal(s) to be tracked. Additionally, researchers would need tailored DNNs to track their subjects (plus the ability to track unique objects, such as the corners of a box, or an implanted fiber).

Thus, one of the most important challenges is creating tailored DNNs that are robust and generalize well with little training data. One potential solution is transfer learning: taking a network that has been trained on one task and repurposing it for another. The advantage is that these networks are pretrained on larger datasets (for different tasks where a lot of data is available, like ImageNet), and are therefore effectively imbued with good image representations.


This is indeed what "DeepLabCut," the first tool to leverage the advances in human pose estimation for application to animals, did (52). DeepLabCut was built on a subset of "DeeperCut" (28), which was an attractive option due to its use of ResNets, which are powerful for transfer learning (53, 40). Moreover, transfer learning reduces training times (53, 40, 54), and there is a significant performance gain over randomly-initialized networks, especially for smaller datasets (40).
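The core idea can be caricatured in a few lines of numpy (a toy illustration of transfer learning under our own assumptions, not the actual DeepLabCut implementation, which fine-tunes a pretrained ResNet): a frozen, already-trained feature extractor supplies rich representations, and only a small new readout is fit to the few labeled examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: a fixed (frozen) nonlinear feature
# extractor. In real toolboxes this is e.g. a ResNet pretrained on ImageNet.
W_pre = rng.normal(size=(64, 32))

def backbone(images):
    return np.tanh(images @ W_pre)  # frozen weights, never updated

# A tiny labeled dataset: 20 "images" with one 2D key point target each.
images = rng.normal(size=(20, 64))
targets = rng.normal(size=(20, 2))

# "Transfer learning" here = fit only a new lightweight head (least squares)
# on top of the frozen features; far fewer parameters than a full network.
features = backbone(images)
head, *_ = np.linalg.lstsq(features, targets, rcond=None)

pred = features @ head
print(np.abs(pred - targets).mean())  # small residual on this tiny set
```

Because only the head is fit, a handful of labeled frames suffices; this is the intuition behind why a few hundred annotated images can work where training from scratch would not.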

The major result from DeepLabCut was benchmarking on smaller datasets and finding that, thanks to transfer learning, only a few hundred annotated images are enough to achieve excellent results for diverse pose estimation tasks, such as locomotion, reaching, and trail-tracking in mice, egg-laying in flies, and hunting in cheetahs (Figure 2f,g,h) (55, 56, 40). "DeepBehavior," which utilized different DNN packages for various pose estimation problems, also illustrated the gains of transfer learning (54).

Figure 2: DNNs applied to animal pose estimation. a: Knee tracking during cycling, adapted from (57). b: 3D limb estimates from (55). c: A Lichen Katydid tracked with DeepLabCut, courtesy of the authors. d: Fly with LEAP-annotated body parts. The circles indicate the fraction for which predicted positions of the particular body part are closer to the ground truth than the radii on test images (adapted from (58)). e: Density plot of t-SNE-embedded, frequency-transformed body-part trajectories of freely moving flies. Patches with higher probability indicate more common movements, like different types of grooming behaviors (adapted from (58)). f: DeepLabCut requires little data to match human performance. Average error (RMSE) for several splits of training and test data vs. number of training images, compared to the RMSE of a human scorer. Each split is denoted by a cross, the average by a dot. For larger training fractions, the algorithm achieves human-level accuracy on the test set. As expected, test RMSE increases for fewer training images. Around 100 diverse frames are enough to provide high tracking performance (<5-pixel accuracy; adapted from (52)). g: Networks that perform better on ImageNet perform better for predicting 22 body parts on horses, on within-domain data (similar distribution as the training set, red) and out-of-domain data (novel horses, pink). The faint lines are individual splits (adapted from (40)). h: Due to the convolutional network architecture, when trained on one mouse the network generalizes to detect body parts of three mice (adapted from (52)). i: 3D reaching kinematics of a rat (adapted from (59)). j: 3D pose estimation on a cheetah for 2 example poses from 6 cameras, as well as example 2D views (adapted from (56)). k: Pupil and pupil-size tracking (adapted from (60)).

2. Accuracy & speed

To be useful, pose estimation tools need to be as good as human annotation of frames (or tracking markers, another proxy for a human-applied label). Moreover, they need to be efficient (fast) for both offline analysis and online analysis.

Speed is often related to the depth of the network. Stacked-hourglass networks, which use iterative refinement (27, 61) and fewer layers, are fast. Two toolboxes, "LEAP" (58) and "DeepPoseKit" (62), adopted variants of stacked-hourglass networks. LEAP allows the user to rapidly compute postures and then perform unsupervised behavioral analysis (Figure 2d,e) (63). This is an attractive solution for real-time applications, but it is not quite as accurate. For various datasets, DeepPoseKit reports about three times the accuracy of LEAP, and accuracy similar to DeepLabCut (62). They also report about twice the video-processing speed of DeepLabCut and LEAP for batch processing.

Deeper networks are slower but often generalize better (53). DeepLabCut was designed for generalization and therefore utilizes deeper networks (ResNets), which are inherently slower than stacked-hourglass networks, yet DeepLabCut can match human labeling accuracy (Figure 2f) (52). Its speed has been shown to be compatible with online-feedback applications (64, 65, 55). The recently updated toolbox additionally offers optional MobileNetV2 backbones, which give slightly lower accuracy at twice the speed of prior versions of DeepLabCut (40). Overall, on GPU hardware all packages are fast, reaching speeds of several hundred frames per second in offline modes.

Figure 3: The Behavioral Space in Neuroscience: new applications for deep learning-assisted analysis. This diagram depicts how pose estimation with non-invasive videography can benefit behavioral paradigms that span from "simple and robust," such as classical conditioning, to "complex, but repeatable," as in 3D reaching assays, to "naturalistic tasks" without trial structure that are more akin to the real-world 'tasks' animals undertake. With new tools that allow fast and accurate analysis of movement, these types of experiments become more feasible (without the extensive human labor previously required).

3. Robustness

Neuroscience experiments based on video recordings produce large quantities of data and are collected over extensive periods of time. Thus, analysis pipelines should be robust to a myriad of perturbations, such as changes in setups (backgrounds, light sources, cameras, etc.), subject appearance (due to different animal strains), and compression algorithms (which allow storage of perceptually good videos with little memory demand (66)).

How can robustness be increased within the DNN? Both transfer learning (discussed above) and data augmentation strategies are popular and rapidly evolving approaches to increase robustness in DNNs (see review (67)). Moreover, active learning approaches allow an experimenter to continuously build more robust and diverse datasets for large-scale projects by expanding the training set with images where the network fails (52, 56, 58). So far, the toolboxes have been tested on data from the same distribution (i.e. by splitting frames from videos into test and training data), which is important for assessing performance (58, 62, 52), but they have not directly tested out-of-domain robustness.
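For intuition, data augmentation in pose estimation must transform the labels together with the images. A deliberately minimal numpy sketch (our own hypothetical version, not any toolbox's augmentation code):

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(image, keypoints):
    """Two simple augmentations common in pose estimation pipelines:
    random horizontal flip and brightness jitter. Note the key point
    x-coordinates must be mirrored along with the image."""
    h, w = image.shape[:2]
    if rng.random() < 0.5:
        image = image[:, ::-1]
        keypoints = keypoints.copy()
        keypoints[:, 0] = (w - 1) - keypoints[:, 0]
    # Brightness jitter (images assumed to be scaled to [0, 1]).
    image = np.clip(image * rng.uniform(0.8, 1.2), 0, 1)
    return image, keypoints

image = rng.random((48, 64))          # toy grayscale frame (h=48, w=64)
keypoints = np.array([[10.0, 20.0]])  # one (x, y) key point
aug_img, aug_kps = augment(image, keypoints)
```

Real pipelines add rotations, scaling, cropping, and noise, but the principle is the same: every geometric transform applied to the frame is applied to the annotations too.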

Over the course of long-term experiments the background or even the animal strain can change, which means having robust networks would be highly advantageous. We recently tested the generalization ability of DeepLabCut with different network backbones for pose estimation. We found that pretraining on ImageNet strongly improves out-of-domain performance, and that networks that perform better on ImageNet are more robust (Figure 2g) (40). There is still a gap to close in out-of-domain performance, however.

DeepLabCut is also robust to video compression: compression by more than 1,000X only mildly affects accuracy (less than 1 pixel of additional average error) (55). The International Brain Lab (IBL) independently and synergistically showed that, for tracking multiple body parts in a rodent decision-making task, DeepLabCut is robust to video compression (68). Thus, in practice users can substantially compress videos while retaining accurate posture information.

4. 3D animal pose estimation

Currently, there are several animal pose estimation toolboxes that explicitly support 2D and 3D key-point detection (56, 69, 70, 71). DeepLabCut uses 2D pose estimation to train a single camera-invariant 2D network (or multiple 2D networks), which is then used with traditional triangulation to extract 3D key points (Figure 2i,j; (56, 59)). A pipeline built on DeepLabCut called "Anipose" allows for 3D reconstruction from multiple cameras using a wider variety of methods (69). "DeepFly3D" (70) uses the network architecture from Newell et al. (27) and adds elegant tools to compute an accurate 3D estimate of Drosophila melanogaster by using the fly itself, rather than standard calibration boards, for camera calibration. Zhang et al. use epipolar geometry to train across views and thereby improve 3D pose estimation for mice, dogs, and monkeys (71).
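Triangulation from calibrated 2D views is the classical geometric core of these 3D pipelines. A minimal numpy sketch of direct linear transform (DLT) triangulation for one key point seen by two cameras (our illustration, not code from any of the toolboxes cited):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated
    views. P1, P2 are 3x4 projection matrices; x1, x2 are 2D coords."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null space of A = homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]              # back from homogeneous coordinates

# Two hypothetical cameras: identity pose, and a 1-unit translation in x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0])  # a "key point" in 3D
x1 = P1 @ np.append(X_true, 1)       # project into each view
x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1)
x2 = x2[:2] / x2[2]

print(triangulate(P1, P2, x1, x2))   # recovers approximately X_true
```

With noisy 2D detections from more than two cameras, the same least-squares formulation simply gains extra rows, which is why adding views improves 3D accuracy.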

5. Multi-animal & object tracking

Many experiments in neuroscience require measuring interactions of multiple animals, or interactions with objects. Having the ability to flexibly track user-defined objects or multiple animals is therefore highly desirable. There are many pre-deep learning algorithms that allow tracking of objects (one modern example, "Tracktor," also nicely summarizes this body of work (72)). Recently, researchers have also applied deep learning to this problem. For example, the impressive "idTracker.ai" (73) allows users to track up to a hundred individual, unmarked animals. Arac et al. used YOLO, a popular and fast object localization network, for tracking two mice during a social behavior (54). These, and others, can then be combined with pose estimation packages for estimating the poses of multiple animals. Currently, two paths are possible: one is to apply pose estimation algorithms after tracking individuals (for which any package could be used); the other is to extract multiple detections for each part on each animal (52, 74) and link them using part affinity fields (19), pairwise predictions (28), or geometric constraints (74), plus combinatorics.
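The "plus combinatorics" step amounts to an assignment problem: detections must be linked to the right individual. A deliberately tiny brute-force sketch of identity linking across frames (our own toy version; real toolboxes use far more scalable formulations such as part affinity fields or the Hungarian algorithm):

```python
import itertools
import numpy as np

def link_detections(prev, curr):
    """Assign current-frame detections to previous-frame identities by
    minimizing total squared displacement. Brute force over permutations:
    fine for a handful of animals, not for large groups."""
    n = len(prev)
    best_perm, best_cost = None, np.inf
    for perm in itertools.permutations(range(n)):
        cost = sum(np.sum((prev[i] - curr[j]) ** 2)
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm  # best_perm[i] = index in `curr` of animal i

# Two animals' snout positions in consecutive frames (detections arrive
# in arbitrary order in the second frame).
prev = np.array([[10.0, 10.0], [60.0, 40.0]])
curr = np.array([[61.0, 41.0], [11.0, 9.0]])  # animal 0 is now detection 1

print(link_detections(prev, curr))  # -> (1, 0)
```

Appearance cues, part connectivity, and temporal models replace the raw distance cost in practice, but the combinatorial matching structure is the same.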

The impact on experimental neuroscience

In the short time these tools have been available, there has been rather wide adoption by the neuroscience and ethology communities. Applications range from common model organisms such as flies (52, 58, 62, 70), mice (52, 58, 54), and zebrafish (65, 56), and tailored human analyses such as knee movement quantification during cycling (Figure 2a) (57) and postural analysis during underwater running (75), to more exotic applications such as social behavior in bats (76). Other examples include benchmarking thermal constraints with optogenetics (77), Drosophila leg movement analysis (78), 3D rat reaching (Figure 2i) (59), and pupillometry (Figure 2k) (60). Inanimate objects that animals interact with can also be tracked; indeed, these tools have been used to track metal beads subjected to a high voltage (79), and magic tricks (i.e. coins and the magician) (80).

Pose estimation is just the beginning; the next steps involve careful analysis of kinematics, building detailed, multi-scale ethograms of behaviors, new modeling techniques to understand large-scale brain activity and behaviors across a multitude of timescales, and beyond. We envision three branches where powerful feature tracking and extensions will be useful: motor control studies (often involving complex motor actions), naturalistic behaviors in the lab and in the wild, and better quantification of robust and seemingly simple “non-motor" tasks (Figure 3).

Motor control & kinematics

Often in neuroscience-minded motor control studies, end-effector proxies (such as manipulanda or joysticks) are used to measure the motor behavior of subjects or animals. There are relatively few marker-tracking-based movement neuroscience studies in which many degrees of freedom were measured alongside neural activity, with notable exceptions like (81, 82). The ease with which kinematics of limbs and digits can now be quantified (52, 59, 78, 54) should greatly simplify such studies in the future. We expect many more highly detailed kinematic studies will emerge that utilize DNN-based analyses, especially for freely moving animals, for small and aquatic animals that cannot be marked, and for motor control studies that can leverage large-scale recordings and behavioral monitoring.
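For example, a joint angle follows directly from three tracked key points (a generic sketch under our own naming; the key point labels are illustrative, not from any particular study):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (in degrees) at key point b, formed by the segments b->a and
    b->c, e.g. the elbow angle from shoulder-elbow-wrist 2D estimates."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

shoulder = np.array([0.0, 0.0])
elbow = np.array([1.0, 0.0])
wrist = np.array([1.0, 1.0])

print(joint_angle(shoulder, elbow, wrist))  # -> 90.0
```

Applying this per frame turns raw key point tracks into the joint-angle time series that motor control analyses are built on.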

Natural behaviors & ethologically relevant features

There is a trend in motor neuroscience towards natural behaviors, i.e., less constrained tasks, everyday skills, and even "in the wild" studies (83). For instance, DeepLabCut was used for 3D pose estimation in hunting cheetahs captured via multiple GoPro cameras (56). Another "in the wild" example is a recent study by Chambers et al. (84), who revisited the classic question of how people synchronize their walking, but with a modern twist: using videos from YouTube analyzed with OpenPose (19). Consistent with studies performed in the laboratory, they found a tendency for pairs of people to walk either in phase or exactly out of phase (84).

How else can DNNs help? Specialized body parts often play a key role in ethologically relevant behaviors. For instance, ants use their antennae to follow odor trails (85), while moles use their snouts to sample bilateral nasal cues and localize odorants (86). To accurately measure such behaviors, highly accurate detectors of often tiny, highly dexterous body parts are needed. This is a situation where deep learning algorithms can excel. Pose estimation algorithms can not only detect the complete "pose"; due to their flexibility, they are extremely useful for tracking ethologically relevant body parts in challenging situations. Incidentally, DeepLabCut was created, in part, to accurately track the snouts of mice following odor trails printed onto a treadmill (52). There are of course many other specialized body parts that are hard to track, like whiskers, bee stingers, jellyfish tentacles, or octopus arms, and we believe that studying these beautiful systems in more natural and ethologically relevant environments has now gotten easier.

Revisiting classic tasks

Measuring behavior is already impacting "classic" decision-making paradigms. For example, by carefully quantifying behavior, several groups have shown broad movement encoding across the brain during decision-making tasks (87, 88). Moreover, large-scale efforts to use these "simple" yet robust trial-based behaviors across labs and brain areas are leveraging deep learning and comparing its utility to classical behavior-monitoring approaches. For example, the IBL has surmised that DeepLabCut could replace traditional methods used for eye, paw, and lick detection (68). We believe that detailed behavioral analysis will impact many paradigms that were historically not considered "motor" studies, as it is now much easier to measure movement.

Outlook & Conclusions

The field of 2D, 3D, and dense pose estimation will continue to evolve, for example with respect to handling occlusions and robustness to out-of-domain data. Perhaps larger and more balanced datasets will be created to better span the behavioral space, more temporal information will be utilized when training networks or analyzing data, and new algorithmic solutions will be found.

Will we always need training data? A hot topic in object recognition is training from very few examples (one-shot or zero-shot learning) (89). Can this be achieved in pose estimation? Perhaps, as new architectures and training regimes come to fruition. Alternatively, specialized networks could now be built that leverage large datasets of specific animals. It is hard to envision a universal "animal pose detector" network (for object recognition this is possible), as animals have highly diverse body plans and experimentalists often have extremely different needs. Currently many individual labs create their own specialized networks, but we plan to create shareable networks for specific animals (much like the task-specific networks, e.g. the hand network in OpenPose (90)). For example, many open-field experiments could benefit from robust and easy-to-use DNNs for video analysis across similar body points of the mouse. Indeed, efforts are underway to create networks with which one can simply analyze data without training, and we hope the community will join these efforts; nothing improves DNNs more than more training data. These efforts, together with making code open source, will further contribute to the reproducibility of science and make these tools broadly accessible.

In summary, we aimed to review the progress in computer vision for human pose estimation, how it influenced animal pose estimation, and how neuroscience laboratories can leverage these tools for better quantification of behavior. Taken together, the tremendous advances in computer vision have provided tools that are practical for use in the laboratory, and they will only get better. They can be as accurate as human labeling (or marker-based tracking), and are fast enough for closed-loop experiments, which is key for understanding the link between neural systems and behavior. We expect that, in light of shared, easy-to-use tools and additional deep learning advances, there will be thrilling and unforeseen advances in real-world neuroscience.

Acknowledgments:

We thank Eldar Insafutdinov, Alex Gomez-Marin, and the M. Mathis Lab for comments on the manuscript, and Julia Kuhl for illustrations. Funding was provided by the Rowland Institute at Harvard University (M.W.M.) and National Institute of Health U19MH114823 (A.M). The authors declare no conflicts of interest.

References

  • Uexküll (1956) J. Baron Uexküll. Streifzüge durch die Umwelten von Tieren und Menschen Ein Bilderbuch unsichtbarer Welten. Springer, Berlin, Heidelberg, 1956.
  • Tinbergen (1963) Nikolaas Tinbergen. On aims and methods of Ethology. Zeitschrift für Tierpsychologie, 20(4):410–433, 1963.
  • Bernstein (1967) Nikolai A. Bernstein. The co-ordination and regulation of movements, volume 1. Oxford, New York, Pergamon Press, 1967.
  • Tinbergen (1973) Nikolaas Tinbergen. Nobel lecture. the nobel prize in physiology or medicine 1973, 1973.
  • Schaefer and Claridge-Chang (2012) Andreas T Schaefer and Adam Claridge-Chang. The surveillance state of behavioral automation. In Current Opinion in Neurobiology, 2012.
  • Anderson and Perona (2014) David J Anderson and Pietro Perona. Toward a science of computational ethology. Neuron, 84(1):18–31, 2014.
  • Dell et al. (2014) Anthony I Dell, John A Bender, Kristin Branson, Iain D Couzin, Gonzalo G de Polavieja, Lucas PJJ Noldus, Alfonso Pérez-Escudero, Pietro Perona, Andrew D Straw, Martin Wikelski, et al. Automated image-based tracking and its application in ecology. Trends in ecology & evolution, 29(7):417–428, 2014.
  • Egnor and Branson (2016) Roian Egnor and Kristin Branson. Computational analysis of behavior. Annual review of neuroscience, 39:217–236, 2016.
  • Camomilla et al. (2018) Valentina Camomilla, Elena Bergamini, Silvia Fantozzi, and Giuseppe Vannozzi. Trends supporting the in-field use of wearable inertial sensors for sport performance evaluation: A systematic review. Sensors, 18(3):873, 2018.
  • Gomez-Marin et al. (2014) Alex Gomez-Marin, Joseph J Paton, Adam R Kampff, Rui M Costa, and Zachary F Mainen. Big behavioral data: psychology, ethology and the foundations of neuroscience. Nature neuroscience, 17(11):1455, 2014.
  • Poppe (2007) Ronald Poppe. Vision-based human motion analysis: An overview. Computer Vision and Image Understanding, 108(1):4 – 18, 2007. ISSN 1077-3142. doi: https://doi.org/10.1016/j.cviu.2006.10.016. Special Issue on Vision for Human-Computer Interaction.
  • Litjens et al. (2017) Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen Awm Van Der Laak, Bram Van Ginneken, and Clara I Sánchez. A survey on deep learning in medical image analysis. Medical image analysis, 42:60–88, 2017.
  • Hogg (1983) David Hogg. Model-based vision: a program to see a walking person. Image and Vision Computing, 1(1):5 – 20, 1983. ISSN 0262-8856. doi: https://doi.org/10.1016/0262-8856(83)90003-3.
  • Wren et al. (1997) Christopher Richard Wren, Ali Azarbayejani, Trevor Darrell, and Alex Paul Pentland. Pfinder: Real-time tracking of the human body. IEEE Transactions on pattern analysis and machine intelligence, 19(7):780–785, 1997.
  • Cremers et al. (2007) Daniel Cremers, Mikael Rousson, and Rachid Deriche. A review of statistical approaches to level set segmentation: integrating color, texture, motion and shape. International journal of computer vision, 72(2):195–215, 2007.
  • Serre (2019) Thomas Serre. Deep learning: The good, the bad, and the ugly. Annual Review of Vision Science, 5, 2019. doi: https://doi.org/10.1146/annurev-vision-091718-014951.
  • Alom et al. (2018) Md Zahangir Alom, Tarek M Taha, Christopher Yakopcic, Stefan Westberg, Paheding Sidike, Mst Shamima Nasrin, Brian C Van Esesn, Abdul A S Awwal, and Vijayan K Asari. The history began from alexnet: a comprehensive survey on deep learning approaches. arXiv preprint arXiv:1803.01164, 2018.
  • Schmidhuber (2015) Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85 – 117, 2015. ISSN 0893-6080. doi: https://doi.org/10.1016/j.neunet.2014.09.003.
  • Cao et al. (2017) Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2017.
  • Mehta et al. (2016) Dushyant Mehta, Helge Rhodin, Dan Casas, Oleksandr Sotnychenko, Weipeng Xu, and Christian Theobalt. Monocular 3d human pose estimation using transfer learning and improved CNN supervision. CoRR, abs/1611.09813, 2016.
  • Güler et al. (2018) Rıza Alp Güler, Natalia Neverova, and Iasonas Kokkinos. Densepose: Dense human pose estimation in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7297–7306, 2018.
  • Zuffi et al. (2019) Silvia Zuffi, Angjoo Kanazawa, Tanja Berger-Wolf, and Michael Black. Three-d safari: Learning to estimate zebra pose, shape, and texture from images "in the wild". In ICCV. IEEE Computer Society, 08 2019.
  • Toshev and Szegedy (2013) Alexander Toshev and Christian Szegedy. Deeppose: Human pose estimation via deep neural networks. CoRR, abs/1312.4659, 2013.
  • Jain et al. (2014) Arjun Jain, Jonathan Tompson, Yann LeCun, and Christoph Bregler. Modeep: A deep learning framework using motion features for human pose estimation. CoRR, abs/1409.7963, 2014.
  • Tompson et al. (2014) Jonathan J Tompson, Arjun Jain, Yann LeCun, and Christoph Bregler. Joint training of a convolutional network and a graphical model for human pose estimation. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 1799–1807. Curran Associates, Inc., 2014.
  • Tompson et al. (2015) Jonathan Tompson, Ross Goroshin, Arjun Jain, Yann LeCun, and Christoph Bregler. Efficient object localization using convolutional networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
  • Newell et al. (2016) Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. In European Conference on Computer Vision, pages 483–499. Springer, 2016.
  • Insafutdinov et al. (2016) Eldar Insafutdinov, Leonid Pishchulin, Bjoern Andres, Mykhaylo Andriluka, and Bernt Schiele. DeeperCut: A deeper, stronger, and faster multi-person pose estimation model. In European Conference on Computer Vision, pages 34–50. Springer, 2016.
  • Insafutdinov et al. (2017) Eldar Insafutdinov, Mykhaylo Andriluka, Leonid Pishchulin, Siyu Tang, Evgeny Levinkov, Bjoern Andres, and Bernt Schiele. Arttrack: Articulated multi-person tracking in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2017.
  • Kreiss et al. (2019) Sven Kreiss, Lorenzo Bertoni, and Alexandre Alahi. Pifpaf: Composite fields for human pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11977–11986, 2019.
  • Sun et al. (2019) Ke Sun, Bin Xiao, Dong Liu, and Jingdong Wang. Deep high-resolution representation learning for human pose estimation. arXiv preprint arXiv:1902.09212, 2019.
  • Dang et al. (2019) Q. Dang, J. Yin, B. Wang, and W. Zheng. Deep learning based 2d human pose estimation: A survey. Tsinghua Science and Technology, 24(6):663–676, Dec 2019. ISSN 1007-0214. doi: 10.26599/TST.2018.9010100.
  • Sarafianos et al. (2016) Nikolaos Sarafianos, Bogdan Boteanu, Bogdan Ionescu, and Ioannis A Kakadiaris. 3d human pose estimation: A review of the literature and analysis of covariates. Computer Vision and Image Understanding, 152:1–20, 2016.
  • Martinez et al. (2017) Julieta Martinez, Rayat Hossain, Javier Romero, and James J Little. A simple yet effective baseline for 3d human pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, pages 2640–2649, 2017.
  • Andriluka et al. (2018) Mykhaylo Andriluka, Umar Iqbal, Eldar Insafutdinov, Leonid Pishchulin, Anton Milan, Juergen Gall, and Bernt Schiele. Posetrack: A benchmark for human pose estimation and tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5167–5176, 2018.
  • Golda et al. (2019) Thomas Golda, Tobias Kalb, Arne Schumann, and Jürgen Beyerer. Human pose estimation for real-world crowded scenarios. CoRR, abs/1907.06922, 2019.
  • Sinz et al. (2019) Fabian H. Sinz, Xaq Pitkow, Jacob Reimer, Matthias Bethge, and Andreas S. Tolias. Engineering a less artificial intelligence. Neuron, 103(6):967 – 979, 2019. ISSN 0896-6273. doi: https://doi.org/10.1016/j.neuron.2019.08.034.
  • Seethapathi et al. (2019) Nidhi Seethapathi, Shaofei Wang, Rachit Saluja, Gunnar Blohm, and Konrad P. Körding. Movement science needs different pose tracking algorithms. CoRR, abs/1907.10226, 2019.
  • Rhodin et al. (2018) Helge Rhodin, Jörg Spörri, Isinsu Katircioglu, Victor Constantin, Frédéric Meyer, Erich Müller, Mathieu Salzmann, and Pascal Fua. Learning monocular 3d human pose estimation from multi-view images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8437–8446, 2018.
  • Mathis et al. (2019a) Alexander Mathis, Mert Yüksekgönül, Byron Rogers, Matthias Bethge, and Mackenzie W Mathis. Pretraining boosts out-of-domain robustness for pose estimation. arXiv preprint arXiv:1909.11229, 2019a.
  • Michaelis et al. (2019) Claudio Michaelis, Benjamin Mitzkus, Robert Geirhos, Evgenia Rusak, Oliver Bringmann, Alexander S. Ecker, Matthias Bethge, and Wieland Brendel. Benchmarking robustness in object detection: Autonomous driving when winter is coming. CoRR, abs/1907.07484, 2019.
  • Shotton et al. (2012) Jamie Shotton, Ross B. Girshick, Andrew W. Fitzgibbon, Toby Sharp, Mat Cook, Mark Finocchio, Richard Moore, Pushmeet Kohli, Antonio Criminisi, Alex Kipman, and Andrew Blake. Efficient human pose estimation from single depth images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35:2821–2840, 2012.
  • Obdržálek et al. (2012) Š. Obdržálek, G. Kurillo, F. Ofli, R. Bajcsy, E. Seto, H. Jimison, and M. Pavel. Accuracy and robustness of kinect pose estimation in the context of coaching of elderly population. In 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 1188–1193, Aug 2012. doi: 10.1109/EMBC.2012.6346149.
  • Wiltschko et al. (2015) Alexander B Wiltschko, Matthew J Johnson, Giuliano Iurilli, Ralph E Peterson, Jesse M Katon, Stan L Pashkovski, Victoria E Abraira, Ryan P Adams, and Sandeep Robert Datta. Mapping sub-second structure in mouse behavior. Neuron, 88(6):1121–1135, 2015.
  • Hong et al. (2015) Weizhe Hong, Ann Kennedy, Xavier P Burgos-Artizzu, Moriel Zelikowsky, Santiago G Navonne, Pietro Perona, and David J Anderson. Automated measurement of mouse social behaviors using depth sensing, video tracking, and machine learning. Proceedings of the National Academy of Sciences, 112(38):E5351–E5360, 2015.
  • Pavlakos et al. (2019) Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed A. A. Osman, Dimitrios Tzionas, and Michael J. Black. Expressive body capture: 3d hands, face, and body from a single image. In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June 2019.
  • Kanazawa et al. (2018) Angjoo Kanazawa, Michael J. Black, David W. Jacobs, and Jitendra Malik. End-to-end recovery of human shape and pose. In Computer Vision and Pattern Recognition (CVPR), 2018.
  • Zuffi et al. (2016) Silvia Zuffi, Angjoo Kanazawa, David Jacobs, and Michael Black. 3d menagerie: Modeling the 3d shape and pose of animals. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 11 2016.
  • Biggs et al. (2018) Benjamin Biggs, Thomas Roddick, Andrew Fitzgibbon, and Roberto Cipolla. Creatures great and SMAL: Recovering the shape and motion of animals from video. In Asian Conference on Computer Vision, pages 3–19. Springer, 2018.
  • Zuffi et al. (2018) Silvia Zuffi, Angjoo Kanazawa, and Michael J. Black. Lions and tigers and bears: Capturing non-rigid, 3d, articulated shape from images. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, June 2018.
  • Nguyen et al. (2019) Giang Nguyen, Stefan Dlugolinsky, Martin Bobák, Viet Tran, Álvaro López García, Ignacio Heredia, Peter Malík, and Ladislav Hluchý. Machine learning and deep learning frameworks and libraries for large-scale data mining: a survey. Artificial Intelligence Review, 52(1):77–124, Jun 2019. ISSN 1573-7462. doi: 10.1007/s10462-018-09679-z.
  • Mathis et al. (2018) Alexander Mathis, Pranav Mamidanna, Kevin M Cury, Taiga Abe, Venkatesh N Murthy, Mackenzie Weygandt Mathis, and Matthias Bethge. Deeplabcut: markerless pose estimation of user-defined body parts with deep learning. Technical report, Nature Publishing Group, 2018.
  • Kornblith et al. (2019) Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better imagenet models transfer better? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2661–2671, 2019.
  • Arac et al. (2019) Ahmet Arac, Pingping Zhao, Bruce H Dobkin, S Thomas Carmichael, and Peyman Golshani. Deepbehavior: A deep learning toolbox for automated analysis of animal and human behavior imaging data. Frontiers in systems neuroscience, 13:20, 2019.
  • Mathis and Warren (2018) Alexander Mathis and Richard A. Warren. On the inference speed and video-compression robustness of deeplabcut. bioRxiv, 2018. doi: 10.1101/457242.
  • Nath et al. (2019) Tanmay Nath, Alexander Mathis, An Chi Chen, Amir Patel, Matthias Bethge, and Mackenzie W Mathis. Using deeplabcut for 3d markerless pose estimation across species and behaviors. Nature protocols, 2019.
  • Kaplan et al. (2019) Oral Kaplan, Goshiro Yamamoto, Takafumi Taketomi, Alexander Plopski, and Hirokazu Kato. Video-based visualization of knee movement in cycling for quantitative and qualitative monitoring. In 2019 12th Asia Pacific Workshop on Mixed and Augmented Reality (APMAR), pages 1–5. IEEE, 2019.
  • Pereira et al. (2019) Talmo D Pereira, Diego E Aldarondo, Lindsay Willmore, Mikhail Kislin, Samuel S-H Wang, Mala Murthy, and Joshua W Shaevitz. Fast animal pose estimation using deep neural networks. Nature methods, 16(1):117, 2019.
  • Bova et al. (2019) A Bova, K Kernodle, K Mulligan, and D Leventhal. Automated rat single-pellet reaching with 3-dimensional reconstruction of paw and digit trajectories. Journal of visualized experiments: JoVE, 2019.
  • Sriram et al. (2019) Balaji Sriram, Lillian Li, Alberto Cruz-Martín, and Anirvan Ghosh. A sparse probabilistic code underlies the limits of behavioral discrimination. Cerebral Cortex, 2019.
  • Badrinarayanan et al. (2017) V. Badrinarayanan, A. Kendall, and R. Cipolla. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12):2481–2495, Dec 2017. ISSN 0162-8828. doi: 10.1109/TPAMI.2016.2644615.
  • Graving et al. (2019) Jacob M Graving, Daniel Chae, Hemal Naik, Liang Li, Benjamin Koger, Blair R Costelloe, and Iain D Couzin. Fast and robust animal pose estimation. bioRxiv, page 620245, 2019.
  • Berman et al. (2014) Gordon J. Berman, Daniel M. Choi, William Bialek, and Joshua W. Shaevitz. Mapping the stereotyped behaviour of freely moving fruit flies. Journal of The Royal Society Interface, 11(99), 2014. ISSN 1742-5689. doi: 10.1098/rsif.2014.0672.
  • Forys et al. (2018) Brandon Forys, Dongsheng Xiao, Pankaj Gupta, Jamie D Boyd, and Timothy H Murphy. Real-time markerless video tracking of body parts in mice using deep neural networks. bioRxiv, page 482349, 2018.
  • Štih et al. (2019) Vilim Štih, Luigi Petrucco, Andreas M Kist, and Ruben Portugues. Stytra: An open-source, integrated system for stimulation, tracking and closed-loop behavioral experiments. PLoS computational biology, 15(4):e1006699, 2019.
  • Wiegand et al. (2003) Thomas Wiegand, Gary J Sullivan, Gisle Bjontegaard, and Ajay Luthra. Overview of the H.264/AVC video coding standard. IEEE Transactions on circuits and systems for video technology, 13(7):560–576, 2003.
  • Shorten and Khoshgoftaar (2019) Connor Shorten and Taghi M. Khoshgoftaar. A survey on image data augmentation for deep learning. Journal of Big Data, 6(1):60, Jul 2019. ISSN 2196-1115. doi: 10.1186/s40537-019-0197-0.
  • Meijer et al. (2019) Guido Meijer, Michael Schartner, Gaëlle Chapuis, and Karel Svoboda. Video hardware and software for the international brain laboratory. International Brain Laboratory, 2019.
  • Karashchuk (2019) Pierre Karashchuk. lambdaloop/anipose: v0.5.0, August 2019.
  • Günel et al. (2019) Semih Günel, Helge Rhodin, Daniel Morales, João Campagnolo, Pavan Ramdya, and Pascal Fua. Deepfly3d: A deep learning-based approach for 3d limb and appendage tracking in tethered, adult drosophila. bioRxiv, 2019. doi: 10.1101/640375.
  • Zhang and Park (2019) Yilun Zhang and Hyun Soo Park. Multiview supervision by registration. arXiv preprint arXiv:1811.11251v2, 2019.
  • Sridhar et al. (2019) Vivek Hari Sridhar, Dominique G. Roche, and Simon Gingins. Tracktor: Image-based automated tracking of animal movement and behaviour. Methods in Ecology and Evolution, 10(6):815–820, 2019. doi: 10.1111/2041-210X.13166.
  • Romero-Ferrero et al. (2019) Francisco Romero-Ferrero, Mattia G Bergomi, Robert C Hinz, Francisco JH Heras, and Gonzalo G de Polavieja. idtracker.ai: tracking all individuals in small or large collectives of unmarked animals. Nature methods, 16(2):179, 2019.
  • Jiang et al. (2019) Zheheng Jiang, Zhihua Liu, Long Chen, Lei Tong, Xiangrong Zhang, Xiangyuan Lan, Danny Crookes, Ming-Hsuan Yang, and Huiyu Zhou. Detection and tracking of multiple mice using part proposal networks. arXiv preprint arXiv:1906.02831, 2019.
  • Cronin et al. (2019) Neil J Cronin, Timo Rantalainen, Juha P Ahtiainen, Esa Hynynen, and Ben Waller. Markerless 2d kinematic analysis of underwater running: A deep learning approach. Journal of biomechanics, 2019.
  • Zhang and Yartsev (2019) Wujie Zhang and Michael M Yartsev. Correlated neural activity across the brains of socially interacting bats. Cell, 2019.
  • Owen et al. (2019) Scott F Owen, Max H Liu, and Anatol C Kreitzer. Thermal constraints on in vivo optogenetic manipulations. Nature neuroscience, 2019.
  • Azevedo et al. (2019) Anthony W Azevedo, Pralaksha Gurung, Lalanti Venkatasubramanian, Richard Mann, and John C Tuthill. A size principle for leg motor control in drosophila. bioRxiv, page 730218, 2019.
  • De Bari et al. (2019) Benjamin De Bari, James A Dixon, Bruce A Kay, and Dilip Kondepudi. Oscillatory dynamics of an electrically driven dissipative structure. PloS one, 14(5):e0217305, 2019.
  • Zaghi-Lara et al. (2019) Regina Zaghi-Lara, Miguel Ángel Gea, Jordi Camí, Luis M Martínez, and Alex Gomez-Marin. Playing magic tricks to deep neural networks untangles human deception. arXiv preprint arXiv:1908.07446, 2019.
  • Vargas-Irwin et al. (2010) Carlos E Vargas-Irwin, Gregory Shakhnarovich, Payman Yadollahpour, John MK Mislow, Michael J Black, and John P Donoghue. Decoding complete reach and grasp actions from local primary motor cortex populations. Journal of neuroscience, 30(29):9659–9669, 2010.
  • Schaffelhofer et al. (2015) Stefan Schaffelhofer, Andres Agudelo-Toro, and Hansjörg Scherberger. Decoding a wide range of hand configurations from macaque motor, premotor, and parietal cortices. Journal of Neuroscience, 35(3):1068–1081, 2015.
  • Mathis et al. (2019b) Alexander Mathis, Andrea R Pack, Rodrigo S Maeda, and Samuel David McDougle. Highlights from the 29th annual meeting of the society for the neural control of movement, 2019b.
  • Chambers et al. (2019) Claire Chambers, Gaiqing Kong, Kunlin Wei, and Konrad Kording. Pose estimates from online videos show that side-by-side walkers synchronize movement under naturalistic conditions. PloS one, 14(6):e0217861, 2019.
  • Draft et al. (2018) Ryan W Draft, Matthew R McGill, Vikrant Kapoor, and Venkatesh N Murthy. Carpenter ants use diverse antennae sampling strategies to track odor trails. Journal of Experimental Biology, 221(22):jeb185124, 2018.
  • Catania (2013) Kenneth C Catania. Stereo and serial sniffing guide navigation to an odour source in a mammal. Nature communications, 4:1441, 2013.
  • Stringer et al. (2019) Carsen Stringer, Marius Pachitariu, Nicholas Steinmetz, Charu Bai Reddy, Matteo Carandini, and Kenneth D. Harris. Spontaneous behaviors drive multidimensional, brainwide activity. Science, 364(6437), 2019. ISSN 0036-8075. doi: 10.1126/science.aav7893.
  • Musall et al. (2019) Simon Musall, Matthew T. Kaufman, Ashley L. Juavinett, Steven Gluf, and Anne K. Churchland. Single-trial neural dynamics are dominated by richly varied movements. Nat. Neurosci., 2019.
  • Xian et al. (2019) Yongqin Xian, Christoph H Lampert, Bernt Schiele, and Zeynep Akata. Zero-shot learning—a comprehensive evaluation of the good, the bad and the ugly. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(9):2251–2265, Sep. 2019. ISSN 0162-8828. doi: 10.1109/TPAMI.2018.2857768.
  • Simon et al. (2017) Tomas Simon, Hanbyul Joo, Iain Matthews, and Yaser Sheikh. Hand keypoint detection in single images using multiview bootstrapping. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2017.

Highlighted References:

[1*] A survey on deep learning in medical image analysis (12) A comprehensive review of the deep learning algorithms used in medical image analysis as of 2017, with a discussion of the most successful approaches; together with [2**], a fantastic introduction for newcomers to the field.

[2**] Deep learning: The good, the bad, and the ugly (16) Excellent review of deep learning progress including a detailed description of recent successes as well as limitations of computer vision algorithms.

[3**] Realtime multi-person 2d pose estimation using part affinity fields (19) OpenPose was the first real-time multi-person system to jointly detect human body parts by using part affinity fields, a great way to link body part proposals across individuals. The toolbox is well maintained and now boasts body, hand, facial, and foot keypoints (in total 135 keypoints) as well as 3D support.

[4*] DensePose: Dense human pose estimation in the wild (21) Using a large dataset of humans (50K), the authors build dense correspondences between RGB images and human bodies. They apply this to humans “in the wild,” and build tools for efficiently dealing with occlusions. It is highly accurate and runs at up to 25 frames per second.

[5**] Three-D Safari: Learning to Estimate Zebra Pose, Shape, and Texture from Images “In the Wild” (22) Zuffi et al. push dense pose estimation further by using a new SMALST model to capture zebra pose, soft-tissue shape, and even texture “in the wild.” This is a difficult challenge, as zebra coats blend into the background of the savanna. This paper makes significant improvements in accuracy and realism, and builds on a line of elegant work from these authors (48, 50).

[6**] DeeperCut: A deeper, stronger, and faster multi-person pose estimation model (28) DeeperCut is a highly accurate algorithm for multi-human pose estimation, owing to improved deep-learning-based body part detectors and image-conditioned pairwise terms that predict the location of body parts based on the locations of other body parts. These terms are then used to assemble accurate poses of individuals via graph cutting. In ArtTrack (29) the work was extended to fast multi-human pose estimation in videos.
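Body part detectors of this kind output a per-part score map over a downsampled grid; a keypoint estimate is then read out as the location of the maximum score, scaled back to image coordinates. A minimal sketch of that readout (the stride value and toy score map below are illustrative, not taken from any specific network):

```python
import numpy as np

def decode_scoremap(scoremap, stride=8.0):
    """Convert a single body-part score map to image coordinates.

    The network downsamples the image by `stride`, so the argmax in
    score-map space is scaled back up, plus a half-stride offset to
    land in the centre of the corresponding receptive cell.
    """
    row, col = np.unravel_index(np.argmax(scoremap), scoremap.shape)
    x = col * stride + stride / 2
    y = row * stride + stride / 2
    confidence = scoremap[row, col]
    return x, y, confidence

# Toy score map with a single peak at (row=3, col=5)
smap = np.zeros((10, 10))
smap[3, 5] = 0.9
print(decode_scoremap(smap))  # -> (44.0, 28.0, 0.9)
```

In practice, detectors additionally regress sub-pixel offset fields to refine this coarse argmax location.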

[7*] Recovering the shape and motion of animals from video (49) The authors combine multiple methods in order to efficiently fit 3D shape to multiple quadrupeds, from camels to bears. They also provide a novel dataset of joint annotations and silhouette segmentations for eleven animal videos.

[8**] DeepLabCut: markerless pose estimation of user-defined body parts with deep learning (52) DeepLabCut was the first deep learning toolbox for animal pose estimation. The key advance was to benchmark a subset of the feature detectors in DeeperCut (28). This paper showed nearly human-level performance with only 50-200 images. It benchmarked flies moving in a 3D chamber, hand articulations and open-field behavior in mice, and provided open-source tools for creating new datasets, data loaders to train deep neural networks, and post-hoc analysis. Subsequent work has improved accuracy and speed, and introduced more network variants into the Python package (55, 56, 40).

[9*] DeepBehavior: A deep learning toolbox for automated analysis of animal and human behavior imaging data (54) Arac et al. use three different DNN packages (OpenPose (19), YOLO, and Tensorbox) for 3D analysis of pellet reaching, the three-chamber test, and social behavior in mice, as well as 3D human kinematic analysis for clinical motor function assessment with OpenPose (19). They also provide MATLAB scripts for additional analysis after pose estimation.

[10*] Using DeepLabCut for 3D markerless pose estimation across species and behaviors (56) A Nature Protocols user guide to DeepLabCut2.0, with 3D pose estimation of hunting cheetahs and improved network performance. The toolbox is provided as a Python package with graphical user interfaces for labeling and active-learning-based network refinement, together with Jupyter Notebooks that can be run on cloud resources such as Google Colaboratory (for free).

[11*] Fast animal pose estimation using deep neural networks (58) LEAP (LEAP Estimates Animal Pose) is a DNN-based method for predicting the positions of animal body parts. The framework consists of a graphical interface for labeling body parts and training the network. Training and inference times are fast due to the lightweight architecture. The authors also analyzed insect gait using unsupervised behavioral methods (63), which they applied directly to the postures rather than to compressed image features.
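Applying unsupervised dimensionality reduction directly to posture time series, as described above, can be sketched with a simple PCA via SVD. This is a minimal illustration of the general idea, not the specific pipeline of (58) or (63); the synthetic trajectory and component count are assumptions for demonstration:

```python
import numpy as np

def posture_pca(poses, n_components=2):
    """Project a (frames, 2 * n_keypoints) posture matrix onto its
    top principal components via SVD of the mean-centred data."""
    centered = poses - poses.mean(axis=0)
    # Rows of vt are the principal axes, ordered by variance explained
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    projection = centered @ vt[:n_components].T
    explained = (s ** 2) / (s ** 2).sum()
    return projection, explained[:n_components]

# Synthetic example: 100 frames of 5 keypoints (x, y) oscillating in phase
t = np.linspace(0, 4 * np.pi, 100)
rng = np.random.default_rng(0)
poses = np.outer(np.sin(t), np.ones(10)) + 0.01 * rng.normal(size=(100, 10))
proj, var = posture_pca(poses)
print(proj.shape)  # -> (100, 2)
```

Because the toy trajectory is essentially one-dimensional, the first component captures nearly all of the variance; on real posture data, the number of meaningful components reflects the dimensionality of the behavior itself.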