From Face Recognition to Models of Identity: A Bayesian Approach to Learning about Unknown Identities from Unsupervised Data

07/20/2018 ∙ by Daniel Coelho de Castro, et al. ∙ Microsoft ∙ Imperial College London

Current face recognition systems robustly recognize identities across a wide variety of imaging conditions. In these systems recognition is performed via classification into known identities obtained from supervised identity annotations. There are two problems with this current paradigm: (1) current systems are unable to benefit from unlabelled data which may be available in large quantities; and (2) current systems equate successful recognition with labelling a given input image. Humans, on the other hand, regularly perform identification of individuals completely unsupervised, recognising the identity of someone they have seen before even without being able to name that individual. How can we go beyond the current classification paradigm towards a more human understanding of identities? We propose an integrated Bayesian model that coherently reasons about the observed images, identities, partial knowledge about names, and the situational context of each observation. While our model achieves good recognition performance against known identities, it can also discover new identities from unsupervised data and learns to associate identities with different contexts depending on which identities tend to be observed together. In addition, the proposed semi-supervised component is able to handle not only acquaintances, whose names are known, but also unlabelled familiar faces and complete strangers in a unified framework.







1 Introduction

For the following discussion, we decompose the usual face identification task into two sub-problems: recognition and tagging. Here we understand recognition as the unsupervised task of matching an observed face to a cluster of previously seen faces with similar appearance (disregarding variations in pose, illumination etc.), which we refer to as an identity. Humans routinely operate at this level of abstraction to recognise familiar faces: even when people’s names are not known, we can still tell them apart. Tagging, on the other hand, refers to putting names to faces, i.e. associating string literals to known identities.

Humans tend to create an inductive mental model of facial appearance for each person they meet, which they then query at new encounters in order to recognise them. This is opposed to a transductive approach that attempts to match faces to specific instances from a memorised gallery of past face observations, which is how identification systems are often implemented [17].

An alternative way to represent faces, aligned with our inductive recognition, is via generative face models, which explicitly separate latent identity content, tied across all pictures of the same individual, from nuisance factors such as pose, expression and illumination [16, 22, 19]. While mostly limited to linear projections from pixel space (or mixtures thereof), the probabilistic framework applied in these works made it possible to tackle a variety of face recognition tasks, such as closed- and open-set identification, verification and clustering.

A further important aspect of social interactions is that, as an individual continues to observe faces every day, they encounter some people much more often than others, and the total number of distinct identities ever met tends to increase virtually without bounds. Additionally, we argue that human face recognition does not happen in an isolated environment, but situational contexts (e.g. ‘home’, ‘work’, ‘gym’) constitute strong cues for the groups of people a person expects to meet (Fig. 1(b)).

With regards to tagging, in daily life we very rarely obtain named face observations: acquaintances normally introduce themselves only once, and not repeatedly whenever they are in our field of view. In other words, humans are naturally capable of semi-supervised learning, generalising sparse name annotations to all observations of the corresponding individuals, while additionally reconciling naming conflicts due to noise and uncertainty.

Figure 1: Face recognition settings. Points represent face observations and boxes are name labels.

In contrast, standard computational face identification is fully supervised (see Fig. 1(a)), relying on vast labelled databases of high-quality images [1]. Although many supervised methods achieve astonishing accuracy on challenging benchmarks (e.g. [27, 26]) and are successfully employed in practical biometric applications, this setting has arguably limited analogy to human social experience.

Expanding on the generative perspective, we introduce a unified Bayesian model which reflects all the above considerations on identity distributions, context-awareness and labelling (Fig. 1(b)). Our nonparametric identity model effectively represents an unbounded population of identities, while taking contextual co-occurrence relations into account and exploiting modern deep face representations to overcome limitations of previous linear generative models. Our main contributions in this work are twofold:

  1. We propose an unsupervised face recognition model which can explicitly reason about people it has never seen; and

  2. We attach to it a novel robust label model enabling it to predict names by learning from both named and unnamed faces.

Related Work

Other face recognition methods (even those formulated in a Bayesian framework) [33, 34, 9, 28, 18] often limit themselves to point estimates of parameters and predictions, occasionally including ad-hoc confidence metrics. A distinct advantage of our approach is that it is probabilistic end-to-end, and thus naturally provides predictions with principled, quantifiable uncertainties. Moreover, we employ modern Bayesian modelling tools, namely hierarchical nonparametrics, which enable dynamically adapting model complexity while faithfully reflecting the real-world assumptions laid out above.

Secondly, although automatic face tagging is a very common task, each problem setting can impose wildly different assumptions and constraints. Typical application domains involve the annotation of personal photo galleries [33, 34, 3, 13], multimedia (e.g. TV) [28, 18] or security/surveillance [17]. Our work focuses on egocentric human-like face recognition, a setting which seems largely unexplored, as most of the work using first-person footage appears to revolve around other tasks like object and activity recognition, face detection, and tracking [4]. As we explained previously, the dynamic, online nature of first-person social experience brings a number of specific modelling challenges for face recognition.

Finally, while there is substantial prior work on using contexts to assist face recognition, we emphasize that much (perhaps most) of it is effectively complementary to our unified framework. Notions of global context such as timestamp, geolocation and image background [31, 34, 3, 9] can readily be used to inform our current context model (Section 2.1). In addition, we can naturally augment the proposed face model (Section 2.3) to leverage further individual context features, e.g. clothing and speech [34, 3, 28, 18]. Integration of these additional factors opens exciting avenues for future research.

2 A Model of Identities

In this section, we describe in isolation each of the building blocks of the proposed approach to facial identity recognition: the context model, the identity model and the face model. We assume data is collected in the form of camera frames (either photographs or video stills), numbered 1 to F, and that faces are cropped with some face detection system and grouped by frame-number indicators, f_n. The diagram in Fig. 2 illustrates the full proposed graphical model, including the label model detailed in Section 3.




Figure 2: Overview of the proposed generative model, encompassing the context model, identity model, face model and label model. Unfilled nodes represent latent variables, shaded nodes are observed, the half-shaded node is observed only for a subset of the indices and uncircled nodes denote fixed hyperparameters. The graphical model comprises the global and context-wise identity probabilities, the context probabilities, the frame-wise context labels (indexed by frame number), the latent identity indicators, the face observations and their respective name annotations, the parameters of the face model and the identities’ name labels. See text for descriptions of the remaining symbols.

2.1 Context Model

In our identity recognition scenario, we imagine the user moving between contexts throughout the day (e.g. home–work–gym…). Since humans naturally use situational context as a strong prior on the groups of people we expect to encounter in each situation, we incorporate context-awareness in our model of identities to mimic human-like face recognition.

The context model we propose involves a categorical context variable for each observation, taking one of K distinct contexts, where K is some fixed number (see footnote 2). Crucially, we assume that all observations in frame f share that frame’s context, c_f.

We define the identity indicators to be independent given the context of the corresponding frames (see Section 2.2, below). However, since the contexts are tied by frame, marginalising over the contexts captures identity co-occurrence relations. In turn, these allow the model to make more confident predictions about people who tend to be seen together in the same environment.

This formalisation of contexts as discrete semantic labels is closely related to the place recognition model in [31], used there to disambiguate predictions for object detection. It has also been demonstrated that explicit incorporation of a context variable can greatly improve clustering with mixture models [20].

Finally, we assume the context indicators are independently distributed according to probabilities ω, which themselves follow a Dirichlet prior:

    c_f | ω ~ Cat(ω),  f = 1, …, F;    ω ~ Dir(γ),

where F is the total number of frames. In our simulation experiments, we use a symmetric Dirichlet prior.
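As a concrete illustration, the context prior above can be simulated in a few lines of Python. This is a minimal sketch with our own function names: the symmetric Dirichlet draw is obtained by normalising Gamma variates, and each frame then samples its context independently.

```python
import random

def sample_context_probs(num_contexts, gamma_conc=1.0, rng=random):
    # Symmetric Dirichlet draw via normalised Gamma variates.
    draws = [rng.gammavariate(gamma_conc, 1.0) for _ in range(num_contexts)]
    total = sum(draws)
    return [d / total for d in draws]

def sample_frame_contexts(num_frames, context_probs, rng=random):
    # Each frame independently picks a context from the categorical prior.
    return [rng.choices(range(len(context_probs)), weights=context_probs)[0]
            for _ in range(num_frames)]
```

In the full model the context probabilities would of course be inferred jointly with everything else; this only illustrates the generative direction.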

2.2 Identity Model

In the daily-life scenario described in Section 1, an increasing number of unique identities will tend to appear as more faces are observed. This number is expected to grow much more slowly than the number of observations, and can be considered unbounded in practice (we do not expect a user to run out of new people to meet). Moreover, we can expect some people to be encountered much more often than others. Since a Dirichlet process (DP) [11] displays properties that mirror all of the above phenomena [29], it is a sound choice for modelling the distribution of identities.

Furthermore, the assumption that all people can potentially be encountered in any context, but with different probabilities, is perfectly captured by a hierarchical Dirichlet process (HDP) [30]. Making use of the context model, we define one DP per context c, each with its own concentration parameter and sharing the same global DP as a base measure. (Footnote 2: One could further allow an unbounded number of latent contexts by incorporating a nonparametric context distribution, resulting in a structure akin to the nested DP [24, 5] or the dual DP described in [32]. See Appendix 0.A for details.) This hierarchical construction thus produces context-specific distributions over a common set of identities.

We consider that each of the face detections is associated with a latent identity indicator variable, z_n. We can write the generative process as

    π0 ~ GEM(α0),    πc | π0 ~ DP(αc, π0),    z_n | c_{f_n} = c ~ Cat(πc),

where GEM(α0) is the DP stick-breaking distribution, π0k = v_k ∏_{j<k} (1 − v_j), with v_k ~ Beta(1, α0) and k = 1, 2, …. Here, π0 is the global identity distribution and the πc are the context-specific identity distributions.
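The stick-breaking construction of the global identity distribution can be sketched with a truncated sampler (illustrative only; `num_sticks` truncates the infinite sequence, and the leftover mass corresponds to identities not yet broken off the stick):

```python
import random

def stick_breaking(alpha, num_sticks, rng=random):
    # Truncated GEM(alpha): each weight is a Beta(1, alpha) fraction
    # of the stick remaining after all previous breaks.
    remaining = 1.0
    weights = []
    for _ in range(num_sticks):
        v = rng.betavariate(1.0, alpha)
        weights.append(remaining * v)
        remaining *= 1.0 - v
    return weights  # sums to < 1; the remainder is mass on unbroken sticks
```

Smaller concentration values front-load the mass onto a few identities, mirroring the observation that some people are encountered far more often than others.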

Although the full generative model involves infinite-dimensional objects, DP-based models present simple finite-dimensional marginals. In particular, the posterior predictive probability of encountering a known identity k in context c is

    p(z_{N+1} = k | c, …) = (n_{ck} + αc π0k) / (n_c + αc),

where n_{ck} is the number of observations assigned to context c and identity k, and n_c is the total number of observations in context c.

Finally, such a nonparametric model is well suited for an open-set identification task, as it can elegantly estimate the prior probability of encountering an unknown identity:

    p(z_{N+1} > I | c, …) = αc π0* / (n_c + αc),

where I is the current number of distinct known identities and π0* = 1 − Σ_{k=1}^{I} π0k denotes the global probability of sampling a new identity.
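Both predictive quantities above are simple functions of per-context counts and the global identity weights. A minimal sketch (our own function and argument names, not the paper’s implementation):

```python
def identity_predictive(counts_ck, global_weights, new_mass, alpha_c):
    """Posterior predictive over identities within one context.

    counts_ck: per-identity observation counts in this context
    global_weights: global probabilities of the known identities
    new_mass: global probability mass on a brand-new identity
    alpha_c: context-level DP concentration
    """
    n_c = sum(counts_ck)
    denom = n_c + alpha_c
    known = [(n + alpha_c * pi) / denom
             for n, pi in zip(counts_ck, global_weights)]
    unknown = alpha_c * new_mass / denom
    return known, unknown
```

Note that when the known weights and the new-identity mass sum to one, the returned probabilities form a proper distribution over known identities plus ‘unknown’.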

2.3 Face Model

In face recognition applications, it is typically more convenient and meaningful to extract a compact representation of face features than to work directly in a high-dimensional pixel space.

We assume that the observed features of the n-th face, x_n, arise from a parametric family of distributions, F. The parameters of this distribution, θ_i, drawn from a prior, H, are unique for each identity and are shared across all face feature observations of the same person:

    θ_i ~ H,    x_n | z_n = i ~ F(θ_i).

As a consequence, the marginal distribution of faces is given by a mixture model: p(x_n | c_{f_n} = c) = Σ_i π_{ci} F(x_n; θ_i).
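The mixture marginal can be sketched directly, assuming for illustration isotropic Gaussian components (the concrete choice made later in this section) and using a standard log-sum-exp for numerical stability; function names are ours:

```python
import math

def isotropic_gaussian_logpdf(x, mean, var):
    # log N(x; mean, var * I) for a d-dimensional feature vector x
    d = len(x)
    sq = sum((xi - mi) ** 2 for xi, mi in zip(x, mean))
    return -0.5 * (d * math.log(2 * math.pi * var) + sq / var)

def face_marginal_loglik(x, mixture_weights, means, variances):
    # log p(x) under the identity mixture: log sum_i w_i N(x; mu_i, var_i I)
    logs = [math.log(w) + isotropic_gaussian_logpdf(x, m, v)
            for w, m, v in zip(mixture_weights, means, variances) if w > 0]
    mx = max(logs)
    return mx + math.log(sum(math.exp(l - mx) for l in logs))
```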

In the experiments reported in this paper, we used the 128-dimensional embeddings produced by OpenFace, a publicly available, state-of-the-art neural network for face recognition [2], implementing FaceNet’s architecture and methodology [26]. In practice, this could easily be swapped for other face embeddings (e.g. DeepFace [27]) without affecting the remainder of the model. We chose isotropic Gaussian mixture components for the face features, with an empirical Gaussian–inverse gamma prior for their means and variances.

3 Robust Semi-Supervised Label Model

We expect to work with only a small number of labelled observations manually provided by the user. Since the final goal is to identify any observed face, our probabilistic model needs to incorporate a semi-supervised aspect, generalising the sparse given labels to unlabelled instances. Throughout this section, the terms ‘identity’ and ‘cluster’ will be used interchangeably.

One of the cornerstones of semi-supervised learning (SSL) is the premise that clustered items tend to belong to the same class [8, §1.2.2]. Building on this cluster assumption, mixture models, such as ours, have been successfully applied to SSL tasks [6]. We illustrate in Fig. 3 our proposed label model detailed below, comparing it qualitatively to nearest-neighbour classification on a toy example.

With the motivation above, we attach a label variable (a name) to each cluster (identity), here denoted y_i. This notation implies that there is a single true label for each observation n, namely y_{z_n}, analogously to the observation parameters θ_{z_n}. Finally, the observed labels, ỹ_n, are potentially corrupted through some noise process, p(ỹ_n | y_{z_n}). Let 𝓛 denote the set of indices of the labelled data. The complete generative process thus extends the identity model of Section 2 with these per-cluster labels and the noise process.

Figure 3: Hard label predictions of the proposed semi-supervised label model (right) and nearest-neighbour classification (left). Points represent unlabelled face observations, squares are labelled and the black contours on the right show identity boundaries. The proposed label model produces more natural boundaries, assigning the ‘unknown’ label (white) to unlabelled clusters and regions distant from any observed cluster, while also accommodating label noise (‘Bob’ → ‘Alice’) without the spurious boundaries introduced by NN.

As mentioned previously, a related model for mixture model-based SSL with noisy labels was proposed in [6]. Instead of considering an explicit noise model for the class labels, the authors of that work directly model the conditional label distribution for each cluster. Our setting here is more general: we assume an unbounded number not only of clusters, but also of possible labels.

3.1 Label Prior

We assume that the number of distinct labels will tend to increase without bounds as more data is observed. Therefore, we adopt a further nonparametric prior on the cluster-wide labels:

    G ~ DP(θ, G0),    y_i | G ~ G,

where G0 is some base probability distribution over the countable but unbounded label space (e.g. strings). (Footnote 3: One could instead consider a Pitman–Yor process if power-law behaviour seems more appropriate than the DP’s exponential tails [21].) We briefly discuss the choice of G0 further below.

All concrete knowledge we have about the random label prior G comes from the set of observed labels, Y. Crucially, if we marginalise out G, the predictive label distribution is simply [29]

    p(y | Y) = ( θ G0(y) + Σ_{l ∈ 𝕃} m_l 1[y = l] ) / (θ + |Y|),

which we will denote L̂(y). Here, 𝕃 is the set of distinct known labels among Y, and m_l is the number of components with label l (note that Σ_{l ∈ 𝕃} m_l = |Y|).
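The marginalised label predictive described above mixes the base distribution with empirical label counts. A minimal Python sketch (function and argument names are ours, with `theta` the DP concentration and `base_prob` the base distribution over strings):

```python
from collections import Counter

def label_predictive(observed_labels, theta, base_prob):
    """Predictive over names after marginalising the random label prior.

    observed_labels: list of cluster labels seen so far
    theta: DP concentration of the label prior
    base_prob: function mapping a string to its base probability
    """
    counts = Counter(observed_labels)
    total = len(observed_labels)

    def predictive(y):
        return (theta * base_prob(y) + counts.get(y, 0)) / (theta + total)

    return predictive
```

Repeated labels simply accumulate counts, so two clusters may share a name, and unseen names retain mass proportional to the base distribution.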

In addition to allowing multiple clusters to have repeated labels, this formulation allows us to reason about unseen labels. For instance, some of the learned clusters may have no labelled training points assigned to them, and the true (unobserved) labels of those clusters may never have been encountered among the training labels. Another situation in which unseen labels come into play is with points away from any clusters, for which the identity model would allocate a new cluster with high probability. In both cases, this model gives us a principled estimate of the probability of assigning a special ‘unknown’ label.

The base measure G0 may be defined over a rudimentary language model. For this work, we adopted a geometric/negative binomial model for the string length |y|, with characters drawn uniformly from an alphabet of size A:

    G0(y) = (1 / (1 + λ)) (λ / (1 + λ))^{|y|} A^{−|y|},

where λ is the expected string length.
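The stated base measure can be computed directly. This sketch assumes the geometric length distribution is parametrised to have the given expected length, with characters uniform over the alphabet; the parametrisation is ours:

```python
def string_base_prob(y, expected_len=5.0, alphabet_size=26):
    # Geometric length model with the given mean, times a uniform
    # probability over all strings of that length.
    p_continue = expected_len / (1.0 + expected_len)
    length_prob = (1.0 - p_continue) * p_continue ** len(y)
    return length_prob * alphabet_size ** (-len(y))
```

Summing this over all strings of a fixed length recovers the geometric length probability, so the measure normalises over the whole string space.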

3.2 Label Likelihood

In the simplest case, we could consider p(ỹ_n | y_{z_n}) = 1[ỹ_n = y_{z_n}], i.e. noiseless labels. Although straightforward to interpret and implement, this could make inference highly unstable whenever there were conflicting labels for an identity. Moreover, in our application, the labels would be provided by a human user, who may not have perfect knowledge of the target person’s true name or its spelling, for example.

Therefore, we incorporate a label noise model, which can gracefully handle conflicts and mislabelling. We assume observed labels are noisy completely at random (NCAR) [12, §II-C], with a fixed error rate ε: (Footnote 4: The ‘true’ label likelihood is random due to its dependence on the unobserved prior G. We thus define the likelihood below as its posterior expectation given the known identity labels Y. See Appendix 0.B for details.)

    p(ỹ_n | y_{z_n}) = (1 − ε) 1[ỹ_n = y_{z_n}] + ε (L̂(ỹ_n) / (1 − L̂(y_{z_n}))) 1[ỹ_n ≠ y_{z_n}].

Intuitively, an observed label, ỹ_n, agrees with its identity’s assigned label, y_{z_n}, with probability 1 − ε. Otherwise, it is assumed to come from a modified label distribution, in which we restrict and renormalise L̂ to exclude y_{z_n}. Here we use L̂ in the error distribution instead of G0 to reflect that a user is likely to mistake a person’s name for another known name, rather than for a completely random string.

3.3 Label Prediction

For label prediction, we are only concerned with the true, noiseless labels, y. The predictive distribution for a single new sample is given by

    p(y_{N+1} = l | …) = Σ_{i : y_i = l} p(z_{N+1} = i | …) + p(z_{N+1} = I + 1 | …) L̂(l).

The sum in the first term is the probability of the sample being assigned to any of the existing identities that have label l, while the last term is the probability of instantiating a new identity with that label.
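Combining cluster-assignment probabilities with cluster labels gives the name predictive. A hedged sketch, where by our own convention the last entry of `assign_probs` is the probability of a brand-new identity:

```python
def predict_label_prob(assign_probs, cluster_labels, L_hat, candidate):
    """p(name = candidate) for a new face.

    assign_probs: probability of joining each existing identity, with the
        last entry being the probability of a brand-new identity
    cluster_labels: the name attached to each existing identity
    L_hat: the marginalised label predictive over names
    """
    existing = sum(p for p, lab in zip(assign_probs[:-1], cluster_labels)
                   if lab == candidate)
    return existing + assign_probs[-1] * L_hat(candidate)
```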

4 Evaluation

One of the main strengths of the proposed model is that it creates a single rich representation of the known world, which can then be queried from various angles to obtain distinct insights. In this spirit, we designed three experimental setups to assess different properties of the model: detecting whether a person has been seen before (outlier detection), recognising faces as different identities in a sequence of frames (clustering, unsupervised) and correctly naming observed faces by generalising sparse user annotations (semi-supervised learning).

In all experiments, we used celebrity photographs from the Labeled Faces in the Wild (LFW) database [14].

We have implemented inference via Gibbs Markov chain Monte Carlo (MCMC) sampling, whose conditional distributions can be found in Appendix 0.C, and we run multiple chains with randomised initial conditions to better estimate the variability in the posterior distribution. For all metrics evaluated on our model, we report the estimated 95% highest posterior density (HPD) credible intervals over pooled samples from 8 independent Gibbs chains, unless stated otherwise.

4.1 Experiment 1: Unknown Person Detection

In our first set of experiments, we study the model’s ability to determine whether or not a person has been seen before. This key feature of the proposed model is evaluated based on the probability of an observed face not corresponding to any of the known identities, as given by Eq. 7. In order to evaluate purely the detection of unrecognised faces, we constrained the model to a single context and set the label model aside.

This task is closely related to outlier/anomaly detection. In particular, our proposed approach mirrors one of its common formulations, involving a mixture of a ‘normal’ distribution, typically fitted to some training data, and a flatter ‘anomalous’ distribution [7, §7.1.3]. (Footnote 6: The predictive distribution of the face features for new identities is a wide Student’s t.)

We selected the 19 celebrities with at least 40 pictures available in LFW and randomly split them into two groups: 10 known and 9 unknown people. We used 27 images of each of the known people as training data and a disjoint test set of 13 images of each of the known and unknown people. We therefore have a binary classification setting with well-balanced classes at test time. Here, we ran our Gibbs sampler for 500 steps, discarding the first 100 burn-in iterations and thinning by a factor of 10, resulting in 320 pooled samples across the 8 chains.

Figure 4: Results of the unknown person detection experiment on test images

In Fig. 4(a), we visualise the agreements between maximum a posteriori (MAP) identity predictions for pairs of test images, where each prediction ranges over the known identities plus an extra ‘unknown’ identity, absent from the training set. Despite occasional ambiguous cases, the proposed model seems able to consistently group together all unknown faces, while successfully distinguishing between known identities.

As a simple baseline detector for comparison, we consider a threshold on the distance to the nearest neighbour (NN) in the face feature space [7, §5.1]. We also evaluate the decision function of a one-class SVM [25], using an RBF kernel whose bandwidth was chosen via leave-one-person-out cross-validation on the training set (roughly equivalent to thresholding a kernel density estimate of the training data with that bandwidth). We compare the effectiveness of both detection approaches using ROC curve analysis.

Figure 4(b) shows that, while all methods are highly effective at detecting unknown faces in terms of AUC, ours consistently outperforms, by a small margin, both the NN baseline and the purpose-designed one-class SVM. Taking the MAP prediction, our model achieves high detection accuracy.

4.2 Experiment 2: Identity Discovery

We then investigate the clustering properties of the model in a purely unsupervised setting, when only context is provided. We evaluate the consistency of the estimated partitions of images into identities with the ground truth in terms of the adjusted Rand index [23, 15].

Using simulations, besides having an endless source of data with ground-truth context and identity labels, we gain full control over several important aspects of the experimental setup, such as sequence lengths, rates of encounters, numbers of distinct contexts and people, and the amount of provided labels. Below we describe the simulation algorithm used in our experiments, illustrated in Fig. 5.

In our experiments we aim to simulate two important aspects of real-world identity recognition settings: 1. Context: knowing the context (e.g. location or time) makes it more likely for us to observe a particular subset of people; and 2. Temporal consistency: identities will not appear and disappear at random but instead be present for a longer duration.

To reproduce contexts, we simulate a single session of a user meeting new people. To this end we first create a number of fixed contexts and then assign identities uniformly at random to each context. For these experiments, we defined three contexts: ‘home’, ‘work’ and ‘gym’. At any time, the user knows their own context and transitions between contexts over time: independently at each frame, the user may switch context with a small probability.

To simulate temporal consistency, each person in the current context enters and leaves the camera frame as an independent binary Markov chain. As shown in Fig. 5 this naturally produces grouped observations. The image that is observed for each ‘detected’ face is sampled from the person’s pictures available in the database. We sample these images without replacement and in cycles, to avoid observing the same image consecutively.
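The simulation procedure described above (context switching plus per-person presence Markov chains) can be sketched as follows. This is our own toy version with illustrative probabilities, not the paper’s exact algorithm; for simplicity it clears the frame whenever the context switches:

```python
import random

def simulate_frames(num_frames, contexts, p_switch=0.05,
                    p_enter=0.2, p_leave=0.2, rng=random):
    """Simulate (context, visible people) pairs for a sequence of frames.

    contexts: dict mapping context name -> list of people in that context
    """
    context_ids = list(contexts)
    current = rng.choice(context_ids)
    present = set()
    frames = []
    for _ in range(num_frames):
        if rng.random() < p_switch:  # occasional context switch
            current = rng.choice(context_ids)
            present = set()
        for person in contexts[current]:
            # Independent two-state Markov chain per person: enter/leave.
            if person in present:
                if rng.random() < p_leave:
                    present.discard(person)
            elif rng.random() < p_enter:
                present.add(person)
        frames.append((current, sorted(present)))
    return frames
```

Because presence persists across frames, observations arrive in temporally coherent groups, as in Fig. 5.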

Figure 5: The simulation used in Experiment 2, showing identities coming in and out of the camera frame. Identities are shown grouped by their context (far right), and shading indicates identities present in the user’s current context.

For this set of experiments, we consider three practical scenarios:

  • Online: data is processed on a frame-by-frame basis, i.e. we extend the training set after each frame and run the Gibbs sampler for 10 full iterations

  • Batch: same as above, but enqueue data for 20 frames before extending the training set and updating the model for 200 steps

  • Offline: assume the entire sequence is available at once and iterate for 1000 steps

In the interest of fairness, the number of steps for each protocol was selected to give them roughly the same overall computation budget (ca. 200 000 frame-wise steps). In addition, we also study the impact on recognition performance of disabling the context model.

Figure 6: Identity clustering consistency. Markers on the horizontal axis indicate when new people are met for the first time.

We show the results of this experiment in Fig. 6. Clearly it is expected that, as more identities are met over time, the problem grows more challenging and clustering performance tends to decrease. Another general observation is that online processing produced much lower variance than batch or offline in both cases. The incremental availability of training data therefore seems to lead to more coherent states of the model.

Now, comparing Figs. 6(b) and 6(a), it is evident that context-awareness not only reduces variance but also shows marginal improvements over the context-oblivious variant. Thus, without hurting recognition performance, the addition of a context model enables the prediction of context at test time, which may be useful for downstream user-experience systems.

4.3 Experiment 3: Semi-Supervised Labelling

In our final set of experiments, we aimed to validate the application of the proposed label model for semi-supervised learning with sparse labels.

In the context of face identification, we may define three groups of people:

  • Acquainted: known identity with known name

  • Familiar: known identity with unknown name

  • Stranger: unknown identity

We thus selected the 34 LFW celebrities with more than 30 pictures each, and split them roughly equally into these three categories at random. From the acquainted and familiar groups, we randomly picked 15 of their images for training and 15 for testing, and we used 15 pictures of each stranger at test time only. We evaluated the label prediction accuracy as we varied the number of labelled training images provided for each acquaintance, from 1 to 15.

For baseline comparison, we evaluate nearest-neighbour classification (NN) and label propagation (LP) [35], a similarity graph-based semi-supervised algorithm. We computed the LP edge weights with the same kernel as the SVM in Section 4.1. Recall that the face embedding network was trained with a triplet loss to explicitly optimise Euclidean distances for classification [2]. As both NN and LP are distance-based, they are expected to hold an advantage over our model for classifying labelled identities.

Figure 7: Label prediction accuracy. Note that NN and LP effectively have null accuracy for the familiar and strangers groups, as they cannot predict ‘unknown’.

Figure 7(a) shows the label prediction results for the labelled identities (acquaintances). In this setting, NN and LP performed nearly identically and slightly better than ours, likely due to the favourable embedding structure. Moreover, all methods predictably become more accurate as more supervision is introduced in the training data.

More importantly, the key distinctive capabilities of our model are demonstrated in Fig. 7(b). As already discussed in Section 4.1, the proposed model is capable of detecting complete strangers, and here we see that it correctly predicts that their name is unknown. Furthermore, our model can acknowledge that familiar faces belong to different people, whose names may not be known. Neither of these functionalities is provided by the baselines, as they are limited to the closed-set identification task.

5 Conclusion

In this work, we introduced a fully Bayesian treatment of the face identification problem. Each component of our proposed approach was motivated from human intuition about face recognition and tagging in daily social interactions. Our principled identity model can contemplate an unbounded population of identities, accounting for context-specific probabilities of meeting them.

We demonstrated that the proposed identity model can accurately detect when a face is unfamiliar, and is able to incrementally learn to differentiate between new people as they are met in a streaming data scenario. Lastly, we verified that our approach to dealing with sparse name annotations can handle not only acquaintances, whose names are known, but also familiar faces and complete strangers in a unified manner—a functionality unavailable in conventional (semi-) supervised identification methods.

Here we considered a fully supervised context structure. As mentioned in Section 1, one could imagine an unsupervised approach involving global visual or non-visual signals to drive context inference (e.g. global image features, time or GPS coordinates), in addition to extensions of the face model with individual context information (e.g. clothing, speech). Yet another interesting research direction is to explicitly consider time dependence, e.g. by endowing the sequence of latent contexts with a hidden Markov model-like structure.



This work was partly supported by CAPES, Brazil (BEX 1500/2015-05).


  • [1] Labeled faces in the wild: A survey. In: Kawulok, M., Celebi, M.E., Smolka, B. (eds.) Advances in Face Detection and Facial Image Analysis, pp. 189–248. Springer (2016).
  • [2] Amos, B., Ludwiczuk, B., Satyanarayanan, M.: OpenFace: A general-purpose face recognition library with mobile applications. Tech. Rep. CMU-CS-16-118, CMU School of Computer Science (2016)
  • [3] Anguelov, D., Lee, K.c., Gökturk, S.B., Sumengen, B.: Contextual identity recognition in personal photo albums. In: CVPR 2007. pp. 1–7 (2007).
  • [4] Betancourt, A., Morerio, P., Regazzoni, C.S., Rauterberg, M.: The evolution of first person vision methods: A survey. IEEE Transactions on Circuits and Systems for Video Technology 25(5), 744–760 (may 2015).
  • [5] Blei, D.M., Griffiths, T.L., Jordan, M.I.: The nested Chinese restaurant process and Bayesian nonparametric inference of topic hierarchies. Journal of the ACM 57(2) (jan 2010).
  • [6] Bouveyron, C., Girard, S.: Robust supervised classification with mixture models: Learning from data with uncertain labels. Pattern Recognition 42(11), 2649–2658 (2009).
  • [7] Chandola, V., Banerjee, A., Kumar, V.: Anomaly detection: A survey. ACM Computing Surveys 41(3), 1–58 (jul 2009).
  • [8] Chapelle, O., Schölkopf, B., Zien, A. (eds.): Semi-Supervised Learning. MIT Press (2006)
  • [9] Choi, J.Y., De Neve, W., Ro, Y.M., Plataniotis, K.: Automatic face annotation in personal photo collections using context-based unsupervised clustering and face information fusion. IEEE Transactions on Circuits and Systems for Video Technology 20(10), 1292–1309 (oct 2010).
  • [10] Dai, A., Storkey, A.J.: The supervised hierarchical Dirichlet process. IEEE Transactions on Pattern Analysis and Machine Intelligence 37(2), 243–255 (2015).
  • [11] Ferguson, T.S.: A Bayesian analysis of some nonparametric problems. The Annals of Statistics 1(2), 209–230 (1973).
  • [12] Frénay, B., Verleysen, M.: Classification in the presence of label noise: A survey. IEEE Transactions on Neural Networks and Learning Systems 25(5), 845–869 (2014).
  • [13] Gallagher, A.C., Chen, T.: Using context to recognize people in consumer images. IPSJ Transactions on Computer Vision and Applications 1, 115–126 (2009).
  • [14] Huang, G.B., Ramesh, M., Berg, T., Learned-Miller, E.: Labeled Faces in the Wild: A database for studying face recognition in unconstrained environments. Tech. rep., University of Massachusetts Amherst (2007)
  • [15] Hubert, L., Arabie, P.: Comparing partitions. Journal of Classification 2(1), 193–218 (Dec 1985).
  • [16] Ioffe, S.: Probabilistic linear discriminant analysis. In: Computer Vision – ECCV 2006. vol. 3954 LNCS, pp. 531–542 (2006).
  • [17] Jafri, R., Arabnia, H.R.: A survey of face recognition techniques. Journal of Information Processing Systems 5(2), 41–68 (2009).
  • [18] Le, N., Bredin, H., Sargent, G., India, M., Lopez-Otero, P., Barras, C., Guinaudeau, C., Gravier, G., da Fonseca, G.B., Freire, I.L., Patrocínio Jr., Z., Guimarães, S.J.F., Martí, G., Morros, J.R., Hernando, J., Docio-Fernandez, L., Garcia-Mateo, C., Meignier, S., Odobez, J.M.: Towards large scale multimedia indexing: A case study on person discovery in broadcast news. In: CBMI 2017. pp. 18:1–18:6. ACM (2017).
  • [19] Li, P., Fu, Y., Mohammed, U., Elder, J.H., Prince, S.J.D.: Probabilistic models for inference about identity. IEEE Transactions on Pattern Analysis and Machine Intelligence 34(1), 144–157 (2012).
  • [20] Perdikis, S., Leeb, R., Chavarriaga, R., Millán, J.d.R.: Context-aware learning for finite mixture models (2015).
  • [21] Pitman, J., Yor, M.: The two-parameter Poisson–Dirichlet distribution derived from a stable subordinator. The Annals of Probability 25(2), 855–900 (1997).
  • [22] Prince, S.J., Elder, J.H.: Probabilistic linear discriminant analysis for inferences about identity. In: Proceedings of the 11th IEEE International Conference on Computer Vision (ICCV 2007). IEEE (2007).
  • [23] Rand, W.M.: Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association 66(336), 846–850 (Dec 1971).
  • [24] Rodríguez, A., Dunson, D.B., Gelfand, A.E.: The nested Dirichlet process. Journal of the American Statistical Association 103(483), 1131–1154 (Sep 2008).
  • [25] Schölkopf, B., Platt, J.C., Shawe-Taylor, J., Smola, A.J., Williamson, R.C.: Estimating the support of a high-dimensional distribution. Neural Computation 13(7), 1443–1471 (Jul 2001).
  • [26] Schroff, F., Kalenichenko, D., Philbin, J.: FaceNet: A unified embedding for face recognition and clustering. In: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015). pp. 815–823. IEEE (Jun 2015).
  • [27] Taigman, Y., Yang, M., Ranzato, M., Wolf, L.: DeepFace: Closing the gap to human-level performance in face verification. In: Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014). pp. 1701–1708. IEEE (Jun 2014).
  • [28] Tapaswi, M., Bäuml, M., Stiefelhagen, R.: “Knock! Knock! Who is it?” Probabilistic person identification in TV-series. In: CVPR 2012. pp. 2658–2665 (2012).
  • [29] Teh, Y.W.: Dirichlet process. In: Sammut, C., Webb, G.I. (eds.) Encyclopedia of Machine Learning, pp. 280–287. Springer US (2010).
  • [30] Teh, Y.W., Jordan, M.I., Beal, M.J., Blei, D.M.: Hierarchical Dirichlet processes. Journal of the American Statistical Association 101(476), 1566–1581 (Dec 2006).
  • [31] Torralba, A., Murphy, K.P., Freeman, W.T., Rubin, M.A.: Context-based vision system for place and object recognition. In: Proceedings Ninth IEEE International Conference on Computer Vision (ICCV 2003). vol. 1, pp. 273–280 (2003).
  • [32] Wang, X., Ma, X., Grimson, W.E.L.: Unsupervised activity perception in crowded and complicated scenes using hierarchical Bayesian models. IEEE Transactions on Pattern Analysis and Machine Intelligence 31(3), 539–555 (2009).
  • [33] Zhang, L., Chen, L., Li, M., Zhang, H.: Automated annotation of human faces in family albums. In: MULTIMEDIA ’03. pp. 355–358. ACM Press (2003).
  • [34] Zhao, M., Teo, Y.W., Liu, S., Chua, T.S., Jain, R.: Automatic person annotation of family photo album. In: CIVR 2006. pp. 163–172. Springer, Berlin, Heidelberg (2006).
  • [35] Zhu, X., Ghahramani, Z.: Learning from labeled and unlabeled data with label propagation. Tech. Rep. CMU-CALD-02-107, Carnegie Mellon University (2002)

Appendix 0.A Random Measure Interpretation

While the exposition in the main text considers the explicit representation of the nonparametric model in terms of the mixture weights and atom locations, here we also provide the interpretation in terms of random measures:


Note that, under this perspective,

Now, taking the limit mentioned in footnote 2, under the corresponding assumptions on the base measures and concentration parameters, we obtain the following nested-hierarchical Dirichlet process:


replacing Eqs. 0.A.3 to 0.A.6.

Appendix 0.B Label Likelihood

Given the label prior, we can formulate the following likelihood model:


Note that Eq. 0.B.1 depends on the unobserved label prior. Fortunately, we are able to marginalise it out to obtain the following convenient result:


where the first factor is defined as in Eq. (14) of the main paper. This straightforward equivalence arises from the fact that posterior weights in a DP follow a Dirichlet distribution and are therefore neutral: after removing one weight, the proportions between the remaining ones are independent of its value, and they simply follow a Dirichlet distribution with that component discarded.
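The neutrality property above can be verified numerically. The following sketch (an illustration, not part of the paper's implementation) samples Dirichlet weight vectors, discards one component, renormalises, and checks that the result matches the reduced Dirichlet distribution:

```python
import numpy as np

# Monte Carlo check of Dirichlet neutrality: if (w_1, ..., w_K) ~ Dir(a_1, ..., a_K),
# then after discarding w_1 and renormalising, (w_2, ..., w_K) / (1 - w_1)
# is distributed as Dir(a_2, ..., a_K).
rng = np.random.default_rng(0)
alpha = np.array([2.0, 1.0, 3.0, 4.0])

samples = rng.dirichlet(alpha, size=200_000)
renorm = samples[:, 1:] / (1.0 - samples[:, :1])  # drop first weight, renormalise

empirical_mean = renorm.mean(axis=0)
expected_mean = alpha[1:] / alpha[1:].sum()  # mean of Dir(a_2, ..., a_K)
```

Comparing `empirical_mean` with `expected_mean` confirms the first-moment agreement; the same holds for higher moments.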

We can then formulate an alternative likelihood based on this marginalisation:


Although the marginalisation in Eq. 0.B.2 breaks the conditional independence of the true component labels, it gives us a simple, tractable form for the likelihoods of observed labels.

The simpler case of uniform label noise, discussed in [12], could not easily be extended to our setting with infinite support, as it would result in an improper likelihood.

Appendix 0.C Gibbs Sampler Conditionals

Joint posterior:

0.C.1 Global Weights

As suggested in [30], we augment our Markov chain state with the weights of the global DP, such that the context DPs become conditionally independent and can be sampled in parallel:


where the first quantity is the current number of distinct identities, the second is the weight that the global DP assigns to its base measure, and the auxiliary variables count the total number of ‘tables’ (context-wise clusters) serving each ‘dish’ (global cluster), in the Chinese restaurant analogy [30].

Finally, to sample the table counts conditioned on the global weights and on the identity and context assignments, we use a scheme similar to the one presented in [10]:


where the auxiliary variables are sampled uniformly from the unit interval.
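A standard way to simulate such table counts is to run the Chinese restaurant process sequentially: the i-th customer opens a new table with probability proportional to the concentration. The sketch below (function name and exact parameterisation are illustrative, not taken from the paper) implements this with uniform draws:

```python
import numpy as np

def sample_table_count(n_customers: int, concentration: float, rng) -> int:
    """Sample the number of 'tables' occupied by n customers in a Chinese
    restaurant process: customer i (0-indexed) opens a new table with
    probability concentration / (concentration + i)."""
    count = 0
    for i in range(n_customers):
        if rng.random() < concentration / (concentration + i):
            count += 1
    return count

rng = np.random.default_rng(0)
m = sample_table_count(10, 1.0, rng)
```

With at least one customer the count is always between 1 and the number of customers, and a very large concentration seats every customer at their own table.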

0.C.2 Identity Assignments

For the unlabelled instances, we have


where the first factor is the prior predictive distribution of the observations. The Chinese restaurant franchise conditionals are given by [30]


where the count denotes the number of samples in the corresponding context assigned to each cluster.
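The prior part of these conditionals (ignoring the observation likelihood, which would multiply each term) can be sketched as follows; the function name and the convention that the last entry of the global weights holds the unrepresented mass are illustrative assumptions:

```python
import numpy as np

def crf_assignment_probs(counts, alpha, global_weights):
    """Chinese restaurant franchise prior conditional for one assignment:
    existing cluster k is chosen with probability proportional to
    counts[k] + alpha * global_weights[k]; the last entry of global_weights
    is the mass reserved for opening a brand-new cluster."""
    counts = np.asarray(counts, dtype=float)
    scores = np.empty(len(global_weights))
    scores[:-1] = counts + alpha * np.asarray(global_weights[:-1])
    scores[-1] = alpha * global_weights[-1]  # probability mass of a new cluster
    return scores / scores.sum()

probs = crf_assignment_probs([3, 1], alpha=1.0, global_weights=[0.5, 0.3, 0.2])
```

In the full sampler, each of these prior terms would be multiplied by the corresponding (posterior) predictive likelihood of the observed face feature before normalising.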

The global weights are updated whenever an instance gets assigned to a new cluster, by splitting the unrepresented mass according to the stick-breaking process: sample a Beta-distributed stick proportion, then use it to split off the new cluster’s weight from the remaining mass [30].
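This stick-breaking split can be sketched as below; the function name and the convention that the last entry of the weight vector holds the unrepresented mass are assumptions for illustration:

```python
import numpy as np

def split_new_cluster_weight(global_weights, gamma, rng):
    """When an instance opens a new cluster, split the unrepresented mass
    (stored here as the last entry) into a weight for the new cluster and a
    remainder, via a Beta(1, gamma) stick-breaking draw [30]."""
    b = rng.beta(1.0, gamma)
    w = list(global_weights)
    remainder = w.pop()              # mass currently reserved for new clusters
    w.append(b * remainder)          # weight of the newly created cluster
    w.append((1.0 - b) * remainder)  # mass left for future new clusters
    return w

rng = np.random.default_rng(0)
w = split_new_cluster_weight([0.6, 0.4], gamma=1.0, rng=rng)
```

The update leaves the weights summing to one and only ever touches the unrepresented mass.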

For labelled instances, there is an additional term accounting for the likelihood of the observed label:


0.C.3 Contexts


The context posterior predictive distribution is


where the count denotes the number of frames assigned to each context, excluding the current frame.
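Assuming a symmetric Dirichlet prior over a finite set of contexts (a simplifying assumption for this sketch; the function name is likewise illustrative), the posterior predictive takes the familiar counts-plus-concentration form:

```python
import numpy as np

def context_predictive(frame_counts, concentration):
    """Posterior predictive over contexts for one held-out frame: the
    probability of context c is proportional to the number of other frames
    assigned to c plus the Dirichlet concentration."""
    counts = np.asarray(frame_counts, dtype=float)
    scores = counts + concentration
    return scores / scores.sum()

p = context_predictive([4, 1, 0], concentration=0.5)
```

Contexts that currently hold more frames receive proportionally higher predictive probability, while the concentration keeps empty contexts reachable.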

The conditional distribution for the identities in a given frame can be computed via sequential application of Eq. 0.C.4:


where the inner index runs over observations within a single frame. Note that, due to exchangeability of the HDP, the order of iteration is inconsequential.

0.C.4 Labels

Consider the number of identities carrying a given label, excluding the identity under consideration, and the set of indices of labelled observations assigned to that identity. We can then write the Gibbs identity label predictive as


where the support is the set of all known labels, whether or not they are allocated to components. Additionally, recall that the label likelihood is


The probability of assigning a label to an identity, given the remaining identity labels, can be computed as


where .

First, let us consider the probability of assigning a known label to an identity:


where the approximation assumes that the label counts dominate the concentration parameters, which is generally the case for sensible choices of the hyperparameters.

We can analogously estimate the probability of assigning an unknown label to an identity as follows:


noting that the corresponding label count vanishes for unknown labels, and using an approximation similar to that in Eq. 0.C.12.

Finally, combining Eqs. 0.C.12 and 0.C.13, we can summarise


where the modified proportionality symbol means ‘approximately proportional to’.

0.C.5 Face Feature Parameters


which will be analytically tractable whenever the observation likelihood and the prior over face feature parameters form a conjugate pair.
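As a minimal illustration of such conjugacy (a generic Normal-Normal example, not the paper's actual PLDA-based face model; all names here are hypothetical), the posterior over a cluster's mean under a Gaussian observation model with known variance has a closed form:

```python
import numpy as np

def posterior_mean_params(x, prior_mean, prior_var, obs_var):
    """Conjugate Normal-Normal update for a cluster's mean parameter, assuming
    a Gaussian observation model with known variance: precisions add, and the
    posterior mean is a precision-weighted average of prior mean and data."""
    n = len(x)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(x) / obs_var)
    return post_mean, post_var

x = np.array([1.0, 2.0, 3.0])
mu, var = posterior_mean_params(x, prior_mean=0.0, prior_var=10.0, obs_var=1.0)
```

The posterior mean lies between the prior mean and the sample mean, and the posterior variance shrinks as more faces are assigned to the cluster.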