Families In Wild Multimedia (FIW-MM): A Multi-Modal Database for Recognizing Kinship

by Joseph P. Robinson, et al.

Recognizing kinship - a soft biometric with vast applications - in photos has piqued the interest of many machine-vision researchers. The large-scale Families In the Wild (FIW) database promoted the problem by supporting annual kinship-based vision challenges that saw consistent performance improvements. We have now begun to approach performance levels acceptable for practical use in image-based systems - something unforeseeable a decade ago. However, biometric systems can benefit from multi-modal perspectives, as the information contained in multimedia can add to and complement that of still images. Thus, we aim to narrow the gap from research to reality by extending FIW with multimedia data (i.e., video, audio, and contextual transcripts). Specifically, we introduce the first large-scale dataset for recognizing kinship in multimedia, the FIW in Multimedia (FIW-MM) database. We utilize automated machinery to collect, annotate, and prepare the data with minimal human input and no financial cost. This large-scale multimedia corpus allows problem formulations to follow more realistic, template-based protocols. We show significant improvements in benchmarks for multiple kin-based tasks when additional media types are added. Experiments provide insights by highlighting edge cases to inspire future research and areas of improvement. Emphasis is put on short- and long-term research directions, with the overarching intent to increase the potential of systems built to automatically detect kinship in multimedia. Furthermore, we expect the resource to attract a broader range of researchers, with interests spanning recognition tasks, generative modeling, speech understanding, and nature-based narratives.






I Introduction

Face recognition (FR) has progressed in ways unimaginable a decade ago [32, 36]. This holds true for specific FR problems such as visual kinship recognition, where the problem is to detect blood relatives from facial cues. The seminal work [19] in visual kinship recognition introduced the first image dataset available to the research community. Increasingly larger and more challenging datasets have been released (e.g., [47, 44]), only to be matched by vision researchers proposing models with increasingly greater performance [45].

                 ----- 1-generation -----   ---------- 2-generation ----------   -------- 3-generation --------   ------- 4-generation -------
                 BB      SS      SIBS       FD      FS      MD      MS           GFGD   GFGS   GMGD   GMGS        GGFGD  GGFGS  GGMGD  GGMGS    Total
# Subjects       883     824     1,542      1,914   1,954   1,892   2,041        426    463    483    526         39     30     45     37       13,099
# Families       345     334     472        666     676     665     670          154    174    178    191         9      10     11     10       953
# Still Images   40,386  31,315  46,188     83,157  89,157  57,494  63,116      8,007  6,775  6,373  6,686       408    410    798    797      441,067
# Clips          123     79      81         155     134     147     138          16     18     25     15          2      4      0      0        937
# Pairs          641     621     1,138      1,151   1,253   1,177   1,207        263    280    292    324         28     18     36     28       8,457
TABLE I: Database statistics. Types are split based on the generational span of the relationship; column labels follow the standard FIW relationship types.

In parallel to FR, face- and speaker-based problems with audio-visual data have grown popular, resulting in significant research (e.g., speaker separation [17], speaker identification [39, 13], cross-modal audio-to-visual matching and vice versa [38], emotion recognition [4], along with several others [64, 63]). Furthermore, the sudden surge of attention paid to audio-visual data has provided a platform for experts in different corners of biometrics to share recipes, combine knowledge, and develop solutions that leverage multi-domain knowledge to build more complex and complete models. The addition of multimedia for kin-based recognition can not only enhance the current capabilities of state-of-the-art (SOTA) systems but will also lead to new, interesting problems and studies that close the gap between research and reality by aligning performance metrics in research with practical use-cases.

Robinson et al. introduced a large-scale image dataset to recognize family members in still imagery, the FIW database [48, 49]. FIW, containing 1,000 families, each with an average of 13 family photos, 5 family members, and 26 faces, has challenged researchers with various views of still-image kin-based tasks. Myriad methods demonstrated the ability of machinery to use still images to determine kinship in a pair or group of subjects (Section V). Nonetheless, only so much information can be extracted from still images. The dynamics of faces in video data (e.g., mannerisms expressed across frames) contain additional information, and audio as well as text transcripts (i.e., contextual data describing the speech and other sounds) can widen the range of cues we model to discriminate between relatives and non-relatives. We propose the first large-scale multimedia dataset for kinship recognition. Specifically, we leveraged the familial data of the FIW image database to build upon the existing resource [48, 49], using the still images of FIW and adding video, audio, visual-audio, and text data of subjects. Note that video, audio, and visual-audio differ in that, in the latter, the speaking face and the spoken speech are aligned, while the others are independent, unaligned clips. Following its predecessor, we dubbed the database FIW-MM (Fig. 1). En route to bridging research and reality, we follow the protocols of FIW [45], but now with the capacity to be template-based (i.e., per the National Institute of Standards and Technology (NIST) in [37]).

The contributions of the FIW-MM dataset to the FR, biometric, anthropology, and MM communities are the following:

  • Built multimedia database: a large-scale dataset for kinship recognition - FIW, made up of still face images, was extended to include media of different modalities: video tracks, audio segments, visual-audio clips, and text transcripts. We introduce the extended database, called FIW-MM - a completely restructured multimedia family database that better encapsulates the multimedia samples, along with metadata at the subject and instance levels.

  • Recorded protocols and benchmarks: a new paradigm for kinship recognition that is suited for multimedia data and a step towards deployment in real-world settings. Specifically, the problem has been modified from instance-based to template-based. We are the first to measure kinship recognition capabilities with a large-scale, multimodal, template-based collection. Analysis shows the impact the different modalities have on performance. We do this in a systematic, controlled manner, such that the specific benefits of each modality are clearly revealed. Reproducible experiments with emphasis on the edge cases for different modalities are treated as essential.

FIW-MM - the data, code, trained models, and more - will soon be available online. Under the assumption that a wide range of researchers could be attracted to the kin-based domain, the resource is accessible in various formats, with scripts to reproduce this paper in its entirety, and beyond (i.e., a data-exploration dashboard and a data card).

Fig. 2: Workflow for labeling data. (Bottom-left) Folder structure of FIW-MM. For each of the 1,000 families, there is a set of members. From there, the template of a member consists of all media available.

II The FIW-MM Database

The underlying factor that inspired our automated labeling pipeline in Fig. 2 is that we had name labels for several subjects, with one-to-many face samples each (i.e., the still-face collection of FIW). Our goal was to leverage the visual evidence, for which we have ground truth, to annotate multimedia data by when and where in a video events of the face and/or the voice of the subject of interest take place. Particularly, we curate FIW by parsing each video into face tracks cropped from videos in one folder, speech instances in another, visual-audio in yet another, and the spoken words transcribed to text in a fourth. Accompanying these are recorded time-stamps, such that overlap between samples is clearly identifiable. These folders exist alongside the original image folders marked per Family ID (FID), then the relative Member ID (MID). Thus, each family folder contains MID folders - each with separate folders for face images, face tracks, speech samples, transcribed conversations, and face tracks of the member actively speaking. We also include the relationship matrix and the gender of each MID. A depiction of the structure of FIW-MM is shown at the bottom-left of Fig. 2.

Let us first review the specifications, then walk through the pipeline, describing the different branches of data flow. Finally, we review the few refinement strategies that, as simple as they may be, were surprisingly effective at reducing the required manual input from an already small amount to nearly zero.

II-A Specifications

Our goal was to extend FIW in the amount of data, the types of modalities, and such that the settings of experiments become template-based. Note that FIW provides name metadata and face images for an average of over 13 individuals from each of 1,000 families [47]. With this, we aim to accumulate additional paired data: specifically, paired multimedia data for members of 150 families - at least 2 members each. Provided complete access to FIW for research purposes (https://web.northeastern.edu/smilelab/fiw/download.html), we leveraged this data as the knowledge needed to build FIW-MM with minimal manual labor and zero financial cost. For this, we employed SOTA models and algorithms in speech and vision throughout the data pipeline. We next step through the pipeline to describe the careful consideration put into each module, along with the uses of several feedback refinement loops.

II-B Data pipeline

Inspired by previous work, such as FIW [49] (i.e., labeling families) and VoxCeleb [39] (i.e., labeling audio-visual data), aspects of both were merged as the basis of our pipeline design - in essence, one of the three branches that make up our data-collection pipeline. Specifically, the merging of the aforementioned pipelines makes up the audio-visual branch, which is processed end-to-end and in parallel with the visual and audio branches. The notion of branches is used for clarity in the following description, as each branch is concerned with the modality for which it is named.

The following subsections cover the details of the pipeline built to acquire FIW-MM as the sequence of modules it grew into - the steps are covered in order of processing (i.e., from left to right in Fig. 2). Philosophically, all data was assumed to be of type non-match (i.e., zero multimedia data to start). Then, various checkpoints throughout the branches add data found to match with high confidence. Under the pretext that FIW-MM will be a resource used by experts from different data domains, all matching data points are saved (i.e., visual tracks, audio, and audio-visual). Nonetheless, overlapping segments are clearly annotated such that repeated samples can be removed (i.e., visual-audio will also be present in the sets containing just visual and just audio). At the same time, if one, the other, or both modalities are of interest, the maximum number of data points is readily available. Note that no data points are repeated in the sets created for the included benchmarks.

Selecting candidate names and collecting video URLs. FIW has still-image data for 1,000 families with over 13,000 family members (i.e., subjects) in total. From the families, we chose a subset of 150 for which 2-5 members appeared in 1-3 YouTube videos, with a total of 500 subjects in 605 videos. The importance of this step was in assuring that there were at least 2 members per family with multimedia; otherwise, the added modalities would have no basis for matching. Also, the ethnicity of these 500 subjects was manually collected at this time (Appendix B). Video URLs were stored under unique Video IDs (VIDs). Generally speaking, the videos were either interview-style (e.g., with a news anchor, or alone in a plain room answering scripted questions) or face-time clips (i.e., self-recordings of the subject speaking directly to the camera, as is the normal case when face-timing).

Downloading videos. Our scripts used PyPI's youtube-dl to download YouTube videos by URL, which were then archived under the corresponding VID. Along with the multimedia (MKV), time-stamped captions were also scraped when available - later parsed as the transcribed words spoken by the subject. Alongside the text, the MKV files were processed into three files: a copy of the original MKV for the audio-visual branch, and then an audio-only (WAV) and a visual-only (MP4) file extracted with ffmpeg. From the start, all video data was assumed to be a constant 25 FPS.
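The MKV-to-WAV/MP4 split described above can be sketched as follows. The ffmpeg flags shown (`-vn`, `-an`, `-acodec pcm_s16le`, `-ar`, `-r`) are standard, but the helper name and the 16 kHz sample rate are illustrative assumptions, not necessarily the pipeline's actual settings:

```python
def build_extraction_cmds(vid, mkv_path):
    """Build the two ffmpeg commands that split a downloaded MKV into
    an audio-only WAV and a visual-only MP4 (hypothetical helper)."""
    # -vn drops the video stream; pcm_s16le at 16 kHz is a common
    # choice for downstream speech models (assumed here).
    wav_cmd = ["ffmpeg", "-y", "-i", mkv_path, "-vn",
               "-acodec", "pcm_s16le", "-ar", "16000", vid + ".wav"]
    # -an drops the audio stream; -r 25 enforces the assumed 25 FPS.
    mp4_cmd = ["ffmpeg", "-y", "-i", mkv_path, "-an",
               "-r", "25", vid + ".mp4"]
    return wav_cmd, mp4_cmd
```

The command lists can then be executed with, e.g., `subprocess.run`.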

Event recording. Before passing data down any branch, blank (sequential) tabular records were created for the duration of the video, with tuples as index (i.e., time and frame number) - one record per branch (i.e., audio, visual, and audio-visual event records). These are essential for the refinement processes later activated via a feedback mechanism. In essence, the mutual information across records at a given instance (i.e., frame or time-stamp) is used to infer matches, contradictions, and non-matches across modalities (i.e., a means to propagate labels across modalities). This usage of set theory helps both to validate true matches and to filter out non-matches. Others have also leveraged logic and sets to parse videos [23]; however, as opposed to high-level semantics such as the types of objects present, we reference the output of simpler tasks (e.g., face or no face, speech or non-speech, same or different face or voice) - this raises the chance-level baseline and thus reduces low-confidence decisions.
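The cross-record bookkeeping can be illustrated with a minimal sketch. The function name and the representation of event records as sets of frame indices are assumptions made for illustration:

```python
def propagate_labels(visual_frames, audio_frames, av_frames):
    """Cross-reference the per-branch event records (here, sets of frame
    indices where the subject's face, voice, or synced face+voice was
    detected). Returns confirmed matches and flagged contradictions."""
    visual, audio, av = set(visual_frames), set(audio_frames), set(av_frames)
    # An audio-visual event should be a subset of both unimodal records.
    confirmed = av & visual & audio
    # AV events unsupported by one of the branches need review.
    contradictions = av - (visual & audio)
    return confirmed, contradictions
```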

Visual branch. We first split a video into scenes using two global measures, under the assumption that, statistically, neighboring frames will match as closely as 90%: HSV (i.e., color) and local binary pattern [3] (i.e., texture) features were extracted and used to parameterize two probabilistic representations per frame, which were compared using KL-divergence against a threshold of 0.1 [51]. This produces a set of shots $S_v$ for each of the $V$ videos, i.e., $S_v$ represents all shots detected in the $v$-th video. From there, the first frame, the last frame, and the in-between frame closest to the centroid (in color and texture) of the entire track were selected (i.e., the beginning, the end, and the assumed best representation of the respective clip). The three frames per clip were then passed through an MTCNN face detector [73], and clips with no faces detected in at least one of these frames were discarded. Furthermore, the set of clips was filtered by comparing detected faces to the ground-truth faces of FIW. Again, clips with no matches were discarded. Note that this was a means to quickly drop unwanted data. To compare faces, faces were encoded with ArcFace via the architecture, training details, and matcher in [15]. Specifically,

$$ s_{ij} = f(\boldsymbol{x}_i, \boldsymbol{y}_j), \qquad (1) $$

where the matcher $f(\cdot,\cdot)$ compared the $i$-th detected face to the $j$-th FIW face encoding [26]. Note that it is assumed that $\boldsymbol{x}_i$ and $\boldsymbol{y}_j$ are from different sets (i.e., labeled samples of a subject from FIW and face detections in the new video data). The matcher in Eq. 1 was set as cosine similarity, the closeness of the L2-normalized [57] encodings, thresholded by $\gamma$. At this stage, $\gamma$ was manually set for high recall. In fact, this matching process (including the usage of ArcFace to encode faces) is the standard matcher we use throughout.
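A minimal sketch of the matcher just described: cosine similarity over L2-normalized encodings, thresholded by a recall-oriented value. The threshold here is a placeholder, not the value used in the pipeline:

```python
import math

def cosine_match(x, y, gamma=0.5):
    """Compare two face encodings (lists of floats) with cosine
    similarity; gamma is an illustrative recall-oriented threshold."""
    nx = math.sqrt(sum(v * v for v in x))
    ny = math.sqrt(sum(v * v for v in y))
    score = sum(a * b for a, b in zip(x, y)) / (nx * ny)
    return score, score >= gamma
```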

Next, MTCNN outputs were generated for all frames in the clips, saving the bounding-box coordinates, fiducials (i.e., 5 points), and confidence scores. Then, only continuous face tracks were kept: the ROI was set to the previous location of the face, and IoU was calculated frame-by-frame, with each value required to surpass a threshold of 0.3. Finally, up to 25 faces were sampled uniformly from each track (i.e., as opposed to choosing the top faces based on pose information, which yielded redundancy in similar frontally posed faces). Each sample was then compared with each of the labeled faces (i.e., producing a score matrix); the mean across samples was calculated to produce a single score per labeled face, at which point the value at the 25th percentile was compared to $\gamma$. The fusion of scores was done in such a way as to consider all existing labeled faces equally while avoiding giving weight to the few low-quality detections. Through this process, and with the aid of the SOTA techniques mentioned throughout, this step alone yielded many face tracks matched with high confidence.
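The IoU-based track-continuity check can be sketched as follows, assuming boxes in (x1, y1, x2, y2) form; the helper names are illustrative:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_continuous(track, thresh=0.3):
    """Keep a face track only if every consecutive pair of detections
    overlaps by more than the IoU threshold (0.3 in the paper)."""
    return all(iou(a, b) > thresh for a, b in zip(track, track[1:]))
```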

Audio processing. Raw audio data is extracted from the videos and saved as wave-files. We first set out to perform speaker diarization on each video: we aimed to have a record indicating the presence of speech, from which changes in speaker are marked, and, ultimately, the number of speakers in the video along with who speaks when. Note that we assume no audio labels; thus, the speakers are arbitrarily tagged per video.

Put differently, the first purpose of this branch is to find the number of speakers per video, with predictions of who spoke when based on the detected speech: a speech detector determined the when, and the different speech segments were then clustered to determine the number of speakers and, thus, which speech segment to assign to which speaker (i.e., the who). The former was implemented using PyPI's SpeechRecognition (https://github.com/Uberi/speech_recognition), with the latter based on models from [11]. See the supplemental material for further detail. Finally, segments were parsed and marked as $p_1, p_2, \ldots, p_K$, where $K$ is the number of speakers in a given clip. These time-stamps are later used to find the speaker of interest.
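A toy version of the clustering step, assigning arbitrary speaker tags p1, p2, ... to speech-segment encodings. The greedy strategy and fixed threshold are simplifications for illustration, not the models of [11]:

```python
import math

def tag_speakers(segment_encodings, thresh=0.7):
    """Greedily assign each speech-segment encoding to the first
    speaker whose representative encoding it matches by cosine
    similarity; otherwise open a new arbitrary tag (p1, p2, ...)."""
    def cos(a, b):
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return sum(x * y for x, y in zip(a, b)) / (na * nb)

    reps, tags = [], []  # one representative encoding per speaker
    for enc in segment_encodings:
        sims = [cos(enc, r) for r in reps]
        if sims and max(sims) >= thresh:
            tags.append("p%d" % (sims.index(max(sims)) + 1))
        else:
            reps.append(enc)
            tags.append("p%d" % len(reps))
    return tags
```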

Visual-audio branch. This branch focused on detecting when the speaker is in the field of view. Thus, the aim was to detect the boundaries in each video for which the face and speech are in sync. An intuitive way to do this is to relate the detected faces and their lip movement with the audio - which is at the core of many speaker identification methods in multimedia [77]. To acquire this, videos were processed using SyncNet [9] with the settings and trained weights from [30]. Our implementation output tracks by first trimming the video about the boundary of the detection, and then cropping out the faces using the detected bounding boxes extended to 130% of their size in all directions. From this, each track is spatially static, with each face detection entirely enclosed. This modification made individual tracks constant in size and location in pixel space, as opposed to producing tracks with moving coordinates to preserve the face in the field of view (i.e., the added 30% covers this). At this point, three sets of coordinates were saved (i.e., the original detection, the extended version, and the set accounting for relative offsets of the crop). Similar to the visual branch, labeled faces from FIW were then used to determine which of these tracks belonged to the subject of interest. Once filtered, all cropped tracks were manually inspected, allowing us to tag this data as ground truth. The events of this branch were marked (i.e., when and where the subject of interest is speaking).
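The static-crop trick, extending each detection to 130% of its size about its center, can be sketched as:

```python
def extend_box(box, scale=1.3):
    """Extend an (x1, y1, x2, y2) detection to `scale` times its size
    about its center, so the crop stays static while the face moves
    within it (the extra 30% absorbs the motion)."""
    cx, cy = (box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0
    hw = (box[2] - box[0]) * scale / 2.0
    hh = (box[3] - box[1]) * scale / 2.0
    return (cx - hw, cy - hh, cx + hw, cy + hh)
```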

      ----------- Train -----------   ----------- Val ----------   ---------- Test ----------
      I      F    S       V    A      I    F    S      V   A       I    F    S      V   A
T     2,976  571  16,464  290  7,217  955  190  5,458  72  3,308   972  192  5,231  91  1,775
P     571    571  3,039   47   1,843  190  190  1,334  16  789     192  192  993    23  876
G     2,475  571  13,571  244  5,581  791  190  4,538  56  2,519   800  192  4,705  69  899
T     3,046  571  16,610  291  7,424  981  190  5,872  72  3,308   992  192  5,698  92  1,775
TABLE II: Task-specific counts of individuals (I), families (F), still-face images (S), video clips (V), and audio snippets (A) in the set of probes (P), gallery (G), and in total (T).

The pipeline outputs sets of face tracks, audio tracks, and audio-visual tracks of the subject of interest. End-to-end, the system refines via feedback, which allows higher-level detection at coarser points of filtering (Fig. 2). In other words, we compared the event records produced by each branch to produce the master event record, which included the following information (recall, event records are aligned per time-stamp and frame number): data points where the visible talking face is the subject of interest, which was propagated to the records from audio branch, as the intersection allowed for the set of audio segments for the subject of interest to be determined from the clusters of arbitrary speakers (i.e., as the events from audio-visual branch are a subset); any overlap in the data found in the visual branch or audio branch versus the audio-visual (i.e., allow for duplicate removal with ease).

III Problem Definitions and Protocols

The FIW-MM database is an extension of FIW [48, 49]. As such, we mimic the evaluation protocols of the most recent Recognizing Families In the Wild (RFIW) data challenge [45]. Specifically, we benchmark two kin-based tasks, verification and search & retrieval. One key difference between FIW and FIW-MM is that FIW and its protocols are uni-modal and the experiments are organized as one-shot problems. In contrast, FIW-MM contains multiple modalities and many more samples per subject (Table I). To further narrow the gap between research and reality, problems follow a template-based paradigm [37]– a first for visual kinship recognition.

Traditionally, kinship verification has been the primary focus of researchers. More recently came the emergence of searching for missing children [45], which, although more challenging, has higher practical value. Benchmarks for both tasks are included with FIW-MM; however, as opposed to the single-shot setting, FIW-MM allows for template-based [37] tasks akin to operational use-cases.

For template-based experiments, known subjects (i.e., prior knowledge of identity and family) are first enrolled in a gallery. At inference, the goal of search and retrieval is to compare an unseen probe to subjects of the gallery. The verification task compares a list of probes to individual gallery subjects (i.e., one-to-one), with the solution space of either KIN or NON-KIN; kinship identification compares the probe to the entire gallery (i.e., one-to-many), with the end result being a ranked list of family members. In all cases, at least one family member exists in the gallery, making for a closed-set recognition problem.

Specifically, a template holds all of the media for a subject (i.e., face images, videos, audio clips, and text transcripts). Hence, a template consists of samples $\{m_1, \ldots, m_M\}$, where each $m$ is an independent piece of media represented as a single encoding. For instance, a still image $I$ is encoded as $\boldsymbol{x} = \phi(I)$, where $\phi$ is a mapping to a learned feature space (i.e., $\phi: I \rightarrow \mathbb{R}^d$). The same holds for continuous face tracks in videos, which we encode as single samples by average-pooling the face encodings. Put formally, a face track is represented as $\boldsymbol{x} = \frac{1}{F}\sum_{f=1}^{F}\phi(I_f)$, where $F$ is the frame count. Similarly, an audio segment (i.e., a clip in which the subject speaks without interruptions or major pauses) is treated as a single piece of media by average-pooling all encodings to form one representation per clip. Note that a video may consist of several independent visual, audio, and visual-audio (i.e., aligned) tracks; thus, there are many independent media samples for both the visual and audio modalities. Again, subject $i$ is represented by a template made up of these various media samples, $T_i = \{\boldsymbol{x}_1^{(t_1)}, \ldots, \boldsymbol{x}_{M_i}^{(t_{M_i})}\}$, where $t$ corresponds to the media type and, hence, the corresponding encoder, and $M_i$ is the total number of encodings for subject $i$. The gallery consists of a set of $N$ subjects, $G = \{(T_i, y_i, c_i)\}_{i=1}^{N}$, where $y_i$ are identity labels for each of the subjects and $c_i$ are ground-truth family labels. To establish a precise definition for problems of kinship, each tuple thus contains a tag representing the family (i.e., $c_i \in C$, where $C$ is the set of families). Further partitioning of the data is done per the requirements of a task. For instance, for verification, a pair of tuples from the same family, $(T_i, T_j)$ with $c_i = c_j$ and $y_i \neq y_j$, inherits the label KIN (i.e., match) and a relationship type.
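The template construction just formalized can be sketched with plain average pooling per face track or audio clip; the function names are illustrative:

```python
def encode_track(frame_encodings):
    """Average-pool a clip's per-frame (or per-window) encodings into
    a single sample, as done for face tracks and audio segments."""
    n = len(frame_encodings)
    return [sum(col) / n for col in zip(*frame_encodings)]

def build_template(still_encodings, face_tracks, audio_clips):
    """A template holds one encoding per independent piece of media:
    still images as-is; tracks and clips pooled to one sample each."""
    return (still_encodings
            + [encode_track(t) for t in face_tracks]
            + [encode_track(a) for a in audio_clips])
```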

FAR          BB               SS               SIBS             FD               FS               MD               MS               Average
0.5 (EER)    97.8/97.8/98.2   91.5/92.3/92.7   91.7/90.8/91.5   79.8/77.8/79.9   85.3/85.3/87.1   90.6/88.8/91.4   81.3/82.6/85.2   88.3/87.9/89.8
0.3          94.1/94.1/95.3   88.0/87.2/90.1   82.9/83.9/85.7   63.5/66.5/69.3   77.1/79.1/81.5   82.4/82.0/85.0   68.9/70.1/73.4   79.6/80.4/81.6
0.1          88.1/87.4/88.4   76.1/76.1/79.1   68.7/68.2/70.2   34.5/36.9/42.9   54.3/54.3/58.2   62.2/63.1/69.4   46.1/46.5/50.1   61.4/61.8/64.9
0.01         70.4/70.4/73.6   54.7/55.6/59.9   44.2/46.1/52.4   5.9/7.9/12.9     23.6/24.0/32.1   28.3/31.3/40.6   11.6/13.3/21.0   34.1/35.5/41.1
0.001        54.8/57.0/61.1   47.9/48.7/52.4   29.5/29.0/33.7   2.0/2.5/7.7      9.3/10.9/14.1    14.2/14.6/18.5   3.3/4.6/7.8      23.0/23.9/30.1
TABLE III: TAR (%) at FAR for the top-performing TA scheme under three data settings, with each cell read as still images only / +videos / +videos+audio. Relationship types: brother-brother (BB), sister-sister (SS), brother-sister (SIBS), father-daughter (FD), father-son (FS), mother-daughter (MD), mother-son (MS). Higher is better.

Following the 2020 RFIW, each task consists of a train, validation, and test set. These sets are disjoint in family and subject IDs, and are roughly split 60%, 20%, and 20% for the train, validation, and test set, respectively. Thus, the splitting is done using the family labels, and the resulting partitioning of sets is static for all tasks.

III-A Kinship verification

III-A1 Overview

Kinship verification is a challenging task within a complex topic. It inherits all the challenges of traditional FR, with aspects amplified in difficulty due to kinship being a soft attribute with high variation, natural bias, and directionality in the variety of relationship types. The most fundamental question asked in kinship verification, and re-asked in all other kinship-discrimination tasks, is whether a pair of faces is related. Therefore, kinship verification is a Boolean classification of pairs (i.e., KIN or NON-KIN). Knowledge of the relationship type is assumed to be known; thus, provided the output of the model for a given pair is KIN, the specific type is implied. Future efforts could incorporate relationship-type signals to advance the capabilities of kinship-detection systems; however, as stated upfront, verification provides the simplest of all the benchmarks and, up until now, is the most popular [45].

III-A2 Data splits and settings

The data is organized as pairs, with each pair belonging to a set of a common relationship type. Specifically, pairs are of type BB, SS, or mixed-sex SIBS (i.e., same generation), or FD, FS, MD, or MS (i.e., differing by one generation). Counts for all types of relationship pairs are listed in Table I, with the aforementioned types (i.e., same- and 1-generation) used in experiments, provided the sample sizes allow for fair representation. Data splits (i.e., train, validation, and test) and their sample counts are listed in Table II. This task has no concept of query and gallery.

III-A3 Metrics

The one-to-one paradigm (i.e., kinship verification) is the main view vision researchers aim to solve. The task is to determine whether a face pair are blood relatives (i.e., true kin). Conventionally, a query consists of a single face image $I_i$, which is then paired with a second face $I_j$ to predict against (i.e., a one-shot, Boolean classification problem with labels $y \in \{\text{KIN}, \text{NON-KIN}\}$). Put formally, we are given a set of face pairs per relationship type, with the label of each pair determined by the indicator function

$$ y_{ij} = \mathbb{1}[c_i = c_j], $$

which equals 1 (KIN) when the two subjects share a family and 0 (NON-KIN) otherwise.

As described in the preceding section, FIW-MM, with many samples from various modalities (i.e., still faces, face tracks, audio, and contextual transcripts), is organized as templates. Specifically, true IDs are paired with a template of all media available for the respective subject. In contrast with conventional kinship recognition, where one image is compared to another, the one-to-one paradigm here is based on templates (i.e., one template is compared to another). For consistency, a pair is given as templates of different subjects (i.e., $(T_i, T_j)$, where $y_i \neq y_j$).

Detection Error Trade-off (DET) curves, along with average verification accuracy, were used for kinship verification, as were TAR values at fixed intervals of FAR (Table III).
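TAR at a fixed FAR, as reported in Table III, can be computed from genuine (KIN) and impostor (NON-KIN) score lists along these lines. This is a simplified sketch; practical evaluations interpolate thresholds more carefully:

```python
def tar_at_far(gen_scores, imp_scores, far_target):
    """Pick the score threshold yielding the target false-accept rate
    on impostor scores, then report the true-accept rate on genuine
    scores at that threshold."""
    imp = sorted(imp_scores, reverse=True)
    # Number of impostor accepts allowed at the target FAR.
    k = max(1, int(round(far_target * len(imp))))
    thresh = imp[k - 1]
    return sum(s >= thresh for s in gen_scores) / float(len(gen_scores))
```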

(a) DET curve (verification).
(b) CMC curve (identification).
Fig. 3: Plotted results. Included are still images (S), video clips (V), and audio segments (A), along with fusions of still images and video (S+V) and of still images, video, and audio (S+V+A).

III-B Search & retrieval (missing child)

III-B1 Overview

Kinship identification is organized as a many-to-many search and retrieval task, with each subject having one-to-many media samples. Thus, we imitate template-based evaluation protocols [37]. The goal is to find relatives of search subjects (i.e., probes) in a search pool (i.e., gallery).

III-B2 Data splits and settings

A gallery $G$ is queried by a set of probes $P$ for search and retrieval, where $T_i$ is the $i$-th template in $G$ and $T_j$ is the template of the $j$-th query subject. As mentioned, a template consists of samples of various modalities. Given a template of multimedia, various schemes were applied to integrate the identity information from all media components of the template.

III-B3 Metrics

Scores for the missing-child task are calculated per probe as

$$ \mathrm{AP}(f) = \frac{1}{N_f} \sum_{k=1}^{n} \mathrm{Prec}(k)\,\mathrm{rel}(k), $$

where average precision (AP) is a function of the probe's family $f$, $N_f$ is the number of true relatives in the gallery (i.e., for the true-positive rate (TPR)), $\mathrm{Prec}(k)$ is the precision at rank $k$, and $\mathrm{rel}(k)$ indicates whether the $k$-th retrieval is a true relative. Then, all AP scores are averaged to find the mAP score as follows:

$$ \mathrm{mAP} = \frac{1}{|Q|} \sum_{f \in Q} \mathrm{AP}(f), $$

where $Q$ is the set of query probes.

Also, Cumulative Matching Characteristic (CMC) curves, as a function of rank, are traced out for further analysis between different attempts [14], along with rank-1, 5, and 10 accuracy.
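The AP and mAP computations can be sketched as follows, assuming each probe yields a ranked list of gallery family labels:

```python
def average_precision(ranked_families, true_family):
    """AP for one probe: precision at each rank where a true relative
    is retrieved, averaged over all true relatives found."""
    hits, precisions = 0, []
    for rank, fam in enumerate(ranked_families, start=1):
        if fam == true_family:
            hits += 1
            precisions.append(hits / float(rank))
    return sum(precisions) / hits if hits else 0.0

def mean_ap(results):
    """mAP over probes; results is a list of (ranked_list, true_family)."""
    return sum(average_precision(r, t) for r, t in results) / len(results)
```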

IV Benchmarks

IV-A Methodology

The problems of FIW-MM have various views - multi-source and multi-modal. The former varies in samples and treats the different media types independently until the matching function outputs scores (i.e., late fusion); the latter demands a method for early fusion (e.g., at the feature level), which should enhance performance by leveraging informative samples while ignoring noisy and less discriminative ones. We next describe the modality-specific features (i.e., encoding different media types) and early fusion.

IV-A1 Vision

FR research traditionally focused on verification - popularized by the Labeled Faces in the Wild dataset [26] (images) and the YouTube Faces dataset [66] (videos). In contrast, the newer IJB-[A,B,C] FR datasets [37] unify the evaluation of one-to-many face identification with one-to-one face verification over templates (i.e., sets of imagery and videos for a subject). Visual kinship recognition research followed a similar path, first addressing the simpler verification task. FIW-MM provides the data needed to run template-based kin-recognition experiments.

We demonstrate results from a variety of naive fusion techniques (e.g., average pooling of features or voting of scores). Unsurprisingly, score-based fusion outperforms the naive feature-level fusion schemes; specifically, we take the mean of all scores, both within a template and when comparing templates (Table III). The gain from each added modality is clear from just the naive score fusion.
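The naive score-level fusion (the mean of all cross-template matcher scores) amounts to:

```python
def fuse_template_scores(template_a, template_b, matcher):
    """Naive late fusion: score every cross-template media pair with
    the supplied matcher, then take the mean as the final
    template-to-template score."""
    scores = [matcher(x, y) for x in template_a for y in template_b]
    return sum(scores) / float(len(scores))
```

Any pairwise matcher (e.g., the cosine similarity used throughout) can be plugged in.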

As mentioned, naive fusion methods at the feature level are an ineffective way of combining knowledge. Provided a collection of media - media that varies in modality, quality, and discriminative power - a simple, unweighted average across the items of a template does not exploit all available information. To better fuse the template, we adapt a model to the template to best represent the subject for verification or identification of family members. Details follow the description of the audio features.

IV-A2 Speech

All speech segments were encoded with a SOTA deep learning architecture [11]. Specifically, we trained a 34-layer ResNet [24] in the style of SqueezeNet [27] with an angular prototypical loss, optimized with Adam [28], to transform WAV-encoded audio files into a single fixed-length encoding. The angular prototypical loss [52] learns a metric alongside softmax to minimize within-class scatter (i.e., a penalty formed as the sum of Euclidean distances from all samples of a subject in a mini-batch to the mean centroid of that subject in the mini-batch). Specifically, a support set and a query are set in each mini-batch on a subject-by-subject basis, with the query made up of a single utterance to compare with the centroid of the support set, which consists of all other samples in the mini-batch for that class. Angular prototypical loss takes advantage of the perks of centroid prototypes while enhancing them by following the generalised end-to-end (GE2E) [56] usage of a cosine-based similarity metric, which is scale invariant, more robust to feature variance, and facilitates stable convergence during training.
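The objective just described can be sketched in miniature: each query utterance is scored (scaled cosine) against every class centroid, and a softmax cross-entropy pulls it toward its own class. The fixed scale w and bias b stand in for the learnable parameters of [52]/[11]; embeddings and batch values are toy assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def angular_prototypical_loss(queries, supports, w=10.0, b=-5.0):
    """queries[c]: one held-out utterance embedding per class; supports[c]: the
    remaining embeddings of that class, averaged into a centroid prototype.
    Each query is scored against every centroid and a softmax cross-entropy
    pulls it toward its own class. w and b are the scale/bias (fixed here)."""
    centroids = [[sum(col) / len(col) for col in zip(*s)] for s in supports]
    loss = 0.0
    for c, q in enumerate(queries):
        logits = [w * cosine(q, p) + b for p in centroids]
        log_z = math.log(sum(math.exp(l) for l in logits))
        loss += -(logits[c] - log_z)  # cross-entropy against the true class c
    return loss / len(queries)

# Toy batch: two well-separated speakers, so the loss should be near zero.
queries = [[1.0, 0.0], [0.0, 1.0]]
supports = [[[0.9, 0.1], [1.0, 0.05]], [[0.1, 0.9], [0.0, 1.0]]]
print(angular_prototypical_loss(queries, supports))
```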


IV-A3 Feature Fusion

TA [62] is a form of transfer learning that fuses the deep encodings of many labeled faces from a source domain with a template-specific Support Vector Machine (SVM) trained on the target domain. For kinship verification we employ probe adaptation, while gallery adaptation is used for identification (i.e., search & retrieval). Thus, we adapted the concept of TA in all benchmarks.

Specifically, a similarity function s(p, g), for probe template p and reference template g, is learned for a given probe. For this, an SVM is trained on top of the face encodings, with the media in p as the positive samples and a set of negatives mined by taking a single sample from each subject in the train set. For verification, this process repeats for another SVM trained on g (i.e., the template of the subject in question), with negatives set in the same way. Then, let f_g(p) represent the evaluation of the media encodings of template p by the SVM trained on g. We do this in both directions via

s(p, g) = 1/2 [ f_p(g) + f_g(p) ].

The score produced is thus the result of fusing the templates, from media to an SVM to a score.
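The bidirectional template-adapted scoring could be sketched as below. The perceptron-style hinge trainer is only a stand-in for dlib's SVM, and all templates and mined negatives are toy values; what the sketch preserves is the scheme itself: train a discriminative scorer per template, evaluate it on the other template's media, and average both directions.

```python
def train_linear_scorer(positives, negatives, lr=0.1, epochs=200):
    """Stand-in for the per-template SVM: a hinge-style linear scorer
    (the paper uses dlib's SVM; this only illustrates the fusion scheme)."""
    dim = len(positives[0])
    w, b = [0.0] * dim, 0.0
    data = [(x, 1.0) for x in positives] + [(x, -1.0) for x in negatives]
    for _ in range(epochs):
        for x, y in data:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1.0:  # hinge-style update on margin violations
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return lambda x: sum(wi * xi for wi, xi in zip(w, x)) + b

def symmetric_score(template_p, template_g, negatives):
    """s(p, g) = 1/2 [ f_p(g) + f_g(p) ]: each template's scorer evaluates the
    other template's media, and the mean decision values are averaged."""
    f_p = train_linear_scorer(template_p, negatives)
    f_g = train_linear_scorer(template_g, negatives)
    eval_on = lambda f, t: sum(f(x) for x in t) / len(t)
    return 0.5 * (eval_on(f_p, template_g) + eval_on(f_g, template_p))

# Toy encodings: mined negatives, two kin templates, one unrelated template.
negatives = [[0.0, -1.0], [0.2, -1.1], [-0.2, -0.9]]
kin_a = [[1.0, 0.1], [0.9, -0.1]]
kin_b = [[1.1, 0.0]]
stranger = [[-1.0, 0.1]]
print(symmetric_score(kin_a, kin_b, negatives))
print(symmetric_score(kin_a, stranger, negatives))
```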

The benefit of SVMs is in the kernel. Specifically, this max-margin modeling scheme has proven effective at separating non-linear feature spaces for boolean classes y ∈ {+1, −1}, where y = +1 for instances of the same class and y = −1 for different classes. The implicit embedding function (i.e., the kernel) projects the encoding pair to a non-linear space such that the SVM learns the best hyperplane to separate the two classes by (1) maximizing the margin and (2) minimizing the loss on the training set. The predicted class is then inferred from the sign of the decision value. Specifically, we used dlib's SVM, L2-regularized with a class-weighted hinge loss, i.e., minimizing over w and b

(1/2)||w||² + C+ Σ_{i: yi=+1} max(0, 1 − (w · xi + b)) + C− Σ_{i: yi=−1} max(0, 1 + (w · xi + b)).   (4)

media             fusion   @1    @5    @10   @20   @50   mAP
img               mean     0.29  0.43  0.54  0.64  0.78  0.13
                  median   0.28  0.44  0.52  0.64  0.77  0.13
                  max      0.11  0.19  0.28  0.34  0.52  0.06
                  TA       0.31  0.43  0.52  0.63  0.74  0.14
img+video         mean     0.30  0.44  0.52  0.64  0.77  0.14
                  median   0.28  0.44  0.50  0.63  0.76  0.14
                  max      0.13  0.21  0.26  0.30  0.44  0.06
                  TA       0.34  0.46  0.55  0.68  0.75  0.16
img+video+audio   mean     0.30  0.44  0.52  0.64  0.77  0.14
                  median   0.28  0.44  0.50  0.63  0.76  0.14
                  max      0.13  0.21  0.26  0.30  0.44  0.06
                  TA       0.56  0.59  0.63  0.74  0.78  0.24
TABLE IV: Identification results, with early fusion (TA) highlighted.

To adapt this for the notion of a gallery, the settings are changed for gallery adaptation: train a similarity function from a probe p to gallery G. Given a gallery of templates G = {g_1, …, g_N}, all pairs are used to train the SVM (i.e., the scoring function s(p, G)). The difference between probe adaptation and gallery adaptation is in the negative sets: along with the one sample per subject trained against for probe adaptation, gallery adaptation samples all other templates in G as additional negatives. Nonetheless, the class imbalance in all cases is handled via the class-weighted hinge loss in Eq 4, with C+ and C− set inversely proportional to class frequency.
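The class-weighted hinge objective referenced as Eq 4 might be computed as in the sketch below, with C+ and C− set inversely proportional to class frequency to counter the heavy imbalance of one positive template against many mined negatives. All names and toy samples are illustrative assumptions.

```python
def weighted_hinge_objective(w, b, samples, c_pos, c_neg):
    """L2-regularized, class-weighted hinge loss:
    (1/2)||w||^2 + C+ * sum of positive-class hinges + C- * negative ones."""
    reg = 0.5 * sum(wi * wi for wi in w)
    hinge = 0.0
    for x, y in samples:
        margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
        hinge += (c_pos if y > 0 else c_neg) * max(0.0, 1.0 - margin)
    return reg + hinge

# Toy linearly separable data: two positives, two negatives.
samples = [([2.0, 0.0], 1), ([1.5, 0.5], 1), ([-1.0, 0.0], -1), ([-2.0, 1.0], -1)]
n_pos = sum(1 for _, y in samples if y > 0)
n_neg = len(samples) - n_pos
c_pos, c_neg = 1.0 / n_pos, 1.0 / n_neg  # inversely proportional to frequency
print(weighted_hinge_objective([1.0, 0.0], 0.0, samples, c_pos, c_neg))  # -> 0.5
```

With w = [1, 0] every margin is at least 1, so only the regularizer contributes; a zero weight vector, by contrast, incurs the full weighted hinge penalty.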

IV-B Results

As hypothesized, a system's ability to discriminate improves with each added modality (Fig 2(a) and 2(b), Tables III and IV). Considering the benchmarks use conventional speech and FR technology, and that video and audio boost discrimination as hypothesized, the results hold much promise: these notable improvements would likely continue to climb given a more sophisticated or task-specific solution. It would be interesting to fuse earlier than done here and to train machinery jointly on audio-visual data. This way, more complex dynamics of facial appearance, along with the corresponding sound of voice, could further improve performance and give additional insights.

IV-C Discussion

The template-based protocol adds practical value by mimicking the structure most likely posed in operational settings, per NIST [37]. Several other factors also make it a more interesting formulation, with higher potential for researchers to show off their creativity. For instance, as opposed to using a single sample per subject (i.e., one-shot learning), each subject is now represented by a set of media (i.e., a template). This raises the questions: how best to fuse knowledge from multiple samples; how best to incorporate evidence from different modalities; and how best to learn from all available data while allowing for one-to-many types of media as input.

Another consequence of using templates is that random chance increases, which stems from (1) the additional knowledge available to pool (or fuse) from multiple modalities and (2) the gallery size shrinking from tens of thousands by nearly ten-fold. The latter does not imply lesser difficulty, but is a byproduct of reducing bias in the data [46]. That is, as opposed to having one-to-many samples per subject, there is just one template, mitigating certain sources of data imbalance: whether there are thirty samples or just one, a system's ability to recognize a particular pairing or group affects the metric evenly for all. In other words, a system may easily recognize a specific parent-child pair, but regardless of face sample count (and, consequently, the number of face pairs), the impact on the metric is proportional to the number of unique pairs, not sample pairs.

V Related Work

V-A Kinship recognition

Early on, it was not only kinship in people that researchers sought to understand, but also in domesticated animals, e.g., dogs [25] and sheep [42, 41]. Evolution has allowed many species to acquire the ability to recognize their kin through various signals (i.e., touch, smell, visuals, and, particularly for humans, acoustics). From this, we infer that different types of media, beyond still images or conventional speech recognition, can detect kinship. In fact, imagery and speech signals are not necessarily best: a more complex signal, such as dynamic features across video frames, can capture heritable characteristics (e.g., expressions, mannerisms, and accents tied to emotion).

Computer vision researchers started to focus on using facial cues to recognize kinship about a decade ago, when Fang et al. proposed a solution based on modeling the geometry, color, and low-level visual descriptors of the face [19]. Following this, others formulated the problem as transfer subspace learning [70, 71], 3D face modeling [55], facial descriptor learning [76], sparse encoding [18], metric learning [35], tri-subject verification [44], adversarial learning [74], ensemble learning [60], video understanding [75, 54, 21], and even audio-visual kinship understanding [68].

Introduced in the 2016 proceedings of ACM MM, Robinson et al. proposed FIW as the first large-scale image dataset for kinship recognition [48, 49]. FIW has labeled data for 1,000 families, each with about 13 family photos. It came with benchmarks for 11 pair-wise types, with the top-performing baselines being fine-tuned CNNs (i.e., SphereFace [32] and Center-loss [61]). This was the beginning of big data in kin-based vision tasks: deep learning could then be used to overcome observed failure cases [59, 69]. Furthermore, new applications, such as child-appearance prediction [22, 20] and familial privacy protection [29], have recently emerged.

Besides the different use-cases and the independent research that spun off FIW, part of the reported motivation was providing an annual data challenge with the data, i.e., the RFIW series [50, 46]. Many strong attempts on the still images were a byproduct of these [31, 16]. Recent surveys [43], tutorials [47], and challenge papers [34, 33, 67, 45] summarize the progress in greater detail.

V-B Audio-visual data

The archetypal big-data resources for audio-visual identification problems are Voxceleb [39] and Voxceleb2 [13]. Similar to FIW-MM, they too are extensions of still-image datasets (i.e., Voxceleb and Voxceleb2 extend VGGFace [40] and VGGFace2 [6], respectively). To date, Voxceleb has primarily been used for tasks concerning the speaker: using the audio-visual data to detect and classify who is speaking and when [17]; to enhance speech [2]; and to detect when and where the shown face is speaking [10], such that the speaker is clearly visible while the audio matches the words predicted by lip-reading [1]. Lip-reading actually predates the larger Voxceleb, with earlier lip-reading datasets [8, 7]. Most notable is the extent to which these databases were instrumental in applied research (e.g., generating talking faces [12], where the input is a still-image face and a stream of audio, and the output is frames mimicking the audio as if the input face were uttering the clip). In [65], face frames were generated from a still image and an audio clip, with pose information added as a control signal for the synthesized output. Furthermore, emotion labels for Voxceleb were predicted automatically from its own signals to infer ground truth [5].

VI Discussion

VI-A Future Work

FIW-MM poses new challenges in automatic kinship recognition and understanding. A next step for research is to gather experts of different domains, such as those in sequence-to-sequence modeling, whether visual (i.e., video), audio (i.e., speech), or contextual (e.g., conversations, parts-of-speech, etc.), or in early-fusing pairs or groups of modalities.

We expect experts in anthropology, genealogy, and related fields to be of high value to machine-vision researchers (i.e., helping to identify some of the many hidden patterns that relate families in multimedia). As a simple base case, consider just audio. Models can, as we have done, encode speech by borrowing techniques from the speech recognition domain. Nonetheless, attributes such as accents, commonly used phrases, and speaker demeanor could not only boost a system's performance but also provide insights through interpretation. In a similar light, studies could focus on familial language components and changes therein from one generation to the next, or even within the same generation (i.e., commonalities and differences in the spoken tendencies of siblings). The potential grows further with audio-visual data (e.g., capturing mannerisms dynamically to answer questions such as, does she have her mother's smile?).

The data-mining potential is noteworthy (i.e., directly, or even indirectly, as was done to acquire the data of FIW-MM in the first place). With family trees, an abundance of data points, rich metadata for individuals and the relationships among them, and now multimedia data, FIW-MM could serve as a basis for group-based (social) data mining. Additional data can enhance or target specific nature-based studies, traditional ML-based audio, visual, and audio-visual tasks, or even further curate the dataset itself.

Questions of fusing audio-visual data are, in general, abundantly unclear and unanswered [53]: from model training, to handling modal incompleteness, to data processing, to modal (or sample) imbalance; from the underlying roots of the problem to the high-level semantics. Similar to contemporary multi-modal biometric systems with audio-visual data, FIW-MM and, thus, this work in its entirety poses more problems than it solves; we introduce a problem space much larger than its current set of solutions.

Other directions concern fusion. For experiments, we included early and late fusion by joining the different media as features and scores, respectively. Scores were fused naively (i.e., averaged), hence ignoring the signal type and assuming all samples and media types should be weighted uniformly. Thus, the problem now poses various points of fusing, whether it be cross-modality, the choice of highest-quality samples, or some sort of decision tree based on the independent outputs of each media type. This concept alone spans a vast, largely empty solution space: data fusion, where the input is clips of aligned audio-visual data; early fusion, which was exemplified with TA fusing the features; or late fusion, also demonstrated by averaging scores, but which could just as easily be guided by a more clever decision-tree mechanism. Besides, meta-knowledge, such as relationship types (e.g., the directional relationships that inherently exist), gender, age, and other attributes, could inform final decisions. Hence, there are vast fusion paradigms; none are trivial, and most hold promise.

The research topics that could spawn from the proposed resource are vast, to say the least; the specifics suggested here are limited by our own perception. We expect scholars and experts of different domains to seek out paradigms not thought of by us. Hence, whether it be an improved variant of template adaptation and feature fusion (e.g., as in [72]), deciding when to fuse, or a new method of integration, along with the integration details, these are open research questions. The data outweighs the benchmarks, by design, as the resource will be made available to researchers. Even a complete characterization of the contents remains open (i.e., ablation studies on the effects of template size, media type versus relationship type, or even high-level interpretations, e.g., smiling faces versus neutral).

VI-B Conclusion

We introduced new paradigms (i.e., template-based) for kin-based vision tasks with the proposed FIW-MM database. An extension of the large-scale FIW image collection, FIW-MM contains audio, video (i.e., face tracks), audio-visual (i.e., face-speech aligned) data, and text transcripts for two or more members of 120 of the 1,000 families in FIW. Our labeling pipeline uses evidence from all modalities via a simple feedback schema based on the labeled data of FIW. Benchmarks show improved performance with each added media type, which is then improved further by early fusion. FIW-MM marks another major milestone for the kin-based problem space and welcomes a wider range of experts to the domain.


  • [1] T. Afouras, J. S. Chung, and A. Zisserman (2018) Deep lip reading: a comparison of models and an online application. In INTERSPEECH, Cited by: §V-B.
  • [2] T. Afouras, J. S. Chung, and A. Zisserman (2018) The conversation: deep audio-visual speech enhancement. arXiv:1804.04121. Cited by: §V-B.
  • [3] T. Ahonen, A. Hadid, and M. Pietikainen (2006) Face description with local binary patterns: application to face recognition. IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI) 28 (12), pp. 2037–2041. Cited by: §II-B.
  • [4] S. Albanie, A. Nagrani, A. Vedaldi, and A. Zisserman (2018) Emotion recognition in speech using cross-modal transfer in the wild. In ACM on International Conference on Multimedia (MM), Cited by: §I.
  • [5] S. Albanie, A. Nagrani, A. Vedaldi, and A. Zisserman (2018) Emotion recognition in speech using cross-modal transfer in the wild. In ACM on International Conference on Multimedia (MM), Cited by: §V-B.
  • [6] Q. Cao, L. Shen, W. Xie, O. M. Parkhi, and A. Zisserman (2018) VGGFace2: a dataset for recognising faces across pose and age. In Conference on Automatic Face and Gesture Recognition, Cited by: §V-B.
  • [7] J. S. Chung, A. Senior, O. Vinyals, and A. Zisserman (2017) Lip reading sentences in the wild. In Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §V-B.
  • [8] J. S. Chung and A. Zisserman (2016) Lip reading in the wild. In Asian Conference on Computer Vision (ACCV), Cited by: §V-B.
  • [9] J. S. Chung and A. Zisserman (2016) Out of time: automated lip sync in the wild. In Workshop on Multi-view Lip-reading, ACCV, Cited by: §II-B.
  • [10] J. S. Chung and A. Zisserman (2017) Lip reading in profile. In British Machine Vision Conference (BMVC), Cited by: §V-B.
  • [11] J. S. Chung, J. Huh, S. Mun, M. Lee, H. S. Heo, S. Choe, C. Ham, S. Jung, B. Lee, and I. Han (2020) In defence of metric learning for speaker recognition. arXiv:2003.11982. Cited by: §II-B, §IV-A2.
  • [12] J. S. Chung, A. Jamaludin, and A. Zisserman (2017) You said that?. In British Machine Vision Conference (BMVC), Cited by: §V-B.
  • [13] J. S. Chung, A. Nagrani, and A. Zisserman (2018) Voxceleb2: deep speaker recognition. arXiv:1806.05622. Cited by: §I, §V-B.
  • [14] B. DeCann and A. Ross (2013) Relating roc and cmc curves via the biometric menagerie. In 2013 IEEE Sixth International Conference on Biometrics: Theory, Applications and Systems (BTAS), pp. 1–8. Cited by: §III-B3.
  • [15] J. Deng, J. Guo, N. Xue, and S. Zafeiriou (2019) Arcface: additive angular margin loss for deep face recognition. In Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4690–4699. Cited by: §II-B.
  • [16] Q. Duan and L. Zhang (2017) AdvNet: adversarial contrastive residual net for 1 million kinship recognition. In RFIW Workshop in ACM MM, Cited by: §V-A.
  • [17] A. Ephrat, I. Mosseri, O. Lang, T. Dekel, K. Wilson, A. Hassidim, W. T. Freeman, and M. Rubinstein (2018) Looking to listen at the cocktail party: a speaker-independent audio-visual model for speech separation. arXiv:1804.03619. Cited by: §I, §V-B.
  • [18] R. Fang, A. Gallagher, T. Chen, and A. Loui (2013) Kinship classification by modeling facial feature heredity. In International Conference on Image Processing (ICIP), Cited by: §V-A.
  • [19] R. Fang, K. D. Tang, N. Snavely, and T. Chen (2010) Towards computational models of kinship verification. In International Conference on Image Processing (ICIP), Cited by: §I, §V-A.
  • [20] P. Gao, S. Xia, J. Robinson, J. Zhang, C. Xia, M. Shao, and Y. Fu (2019) What will your child look like? dna-net: age and gender aware kin face synthesizer. arXiv:1911.07014. Cited by: §V-A.
  • [21] M. Georgopoulos, Y. Panagakis, and M. Pantic (2020) Investigating bias in deep face analysis: the kanface dataset and empirical study. arXiv:2005.07302. Cited by: §V-A.
  • [22] F. S. Ghatas and E. E. Hemayed (2020) GANKIN: generating kin faces using disentangled gan. SN Applied Sciences 2 (2), pp. 1–10. Cited by: §V-A.
  • [23] I. U. Haq, K. Muhammad, T. Hussain, S. Kwon, M. Sodanil, S. W. Baik, and M. Y. Lee (2019) Movie scene segmentation using object detection and set theory. International Journal of Distributed Sensor Networks 15 (6), pp. 1550147719845277. Cited by: §II-B.
  • [24] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. Cited by: §IV-A2.
  • [25] P. G. Hepper (1994) Long-term retention of kinship recognition established during infancy in the domestic dog. Behavioural processes 33 (1-2), pp. 3–14. Cited by: §V-A.
  • [26] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller (2007) Labeled faces in the wild: a database for studying face recognition in unconstrained environments. Technical report UMass, Amherst. Cited by: §II-B, §IV-A1.
  • [27] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer (2016) SqueezeNet: alexnet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv:1602.07360. Cited by: §IV-A2.
  • [28] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv:1412.6980. Cited by: §IV-A2.
  • [29] C. Kumar, R. Ryan, and M. Shao (2020) Adversary for social good: protecting familial privacy through joint adversarial attacks. In Conference on Artificial Intelligence (AAAI), Cited by: §V-A.
  • [30] Y. Li, M. Murias, S. Major, G. Dawson, K. Dzirasa, L. Carin, and D. E. Carlson (2017) Targeting eeg/lfp synchrony with neural nets. In Advances in Neural Information Processing Systems (NIPS), pp. 4621–4631. Cited by: §II-B.
  • [31] Y. Li, J. Zeng, J. Zhang, A. Dai, M. Kan, S. Shan, and X. Chen (2017) KinNet: fine-to-coarse deep metric learning for kinship verification. In RFIW Workshop in ACM MM, Cited by: §V-A.
  • [32] W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song (2017) SphereFace: deep hypersphere embedding for face recognition. In Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §I, §V-A.
  • [33] J. Lu, J. Hu, V. E. Liong, X. Zhou, A. Bottino, I. Ul, T. Figueiredo Vieira, X. Qin, X. Tan, and S. Chen (2015) Kinship verification in the wild evaluation. In Conference on Automatic Face and Gesture Recognition, Cited by: §V-A.
  • [34] J. Lu, J. Hu, X. Zhou, J. Zhou, M. Castrillón-Santana, J. Lorenzo-Navarro, L. Kou, Y. Shang, A. Bottino, and T. Figuieiredo Vieira (2014) Kinship verification in the wild: the first kinship verification competition. In IEEE International Joint Conference on Biometrics, Cited by: §V-A.
  • [35] J. Lu, X. Zhou, Y. Tan, Y. Shang, and J. Zhou (2014) Neighborhood repulsed metric learning for kinship verification. IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI) 36 (2), pp. 331–345. Cited by: §V-A.
  • [36] I. Masi, Y. Wu, T. Hassner, and P. Natarajan (2018) Deep face recognition: a survey. In 2018 31st SIBGRAPI conference on graphics, patterns and images (SIBGRAPI), pp. 471–478. Cited by: §I.
  • [37] B. Maze, J. Adams, J. A. Duncan, N. Kalka, T. Miller, C. Otto, A. K. Jain, W. T. Niggel, J. Anderson, J. Cheney, et al. (2018) Iarpa janus benchmark-c: face dataset and protocol. In International Conference on Biometrics (ICB), Cited by: §I, §III-B1, §III, §III, §IV-A1, §IV-C.
  • [38] A. Nagrani, S. Albanie, and A. Zisserman (2018) Seeing voices and hearing faces: cross-modal biometric matching. In Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §I.
  • [39] A. Nagrani, J. S. Chung, and A. Zisserman (2017) Voxceleb: a large-scale speaker identification dataset. arXiv:1706.08612. Cited by: §I, §II-B, §V-B.
  • [40] O. M. Parkhi, A. Vedaldi, and A. Zisserman (2015) Deep face recognition. In British Machine Vision Conference, Cited by: §V-B.
  • [41] P. Poindron, F. Lévy, and M. Keller (2007) Maternal responsiveness and maternal selectivity in domestic sheep and goats: the two facets of maternal attachment. Developmental Psychobiology: The Journal of the International Society for Developmental Psychobiology 49 (1), pp. 54–70. Cited by: §V-A.
  • [42] P. Poindron, A. Terrazas, M. Oca, N. Serafín, and H. Hernandez (2007-07) Sensory and physiological determinants of maternal behavior in the goat (capra hircus). Hormones and behavior 52, pp. 99–105. External Links: Document Cited by: §V-A.
  • [43] X. Qin, D. Liu, and D. Wang (2019) A literature survey on kinship verification through facial images. Neurocomputing. Cited by: §V-A.
  • [44] X. Qin, X. Tan, and S. Chen (2015) Tri-subject kinship verification: understanding the core of a family. CoRR abs/1501.02555. Cited by: §I, §V-A.
  • [45] J. Robinson, Y. Yin, Z. Khan, M. Shao, S. Xia, M. Stopa, S. Timoner, M. Turk, R. Chellappa, and Y. Fu (2020) Recognizing families in the wild: the 4th edition. In Conference on Automatic Face and Gesture Recognition, Cited by: §I, §I, §III-A1, §III, §III, §V-A.
  • [46] J. P. Robinson, G. Livitz, Y. Henon, C. Qin, Y. Fu, and S. Timoner (2020) Face recognition: too bias, or not too bias?. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshop, Cited by: §IV-C, §V-A.
  • [47] J. P. Robinson, M. Shao, and Y. Fu (2018) To recognize families in the wild: a machine vision tutorial. In ACM on International Conference on Multimedia (MM), Cited by: §I, §II-A, §V-A.
  • [48] J. P. Robinson, M. Shao, Y. Wu, and Y. Fu (2016) Families in the wild (fiw): large-scale kinship image database and benchmarks. In ACM on International Conference on Multimedia (MM), Cited by: §I, §III, §V-A.
  • [49] J. P. Robinson, M. Shao, Y. Wu, H. Liu, T. Gillis, and Y. Fu (2018) Visual kinship recognition of families in the wild. IEEE Trans. on Pattern Analysis and Machine Intelligence. Cited by: §I, §II-B, §III, §V-A.
  • [50] J. P. Robinson, M. Shao, H. Zhao, Y. Wu, T. Gillis, and Y. Fu (2017) Recognizing families in the wild (rfiw). In RFIW Workshop in ACM MM, Cited by: §V-A.
  • [51] E. Sánchez-Nielsen, F. Chávez-Gutiérrez, J. Lorenzo-Navarro, and M. Castrillón-Santana (2017) A multimedia system to produce and deliver video fragments on demand on parliamentary websites. Multimedia Tools and Applications 76 (5), pp. 6281–6307. Cited by: §II-B.
  • [52] J. Snell, K. Swersky, and R. Zemel (2017) Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems (NIPS), pp. 4077–4087. Cited by: §IV-A2.
  • [53] X. Song, H. Chen, Q. Wang, Y. Chen, M. Tian, and H. Tang (2019) A review of audio-visual fusion with machine learning. In Journal of Physics: Conference Series, Vol. 1237, pp. 022144. Cited by: §VI-A.
  • [54] Y. Sun, J. Li, Y. Wei, and H. Yan (2018) Video-based parent-child relationship prediction. In 2018 IEEE Visual Communications and Image Processing (VCIP), pp. 1–4. Cited by: §V-A.
  • [55] V. Vijayan, K. W. Bowyer, P. J. Flynn, D. Huang, L. Chen, M. Hansen, O. Ocegueda, S. K. Shah, and I. A. Kakadiaris (2011) Twins 3d face recognition challenge. In IEEE International Joint Conference on Biometrics (IJCB), pp. 1–7. Cited by: §V-A.
  • [56] L. Wan, Q. Wang, A. Papir, and I. L. Moreno (2018) Generalized end-to-end loss for speaker verification. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4879–4883. Cited by: §IV-A2.
  • [57] F. Wang, X. Xiang, J. Cheng, and A. L. Yuille (2017) Normface: l2 hypersphere embedding for face verification. In ACM on International Conference on Multimedia (MM), pp. 1041–1049. Cited by: §II-B.
  • [58] J. Wang, F. Zhou, S. Wen, X. Liu, and Y. Lin (2017) Deep metric learning with angular loss. In IEEE International Conference on Computer Vision (ICCV), pp. 2593–2601. Cited by: §IV-A2.
  • [59] S. Wang, J. P. Robinson, and Y. Fu (2017) Kinship verification on families in the wild with marginalized denoising metric learning. In Conference on Automatic Face and Gesture Recognition, Cited by: §V-A.
  • [60] W. Wang, S. You, S. Karaoglu, and T. Gevers (2020) Kinship identification through joint learning using kinship verification ensemble. External Links: 2004.06382 Cited by: §V-A.
  • [61] Y. Wen, K. Zhang, Z. Li, and Y. Qiao (2016) A discriminative feature learning approach for deep face recognition. In European Conference on Computer Vision (ECCV), Cited by: §V-A.
  • [62] C. Whitelam, E. Taborsky, A. Blanton, B. Maze, J. Adams, T. Miller, N. Kalka, A. K. Jain, J. A. Duncan, K. Allen, et al. (2017) Iarpa janus benchmark-b face dataset. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshop, Cited by: §IV-A3.
  • [63] O. Wiles, A.S. Koepke, and A. Zisserman (2018) Self-supervised learning of a facial attribute embedding from video. In British Machine Vision Conference (BMVC), Cited by: §I.
  • [64] O. Wiles, A.S. Koepke, and A. Zisserman (2018) X2Face: a network for controlling face generation by using images, audio, and pose codes. In European Conference on Computer Vision (ECCV), Cited by: §I.
  • [65] O. Wiles, A. Sophia Koepke, and A. Zisserman (2018) X2face: a network for controlling face generation using images, audio, and pose codes. In European Conference on Computer Vision (ECCV), pp. 670–686. Cited by: §V-B.
  • [66] L. Wolf, T. Hassner, and I. Maoz (2011) Face recognition in unconstrained videos with matched background similarity. In Conference on Computer Vision and Pattern Recognition (CVPR), pp. 529–534. Cited by: §IV-A1.
  • [67] X. Wu, E. Boutellaa, X. Feng, and A. Hadid (2016) Kinship verification from faces: methods, databases and challenges. In Conference on Signal Processing, Communications and Computing, Cited by: §V-A.
  • [68] X. Wu, E. Granger, T. H. Kinnunen, X. Feng, and A. Hadid (2019) Audio-visual kinship verification in the wild. International Conference on Biometrics (ICB). Cited by: §V-A.
  • [69] Y. Wu, Z. Ding, H. Liu, J. Robinson, and Y. Fu (2018) Kinship classification through latent adaptive subspace. In Conference on Automatic Face and Gesture Recognition, Cited by: §V-A.
  • [70] S. Xia, M. Shao, and Y. Fu (2011) Kinship verification through transfer learning. In International Joint Conferences on AI (IJCAI), Cited by: §V-A.
  • [71] S. Xia, M. Shao, J. Luo, and Y. Fu (2012) Understanding kin relationships in a photo. IEEE Trans. on Multimedia 14 (4), pp. 1046–1056. Cited by: §V-A.
  • [72] L. Xiong, J. Karlekar, J. Zhao, Y. Cheng, Y. Xu, J. Feng, S. Pranata, and S. Shen (2017) A good practice towards top performance of face recognition: transferred deep feature fusion. arXiv:1704.00438. Cited by: §VI-A.
  • [73] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao (2016) Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters 23 (10), pp. 1499–1503. Cited by: §II-B.
  • [74] L. Zhang, Q. Duan, D. Zhang, W. Jia, and X. Wang (2020) AdvKin: adversarial convolutional network for kinship verification. IEEE Transactions on Cybernetics. Cited by: §V-A.
  • [75] L. Zhang, K. Ma, H. Nejati, L. Foo, T. Sim, and D. Guo (2014) A talking profile to distinguish identical twins. Image and Vision Computing. Cited by: §V-A.
  • [76] X. Zhou, J. Hu, J. Lu, Y. Shang, and Y. Guan (2011) Kinship verification from facial images under uncontrolled conditions. In Proceedings of the 19th ACM international conference on Multimedia, Cited by: §V-A.
  • [77] H. Zhu, M. Luo, R. Wang, A. Zheng, and R. He (2020) Deep audio-visual learning: a survey. arXiv:2001.04758. Cited by: §II-B.