1 Introduction
Current deep neural networks require significant quantities of data to train for a new task. When only limited labelled data is available, meta-learning approaches train a network initialisation on other
source tasks, so that it is suitable for fine-tuning to new few-shot target tasks [Finn2017]. Often, training data samples have additional properties, which we collectively refer to as context, readily available through metadata. An example is the alphabet in a few-shot character recognition task (visualised in Fig. 1). This is distinct from multi-label problems: we pursue invariance to the context (i.e. alphabet), so as to generalise to unseen contexts in fine-tuning, rather than predicting its label.

In this work, we focus on problems where the target task is not only novel but also does not share context with tasks seen during training. This is a difficult problem for meta-learners, as they can overfit to context knowledge when generating an initialisation, which harms its suitability for fine-tuning to tasks in novel contexts. Prior works on meta-learning have not sought to exploit context, even when readily available [Finn2017, Rusu2019, Sun, Antoniou2018, Finn2018, Sun2019, Nichol, Bertinetto2018, Snell2017, Vinyals2016, Ren2018, Requeima2019, Tseng2020]. We thus propose a meta-learning framework to tackle the task-generalisation and context-agnostic objectives jointly. As with standard meta-learning approaches, we aim for trained weights that are suitable for few-shot fine-tuning to the target. Note that the concepts of context and domain might be incorrectly conflated, when there are clear distinctions: domains can be thought of as different datasets, whereas context is one or more distractor signals within a dataset (e.g. font or writer for character classification), and can be either discrete or continuous.
Figure 2 presents an overview of the proposed framework, illustrated on the application of character classification. We assume that both task labels (e.g. character classification) and context labels (e.g. alphabet) are available for the training data. At each iteration of meta-learning, we randomly pick a task (Fig. 2(a)), and optimise the model’s weights for both task-generalisation (Fig. 2(c)) and context-agnosticism (Fig. 2(d)) objectives. This is achieved through keeping two copies of the model’s weights (Fig. 2(b)), one for each objective, and then updating the primary weights with a mixture of both results (Fig. 2(e)). These learnt weights are not only task-generalisable but importantly have been trained in an adversarial manner on context labels.
To demonstrate the generality of our framework, and the opportunities in considering context, we (1) show that it is applicable to three commonly used few-shot meta-learning algorithms [Finn2017, Antoniou2018, Nichol], and (2) test our context-agnostic meta-learning framework on two diverse problems, showing clear improvements compared to prior work and baselines. The first problem is Omniglot character classification [Lake2015]. We show that when using an alphabet-based split (Fig. 1(b)), our approach improves over non context-aware meta-learning approaches by 4.3%. The second is predicting energy expenditure of people performing daily activities from video [Tao]. For this problem, we consider calorie prediction (i.e. regression) as the task, and the distinct individuals as the context. Tested on leave-one-person-out, we show that our approach drops the Mean Square Error (MSE) from 2.0 to 1.4.
2 Related Work
Few-shot Learning: Existing few-shot methods belong to one of three categories: generative approaches [Zhang2018, Dwivedi2019], embedding-based meta-learners [Snell2017, Vinyals2016, Ren2018] and adaptation-based meta-learners [Finn2017, Rusu2019, Sun, Antoniou2018, Finn2018, Sun2019, Nichol, Bertinetto2018, Requeima2019, Tseng2020]. Adaptation-based meta-learners produce initial models which can be fine-tuned quickly to unseen tasks, using limited labelled data. One widely-used method is Model Agnostic Meta-Learning (MAML) [Finn2017], where repeated specialisation on tasks drawn from the training set encourages the ability to adapt to new tasks with little data. Later variations on this approach include promoting training stability [Antoniou2018] and improving training speed and performance on more realistic problems with deeper architectures [Nichol]. Some works have learned alternative training curricula [Sun] or modified the task specialisation [Rusu2019, Bertinetto2018]. Others have learned alternative fine-tuning mechanisms [Requeima2019, Tseng2020] or pseudo-random labels [Sun2019] to help with adaptation to unseen tasks. These adaptation-based meta-learners contrast with embedding-based meta-learners, which find a space where the few-shot task can be embedded. A classifier is then constructed in this space, e.g. by comparing distances between target samples and seen source samples [Vinyals2016].
None of the above works have exploited context available from metadata of the training data. Further, they have been evaluated on datasets where additional context knowledge is not available [Oreshkin2018, Dwivedi2019], where context is shared between the training and test split [Lake2015, Vinyals2016] or combinations of the above [triantafillou2019metadataset, Tseng2020]. We select adaptation-based meta-learning as the most suitable candidate for few-shot tasks with context. This is because there is likely to be insufficient target data for utilising generative approaches, and target samples may not embed well in the space constructed by embedding-based meta-learners which have utilised context during training.
Domain Adaptation/Generalisation: Different from domains, contexts are additional labels present within the same dataset; they can be continuous, and one sample can belong to multiple contexts. Further, there is no assumption of a shared label or task space between different contexts. Nevertheless, we can still take inspiration from domain adaptation and generalisation works, as techniques for domain generalisation are relevant for context-agnostic learning.
Domain adaptation techniques aim to align source and target data. Some works use domain statistics to apply transformations to the feature space [Busto2017], minimise alignment errors [Haeusser2017], generate synthetic target data [Hoffman2018, Huang2018] or learn from multiple domains concurrently [Rebuffi2017, Perrett2019, Li2019]. Adversarial domain classifiers have also been used to adapt a single [Ganin2015, Zhang2019, Kang2018] and multiple [Ros2019] source domains to a target domain. The disadvantage of all these approaches is that sufficient target data is required, making them unsuitable for few-shot learning. Domain generalisation works aim to find representations agnostic to the dataset a sample is from. Approaches include regularisation [Balaji2018], episodic training [Li2019a, Dou2019a] and adversarial learning [Li2018]. In this paper, we build on adversarial training, as in [Ganin2015, Zhang2019, Ros2019, Kang2018, Li2018], for our context-agnostic meta-learning approach to few-shot learning.
3 Proposed Method
We start Section 3.1 by formulating the problem, and explaining how it differs from commonly-tackled meta-learning problems. In Section 3.2, we detail our proposal to introduce context-agnostic training during meta-learning.
3.1 Problem Formulation
Commonalities to other meta-learning approaches: The input to our method is labelled training data for a number of tasks, as well as limited (i.e. few-shot) labelled data for target tasks. Adaptation-based meta-learning is distinct from other learning approaches in that the trained model is not directly used for inference. Instead, it is optimised for fine-tuning to a target task. These approaches have two stages: (1) the meta-learning stage - generalisable weights across tasks are learnt, suitable for fine-tuning, and (2) the fine-tuning to target stage - weights from the meta-learning stage are updated given a limited amount of labelled data from the target task. This fine-tuned model is then used for inference on test data on the target task. Throughout this section, we will focus on stage (1), i.e. the meta-learning stage, as this is where our contribution lies.
Our novelty: We consider problems where the target task is unseen, and does not share context labels with the training data. We assume each training sample has both a task label and a context label. The context labels are purely auxiliary - they are never the primary label that the main network is attempting to predict. We utilise context labels to achieve context-agnostic meta-learning using tasks drawn from the training set, and argue that incorporating context-agnosticism into the meta-learning process provides better generalisation. This is particularly important when the set of context labels in the training data is small, increasing the potential discrepancy between the source and target tasks.
3.2 Context-Agnostic Meta-Learning
Our contribution is applicable to adaptation-based meta-learning algorithms which are trained in an episodic manner. This means they use an inner update loop to handle the fine-tuning of network weights on a single task, and an outer update loop which incorporates changes made by the inner loop into a set of primary network weights [Finn2017, Rusu2019, Antoniou2018, Finn2018, Nichol]. To recap, none of these algorithms exploit context knowledge, and although they differ in the way they specialise to a single task in the inner loop, they all share a common objective:
$$\min_{\theta} \; \mathcal{L}_{\mathcal{T}}\!\big(U^{n}_{\mathcal{T}}(\theta)\big) \qquad (1)$$
where $\theta$ are the network weights, $\mathcal{T}$ is a randomly sampled task and $\mathcal{L}_{\mathcal{T}}$ is the loss for this task. $U^{n}_{\mathcal{T}}$ denotes an update which is applied $n$ times, using data from task $\mathcal{T}$. Algorithm 1 shows (in black) the method employed by [Finn2017, Antoniou2018, Nichol], including the inner and outer loop structure common to this class of meta-learning technique. They differ in the way they calculate and backpropagate the task loss in the inner specialisation loop (where different order gradients are applied, and various other training tricks are used). This step appears in Algorithm 1 L7-10 and Fig. 2(c). However, they can all be modified to become context-agnostic in the same way - this is our main contribution (shown in blue in the algorithm), which we discuss next.
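For concreteness, below is a minimal sketch of Eq. 1 in the style of REPTILE [Nichol], assuming a PyTorch classification model and a data loader for the sampled task; function names and hyperparameter values are illustrative rather than taken from any of the cited implementations. The inner loop plays the role of $U^{n}_{\mathcal{T}}$, and the outer step moves the primary weights towards the specialised copy.

```python
import copy
import itertools
import torch
import torch.nn.functional as F

def inner_update(model, task_loader, n=5, inner_lr=1e-3):
    """U_T^n: specialise a copy of the primary weights with n SGD steps on task T."""
    theta_t = copy.deepcopy(model)
    opt = torch.optim.SGD(theta_t.parameters(), lr=inner_lr)
    for _, (x, y) in zip(range(n), itertools.cycle(task_loader)):
        opt.zero_grad()
        F.cross_entropy(theta_t(x), y).backward()
        opt.step()
    return theta_t

def reptile_outer_step(model, task_loader, n=5, inner_lr=1e-3, outer_lr=0.1):
    """One outer iteration of Eq. 1: move theta towards U_T^n(theta)."""
    theta_t = inner_update(model, task_loader, n, inner_lr)
    with torch.no_grad():
        for p, p_t in zip(model.parameters(), theta_t.parameters()):
            p.add_(outer_lr * (p_t - p))   # theta <- theta + eps * (theta_T - theta)
```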
To achieve context-agnostic meta-learning, we propose to train a context-adversarial network alongside the task-specialised network. This provides a second objective to our meta-learning. We update the meta-learning objective from Eq. 1 to include this context-adversarial objective, to become
$$\min_{\theta} \; \mathcal{L}_{\mathcal{T}}\!\big(U^{n}_{\mathcal{T}}(\theta)\big) - \lambda\, \mathcal{L}_{\mathcal{C}}\!\big(U^{m}_{\mathcal{C}}(\theta, \psi)\big) \qquad (2)$$
where $\mathcal{L}_{\mathcal{C}}$ is a context loss, given by an associated context network with weights $\psi$, which acts on the output of the network with weights $\theta$. $U^{m}_{\mathcal{C}}$ is the adversarial update which is performed $m$ times. The relative contribution of $\mathcal{L}_{\mathcal{C}}$ is controlled by $\lambda$. Because $\mathcal{L}_{\mathcal{T}}$ and $\mathcal{L}_{\mathcal{C}}$ both operate on $\theta$, they are linked and should be optimised jointly. Equation 2 can thus be decomposed into two optimisations:
$$\hat{\psi} = \arg\min_{\psi} \; \mathcal{L}_{\mathcal{C}}(\theta, \psi) \qquad (3)$$

$$\hat{\theta} = \arg\min_{\theta} \; \mathcal{L}_{\mathcal{T}}\!\big(U^{n}_{\mathcal{T}}(\theta)\big) - \lambda\, \mathcal{L}_{\mathcal{C}}(\theta, \hat{\psi}) \qquad (4)$$
We can observe the adversarial nature of $\psi$ and $\theta$ in Eqs. 3 and 4, where, while $\psi$ attempts to minimise $\mathcal{L}_{\mathcal{C}}$, $\theta$ attempts to extract features which are context-agnostic (i.e. maximise $\mathcal{L}_{\mathcal{C}}$). To optimise, we proceed with two steps (in practice we take copies of network weights). The first is to update the context predictor $\psi$ using the gradient $\partial \mathcal{L}_{\mathcal{C}} / \partial \psi$. This is performed $m$ times, which we write as

$$\psi \leftarrow U^{m}_{\mathcal{C}}(\theta, \psi) \qquad (5)$$
A higher $m$ means the adversarial network trains quicker, and is balanced against $n$ to ensure $\theta$ and $\psi$ learn together in an efficient manner. The second step is to update the primary network with weights $\theta$ using the gradient

$$\nabla_{\theta} = \frac{\partial \mathcal{L}_{\mathcal{T}}\!\big(U^{n}_{\mathcal{T}}(\theta)\big)}{\partial \theta} - \lambda\, \frac{\partial \mathcal{L}_{\mathcal{C}}(\theta, \psi)}{\partial \theta} \qquad (6)$$
The first term corresponds to the contribution of the task-specific inner loop. The method in [Nichol] reduces this quantity to $(\theta - U^{n}_{\mathcal{T}}(\theta))/\varepsilon$, where $\varepsilon$ is the learning rate. $\lambda$ is a weighting factor for the contribution from the adversarial classifier, which can analogously be reduced to $(\theta - U^{m}_{\mathcal{C}}(\theta, \psi))/\varepsilon$. It can be incorporated by backpropagating the context loss $\mathcal{L}_{\mathcal{C}}$ through a gradient reversal layer (GRL) to $\theta$. As well as performing Eqs. 5 and 6, we also perform each iteration of the adversarial updates with respect to $\psi$ and $\theta$ concurrently.

In practice, the process above can be simplified by taking two copies of the primary weights at the start of the process, as shown in Algorithm 1, which matches the illustration in Fig. 2. At each outer iteration, we first choose a task (Algorithm 1 Line 5) and make two copies of the primary weights (L6): $\theta_{\mathcal{T}}$ (weights used for the task-specialisation inner loop) and $\theta_{\mathcal{C}}$ (weights used for the context-adversarial inner loop). The task-specialisation loop is then run on $\theta_{\mathcal{T}}$ (L7-10). Next, the adversarial loop is run on $\theta_{\mathcal{C}}$ and $\psi$ (L12-17). The primary weights $\theta$ are updated using weighted contributions from task-specialisation ($\theta_{\mathcal{T}}$) and context-generalisation ($\theta_{\mathcal{C}}$) (L18).
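The gradient reversal layer mentioned above can be implemented as an identity in the forward pass whose backward pass flips (and optionally scales) the gradient. The sketch below is a standard formulation of such a layer, not the authors' exact code.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity forward; multiplies the incoming gradient by -scale on the backward pass."""
    @staticmethod
    def forward(ctx, x, scale=1.0):
        ctx.scale = scale
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed (and scaled) gradient flows back into the feature extractor.
        return -ctx.scale * grad_output, None

def grad_reverse(x, scale=1.0):
    return GradReverse.apply(x, scale)
```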
The optimiser state and the weights $\psi$ of the adversarial network are persistent between outer loop iterations, so the adversarial network can learn context as training progresses. This contrasts with the optimisers acting on $\theta_{\mathcal{T}}$ and $\theta_{\mathcal{C}}$, which are reset every outer loop iteration for the next randomly selected task, to encourage the initialisation to be suitable for fast adaptation to a novel task.
Note that we attach the adversarial network to a separate copy of the primary weights, $\theta_{\mathcal{C}}$, rather than to the single primary network with weights $\theta$. Doing this ensures that the source and target tasks (few-shot classification by fine-tuning the initialisation) are as similar as possible, which means the initialisation is well suited to fine-tuning to the target task.
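Putting the pieces together, one outer iteration of the context-agnostic variant could be sketched as follows. This assumes a REPTILE-style combination of the two copies $\theta_{\mathcal{T}}$ and $\theta_{\mathcal{C}}$ back into the primary weights, a loader that yields (image, task label, context label) batches for the sampled task, a `model.features` hook exposing the penultimate features, and the `grad_reverse` helper from the previous sketch; these names and the exact mixing rule are our illustrative choices, not a verbatim transcription of Algorithm 1.

```python
import copy
import itertools
import torch
import torch.nn.functional as F

def ca_outer_step(model, context_head, context_opt, task_loader,
                  n=5, m=5, inner_lr=1e-3, outer_lr=0.1, lam=0.1):
    """One outer iteration of context-agnostic meta-learning (cf. Algorithm 1 and Fig. 2)."""

    # (1) Task-specialisation inner loop on a copy theta_T (Fig. 2(c)).
    theta_t = copy.deepcopy(model)
    opt_t = torch.optim.SGD(theta_t.parameters(), lr=inner_lr)   # reset every outer iteration
    for _, (x, y, _) in zip(range(n), itertools.cycle(task_loader)):
        opt_t.zero_grad()
        F.cross_entropy(theta_t(x), y).backward()
        opt_t.step()

    # (2) Context-adversarial inner loop on a second copy theta_C (Fig. 2(d)).
    #     context_head (psi) and its optimiser persist across outer iterations.
    theta_c = copy.deepcopy(model)
    opt_c = torch.optim.SGD(theta_c.parameters(), lr=inner_lr)   # reset every outer iteration
    for _, (x, _, c) in zip(range(m), itertools.cycle(task_loader)):
        ctx_logits = context_head(grad_reverse(theta_c.features(x)))
        loss_c = F.cross_entropy(ctx_logits, c)
        opt_c.zero_grad()
        context_opt.zero_grad()
        loss_c.backward()   # psi minimises L_C; the reversed gradient drives theta_C to maximise it
        opt_c.step()
        context_opt.step()

    # (3) Update the primary weights with weighted contributions from both copies (Fig. 2(e)).
    with torch.no_grad():
        for p, p_t, p_c in zip(model.parameters(), theta_t.parameters(), theta_c.parameters()):
            p.add_(outer_lr * ((p_t - p) + lam * (p_c - p)))
```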
Following standard meta-learning approaches, the weight initialisations can be fine-tuned to an unseen target task. After fine-tuning on the few-shot labelled data from target tasks, this updated model can be used for inference on unlabelled data from these target tasks (see Fig. 2(f)). No context labels are required for the target, as the model is trained to be context-agnostic. Our method is thus suitable for fine-tuning to the target task when new context is encountered, as well as when contexts overlap.
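A minimal sketch of this fine-tuning stage, using the same illustrative PyTorch conventions as the earlier sketches; the step count and learning rate are placeholders, not the values used in our experiments.

```python
import copy
import itertools
import torch
import torch.nn.functional as F

def finetune_and_predict(init_model, support_loader, query_x, steps=50, lr=1e-3):
    """Fine-tune the meta-learnt initialisation on the few labelled target samples,
    then run inference on unlabelled target data. No context labels are needed."""
    model = copy.deepcopy(init_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _, (x, y) in zip(range(steps), itertools.cycle(support_loader)):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        return model(query_x)
```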
Next, we apply our framework to three meta-learning algorithms, and we explore two problems for evaluation. Recall that our approach assumes both task and context labels are available during training. In both our case studies, we select datasets where context is available in the metadata. Additionally, in the appendix we investigate datasets where context is not readily available. We discover high-level categories for image classification, and use these as context labels. We show that context-agnostic meta-learning can also be exploited in this setting, evaluating our approach on Mini-ImageNet.
4 Case Study 1: Character Classification
Problem Definition. Our first case study uses the few-shot image classification benchmark - Omniglot [Lake2015]. We consider the task as a 5- or 20-way character classification problem, and the context as which alphabet a character is from. We follow the standard experimental setup for this task, introduced in [Vinyals2016], which consists of 1- and 5-shot learning on sets of 5 and 20 characters (5- or 20-way) from 50 alphabets. However, we make one major and important change. Recall, we have suggested that existing meta-learning techniques are not designed to handle context within the training set, or context-discrepancy between training and target. The protocol from [Vinyals2016] uses a character-based train/target split, where an alphabet can contribute characters to both train and target tasks (Fig. 1(a)). Instead, we eliminate this overlap by ensuring that the tasks/characters are from different alphabets, i.e. an alphabet-based split (Fig. 1(b)).
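The change of protocol amounts only to where the train/target split is drawn. Below is a minimal sketch of the alphabet-based split, assuming the dataset is available as a list of (character_id, alphabet_id) pairs; this representation is an assumption for illustration, not the actual data loader used.

```python
import random

def alphabet_based_split(characters, num_target_alphabets=1, seed=0):
    """Split Omniglot characters so that train and target tasks come from disjoint alphabets."""
    rng = random.Random(seed)
    alphabets = sorted({alphabet for _, alphabet in characters})
    target_alphabets = set(rng.sample(alphabets, num_target_alphabets))
    train_chars = [c for c, a in characters if a not in target_alphabets]
    target_chars = [c for c, a in characters if a in target_alphabets]
    return train_chars, target_chars
```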
Evaluation and Baselines. We evaluate the proposed context-agnostic framework using three inner/outer loop meta-learners: MAML++ [Antoniou2018], MAML [Finn2017] and REPTILE [Nichol]. Note that other adaptation-based meta-learning methods could also be used by substituting in their specific inner-specialisation loops [Rusu2019, Finn2018]. Unmodified versions are used as baselines, and are compared against versions which are modified with our proposed context-agnostic (CA) component. We accordingly refer to our modified algorithms as CA-MAML++, CA-MAML and CA-REPTILE. We report results without transduction, that is, batch normalisation statistics are not calculated from the entire target set in advance of individual sample classification. This is more representative of a practical application. As in [Vinyals2016], the metric is top-1 character classification accuracy.
We run experiments on the full dataset, and also on a reduced number of alphabets. With 5 alphabets, for example, characters from 4 alphabets are used for training, and a few-shot task is chosen from the 5th alphabet only. As the number of alphabets in training decreases, a larger context gap would be expected between training and target. We report averages over 10 random train/target splits, and keep these splits consistent between experiments on the same number of alphabets, for a fair comparison.
Implementation Details.
The widely-used architecture, optimiser and hyperparameters introduced in [Vinyals2016] are used. We implement the adversarial context predictor in the proposed context-agnostic methods as a single layer which takes the penultimate feature layer (256D) as input, with a cross-entropy loss applied to the output, predicting the alphabet. Context label randomisation is used in the adversarial classifier, where 20% of the context labels are changed. This stops the context-adversarial loss tending to zero too quickly (similar to label smoothing [Salimans2016]). We use a fixed weighting $\lambda$ (Eq. 2) for all Omniglot experiments. The context-agnostic component adds around 20% to the training time for all methods.

Results. Table 1 shows the results of the proposed framework applied to [Antoniou2018, Finn2017, Nichol] on 5-50 alphabets, using the alphabet-based split shown in Fig. 1(b). We report results per method, to show that our proposed context-agnostic component improves on average across all methods, tasks and numbers of alphabets. 85% of individual method/task/alphabet combinations show an improvement, with a further 10% being comparable (within 1% accuracy). Overall, the proposed framework gives an average performance increase of 4.3%. This improvement is most pronounced for smaller numbers of alphabets (e.g. average improvements of 6.2%, 4.9% and 4.2% for 5 and 10 alphabets for [Nichol, Finn2017, Antoniou2018] respectively). This trend is shown in Fig. 3(a), and supports our earlier hypothesis that the inclusion of a context-agnostic component is most beneficial when the context overlap between the train and target data is smaller. Fig. 3(b) shows the improvement for each XS YW (X-shot, Y-way) task, averaged over the number of alphabets. Larger improvements are observed for all methods on the 1-shot versions of 5- and 20-way tasks, with [Nichol] improving the most on 1S 5W and [Finn2017, Antoniou2018] improving the most on 1S 20W.
Columns indicate the number of alphabets used.

Task | Method | 5 | 10 | 15 | 20 | 50
---|---|---|---|---|---|---
1S 20W | MAML++ [Antoniou2018] | 58.7 | 57.2 | 64.7 | 85.6 | 89.6
1S 20W | CA-MAML++ | 72.3 | 67.6 | 82.4 | 84.8 | 90.9
1S 20W | MAML [Finn2017] | 61.4 | 78.2 | 81.5 | 83.7 | 87.5
1S 20W | CA-MAML | 69.8 | 82.8 | 82.1 | 89.8 | 93.8
1S 20W | REPTILE [Nichol] | 11.9 | 18.1 | 37.6 | 51.6 | 64.9
1S 20W | CA-REPTILE | 20.7 | 21.8 | 39.5 | 55.5 | 66.5
1S 5W | MAML++ [Antoniou2018] | 97.4 | 96.2 | 94.9 | 93.4 | 93.7
1S 5W | CA-MAML++ | 98.1 | 97.1 | 90.1 | 95.8 | 97.1
1S 5W | MAML [Finn2017] | 86.1 | 87.0 | 96.1 | 94.4 | 90.5
1S 5W | CA-MAML | 94.5 | 91.3 | 94.7 | 96.0 | 96.2
1S 5W | REPTILE [Nichol] | 52.2 | 68.8 | 79.4 | 75.5 | 77.5
1S 5W | CA-REPTILE | 62.2 | 76.9 | 83.4 | 83.2 | 85.5
5S 20W | MAML++ [Antoniou2018] | 81.0 | 84.1 | 92.4 | 93.5 | 95.8
5S 20W | CA-MAML++ | 84.8 | 90.8 | 96.0 | 94.5 | 96.3
5S 20W | MAML [Finn2017] | 81.7 | 83.8 | 84.0 | 91.2 | 89.0
5S 20W | CA-MAML | 86.0 | 91.8 | 92.9 | 93.1 | 86.9
5S 20W | REPTILE [Nichol] | 58.4 | 68.1 | 76.7 | 76.0 | 78.0
5S 20W | CA-REPTILE | 61.1 | 73.7 | 78.3 | 75.8 | 81.6
5S 5W | MAML++ [Antoniou2018] | 99.4 | 99.3 | 98.7 | 97.0 | 96.8
5S 5W | CA-MAML++ | 99.3 | 98.6 | 98.5 | 99.4 | 96.9
5S 5W | MAML [Finn2017] | 96.6 | 95.8 | 97.2 | 97.9 | 98.9
5S 5W | CA-MAML | 97.8 | 98.5 | 97.6 | 98.6 | 99.1
5S 5W | REPTILE [Nichol] | 85.2 | 85.6 | 93.2 | 88.5 | 89.4
5S 5W | CA-REPTILE | 88.3 | 94.4 | 92.4 | 91.6 | 92.9
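As a concrete illustration of the adversarial context predictor described in the Implementation Details above (a single linear layer on the 256-D penultimate features) and of the 20% context label randomisation, a minimal sketch follows; the class and function names are ours, not from the original codebase.

```python
import torch
import torch.nn as nn

class ContextHead(nn.Module):
    """Single-layer adversarial context (alphabet) classifier on penultimate features."""
    def __init__(self, feat_dim=256, num_contexts=50):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_contexts)

    def forward(self, feats):
        return self.fc(feats)

def randomise_context_labels(labels, num_contexts, p=0.2):
    """Replace a fraction p of the context labels with random ones (akin to label smoothing),
    which stops the adversarial context loss collapsing to zero too quickly."""
    mask = torch.rand_like(labels, dtype=torch.float) < p
    random_labels = torch.randint_like(labels, num_contexts)
    return torch.where(mask, random_labels, labels)
```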
For the following ablation studies, we use [Nichol] as our base meta-learner as it is the least computationally expensive. Based on preliminary studies, we believe the behaviour is consistent, and the conclusions stand, for the other methods. In the results above, we used a single value of $\lambda$ for the contribution of our adversarial component (Eq. 2). Next, we provide results on how varying $\lambda$ can affect the model's performance. For this, we use the 5-shot/5-way, 10 alphabet task. Fig. 4 shows training progress for a range of $\lambda$ values. We can see that a high weighting causes a drop in training accuracy around iteration 40K, as the optimisation prioritises becoming context-agnostic over the ability to specialise to a task. However, the figure generally shows reasonable robustness to the choice of $\lambda$.
Next, we investigate the differences between character-based and alphabet-based training/target splits (visualised in Fig. 1). Fig. 5 shows the effects of context-agnosticism when evaluating on character-based splits and alphabet-based splits. Fig. 5(a) uses 50 alphabets for comparison, and Fig. 5(b) uses 10 alphabets. While both approaches are comparable on character-based splits (blue vs red), we show a clear improvement in using our context-agnostic meta-learning approach when tested on alphabet-based splits (yellow vs green). This is a sterner test due to the training and target sets being made up from data with different contextual properties. The context-agnostic version is significantly better for all cases and both alphabet sizes.
Finally, as previous approaches only evaluate on the easier character-based split for Omniglot, using all 50 alphabets, we provide comparative results to published works on this setup. We list reported results from [Finn2017, Antoniou2018, Nichol] (marked * in Table 2) as well as our replications, to ensure a direct comparison (the same codebase and splits can be used with and without the context-agnostic component). For this setup, we use the same data augmentation as [Finn2017, Antoniou2018, Nichol]. Results are given in Table 2, which confirms that context-agnostic versions of the base methods achieve comparable performance, despite there being shared context between source and target.
In summary, this section presented experiments on the Omniglot character classification dataset. We show that, on average, our proposed context-agnostic approach gives performance improvements across all methods and tasks, particularly for smaller alphabet sizes, which introduce a bigger context gap between training and target.
Method | 5S 5W | 1S 5W | 5S 20W | 1S 20W |
---|---|---|---|---|
MAML++ [Antoniou2018]* | 99.9 | 99.4 | 99.3 | 97.7 |
MAML++ [Antoniou2018] | 99.9 | 99.5 | 98.7 | 95.4 |
CA-MAML++ | 99.8 | 99.5 | 98.8 | 95.6 |
MAML [Finn2017]* | 99.8 | 98.6 | 98.9 | 95.8 |
MAML [Finn2017] | 99.8 | 99.3 | 97.0 | 92.3 |
CA-MAML | 99.8 | 99.3 | 97.2 | 94.8 |
REPTILE [Nichol]* | 98.9 | 95.4 | 96.7 | 88.1 |
REPTILE [Nichol] | 98.9 | 97.3 | 96.4 | 87.3 |
CA-REPTILE | 98.6 | 97.6 | 95.9 | 87.8 |
5 Case Study 2: Calorie Estimation from Video
Problem Definition. In this second example, we use the problem definition from [Tao2018], where the task is to estimate energy expenditure for an input video sequence of an individual carrying out a variety of actions. The target task is to estimate the calorimeter reading for seen, as well as unseen, actions. Importantly, the individual captured forms the context. Alternative context labels could include, for example, age or Body Mass Index (BMI). Our objective is thus to perform meta-learning to generalise across actions, as well as being individual-agnostic, for calorie prediction of a new individual (our prime context-agnostic focus). We use silhouette footage and calorimeter readings from 10 participants performing a number of daily living tasks, derived from the SPHERE Calorie dataset [Tao]. It presents a good practical test of a meta-learning technique due to its complexity and size (1,000,000 frames). Using a relatively small amount of data to fine-tune to target is appropriate because collecting data from individuals using a calorimeter is expensive and cumbersome.

Evaluation and Baselines. Ten-fold leave-one-person-out cross-validation is used for evaluation. We report results using MSE across all videos for each subject. For fine-tuning to target, we use labelled calorie measurements from the first 32 seconds (i.e. the first 60 video samples, where each sample is 30 frames subsampled at 1fps) of the target subject. Evaluation is then performed using the remaining data from the target subject. We compare the following methods, all evaluated with leave-one-person-out cross-validation:
- Metabolic Equivalent (MET) from [Tao]. This offers a baseline of calorie estimation through a look-up table of actions and their duration. This has been used as a baseline on this dataset previously.
- Method from Tao et al. [Tao2018], which utilises IMU and depth information not used by our method.
- Pre-train - standard training process, trained on 9 subjects and tested on the target subject without fine-tuning.
- Pre-train/fine-tune - standard training process on 9 subjects and fine-tuned on the target subject.
- REPTILE - meta-learning from [Nichol] on 9 subjects and fine-tuned on the target subject.
- CA-REPTILE - our proposed context-agnostic meta-learning approach on 9 subjects and fine-tuned on the target subject.
Note that we chose to use [Nichol] as the baseline few-shot method because it is less computationally expensive (important when scaling up the few-shot problem to video) than [Finn2017, Antoniou2018], as discussed in Section 2.
Implementation Details. Images are resized to 224x224, and fed to a ResNet-18 architecture [He2016]. No previous works have addressed this individual-agnostic personalisation problem, although it has been shown that around 30s of information is required prior to each energy expenditure prediction [Tao], so we sample the data at 1fps and use the ResNet CNN's output from the penultimate layer as input to a Temporal Convolutional Network (TCN) [Bai2018] for temporal reasoning. Our model is trained end-to-end using Adam [Kingma2015] and contains 11.2M parameters. We use fixed settings for the task-specialisation inner loop (Eq. 1) and the context-adversarial component (Eq. 2) for all Calorie experiments; a lower value is required for the latter than for Omniglot, as context information is easier for the adversarial network to learn (i.e. people are easier for the adversarial network to distinguish than which alphabet a character is from). MSE is used as the regression loss function. Augmentation during training consists of random crops and random rotations up to 30°.
The same architecture is used for all baselines (except MET and [Tao2018]), making results directly comparable.

Method | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | Avg
---|---|---|---|---|---|---|---|---|---|---|---
MET Lookup [Tao] | - | - | - | - | - | - | - | - | - | - | 2.25 |
Tao et al. [Tao2018] | - | - | - | - | - | - | - | - | - | - | 1.69 |
Pre-train only | 1.21 | 0.89 | 0.88 | 1.86 | 1.24 | 2.46 | 7.50 | 0.89 | 1.25 | 3.11 | 2.13 |
Pre-train/fine-tune | 0.58 | 1.64 | 0.75 | 0.53 | 1.13 | 4.26 | 5.83 | 1.29 | 1.41 | 3.53 | 2.10 |
REPTILE [Nichol] | 0.48 | 1.65 | 0.52 | 0.90 | 2.12 | 3.28 | 6.48 | 1.26 | 0.83 | 2.58 | 2.01 |
CA-REPTILE | 0.39 | 1.11 | 0.46 | 0.48 | 0.87 | 2.68 | 3.75 | 1.07 | 0.87 | 2.32 | 1.40 |
Results. Table 3 compares the various methods. The context-agnostic meta-learning method obtains a 35% reduction in MSE over pre-training only, a 33% reduction over the pre-train/fine-tune model, and a 30% improvement over the non context-agnostic version. Fig. 6 shows qualitative silhouette sequences with calorimeter readings as ground truth, which are compared to predictions from our method and the baselines. The results demonstrate that the context-agnostic version tracks the ground-truth curve better than other methods, for participants with both low and high energy expenditure variability.
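For concreteness, below is a minimal sketch of the default pipeline described in the Implementation Details: per-frame ResNet-18 features (sampled at 1 fps) feed a temporal model for calorie regression, with the single-layer adversarial context (person) classifier attached to the frame-level features. The hidden size and the simplified dilated-convolution stand-in for the TCN [Bai2018] are assumptions, not the exact configuration used.

```python
import torch
import torch.nn as nn
import torchvision

class CaloriePredictor(nn.Module):
    """ResNet-18 frame features -> temporal convolutions -> per-sequence calorie estimate."""
    def __init__(self, feat_dim=512, hidden=128, num_people=9):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        self.frame_encoder = nn.Sequential(*list(backbone.children())[:-1])  # penultimate features
        # Simplified stand-in for the TCN: two dilated 1-D convolutions over time.
        self.temporal = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
        )
        self.regressor = nn.Linear(hidden, 1)                  # calorie value, trained with MSE
        self.context_head = nn.Linear(feat_dim, num_people)    # adversarial person classifier

    def forward(self, clip):
        # clip: (batch, time, 3, 224, 224), frames subsampled at 1 fps.
        b, t = clip.shape[:2]
        feats = self.frame_encoder(clip.flatten(0, 1)).flatten(1)        # (b*t, feat_dim)
        ctx_logits = self.context_head(feats)                            # frame-level context prediction
        temporal = self.temporal(feats.view(b, t, -1).transpose(1, 2))   # (b, hidden, t)
        calories = self.regressor(temporal.mean(dim=2)).squeeze(1)       # (b,)
        return calories, ctx_logits
```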
We investigate what effect the adversarial architecture choice and placement within the network have on energy estimation results by comparing the following:

- Adversarial TCN. The adversarial classifier is a TCN, which takes inputs from the penultimate layers of the ResNet-18 CNNs. This uses temporally aggregated information for context prediction. The adversarial TCN has the same architecture as the calorie prediction TCN, but instead predicts the context/individual.
- Task TCN Adversarial Classifier. This is a single layer that takes its input from the output of the task TCN.
- ResNet Adversarial Classifier. A single layer is used for the adversarial classifier, taking its input from the penultimate layers of the ResNet-18 CNNs. This focuses on making the frame-level features context-agnostic, without using temporal knowledge. This is the default architecture used in our previous results.
Table 4 shows these results, where the adversarial classifier connected to the final layer of the ResNet-18 CNN performs best. Context (i.e. person) classification on this dataset is relatively easy, as there are only 9 training subjects, and a less complicated adversarial architecture allows the gradient to flow better into the features. This also shows, as might be expected, that an individual's identity can be estimated from a single image, without the need for temporal information.
In summary, this section presented results on few-shot regression of calorie estimation from video. We demonstrated that context-agnostic meta-learning delivers a 33% reduction in MSE compared to standard pre-training/fine-tuning, and a 30% reduction compared to non-context-aware meta-learning. In both case studies (classification and regression), the improvement from utilising context-agnostic meta-learning is clearly evident.
Adversarial-Classifier Options | MSE |
---|---|
Adv. TCN | 1.51 |
TCN Adv. Classifier | 1.46 |
ResNet Adv. Classifier | 1.40 |
6 Conclusion
In this paper, we proposed context-agnostic meta-learning, which learns a network initialisation that can be fine-tuned quickly to new few-shot target problems. An adversarial context network acts on the initialisation in the meta-learning stage, along with task-specialised weights, to learn context-agnostic features capable of adapting to tasks which do not share context with the training set. This overcomes a significant drawback of current few-shot meta-learning approaches, which do not exploit context even when it is readily available.
The framework is evaluated on the Omniglot few-shot image classification dataset, where it demonstrates significant improvements when exploiting context information. We also evaluate on a few-shot regression problem, for calorie estimation from video, showing our proposed context-adversarial meta-learning delivers improvements of 30%. This shows the importance of incorporating context into few-shot methods, and we will pursue other few-shot problems and methods with context in mind as future work.
References
Appendix 0.A Appendix
In the main paper, we evaluate our Context-Agnostic Meta-Learning approach on two case studies. These are selected where context labels are readily available from dataset metadata: alphabet labels in the Omniglot dataset and participant ID in the Calorie dataset. Results show that learning a context-agnostic initialisation can improve few-shot performance when novel context is seen in the test set.
In some cases context labels for the training set are not available from metadata [Oreshkin2018, Dwivedi2019, triantafillou2019metadataset, Tseng2020]. The following supplementary case study gives an example of how useful context labels can be assigned when these are not available with the dataset. We demonstrate that our Context-Agnostic initialisation similarly improves performance for these artificial context labels. As an example, we consider Mini-ImageNet [Vinyals2016] due to its widespread use for few-shot classification.
Supplementary Case Study: Mini-ImageNet Classification
Problem Definition. We use the experimental setup introduced in [Vinyals2016], where the task is a 1- or 5-shot 5-way object recognition problem. The dataset in its existing form has two issues which prevent us from analysing the effect of our context-agnostic method: there are no context labels, and there is a large overlap between the splits (e.g. 3 breeds of dog in test, 12 in train). We address this by grouping each of the dataset's 100 classes into one of 12 superclasses and using these as context labels. The superclasses are manually assigned so that similar classes are grouped together. These superclasses are: clothes, humans, instruments, objects, buildings, food, vehicles, birds, mammals, fish, insects and dogs. We then ensure that the superclasses used for training and testing are distinct.
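A minimal sketch of how such artificial context labels can be attached is given below; the class-to-superclass entries shown are an illustrative fragment only, not the full manual assignment used in our experiments.

```python
SUPERCLASSES = ("clothes", "humans", "instruments", "objects", "buildings", "food",
                "vehicles", "birds", "mammals", "fish", "insects", "dogs")

# Illustrative fragment of the manual class -> superclass mapping (not the full assignment).
SUPERCLASS_OF = {
    "golden_retriever": "dogs",
    "school_bus": "vehicles",
    "electric_guitar": "instruments",
}

def context_label(class_name):
    """Map a Mini-ImageNet class to an integer context label via its superclass."""
    return SUPERCLASSES.index(SUPERCLASS_OF[class_name])
```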
Evaluation, Baselines and Implementation. Similar to Section 4 in the main paper (the Omniglot case study), we evaluate using MAML++ [Antoniou2018] and MAML [Finn2017]. Unmodified versions are used as baselines, and are compared against versions using our proposed context-agnostic (CA) component. Transduction is not used, and the metric is top-1 image classification accuracy.
The same base architecture, hyperparameters etc. as in [Antoniou2018] are used. The same context-adversarial architecture and label smoothing as in Section 4 are used. We use fixed values for $n$ (the task-specialisation inner loop count, Eq. 1 in the main paper) and $m$ (the context-adversarial inner loop count, Eq. 2 in the main paper). Results are given for the original Mini-ImageNet splits and our distinct train/test splits with novel context in testing. Scores are the average over 12-fold cross-validation, so each superclass takes a turn at being the test set (i.e. leave-one-out cross-validation on the superclasses).
Results. Table 5 shows the results on the original train/test split and the new splits with no shared context. The main result here is that, when there is no shared context between train and test data, our context-agnostic component improves over [Finn2017] and [Antoniou2018] by an average 3.3% on the most difficult 1S 5W task. An average 2.2% improvement is also seen on the easier 5S 5W task, whilst performance is maintained on the original split.
Note that few-shot classification on Mini-ImageNet is significantly harder when there is no shared context between training and testing data. For example, on 1S 5W, MAML++ performance drops from 52.0% to 40.1%. Across all tasks, the results for [Finn2017] and [Antoniou2018] are on average 8.7% worse on the train/test split with no shared context compared to the original splits with shared context.
Method | 1S 5W (original split) | 5S 5W (original split) | 1S 5W (distinct split) | 5S 5W (distinct split)
---|---|---|---|---
MAML++ [Antoniou2018] | 52.0 | 68.1 | 40.1 | 60.1
CA-MAML++ | 51.8 | 68.1 | 44.4 | 61.5
MAML [Finn2017] | 48.3 | 64.3 | 41.1 | 56.5
CA-MAML | 48.3 | 64.2 | 43.3 | 59.5
Conclusion. The results presented here on Mini-ImageNet show that train/test splits with little overlap in superclasses produce a harder test for few-shot learning. We demonstrate that introducing artificial context labels based on class similarity can be used alongside our context-agnostic meta-learning, producing better initialisations when there is no shared context between train and test splits. Performance is also maintained when there is shared context.
This is a similar conclusion to the two case studies in the main paper, where the same outcomes are demonstrated using context labels taken directly from meta-data.