Self-Supervision Meta-Learning for One-Shot Unsupervised Cross-Domain Detection

Deep detection models have proven extremely powerful in controlled settings, but appear brittle and fail when applied off-the-shelf on unseen domains. All the adaptive approaches developed to amend this issue access a sizable amount of target samples at training time, a strategy not suitable when the target is unknown and its data are not available in advance. Consider for instance the task of monitoring image feeds from social media: as every image is uploaded by a different user, it belongs to a different target domain that is impossible to foresee during training. Our work addresses this setting, presenting an object detection algorithm able to perform unsupervised adaptation across domains by using only one target sample, seen at test time. We introduce a multi-task architecture that one-shot adapts to any incoming sample by iteratively solving a self-supervised task on it. We further exploit meta-learning to simulate single-sample cross-domain learning episodes and better align to the test condition. Moreover, a cross-task pseudo-labeling procedure allows the model to focus on the image foreground and enhances the adaptation process. A thorough benchmark analysis against the most recent cross-domain detection methods and a detailed ablation study show the advantage of our approach.


1 Introduction

Despite impressive progress in object detection over the last years, it is still an open challenge to reliably localize and recognize objects across visual domains. Indeed, most of the existing detection models rely on deep representative features learned from large amounts of labeled training data which, besides being costly to annotate, are usually drawn from a specific distribution (source). Thus, when the learned models are applied to images sampled from a different (target) domain, they suffer a severe performance degradation. This hinders the deployment of detection models in real-world conditions, where often it is not possible to anticipate the application domain or access it in advance for data acquisition. Consider for example the social media feed scenario depicted in Figure 1, where there is an incoming stream of images from various social media and the detector is asked to look for instances of the class bicycle. The images come continuously, but they are produced by different users that share them on different social platforms. Hence, even though they might contain the same object, each of them has been acquired by a different person, in a different context, under different viewpoints and illuminations. In other words, each image comes from a different visual domain, distinct from the visual domain where the detector has been trained. This poses two key challenges to current cross-domain detectors: (1) to adapt to the target data, these algorithms need first to gather feeds, and only after enough target data has been collected can they learn to adapt and start performing on the incoming images; (2) even if the algorithms have learned to adapt on target images from the feed up to time t, there is no guarantee that the images arriving from time t+1 will come from the same target domain.

This is the scenario we address. We focus on cross-domain detection when only one target sample is available for adaptation, without any form of supervision. We propose an object detection method able to adapt from one target image, hence suitable for the social media scenario described above. Specifically, we build a multi-task deep architecture that adapts across domains by leveraging a self-supervised pretext task. After an initial pretraining phase in which it is trained together with the main supervised objective on the source data, the self-supervised module proceeds on the single target sample, finetuning the network and customizing the features for the final detection prediction. The auxiliary knowledge is further guided by a cross-task pseudo-labeling that injects the locality awareness specific to object detection into self-supervised learning. Moreover, we show how self-supervision can be even more effective when used as the inner base objective of a meta-learning algorithm whose outer goal is training a domain-robust detection model. By re-formulating the pretraining process as a bilevel optimization, we simulate several single-sample cross-domain learning episodes and better align to the final deployment condition, with a further advantage in learning speed and accuracy.

To summarize, this paper extends our previous work [oshot_eccv20] and presents the following contributions. (1) We introduce the One-Shot Unsupervised Cross-Domain Detection setting, a cross-domain detection scenario where the target domain changes from sample to sample, hence adaptation can be learned only from one image. This scenario is especially relevant for monitoring social media image feeds. (2) We propose OSHOT, the first cross-domain object detector able to perform One-SHOT unsupervised adaptation. Our approach leverages self-supervised one-shot learning guided by a cross-task pseudo-labeling procedure, embedded into a multi-task architecture. (3) We present a novel meta-learning formulation to combine the main supervised detection task with the self-supervised auxiliary objective and effectively push the model to produce good results after a few adaptation iterations. We indicate it as FULL-OSHOT: we discuss its effectiveness with thorough ablation experiments to assess the role of all its inner components and provide an extensive error analysis. (4) We present a tailored experimental setup for studying one-shot unsupervised cross-domain detection, designed on three existing databases plus a new test set collected from social media feeds. We compare against recent adaptive detection algorithms [Saito_2019_CVPR, diversify&match_Kim_2019_CVPR, xuCVPR2020] and one-shot style-transfer based unsupervised learning [Cohen_2019_ICCV], achieving the state of the art. (5) We further evaluate our multi-task and meta-learning based methods for cross-domain multi-target object classification, confirming the broad applicability and effectiveness of the proposed approach.

Fig. 1: Each social media image comes from a different domain. Existing Cross-Domain Detection algorithms (e.g. [diversify&match_Kim_2019_CVPR] in the left gray box) struggle to adapt in this setting. OSHOT (right) is able to adapt across domains from one single target image, thanks to the combined use of self-supervision, meta-learning and pseudo-labeling.

2 Related Work

Object Detection

Many successful object detection approaches have been developed over the past years, from the original sliding-window methods based on handcrafted features to the most recent deep-learning empowered solutions. Modern detectors can be divided into one-stage and two-stage techniques. In the former, classification and bounding box prediction are performed on the convolutional feature map, either solving a regression problem on grid cells [redmon2016you] or exploiting anchor boxes at different scales and aspect ratios [liu2016ssd]. In the latter, an initial stage deals with the region proposal process and is followed by a refinement stage that adjusts the coarse region localization and classifies the box content. Existing variants of this strategy differ mainly in the region proposal algorithm [girshick2014rich, girshick2015fast, ren2015faster]. Regardless of the specific implementation, the detector robustness across visual domains remains a major issue.

Cross-Domain Detection When training and test data are drawn from two different distributions, a model learned on the first is doomed to fail on the second. Unsupervised domain adaptation methods attempt to close the domain gap between the annotated source, on which learning is performed, and the target samples on which the model is deployed. Most of the literature has focused on object classification, with solutions based on feature alignment [Long:2015, LongZ0J17, dcoral, hdivergence] or adversarial approaches [Ganin:DANN:JMLR16, Hoffman:Adda:CVPR17]. GAN-based methods make it possible to directly update the visual style of the annotated source data and reduce the domain shift at pixel level [russo17sbadagan, cycada]. Only in the last three years have adaptive detection methods been developed, considering three main components: (i) including multiple and increasingly more accurate feature alignment modules at different internal stages, (ii) adding a preliminary pixel-level adaptation, and (iii) pseudo-labeling. The last one is also known as self-training and consists of using the output of the source detector as coarse annotation on the target.

The importance of considering both global and local domain adaptation, together with a consistency regularizer to bridge the two, was first highlighted in [Chen_2018_CVPR]. The Strong-Weak (SW) method of [Saito_2019_CVPR] improves over it by pointing out the need for a better-balanced alignment, with strong global and weak local adaptation. It was further extended by [Xie_2019_ICCV_Workshops], where the adaptive steps are replicated at different depths in the network. The most recent SW-ICR-CCR method [xuCVPR2020] further includes an image-level multi-label classifier and a module imposing consistency between image-level and instance-level predictions. The first allows the model to obtain crucial regions corresponding to categorical information, while the second works as a regularization factor that helps to point out hard-to-align instances in the target domain.

By generating new source images that look like those of the target, the Domain-Transfer (DT, [inoue2018cross]) method was the first to adopt pixel-level adaptation for object detection and combine it with pseudo-labeling. More recently, the Div-Match approach [diversify&match_Kim_2019_CVPR] re-elaborated the idea of domain randomization [Tobin2017DomainRF]: multiple CycleGAN [CycleGAN2017] applications with different constraints produce three extra source variants with which the target can be aligned to different extents through an adversarial multi-domain discriminator. A weak self-training procedure (WST) to reduce false negatives is combined with adversarial background score regularization (BSR) in [kim2019selftraining]. Finally, [robust_Khodabandeh_2019_ICCV] adopted pseudo-labeling together with an approach to deal with noisy annotations.

Adaptive Learning on a Budget There is a wide literature on learning from a limited amount of data, both for classification and detection. However, in the case of domain shift, learning on a target budget becomes extremely challenging. Indeed, the standard assumption for adaptive learning is that a large amount of unsupervised target samples is available at training time, so that a source model can capture the target domain style from them and adapt to it.

Only a few attempts have been made to reduce the target cardinality. In [fewshotNIPS17] the considered setting is that of few-shot supervised domain adaptation: only a few target samples are available, but they are fully labeled. In [oneshotNIPS2018, Cohen_2019_ICCV] the focus is on one-shot unsupervised style transfer with a large source dataset and a single unsupervised target image. These works propose time-costly autoencoder-based methods to generate a version of the target image that maintains its content but visually resembles the source in its global appearance. Thus the goal is image generation with no discriminative purpose. A related setting is that of online domain adaptation, where unsupervised target samples are initially scarce but accumulate in time [Hoffman_CVPR2014, Wulfmeier2017IncrementalAD, mancini2018kitting]. In this case target samples belong to a continuous data stream with smooth domain changes, so the coherence among subsequent samples can be exploited for adaptation.

Self-Supervised Learning Despite not being manually annotated, unlabeled data is rich in structural information that can be learned by self-supervision, i.e. hiding a sub-part of the data information and then trying to recover it. This procedure is generally indicated as a pretext task; possible examples are image completion [pathakCVPR16context], colorization [zhang2016colorful, larsson2017colorization], relative position of patches [doersch2015unsupervised, noroozi2016unsupervised], rotation recognition [gidaris2018unsupervised] and many more. Self-supervised learning has been extensively used as an initialization step for scarcely annotated supervised learning settings, and very recently [asano20a-critical] has shown with a thorough analysis the potential of self-supervised learning from a single image. Several works have also indicated that self-supervision supports adaptation and generalization when combined with supervised learning in a multi-task framework [jigen, Bucci2019TacklingPD, Xu2019SelfsupervisedDA].

Meta-Learning Standard learning is based on algorithms able to improve their performance over multiple data instances. Meta-learning extends this notion and refers to the process of improving an algorithm over multiple learning episodes. In practical terms, a base learning model is trained to solve a task such as classification or detection on a dataset, while the meta-learning loop updates the base algorithm considering multiple tasks of the same family to accomplish a higher-level objective such as generalization or faster learning. In the last years, meta-learning has been widely used for few-shot learning, with scarce-data tasks simulated by randomly drawing samples from the full training set [maml-finn17a, NIPS2017_fewshotproto, NIPS2016_matching, rusu2018metalearning]. A similar strategy has been adopted to create single-source tasks involving samples from a multi-source training set and prepare for generalization. Indeed, by using a validation domain that is shifted from the training domain, different kinds of meta-knowledge such as losses [featurecritic], regularization functions [metareg2018] and data augmentation [Tseng2020Cross-Domain] can be (meta) learned to maximize the robustness of the learned model.

Our approach for cross-domain detection relates to the scenario of learning on a budget and connects to the few-shot meta-learning literature. Specifically, we propose to combine the main supervised detection model with a self-supervised auxiliary objective to perform one-shot unsupervised adaptation. To better align the self-supervised training phase to the single-sample test condition, we leverage meta-learning by simulating multiple unsupervised single-sample cross-domain learning episodes. We are not aware of previous attempts to apply meta-learning on self-supervision while pushing it to the extreme one-shot unsupervised condition. The designed method comes with an extra advantage: it is source-free, meaning that the test-time adaptation happens without accessing the source data.

3 Method

We introduce the one-shot unsupervised cross-domain detection scenario where our goal is to predict on a single image x^t, with t being any target domain not available at training time, starting from N^s annotated samples of the source domain S = {x^s_i, y^s_i}, i = 1, ..., N^s. Here the structured labels y^s = (c, b) describe class identity c and bounding box location b in each image x^s, and we aim to obtain y^t that precisely detects objects in x^t despite the domain shift.

3.1 Strategy

To pursue the described goal, our strategy is to train the parameters of a detection learning model such that it can reach maximal performance on a single unsupervised sample from a new domain after few gradient update steps on it. Since we have no ground truth on the target sample, we implement this strategy by learning a representation that exploits inherent data information, as that captured by a self-supervised task, and then finetuning it on the target sample. Thus, we design our method to include (1) an initial pretraining phase where we extend a standard deep detection model adding an image rotation classifier, and (2) a following adaptation stage where the network features are updated on the single target sample by further optimization of the rotation objective. Source data is not needed during the adaptation phase. Moreover, we exploit pseudo-labeling in a novel cross-task fashion so that the auxiliary task is guided to focus on the object area. A schematic overview of our approach is presented in Figure 2.

Fig. 2:

Visualization of the pretraining (first two boxes from the left) and adaptive phase (last box on the right) of our approach with cross-task pseudo-labeling. During test-time adaptation, the target image passes through the network and produces detections. While the class information is not used, the identified boxes are exploited to select object regions from the feature maps of the rotated image. The obtained region-specific feature vectors are finally sent to the rotation classifier. A number of subsequent finetuning iterations allows the model to adapt the convolutional backbone to the domain represented by the test image.

3.2 Preliminaries

We leverage Faster R-CNN [ren2015faster] as our base detection model. It is a two-stage detector with three main components: an initial block of convolutional layers, a region proposal network (RPN) and a region-of-interest (ROI) based classifier. The bottom layers transform any input image x into its convolutional feature map G_f(x|θ_f), where θ_f parametrizes the feature extraction model. The feature map is then used by the RPN to generate candidate object proposals. Finally, the ROI-wise classifier predicts the category label from the feature vector obtained via ROI-pooling. The training objective combines the losses of both RPN and ROI, each of them composed of two terms:

L_d(G_d(G_f(x|θ_f)|θ_d), y) = L_class(c*) + L_regr(b) + L_class(c) + L_regr(b)    (1)

Here L_class is a classification loss that evaluates the object recognition accuracy, while L_regr is a regression loss on the box coordinates for better localization. To maintain a simple notation we summarize the role of ROI and RPN with the function G_d parametrized by θ_d. Moreover, we use c* to highlight that the RPN deals with a binary classification task to separate foreground and background objects, while the ROI deals with the multi-class objective c needed to discriminate among foreground object categories. As mentioned above, ROI and RPN are applied in sequence: they both elaborate on the feature maps produced by the convolutional block, and then influence each other in the final optimization of the multi-task (classification, regression) objective function.
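As a concrete reference, the following minimal sketch shows how the four loss terms of Eq. (1) appear in practice when using torchvision's Faster R-CNN implementation. This is only an illustrative stand-in: our actual code builds on the maskrcnn_benchmark implementation described in Section 3.6, and the FPN-based torchvision model, the class count and the dummy annotation below are assumptions made for the example only.

```python
# Illustrative only: torchvision's Faster R-CNN (FPN variant), not the
# maskrcnn_benchmark model actually used in this work.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=21)          # 20 VOC classes + background
model.train()

images = [torch.rand(3, 600, 800)]         # one source image
targets = [{"boxes": torch.tensor([[30.0, 40.0, 200.0, 300.0]]),
            "labels": torch.tensor([3])}]  # structured label y = (c, b)

# In training mode the model returns the four terms of Eq. (1):
# RPN objectness / box regression and ROI classification / box regression.
loss_dict = model(images, targets)
detection_loss = sum(loss_dict.values())   # L_d
detection_loss.backward()
print({k: float(v) for k, v in loss_dict.items()})
```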

3.3 Pretraining via Multi-task and Meta-Learning

As a first step, we extend Faster R-CNN to include image rotation recognition. Formally, to each training image x we apply four geometric transformations R_q(x), where q = 1, ..., 4 indicates the orientation as a multiple of 90 degrees. In this way we obtain a new set of samples {R_q(x_i), q_i}, where we dropped the s superscript without loss of generality. We indicate the auxiliary rotation classifier and its parameters respectively with G_r and θ_r. Depending on the training procedure, we can get different variants of the model.

Multi-task The supervised and the self-supervised tasks can be jointly trained on the whole source data in a standard multi-task fashion. The overall objective of the designed model is:

min_{θ_f, θ_d, θ_r}  Σ_i [ L_d(G_d(G_f(x_i|θ_f)|θ_d), y_i) + λ L_r(G_r(G_f(R_q(x_i)|θ_f)|θ_r), q_i) ]    (2)

where L_r is the cross-entropy loss and λ weights the auxiliary objective. In this way, the shared feature map is learned under the synchronous guidance of both the detection and rotation objectives. More specifically, the obtained representation will be covariant with the object location and appearance as well as with the image or object orientation. Indeed, we can design G_r in two different ways: it can either be a Fully Connected (FC) layer that naïvely takes as input the feature map produced by the whole (rotated) image, G_r(G_f(R_q(x))), or it can exploit the ground truth location of each object with a subselection of the features only from its bounding box in the original map, G_r(boxcrop(G_f(R_q(x)))). The boxcrop operation includes pooling to rescale the feature dimension before entering the final FC layer. In this last case the network is encouraged to focus only on the object orientation without introducing noisy information from the background, and it provides better results with respect to the whole-image option, as we will discuss in Section 4.6. In practical terms, both in the case of image and box rotations, we randomly pick one rotation angle per instance, rather than considering all four of them.
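To make the boxcrop variant of the auxiliary branch more concrete, here is a hedged sketch of a possible rotation head: object regions are pooled from the feature map of the rotated image and classified into the four orientations. All module and argument names (RotationHead, in_channels, pool_size) are our own illustrative choices, not the exact layers of the released implementation.

```python
# Sketch of the auxiliary rotation classifier with the boxcrop option.
# Boxes are assumed to be already expressed in the rotated image's coordinates
# (see the pseudoboxcrop sketch in Section 3.4 for the coordinate recalibration).
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class RotationHead(nn.Module):
    def __init__(self, in_channels=1024, pool_size=7):
        super().__init__()
        self.pool_size = pool_size
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_channels * pool_size * pool_size, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 4),              # orientations: 0, 90, 180, 270 degrees
        )

    def forward(self, feat_map, boxes, spatial_scale):
        # feat_map: (1, C, H', W') features of the rotated image
        # boxes: list with one (N, 4) tensor of (x1, y1, x2, y2) image coordinates
        regions = roi_align(feat_map, boxes, output_size=self.pool_size,
                            spatial_scale=spatial_scale, aligned=True)
        return self.classifier(regions)     # (N, 4) rotation logits
```

During pretraining the boxes would come from the ground-truth annotations, while at adaptation time they are the pseudo-boxes described in Section 3.4.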

Meta-Learning Multi-task learning is appealing for deep learning regularization, and including a self-supervised task has the advantage of waiving any extra data annotation cost. Still, our main interest remains detection, while rotation recognition should be considered as a secondary and auxiliary task. To manage this role for rotation, and to better fit the unlabeled one-shot scenario on a new domain that we will face at test time, we re-formulate the problem inspired by meta-learning and building over the bilevel optimization process of MAML [maml-finn17a]. Specifically, we propose to meta-train the detection model with the rotation task as its inner base learner. The optimization objective can be written as

min_{θ_f, θ_d}  Σ_i Σ_j  L_d(G_d(G_f(A_j(x_i)|φ_f^{i,j})|θ_d), y_i),   with  (φ_f^{i,j}, φ_r^{i,j}) = Ψ^K(L_r, A_j(x_i); θ_f, θ_r)    (3)

In words, we start by focusing on the rotation recognition task for each source sample that has been augmented in different ways. We consider semantic-preserving augmentations A_j (e.g. gray-scale, color jittering) and perform multiple learning iterations (K gradient-based update steps). The function Ψ^K collects the optimal parameters (φ_f, φ_r) of the feature extractor and of the rotation module obtained by this procedure on A_j(x_i). The outer meta-learning loop leverages them to optimize the detection model over all the data variants and prepares for generalization and fine-tuning on a single sample.

Also in this case we have two possible choices to design G_r: either considering the whole feature map, or focusing on the object location. To simulate the deployment setting we neglect the ground truth object location for the inner rotation objective and substitute the boxcrop operation with the pseudoboxcrop obtained through the cross-task self-training procedure detailed in the following section. We report a pseudo-code implementation of our meta-learning strategy applied on a single sample in Algorithm 1.

3.4 Cross-task self-training

Self-training is a well-known paradigm in semi-supervised learning that allows weak prediction models to annotate unlabeled data, which are then integrated in the learning procedure with the obtained pseudo-labels. This approach has often been used also for domain adaptation, both in classification and detection models [kim2019selftraining, inoue2018cross]. We propose here a cross-task variant: instead of reusing the pseudo-labels produced by the source model on the target to update the detector, we exploit them for the self-supervised rotation classifier. In this way we keep the advantage of the self-training initialization, while largely reducing the risk of error propagation due to wrong class pseudo-labels.

We start from the model parameters (θ_f, θ_d, θ_r) of the pretraining stage and we get the feature maps from all the rotated versions of the sample, G_f(R_q(x)|θ_f), q = 1, ..., 4. Only the feature map produced by the original image (q = 4) is provided as input to the RPN and ROI network components to get the predicted detection. This pseudo-label is composed of the class label and the bounding box location. We discard the first and consider only the second to localize the region containing an object in all the four feature maps, also recalibrating the position to compensate for the orientation of each map. Once passed through this pseudoboxcrop operation, the obtained features are used both for the meta-learning phase on each source sample and for the adaptation fine-tuning on every target sample.
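A minimal sketch of how the pseudoboxcrop step could be implemented is given below, assuming 90-degree rotations performed with torch.rot90 and box coordinates in (x1, y1, x2, y2) format; the function and argument names are ours, and the spatial scale of the feature map is left as an input.

```python
# Sketch of pseudoboxcrop: pool object regions from the feature map of a rotated
# image using boxes predicted on the original (non-rotated) image.
import torch
from torchvision.ops import roi_align

def rotate_boxes(boxes, k, width, height):
    """Rotate (x1, y1, x2, y2) boxes by k * 90 degrees counter-clockwise,
    consistently with torch.rot90 applied to the (width x height) image."""
    boxes = boxes.clone()
    for _ in range(k % 4):
        x1, y1, x2, y2 = boxes.unbind(dim=1)
        boxes = torch.stack([y1, width - x2, y2, width - x1], dim=1)
        width, height = height, width      # image size swaps at each 90-degree step
    return boxes

def pseudoboxcrop(rotated_feats, boxes, k, image_size, spatial_scale, pool_size=7):
    """rotated_feats: (1, C, H', W') backbone features of the k*90-degree rotated image.
    boxes: (N, 4) pseudo-boxes detected on the original image.
    image_size: (width, height) of the original image."""
    width, height = image_size
    recalibrated = rotate_boxes(boxes, k, width, height)
    return roi_align(rotated_feats, [recalibrated], output_size=pool_size,
                     spatial_scale=spatial_scale, aligned=True)
```

The pooled regions are then fed to the rotation classifier, exactly as in the boxcrop case of Section 3.3, so that the self-supervised gradient only touches object areas.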

3.5 Adaptation

Given the single target image x^t, we finetune the backbone's parameters θ_f by iteratively solving a self-supervised task on it. This allows us to adapt the original feature representation both to the content and to the style of the new sample. Specifically, we start from the rotated versions R_q(x^t) of the provided sample and optimize the rotation classifier through

min_{θ_f, θ_r}  L_r(G_r(G_f(R_q(x^t)|θ_f)|θ_r), q)    (4)

This process involves only θ_f and θ_r, while the RPN and ROI detection components described by θ_d remain unchanged. In the following we use γ to indicate the number of gradient steps (i.e. iterations), with γ = 0 corresponding to the pretraining phase. At the end of the finetuning process, the inner feature model is parametrized by θ_f* and the detection prediction on x^t is obtained by y^t = G_d(G_f(x^t|θ_f*)|θ_d). Algorithm 2 outlines the adaptation process on a single target sample.

Input: G_f, G_d, G_r, parameters θ_f, θ_d, θ_r, rotator R, augmenter A
Data: Source image x^s with label y^s
while still augmentations do
      x_a = A_j(x^s)
      φ_f ← θ_f, φ_r ← θ_r   // copy params
      while still iterations do
            sample q, x_r = R_q(x_a)   // rand. rotation
            v = pseudoboxcrop(G_f(x_r|φ_f))   // pseudoboxcrop
            minimize the self-supervised loss L_r(G_r(v|φ_r), q)
            update φ_f, φ_r
      end while
      compute the supervised loss L_d(G_d(G_f(x_a|φ_f)|θ_d), y^s)
end while
minimize the accumulated supervised loss w.r.t. θ_f, θ_d
Algorithm 1 Meta-Learning on one source sample
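As a companion to Algorithm 1, the snippet below sketches one meta-training step in PyTorch using a first-order approximation of the bilevel problem (FOMAML-style), which avoids backpropagating through the inner loop and only adapts the backbone copy; the callables rotation_regions_fn (our pseudoboxcrop plus rotation head) and detection_loss_fn are placeholders for the detector-specific parts and are not part of the original code.

```python
# First-order sketch of one meta-training step (Eq. 3 / Algorithm 1).
# `rotation_regions_fn(backbone, image, q)` must return rotation logits for the
# pseudo-box regions of the q*90-degree rotated image; `detection_loss_fn(backbone,
# image, target)` must return the supervised loss L_d. Both are placeholders.
import copy
import torch
import torch.nn.functional as F

def meta_step(backbone, rotation_regions_fn, detection_loss_fn,
              image, target, augmentations, inner_steps=5, inner_lr=1e-3):
    for aug in augmentations:                             # semantic-preserving A_j
        x = aug(image)
        fast_backbone = copy.deepcopy(backbone)           # inner copy of theta_f
        inner_opt = torch.optim.SGD(fast_backbone.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                      # K rotation updates
            q = int(torch.randint(0, 4, (1,)))
            logits = rotation_regions_fn(fast_backbone, x, q)
            target_q = torch.full((logits.size(0),), q, dtype=torch.long)
            loss_r = F.cross_entropy(logits, target_q)
            inner_opt.zero_grad(); loss_r.backward(); inner_opt.step()
        # outer objective: detection loss computed with the adapted features;
        # gradients w.r.t. the detection head (theta_d) accumulate directly.
        outer_loss = detection_loss_fn(fast_backbone, x, target)
        outer_loss.backward()
        # first-order approximation: copy the gradients computed on the adapted
        # backbone back onto the original parameters (theta_f)
        for p, fp in zip(backbone.parameters(), fast_backbone.parameters()):
            if fp.grad is not None:
                p.grad = fp.grad.clone() if p.grad is None else p.grad + fp.grad
    # the caller then performs the outer optimizer step on (theta_f, theta_d)
```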
Input: G_f, G_d, G_r, parameters θ_f, θ_d, θ_r from the pretraining phase, rotator R
Data: Target image x^t
φ_f ← θ_f, φ_r ← θ_r   // copy params
while still iterations do
      sample q, x_r = R_q(x^t)   // rand. rotation
      v = pseudoboxcrop(G_f(x_r|φ_f))   // pseudoboxcrop
      minimize the self-supervised loss L_r(G_r(v|φ_r), q)
      update φ_f, φ_r
end while
final detection prediction y^t = G_d(G_f(x^t|φ_f)|θ_d) using the updated parameters
Algorithm 2 Adaptive phase on one target sample
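Similarly, a minimal PyTorch rendering of Algorithm 2 could look as follows; detect_fn (running RPN/ROI on the current features) and rotation_regions_fn (pseudoboxcrop plus rotation head) are again placeholder callables, and the learning rate is illustrative.

```python
# Sketch of the adaptive phase (Algorithm 2): gamma self-supervised updates on
# the single target image, then detection with the adapted backbone.
import torch
import torch.nn.functional as F

def adapt_and_detect(backbone, rot_head, detect_fn, rotation_regions_fn,
                     target_image, gamma=5, lr=1e-3):
    params = list(backbone.parameters()) + list(rot_head.parameters())
    optimizer = torch.optim.SGD(params, lr=lr, momentum=0.9)
    for _ in range(gamma):
        q = int(torch.randint(0, 4, (1,)))                # random rotation label
        logits = rotation_regions_fn(backbone, rot_head, target_image, q)
        target_q = torch.full((logits.size(0),), q, dtype=torch.long)
        loss = F.cross_entropy(logits, target_q)          # L_r on pseudo-boxes
        optimizer.zero_grad(); loss.backward(); optimizer.step()
    with torch.no_grad():
        return detect_fn(backbone, target_image)          # final prediction y^t
```

Note that the RPN and ROI parameters θ_d are deliberately left out of the optimizer, matching Eq. (4).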

3.6 Model Variants and Implementation Details

We built our model over a public implementation of Faster R-CNN [massa2018mrcnn]. Specifically, we chose a ResNet-50 [he2016deep] backbone pre-trained on ImageNet, an RPN with 300 top proposals after non-maximum suppression, and anchors at three scales (128, 256, 512) and three aspect ratios (1:1, 1:2, 2:1).

In the following, we will differentiate the name of the proposed model depending on the specific training procedure adopted. We will indicate with OSHOT our basic One-SHOT multi-task pretrained approach, while we will use FULL-OSHOT to refer to the variant based on meta-learning pretraining. We also consider two intermediate cases: Tran-OSHOT, which extends OSHOT with the semantic-preserving data transformations used in FULL-OSHOT, and Meta-OSHOT, which corresponds to FULL-OSHOT without transformations.

For OSHOT we train the base network for 70k iterations using SGD with momentum 0.9; the initial learning rate decays after 50k iterations. We use a batch size of 1, keep batch normalization layers fixed for both the pretraining and adaptation phases, and freeze the first two blocks of ResNet-50. The weight λ of the rotation task is kept small during pretraining. FULL-OSHOT is trained in two steps. For the first 60k iterations the training is identical to that of OSHOT, while in the last 10k iterations the meta-learning procedure is activated. The inner loop optimization on the self-supervised task runs for γ = 5 iterations, and the batch size is 2 to accommodate two transformations of the original image. Specifically, we used gray-scale and color-jitter transformations (the latter acting on brightness, contrast, saturation and hue). All the other hyperparameters remain unchanged as in OSHOT.

Tran-OSHOT differs from OSHOT only in the last 10k learning iterations, where the batch size is 2 and the network sees multiple images with different visual appearance in one iteration. Meta-OSHOT is instead identical to FULL-OSHOT, except that the transformations are dropped, thus the batch size is 1 also in the last 10k pretraining iterations.

The adaptation phase is the same for all the variants: the model obtained from the pretraining phase is updated via fine-tuning of the self-supervised task. The batch size is equal to 1 and a dropout with probability 0.5 is added before the rotation classifier to prevent overfitting. The weight of the auxiliary task is increased to λ = 0.2 to speed up the adaptation process. All the other hyperparameters and settings are the same used during pretraining. The number of fine-tuning steps γ is set to match the number of iterations in meta-training, but we also investigated the effect of increasing γ for OSHOT (see Section 4.5).

4 Experiments

In this section we present an extensive experimental analysis on the proposed one-shot unsupervised cross-domain detection scenario. In particular, we show the limits of the existing adaptive detection methods and discuss how our proposed approach overcomes them.

4.1 Datasets

We consider a variety of existing datasets, and we also include a new one that we created to assess the performance of our method on the challenging social media feed setting.

Visual Object Classes (VOC) Pascal-VOC [everingham2010pascal] is a standard real-world image dataset for object detection benchmarks. Both VOC2007 and VOC2012 contain bounding box annotations of 20 common categories. VOC2007 has 5011 images in the train-val split and 4952 images in the test split, while VOC2012 contains 11540 images in the train-val split.

Artistic Media Datasets (AMD) is composed of Clipart1k, Comic2k and Watercolor2k [inoue2018cross], three object detection datasets designed for benchmarking domain adaptation methods when the source domain is VOC. Clipart1k shares its 20 categories with VOC and has 500 images in the training set and 500 in the test set. Comic2k and Watercolor2k both have the same 6 classes (a subset of the 20 VOC classes) and 1000 images each in their training and test splits.

Cityscapes [cordts2016cityscapes] is an urban street scene dataset with pixel level annotations of 8 categories. It has 2975 and 500 images respectively in the training and validation splits. We use the instance level pixel annotations to generate bounding boxes of objects, as in [Chen_2018_CVPR].

Foggy Cityscapes [sakaridis2018semantic] is obtained by adding different levels of synthetic fog to Cityscapes images. We only consider images with the highest amount of artificial fog, thus training-validation splits have 2975-500 images respectively.

KITTI [kitti] is a dataset of images depicting several driving urban scenarios. By following [Chen_2018_CVPR], we use the full 7481 images for both training (when used as source) and evaluation (when used as target).

Social Bikes is our new dataset containing 530 images of scenes with persons/bicycles collected from Twitter, Instagram and Facebook by searching for #bike tags. It was designed as a possible target when the source domain is VOC; indeed the two classes person and bicycle are shared between them. Square crops of a subset of the dataset are presented in Figure 3: the images acquired randomly from social feeds show diverse style properties and cannot be grouped under a single shared domain.

Fig. 3: A subset of the Social Bikes dataset. A random data acquisition from multiple users/feeds leads to a target distribution with a large variety of visual styles.

4.2 Experimental Setup and Competitors

To run all the experiments we resized the images' shorter side to 600 pixels and applied random horizontal flipping during pretraining. The detection performance is assessed considering the IoU threshold at 0.5 for the mAP results. In the following we will use an arrow (source → target) to indicate the experimental setting, and we report the average of three independent runs. We also perform detection error analysis with TIDE [tide-eccv2020], a toolbox that allows us to estimate how much each type of detection error contributes to the missing mAP. In particular, TIDE not only computes false positives and false negatives, but it also classifies all the errors into six categories by computing the maximum IoU between each detection and a ground truth bounding box. Cls means the object is localized correctly (IoU ≥ 0.5) but classified incorrectly, Loc means the object is classified correctly but localized incorrectly (0.1 ≤ IoU < 0.5), Both is used when the two situations occur simultaneously, Dupe means that the detection is correct but the same ground truth bounding box has already been associated with another higher-scoring detection, Bkg means background detected as foreground (IoU < 0.1), and Miss covers all the undetected ground truth boxes not already accounted for by other error types.
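For reference, error reports of this kind can be produced with the authors' tidecv package; the call pattern below follows the package README as we recall it, and the result-file path (and the COCO-style format it implies) is a placeholder, since our VOC-style annotations would first need to be converted.

```python
# Sketch of a TIDE error analysis run with the tidecv package (path is a dummy).
from tidecv import TIDE, datasets

tide = TIDE()
# Ground truth (COCO-format annotations) and the detector's output file.
tide.evaluate(datasets.COCO(), datasets.COCOResult('path/to/results/file'),
              mode=TIDE.BOX)
tide.summarize()   # prints the Cls / Loc / Both / Dupe / Bkg / Miss breakdown
tide.plot()        # saves the summary plots
```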

We consider a plain detection model and several adaptive approaches as benchmark. Baseline is our Faster R-CNN baseline with ResNet-50 backbone, trained on the source domain only and deployed on the target without further adaptation. Tran-Baseline is a variant of the baseline obtained by applying in the last 10k training iterations the same semantic-preserving data transformations introduced for FULL-OSHOT. This allows us to assess how much of the improvement is due to the higher data variability rather than to the training strategy. DivMatch [diversify&match_Kim_2019_CVPR] is a cross-domain detection algorithm that, by exploiting target data, creates multiple randomized domains via CycleGAN and aligns their representations using an adversarial loss. SW [Saito_2019_CVPR] aligns source and target features based on global context similarity. SW-ICR-CCR [xuCVPR2020] adds two regularization modules on top of SW: they push the model to focus less on the non-transferable source background and give more weight to hard-to-align instances. In all cases we use a ResNet-50 backbone pretrained on ImageNet for a fair comparison. We remark that the cross-domain algorithms need target data in advance and are not designed to work in our one-shot unsupervised setting, thus we provide them with the advantage of 10 target images accessible during training and randomly selected at each run. We collect average precision statistics during inference under the favorable assumption that the target domain will not shift after deployment.

One-Shot Target
Method person bicycle mAP
Baseline 69.0 74.1 71.6
Tran-Baseline 71.4 74.2 72.8
OSHOT 68.9 74.6 71.8
Tran-OSHOT 71.6 74.0 72.8
Meta-OSHOT 69.5 73.5 71.5
FULL-OSHOT 71.7 74.3 73.0
OSHOT 72.1 74.9 73.5
Tran-OSHOT 73.0 74.7 73.9
Meta-OSHOT 72.6 74.5 73.6
FULL-OSHOT 73.3 75.1 74.2
Ten-Shot Target
DivMatch [diversify&match_Kim_2019_CVPR] 69.5 73.1 71.3
SW [Saito_2019_CVPR] 69.4 73.0 71.2
SW-ICR-CCR [xuCVPR2020] 72.5 77.6 75.1
Whole Target
DivMatch [diversify&match_Kim_2019_CVPR] 73.6 77.1 75.4
SW [Saito_2019_CVPR] 68.6 70.3 69.5
SW-ICR-CCR [xuCVPR2020] 72.0 72.8 72.4
[TIDE error histograms for Baseline, OSHOT (γ = 0), OSHOT (γ = 5), Tran-OSHOT, Meta-OSHOT and FULL-OSHOT]
TABLE I: Results for VOC → Social Bikes. The first block of OSHOT rows reports the pretrained models before adaptation (γ = 0), the second block the same models after adaptation on the single test sample (γ = 5). The histograms illustrate the detection error analysis performed with TIDE [tide-eccv2020].
One-Shot Target
Method aero bike bird boat bottle bus car cat chair cow table dog horse mbike person plant sheep sofa train tv mAP
Baseline 18.5 43.3 20.4 13.3 21.0 47.8 29.0 16.9 28.8 12.5 19.5 17.1 23.8 40.6 34.9 34.7 9.1 18.3 40.2 38.0 26.4
Tran-Baseline 20.1 52.8 21.8 17.1 34.9 45.5 30.5 9.1 33.8 13.6 24.9 9.9 33.5 47.0 41.6 32.7 4.6 17.0 28.9 32.3 27.6
OSHOT 22.5 55.3 22.7 21.4 26.8 53.3 28.9 4.6 31.4 9.2 27.8 9.6 30.9 47.0 38.2 35.2 11.1 20.4 36.0 33.6 28.8
Tran-OSHOT 22.4 49.3 23.8 18.0 31.4 63.0 28.7 6.7 32.8 13.5 26.9 9.1 29.6 51.4 40.0 34.0 12.5 12.8 34.2 32.5 28.6
Meta-OSHOT 24.9 48.1 23.6 21.6 30.7 55.5 30.4 6.1 34.5 14.1 27.3 9.1 30.0 50.8 39.3 37.7 7.9 20.6 38.1 36.8 29.4
FULL-OSHOT 21.3 47.4 24.5 21.0 32.2 52.1 30.5 6.1 36.1 15.0 26.1 4.1 31.8 53.3 43.0 36.6 6.7 15.5 33.5 35.8 28.6
OSHOT 27.3 61.6 23.8 21.1 31.3 55.1 31.6 5.3 34.0 10.9 28.8 7.3 33.1 59.9 44.2 38.8 15.9 19.1 39.5 33.9 30.8
Tran-OSHOT 22.8 48.4 26.1 17.6 33.4 55.6 30.8 4.3 39.2 14.5 25.1 9.1 35.6 62.7 49.0 34.9 16.1 11.2 39.3 34.6 30.5
Meta-OSHOT 29.4 52.9 24.4 22.9 33.1 54.7 32.6 4.3 37.2 16.9 27.6 3.9 32.9 52.2 48.5 41.1 13.0 20.9 40.1 38.8 31.4
FULL-OSHOT 28.7 49.6 27.9 22.9 31.0 57.3 32.5 5.1 36.2 19.8 28.1 3.1 36.8 66.1 50.2 39.6 13.6 14.1 35.0 35.7 31.7
Ten-Shot Target
DivMatch [diversify&match_Kim_2019_CVPR] 19.5 57.2 17.0 23.8 14.4 25.4 29.4 2.7 35.0 8.4 22.9 14.2 30.0 55.6 50.8 30.2 1.9 12.3 37.8 37.2 26.3
SW [Saito_2019_CVPR] 21.5 39.9 21.7 20.5 32.7 34.1 25.1 8.5 33.2 10.9 15.2 3.4 32.2 56.9 46.5 35.4 14.7 15.2 29.2 32.0 26.4
SW-ICR-CCR [xuCVPR2020] 16.4 55.7 24.1 15.5 28.1 32.3 26.8 4.2 31.8 18.8 13.8 9.0 33.7 57.1 54.7 33.6 14.4 11.7 30.5 31.0 27.2
(a) VOC → Clipart
One-Shot Target
Method bike bird car cat dog person mAP
Baseline 25.2 10.0 21.1 14.1 11.0 27.1 18.1
Tran-Baseline 35.4 12.2 25.1 11.6 16.8 33.2 22.4
OSHOT 32.5 11.2 24.0 9.1 13.4 29.1 19.9
Tran-OSHOT 32.6 11.8 20.9 9.1 14.4 31.7 20.1
Meta-OSHOT 33.4 11.2 25.5 10.1 12.9 27.9 20.2
FULL-OSHOT 32.2 12.8 23.3 10.9 14.2 33.1 21.1
OSHOT 33.6 12.5 25.7 10.1 16.9 34.9 22.3
Tran-OSHOT 35.7 13.7 27.5 12.3 20.9 39.5 24.9
Meta-OSHOT 40.1 13.2 27.3 11.1 17.9 39.0 24.8
FULL-OSHOT 33.7 13.8 29.9 12.9 20.1 40.7 25.2
Ten-Shot Target
DivMatch [diversify&match_Kim_2019_CVPR] 27.1 12.3 26.2 11.5 13.8 34.0 20.8
SW [Saito_2019_CVPR] 21.2 14.8 18.7 12.4 14.9 43.9 21.0
SW-ICR-CCR [xuCVPR2020] 20.0 13.8 20.6 8.2 17.5 46.4 21.1
(b) VOC → Comic
One-Shot Target
Method bike bird car cat dog person mAP
Baseline 62.6 39.7 43.9 31.9 26.7 52.4 42.8
Tran-Baseline 75.0 45.1 47.8 30.7 25.3 53.9 46.3
OSHOT 66.4 45.1 44.9 31.7 28.3 57.6 45.7
Tran-OSHOT 72.0 44.7 47.8 30.3 24.0 53.3 45.4
Meta-OSHOT 66.4 46.8 47.3 33.2 26.1 54.9 45.8
FULL-OSHOT 71.9 45.5 48.2 34.3 22.8 55.5 46.4
OSHOT 68.8 45.1 49.6 32.5 33.8 59.0 48.1
Tran-OSHOT 71.9 41.7 53.8 31.1 30.8 57.1 47.7
Meta-OSHOT 69.3 47.6 51.9 36.3 30.9 58.3 49.0
FULL-OSHOT 74.5 45.6 50.7 34.6 28.5 59.7 48.9
Ten-Shot Target
DivMatch [diversify&match_Kim_2019_CVPR] 64.6 44.1 44.6 34.1 24.9 60.0 45.4
SW [Saito_2019_CVPR] 66.3 41.1 41.1 30.5 20.5 52.3 42.0
SW-ICR-CCR [xuCVPR2020] 64.5 44.1 45.0 28.3 29.9 59.8 45.3
(c) VOC → Watercolor
TABLE V: Results for VOC → AMD and corresponding histograms with the error analysis performed with TIDE [tide-eccv2020]. In each sub-table, the first block of OSHOT rows refers to the pretrained models (γ = 0), the second to the models adapted on the single test sample (γ = 5).

One-Shot Target
Method person rider car truck bus train mcycle bicycle mAP
Baseline 30.4 36.3 41.4 18.5 32.8 9.1 20.3 25.9 26.8
Tran-Baseline 32.1 35.2 42.9 17.8 31.0 4.3 22.6 30.0 27.0
OSHOT 32.2 38.6 39.0 20.5 30.6 12.9 22.4 31.2 28.4
Tran-OSHOT 30.5 37.4 42.7 16.9 29.5 14.5 21.9 30.4 28.0
Meta-OSHOT 30.6 35.1 35.9 16.6 28.4 7.6 18.2 28.4 25.1
FULL-OSHOT 31.7 40.8 43.7 18.3 28.8 11.0 22.8 33.3 28.8
OSHOT 32.7 39.3 41.1 21.1 33.1 12.6 22.7 31.9 29.3
Tran-OSHOT 30.9 38.5 43.0 17.5 32.1 13.9 21.6 30.5 28.5
Meta-OSHOT 32.1 38.2 39.9 17.4 30.9 7.5 21.0 29.2 27.0
FULL-OSHOT 32.0 39.7 43.8 18.8 31.8 10.6 22.1 33.2 29.0
Ten-Shot Target
DivMatch [diversify&match_Kim_2019_CVPR] 27.6 38.1 42.9 17.1 27.6 14.3 14.6 32.8 26.9
SW [Saito_2019_CVPR] 25.5 30.8 40.4 21.1 26.1 34.5 6.1 13.4 24.7
SW-ICR-CCR [xuCVPR2020] 29.6 40.8 39.6 20.5 32.8 11.1 24.0 34.0 29.1
[TIDE error histograms for Baseline, OSHOT (γ = 0), OSHOT (γ = 5), Tran-OSHOT, Meta-OSHOT and FULL-OSHOT]

TABLE VI: Results for Cityscapes → FoggyCityscapes. The first block of OSHOT rows refers to the pretrained models (γ = 0), the second to the models adapted on the single test sample (γ = 5). The histograms illustrate the detection error analysis performed with TIDE [tide-eccv2020].

One-Shot Target
Method KITTI → Cityscapes Cityscapes → KITTI
Baseline 26.5 75.1
Tran-Baseline 42.8 75.3
OSHOT 31.0 75.0
Tran-OSHOT 43.1 75.6
Meta-OSHOT 33.6 75.2
FULL-OSHOT 42.9 75.4
OSHOT 31.1 75.0
Tran-OSHOT 43.1 75.6
Meta-OSHOT 34.1 75.2
FULL-OSHOT 43.0 75.4
Ten-Shot Target
DivMatch [diversify&match_Kim_2019_CVPR] 37.9 74.1
SW [Saito_2019_CVPR] 39.2 74.6
SW-ICR-CCR [xuCVPR2020] 39.8 74.9

[TIDE error histograms for Baseline, OSHOT (γ = 0), OSHOT (γ = 5), Tran-OSHOT, Meta-OSHOT and FULL-OSHOT]
TABLE VII: mAP of the car class in the KITTI/Cityscapes detection experiments. The first block of OSHOT rows refers to the pretrained models (γ = 0), the second to the models adapted on the single test sample (γ = 5). The histograms illustrate the detection error analysis performed for KITTI → Cityscapes with TIDE [tide-eccv2020].

4.3 Performance and Detection Error Analysis

Adapting to social feeds When the data come from multiple providers, the assumption that all target images originate from the same underlying distribution does not hold, and standard cross-domain detection methods are penalized regardless of the number of seen target samples. We pretrain the source detector on VOC and deploy it on Social Bikes.

We report the results in Table I. The mAP performance at γ = 0 allows us to compare the pretraining models before adaptation and already shows the advantage of FULL-OSHOT over OSHOT, as well as over the Tran and Meta variants. Specifically, the data transformations support both the Baseline and OSHOT with a gain of about 1 point between the Tran and the respective plain versions, but their use in the meta-learning process of FULL-OSHOT provides the greatest advantage. At γ = 5 all variants of OSHOT obtain an improvement that goes from 1.9 (OSHOT) to 2.6 (FULL-OSHOT) points over the Baseline just by adapting on a single test sample. Despite granting them access to a larger set of adaptation samples, the domain adaptive algorithms reach at best an advantage of 1.2 points over FULL-OSHOT, even when exploiting the whole target for the adaptation. When using only ten target samples, two methods out of three show a negative transfer w.r.t. the Baseline.

By looking at the detection error analysis we can see that the adaptation iterations allow OSHOT to reduce the number of False Negatives. Moreover, both Tran-OSHOT and FULL-OSHOT obtain a higher mAP than OSHOT thanks to fewer Miss errors. The performance of FULL-OSHOT confirms that the meta-learning strategy with semantic-preserving data augmentations successfully prepares the model to solve the adaptation task at inference time.

Large distribution shifts Artistic images are difficult benchmarks for cross-domain methods. Unpredictable perturbations in shape and color are challenging for detectors trained only on realistic images. We investigate this setting by training the source detector on VOC and deploying it on the Clipart, Comic and Watercolor datasets.

Table V summarizes the results on the three adaptation splits. In all the settings, by exploiting one sample at a time with few adaptive iterations (γ = 5), OSHOT and its variants outperform the adaptive detectors even though the latter can leverage 10 target samples. More precisely, all the adaptive detectors are not able to work in data-scarcity conditions and obtain results comparable to or lower than those of the Tran-Baseline and of the pretraining phase of our approach (γ = 0). We also highlight that at γ = 0 Meta-OSHOT obtains results higher than Tran-OSHOT and only slightly lower on average than FULL-OSHOT, thus the meta-learning strategy alone (without additional data augmentation) prepares the detector for the inference-time adaptation task.

By looking at the detection error analysis we can see that the data augmentation of Tran-OSHOT pushes for a lower number of errors of type Miss, while the meta-learning strategy of Meta-OSHOT gets a lower number of Classification errors. FULL-OSHOT takes advantage of both, obtaining the best overall performance.

Adverse weather Some peculiar environmental conditions, such as fog, may be disregarded in source data acquisition, yet adaptation to these circumstances is crucial for real-world applications. We consider the Cityscapes → FoggyCityscapes setting by training our base detector on the first domain for 30k iterations without stepdown, as in [cai2019exploring]. We select the best performing model on the Cityscapes validation split and deploy it on FoggyCityscapes.

The experimental evaluation in Table VI shows that the domain adaptive detectors struggle when dealing with this kind of shift by using a small adaptation set. SW-ICR-CCR is the only method able to obtain a meaningful improvement over the Baseline, although its mAP remains below that of our best variant (OSHOT at γ = 5). For what concerns OSHOT and its variants, the pretraining alone (γ = 0) helps in gaining a better generalization ability, with all variants but Meta-OSHOT showing higher performance than the Baseline. The advantage is also visible from the error analysis by looking at the Miss type, which decreases when passing from the Baseline to OSHOT, reaching its lowest value for FULL-OSHOT at γ = 5. Still, the top mAP result in this setting is obtained by OSHOT at γ = 5, indicating that neither the transformations nor the meta-learning strategy are able to prepare the detector for the experienced domain shift.

Cross-camera transfer A train-test dataset bias is unavoidable in practical applications, as for urban scenes collected in different cities and with different cameras. We test adaptation between KITTI and Cityscapes in both directions, considering only the label car, as is standard practice.

The obtained results are reported in Table VII. Considering the KITTI → Cityscapes shift, we can see that also in this case the domain adaptive detectors obtain results lower than the Tran-Baseline. In fact, here the data transformations seem to play a fundamental role in improving the generalization ability: the pretraining strategy of OSHOT (γ = 0) shows an improvement over the Baseline, but the best results are obtained by Tran-OSHOT and FULL-OSHOT. The following adaptation steps (γ = 5) provide negligible improvements. By looking at the detection error analysis, we can see that the semantic-preserving transformations implemented in Tran-OSHOT and FULL-OSHOT allow to greatly reduce the errors of type Miss (see also the False Negatives).

The opposite shift, Cityscapes → KITTI, appears less severe, with the Baseline already obtaining good results. The domain adaptive detectors suffer from a small negative transfer, and once again the domain augmentation transformations allow Tran-OSHOT and FULL-OSHOT to obtain the highest performance, with no further improvements occurring through adaptation (γ = 5).

4.4 Comparison with One-Shot Style Transfer

Although not specifically designed for cross-domain detection, in principle it is possible to apply one-shot style transfer methods as an alternative solution for our setting. We use BiOST [Cohen_2019_ICCV], the current state-of-the-art method for one-shot transfer, to modify the style of the target sample towards that of the source domain before performing inference. Due to the time-heavy requirements to perform BiOST on each test sample (to get the style update, BiOST trains a double-variational autoencoder using the entire source besides the single target sample; as advised by the authors through personal communication, we trained the model for 5 epochs), we test it on Social Bikes and on a random subset of 100 Clipart images that we name Clipart100. We compare the performance and time requirements of our approach and BiOST on these two targets. Speed has been computed on an RTX 2080Ti with full precision settings.

Method   Baseline   BiOST [Cohen_2019_ICCV]   OSHOT   FULL-OSHOT
mAP on Clipart100   27.9   29.8   28.2   30.4
mAP on Social Bikes   71.6   71.4   73.5   74.2
Adaptation time (per sample)   -   > 6 hours   1.3 s   1.3 s

TABLE VIII: Comparison between the baseline, one-shot style transfer and our approach in the one-shot unsupervised cross-domain detection setting.

Table VIII shows the obtained mAP results. On Clipart100, the Baseline obtains 27.9 mAP. We can see how BiOST is effective in the adaptation from one sample, gaining 1.9 points over the baseline. On Social Bikes, instead, BiOST incurs a slight negative transfer, indicating its inability to effectively modify the source's style on the images we collected. OSHOT improves over the baseline on Clipart100 but its mAP remains lower than that of BiOST, while it outperforms both the baseline and BiOST on the more challenging Social Bikes. Finally, FULL-OSHOT shows the best results on both datasets. The last row of the table presents the time complexity of all the considered methods, which is identical for OSHOT and FULL-OSHOT since the number of adaptive iterations is the same. BiOST, instead, needs more than 6 hours to modify the style of a single target instance. Moreover, we highlight that BiOST works under the strict assumption of accessing at the same time the entire source training set and the target sample. Considering these weaknesses and the obtained results, we argue that existing one-shot translation methods are not suitable for one-shot unsupervised cross-domain adaptation.

Fig. 4: Performance of OSHOT for different numbers of adaptive iterations.

4.5 Increasing the number of Adaptive Iterations

The results up to here have shown how OSHOT improves over the existing cross-domain detection methods in the considered one-shot scenario. Moreover, the meta-learned pretrained model FULL-OSHOT provides a consistent advantage over the multi-task version OSHOT. Still, the bilevel optimization at the basis of meta-learning requires backpropagation through the inner process, which means dealing with higher-order derivatives and the related non-trivial computational and memory burden. Keeping the same conditions in pretraining and deployment means limiting the adaptation steps of FULL-OSHOT to the same small number of iterations used in the meta-training inner loop. In this sense, the multi-task based OSHOT model appears more suitable for all those cases in which the adaptation time is not a strict constraint and it is possible to run a larger number of adaptation iterations. We studied the performance of OSHOT in this case and collected the results in the plots of Figure 4. We can observe a positive correlation between the number of finetuning iterations and the final mAP of the model in the earliest steps, while the performance generally reaches a plateau after about 30 iterations: increasing γ beyond this value does not significantly affect the final results. In the plots we represent with orange stars the performance of FULL-OSHOT at 0 and 5 adaptation iterations. We can see that in five out of six cases FULL-OSHOT has quite better performance w.r.t. OSHOT when they are both tested with 5 adaptation iterations. A higher number of adaptation steps often allows OSHOT to reach and outperform FULL-OSHOT, at the cost of a longer learning period.

4.6 Image vs Box rotation

As explained in Section 3, both for the pretraining and for the adaptation phase we can choose to apply rotation either on the whole image or on the object bounding boxes. For all the experiments presented above we focused on the second case. More precisely, for the multi-task based OSHOT we used the ground truth bounding boxes during pretraining and we leveraged the pseudo-labeled boxes in the adaptation phase. By solving the auxiliary task only on objects we limit the use of background features, which may mislead the network towards solutions of the rotation task not based on relevant semantic information (e.g. finding fixed patterns in images such as sky-always-on-top, or exploiting watermarks). To validate our choice we set up two dedicated experiments.

In the first we focus on the pretraining phase and run a qualitative analysis on the effect of learning on VOC by training the rotation classifier either on the whole image or with the boxcrop operation. Then we test the rotation classifier on whole images from the Clipart domain. In Figure 5 we show the results obtained with Grad-CAM [gradcam_2017_ICCV] for the two cases, with the heatmap indicating the most relevant regions responsible for recognizing the correct orientation. The Grad-CAM maps refer to the last output of the backbone feature extractor. We can see that, when the rotation classifier is trained on whole images, it learns to focus on the background (e.g. the sky and the ground) in order to solve the task. On the contrary, when the boxcrop operation is used to train the rotation classifier only on the relevant objects, it learns to look at the objects' features even when it faces an entire image.

In the second experiment we consider the whole process and compare the final performance of our approach when using the object locations or the entire image in both pretraining and adaptation. Table IX shows the results for VOC → AMD and Cityscapes → Foggy Cityscapes using OSHOT. We observe that the choice of the rotated regions is critical for the effectiveness of the algorithm: the mAP improvements provided by boxcrop range from 0.3 to 3.9 points, indicating that it allows the model to learn features more suitable for the main detection task across domains.

Fig. 5: Visualization of the most relevant image regions produced by Grad-CAM when classifying the correct rotation, with the rotation classifier trained on the whole image (image) and on the object regions (boxcrop).

OSHOT   image   boxcrop
VOC → Clipart   26.9   30.8
VOC → Comic   22.0   22.3
VOC → Watercolor   45.7   48.1
Cityscapes → Foggy Cityscapes   27.4   29.3

TABLE IX: mAP results comparison on the self-supervised rotation setting: rotating the whole image vs rotating object bounding boxes.
[Fig. 6 image grid. Columns (datasets) include Clipart, Comic, Watercolor, Social Bikes, Cityscapes and Foggy Cityscapes. Rows: Ground Truth, DivMatch [diversify&match_Kim_2019_CVPR], SW [Saito_2019_CVPR], SW-ICR-CCR [xuCVPR2020], and OSHOT, FULL-OSHOT, OSHOT at different numbers of adaptation iterations.]

Fig. 6: Qualitative visualizations. The number associated to each detection box is the confidence of the detector.

4.7 Qualitative Results

Figure 6 shows some examples of detections on images extracted from all the datasets considered in our work. We present as reference the ground truth bounding boxes, with different colors used for different classes, as well as the predictions produced by DivMatch [diversify&match_Kim_2019_CVPR], SW [Saito_2019_CVPR] and SW-ICR-CCR [xuCVPR2020]. All of them show less precise results than our approach. Specifically, by looking at the artistic images in the first three columns from the left, we can see that the domain adaptive detectors often produce several false positives. Moreover, they show detection failures in the datasets of the fourth to seventh columns. It is also possible to notice how a higher number of adaptation iterations allows OSHOT to fix its errors (compare the OSHOT rows before and after adaptation) by detecting objects that it previously missed (see the first column), by correcting a wrong classification (see the dogs in the second column), or by improving object localization (see the fifth column). In many of these cases FULL-OSHOT produces results more similar to those of the fully adapted OSHOT than to OSHOT before adaptation, confirming its faster adaptation ability. FULL-OSHOT is also the only detector able to correctly identify the bicycle on the t-shirt in the third column.

5 Extension to Object Classification

The idea of exploiting self-supervised learning as an auxiliary objective to adapt on a single test sample can be easily extended to other scenarios where the main supervised task is different from detection. Indeed, its effectiveness has been recently showcased in [efrostesttimeICML2020] for the classification task, but the experimental analysis in that work involved only distribution shifts due to synthetic data corruptions. Here we maintain the focus on shifts due to significant changes in domain style and analyze the performance of our approach for classification across photos, art paintings, cartoons and sketches. Specifically, we rely on the PACS dataset [hospedalesPACS] that covers exactly those domains and object categories. We adopt a multi-target setting with each domain used in turn as source and all the remaining three domains as target. We re-designed our model by building over a ResNet-18 architecture pretrained on ImageNet and modified to include the same rotation branch used for detection. Here the rotation task is performed on the whole images, discarding the cross-task self-training due to the lack of localization information in the source supervision. Both the OSHOT pretraining and the competitors' trainings are performed for 10k iterations using standard SGD with momentum 0.9. The rotation self-supervised task of OSHOT keeps the same weight λ during both pretraining and adaptation. The OSHOT variants (Tran, Meta and FULL-OSHOT) take advantage, in the pretraining phase, of additional 5k training iterations performed however with a batch size reduced to 1 and a reduced learning rate. The differences between the OSHOT variants are the same as in the detection case.
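To illustrate the classification variant, here is a hedged sketch of the multi-task model described above: a ResNet-18 backbone shared between the object-class head and a whole-image rotation head, trained jointly as in Eq. (2). The module names, head sizes and the auxiliary weight in the usage lines are illustrative assumptions, not the exact released configuration.

```python
# Sketch of the multi-task classifier for PACS: shared ResNet-18 features,
# one object-class head and one whole-image rotation head (no boxcrop here).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class MultiTaskClassifier(nn.Module):
    def __init__(self, num_classes=7):                  # 7 PACS object categories
        super().__init__()
        resnet = torchvision.models.resnet18(weights="IMAGENET1K_V1")
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])  # up to avgpool
        self.class_head = nn.Linear(512, num_classes)
        self.rot_head = nn.Linear(512, 4)                # 0 / 90 / 180 / 270 degrees

    def forward(self, x):
        feats = self.backbone(x).flatten(1)
        return self.class_head(feats), self.rot_head(feats)

# Joint pretraining step (sketch); the auxiliary weight 0.5 is illustrative.
model = MultiTaskClassifier()
x = torch.rand(8, 3, 224, 224)
y = torch.randint(0, 7, (8,))
q = torch.randint(0, 4, (8,))
x_rot = torch.stack([torch.rot90(img, k=int(k), dims=(-2, -1))
                     for img, k in zip(x, q)])
class_logits, _ = model(x)
_, rot_logits = model(x_rot)
loss = F.cross_entropy(class_logits, y) + 0.5 * F.cross_entropy(rot_logits, q)
loss.backward()
```

At test time the adaptation follows the same recipe as Algorithm 2, updating the backbone and rotation head on rotated copies of the single target image before classifying it.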

The obtained results are presented in Table X, where we compare all the variants of our approach with the non-adaptive ResNet-18 Baseline and the Tran-Baseline. Moreover, we include as reference the Minimum Class Confusion (MCC, [MCCeccv20]) method, a state-of-the-art multi-target adaptation approach based on a simple loss that evaluates the class correlation on the target predictions. MCC works in the standard unsupervised domain adaptation setting and therefore needs an unlabeled adaptation set coming from the target domain at training time. We thus provide it with access to the whole target training split or to 10 samples extracted from it in two different experiments, and evaluate the performance on the test split. OSHOT outperforms the considered competitors: including the self-supervised pretraining (γ = 0) already provides better results than the Baselines, and performing a few adaptive iterations further improves the final classification accuracy. Once again the best results are obtained by FULL-OSHOT, with a higher average accuracy (57.3) than MCC on the whole target (56.2). We highlight that in this case the tailored use of the data transformations as part of the meta-learning process plays an important role. In fact, the transformations alone allow the Baseline to gain less than one point in average accuracy, while their integration inside OSHOT pushes Tran-OSHOT and FULL-OSHOT to obtain the best performance.

One-Shot Target
Method Art painting Cartoon Sketch Photo AVG
Baseline 61.3 68.7 30.5 34.5 48.8
Tran-Baseline 62.4 68.2 34.3 33.3 49.6
OSHOT 64.6 72.5 34.0 39.2 52.6
Tran-OSHOT 65.9 71.7 35.3 38.7 52.9
Meta-OSHOT 62.3 71.3 33.8 37.0 51.1
FULL-OSHOT 63.3 72.9 34.1 40.7 52.7
OSHOT 63.1 71.7 42.4 43.5 55.2
Tran-OSHOT 63.8 71.2 47.5 46.3 57.2
Meta-OSHOT 63.4 71.5 45.3 45.1 56.3
FULL-OSHOT 64.0 73.1 43.9 48.1 57.3
Ten-Shot Target
MCC [MCCeccv20] 62.2 62.5 46.2 43.2 53.5
Whole Target
MCC [MCCeccv20] 68.9 52.7 53.5 49.6 56.2

TABLE X: Classification accuracy for the multi-target PACS [hospedalesPACS] experiments with a ResNet-18 backbone. The first block of OSHOT rows refers to the pretrained models (γ = 0), the second to the models adapted on the single test sample.

6 Conclusion

This paper focuses on one-shot unsupervised cross-domain detection, a scenario involving deployment conditions significantly different from those experienced at training time, with target samples drawn from a variety of visual domains not known in advance and not accessible during source training. These conditions mimic those encountered when monitoring image feeds on social media, where algorithms are called to adapt to a new visual domain and can rely only on one single image at inference time. We showed that existing cross-domain detection methods struggle in this setting, as they are all explicitly designed to adapt from far larger quantities of target data. We presented OSHOT, the first deep architecture able to reduce the domain gap between the source and target distributions by leveraging a single target image. Our approach is based on a multi-task structure that exploits self-supervision and cross-task self-labeling. Moreover, we introduced a meta-learning formulation for the pretraining phase that simulates single-sample cross-domain learning episodes and further improves the generalization abilities of the detector. Extensive quantitative experiments and qualitative analyses demonstrated the effectiveness of the proposed adaptive detection method and showed how the same strategy can be easily exploited for cross-domain object classification.

Acknowledgments

This work was partially funded by the ERC grant 637076 RoboExNovo (FCB, AD, SB, BC) and took advantage of the GPU donated by NVIDIA (Academic Hardware Grant, TT).

References