Recently, tremendous advances in semantic segmentation have been made. These approaches often rely on deep convolutional neural networks (CNNs) trained on a large-scale classification dataset, which are then transferred to the segmentation task based on mask annotations. However, annotating pixel-wise segmentation masks usually requires considerable human effort. In addition, constructing a semantic segmentation dataset covering diverse appearances, viewpoints and scales of objects is also costly and difficult. These limitations hinder the development of semantic segmentation, which generally requires large-scale data for training.
While a large collection of fully annotated images is difficult to obtain, weakly-labeled yet related videos are abundant on video sharing websites, e.g., YouTube.com, especially for the human segmentation task. Intuitively, when a human instance is moving in a video, the inherent motion cues, with the aid of an imperfect human detector, may help identify the human masks against the background. Thus, in this paper, we aim to use video-context derived human masks from raw YouTube videos to iteratively train and update a good segmentation neural network, instead of using a limited number of single-image mask annotations as in traditional approaches. The video-context is used to infer the human masks by exploiting spatial and temporal contextual information over video frames. Note that our framework can be applied to general object segmentation tasks, especially for moving objects; this paper focuses on human segmentation, as human-centric videos are the most common on YouTube. Figure 1 provides an overview of our unified framework, which contains two integrated steps, i.e., the video-context guided human mask inference and the CNN-based human segmentation network learning.
In the first step, given a raw video, we extract the supervoxels, which are the spatio-temporal analogs of superpixels, to provide a bottom-up volumetric segmentation that tends to preserve object boundaries and motion continuity. A spatio-temporal graph is built on the superpixels within the supervoxels. To remove the ambiguity in determining the instances of interest in the video, we resort to an imperfect human detector and a region proposal method to generate candidate segmentation masks. These masks are then combined with the confidence maps predicted by the currently trained CNN to provide the unary energy of each node. Graph-based optimization is then performed to find the optimal human label assignments of the nodes by maximizing both the appearance consistency among neighboring nodes and the long-term label consistency within each supervoxel.
In the second step, the video-context derived human masks extracted from massive raw YouTube videos are then utilized to train and update a deep convolutional neural network (CNN). One important issue in training with those raw videos is the existence of noisy labels within these extracted masks. To effectively reduce the influence of noisy data, we utilize the sample-weighted loss during the network optimization. The trained network in turn makes better segmentation predictions for the key frames in each video, which can help refine the video-context derived human masks. This process iterates to gradually update the video-context derived human masks and the network parameters until the network is mature.
We evaluate our method on the PASCAL VOC 2012 segmentation benchmark. Our very-weakly supervised learning framework using raw YouTube videos achieves significantly better performance than the previous weakly supervised methods (i.e., using box annotations) as well as the fully supervised (i.e., using mask annotations) methods. By combining with limited annotated data, our weakly supervised variant (i.e., using the box annotations on VOC) and our semi-supervised variant (i.e., using the mask annotations on VOC) yield higher accuracies than previous methods that use an extensive 123k extra annotations on Microsoft COCO. Note that general image-level supervision is also utilized in our approach, as we pre-train our neural network on ImageNet.
2 Related Work
Semantic Segmentation. Deep convolutional neural networks (CNNs) have achieved great success with the growing training data for object classification. However, currently available datasets for object detection and segmentation often contain a relatively limited number of labeled samples. Most recent progress on object segmentation was achieved by fine-tuning a pre-trained classification network with limited mask annotations. These limited data hinder the advance of semantic segmentation to more challenging scenarios. Some existing segmentation methods explored using bounding box annotations instead of mask annotations. Different from these previous methods, our framework iteratively refines the video-context derived human masks using the updated segmentation network, and in turn improves the network based on these masks. Different from semi-supervised learning (using a portion of mask annotations) and weakly supervised learning (using object class or bounding box annotations), the proposed very-weakly supervised learning only relies on weakly labeled videos and an imperfect human detector. It is true that training the human detector requires a certain number of bounding box annotations; however, instead of directly using those annotated boxes to train the human segmentation network, our model is progressively improved by gradually mining more varied instances from the weakly labeled videos.
Video Segmentation. Unsupervised video segmentation methods focus on extracting coherent groups of supervoxels by considering appearance and temporal consistency. These methods tend to over-segment an object into multiple parts and provide a mid-level space-time grouping, which cannot be directly used for object segmentation. Recent approaches proposed to upgrade the supervoxels to object-level segments, but their performance is often limited by incorrect segment masks.
Semi-supervised Learning. To minimize human effort, some image-based attempts have been devoted to learning reliable object detection models with very few labeled data. Among these methods, semantic relationships were further used to provide more constraints on selecting instances. In addition, video-based approaches utilized motion cues and appearance correlations within video frames to augment the model training.
3 Our Framework
Figure 1 illustrates our very-weakly supervised learning framework for video-context guided human segmentation.
3.1 The Iterative Learning Procedure
The proposed framework is applicable to any fully supervised CNN-based segmentation architecture, such as FCN and the DeepLab-CRF method. In this paper, we adopt the original version of DeepLab-CRF (i.e., without multiscale inputs or Large-FOV) as the basic structure due to its leading accuracy and competitive efficiency. In addition, many weakly supervised competing methods using object class and bounding box annotations only reported their results based on DeepLab-CRF.
Our learning process is iteratively performed to train and update the network with the video-context derived human masks and then refine these masks based on the improved network. Note that the category-level human annotations on ImageNet are implicitly used, since our segmentation model is fine-tuned on the pre-trained VGG model. The human masks generated from YouTube videos may be labeled with incorrect categories. This noise may degrade the performance of our framework, especially in the early iterations, where the learned network is more vulnerable to noisy labels. To reduce the disturbance of noisy labels, the sample-weighted loss is utilized to train the network: during training, more weight should be given to the video-context derived human masks with higher labeling quality (a higher labeling quality means that a derived human mask contains fewer incorrectly labeled pixels, as defined in Section 3.2). Suppose in the $t$-th iteration of the learning process, we collect $N$ training frames from the videos. For each training frame $I_n$ selected from the video set, the video-context derived human mask $Y_n$ is inferred by conditioning on the network parameters $\theta_{t-1}$ from the $(t-1)$-th iteration and the video information. The corresponding labeling quality for $Y_n$ is denoted as $q_n$. The network optimization in every iteration is thus formulated as a pixel-level regression problem from the training images to the generated masks. Specifically, the objective function to be minimized can be written as

$$\min_{\theta} \sum_{n=1}^{N} q_n \frac{1}{M_n} \sum_{m=1}^{M_n} J\big(y_{n,m}, \hat{y}_{n,m}(\theta)\big), \quad (1)$$

where $y_{n,m}$ is the target label at the $m$-th pixel of the image $I_n$, $M_n$ indicates the pixel number of each image, $\hat{y}_{n,m}(\theta)$ is the predicted pixel-level label produced by the network with the parameters $\theta$, and $J(\cdot,\cdot)$ is the pixel-wise softmax loss function. The targets to be optimized in our task are the network parameters $\theta$ and the video-context derived human masks $\{Y_n\}$ of all training frames.
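The sample-weighted objective above can be sketched in a few lines of NumPy. This is a minimal illustrative implementation, not the authors' code: it assumes per-frame quality weights and dense per-pixel logits, and averages the pixel-wise negative log-likelihood per frame before weighting.

```python
import numpy as np

def sample_weighted_loss(logits, targets, qualities):
    """Sample-weighted pixel-wise softmax loss (a NumPy sketch of Eqn. (1)).

    logits:    (N, H, W, C) raw network outputs for N frames
    targets:   (N, H, W) integer labels from the video-context derived masks
    qualities: (N,) per-frame labeling quality, down-weighting noisy masks
    """
    # stable softmax over the class dimension
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    n, h, w, _ = logits.shape
    # negative log-likelihood of the target label at every pixel
    idx = np.indices((n, h, w))
    nll = -np.log(probs[idx[0], idx[1], idx[2], targets] + 1e-12)
    # average over pixels per frame, then weight each frame by its quality
    per_frame = nll.mean(axis=(1, 2))
    return float((qualities * per_frame).sum() / qualities.sum())
```

Frames with low labeling quality thus contribute proportionally less to the gradient, which is the intended noise-suppression behavior.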
An iterative learning procedure is proposed to find the solution. With the video-context derived human mask $Y_n$ and labeling quality $q_n$ for each training frame fixed, we can update the network parameters $\theta$. The problem thus becomes a segmentation network learning problem with the sample-weighted loss, and $\theta$ can be updated by back-propagation and stochastic gradient descent (SGD). In turn, after the network parameters $\theta_t$ in the $t$-th iteration are updated, we can refine the video-context derived human masks by the video-context guided inference. We give more details about the computation of $Y_n$ and $q_n$ in Section 3.2. The segmentation network is initialized by the model pre-trained on the ImageNet classification dataset; in this way, the general image-level supervision provided by ImageNet is naturally embedded in our models. The network is then trained in every iteration based on the refined video-context derived human masks. This process is iteratively performed until no further improvement is observed. The fully-connected conditional random field (CRF) is adopted to further refine the results.
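The alternating procedure can be summarized as the following skeleton. The two helpers `infer_masks` (the graph-based inference of Section 3.2, conditioned on the current network) and `update_network` (one round of SGD with the sample-weighted loss) are hypothetical names injected as callables so the sketch stays self-contained; neither is part of the paper's actual code.

```python
def train_very_weakly_supervised(videos, infer_masks, update_network,
                                 network, n_iters=10):
    """Alternating-optimization skeleton of the iterative learning procedure.

    infer_masks(video, network) -> (frames, masks, qualities)
    update_network(network, frames, masks, qualities) -> updated network
    """
    for _ in range(n_iters):
        frames, masks, qualities = [], [], []
        for video in videos:
            # (i) refine video-context derived masks with the current network
            f, m, q = infer_masks(video, network)
            frames += f; masks += m; qualities += q
        # (ii) update the network with the sample-weighted loss
        network = update_network(network, frames, masks, qualities)
    return network
```

Each pass both improves the network and, via the refreshed unary potentials, produces cleaner masks for the next pass.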
3.2 Video-context Guided Human Mask Inference
In this subsection, we introduce the details of generating the video-context derived human masks from raw videos along with an imperfect human detector. Given the network parameters $\theta_{t-1}$, the video-context derived human masks are obtained by the graph-based inference described below, and the labeling quality for each frame is accordingly predicted. The iteration index $t$ is omitted for simplicity in the following. We crawl about 5,000 videos that may contain humans from YouTube.com. Following prior work, we use the keywords from the PASCAL VOC collection to prune the videos that are unrelated to the person category. Even after pruning, the collected video set still includes a number of noisy videos that contain no person instances. The spatio-temporal graph optimization is performed to extract the video-context derived human masks.
Video Pre-processing. For each video, the supervoxels are first extracted; these are space-time regions computed with a mid-level unsupervised video segmentation method. We empirically extract all the supervoxels at the 10-th level of the segmentation tree, which offers a good tradeoff between semantic meaning and the preservation of fine detail. Though the supervoxels for each instance are unlikely to remain stable throughout the whole video due to pose changes and background clutter, they often persist for a series of frames due to temporal continuity. Each video is split into shots at the frames where over half of the supervoxels change across two adjacent frames (e.g., the supervoxels of some objects are lost or a new object appears).
Spatio-temporal Graph Optimization. We project each supervoxel into each of its child frames to obtain the corresponding spatial superpixel as a node of the graph. Notably, the object boundary within each supervoxel can be better preserved in the key frames where large appearance and motion changes occur. We thus select as the initial candidate set the key frames in which more than a given fraction of supervoxels change compared with the previous frame. The graph optimization is performed on these selected key frames in each shot.
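The shot-splitting and key-frame criterion above amounts to measuring the fraction of supervoxels that change between adjacent frames. A minimal sketch, assuming each frame is summarized by the set of supervoxel ids visible in it (the exact change threshold is a tunable fraction in the paper):

```python
def select_key_frames(frame_supervoxels, change_ratio=0.5):
    """Return indices of frames where more than `change_ratio` of the
    supervoxels differ from the previous frame.

    frame_supervoxels: list of sets of supervoxel ids, one set per frame.
    """
    keys = []
    for i in range(1, len(frame_supervoxels)):
        prev, cur = frame_supervoxels[i - 1], frame_supervoxels[i]
        # fraction of supervoxels that appear in one frame but not the other
        changed = len(prev ^ cur) / max(len(prev | cur), 1)
        if changed > change_ratio:
            keys.append(i)
    return keys
```

The same measure, with a threshold of one half, also gives the shot boundaries described in the pre-processing step.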
Formally, the spatio-temporal graph $G = (V, E)$ consists of the nodes $V$ and the edges $E$, as shown in Figure 2. Let $V = \{V_1, \dots, V_K\}$ be the set of spatial superpixel nodes over the entire video, where $K$ refers to the number of key frames. $V_k$ contains the nodes belonging to the $k$-th frame, i.e., a collection of nodes $\{v_{k,1}, \dots, v_{k,N_k}\}$, where $N_k$ is the number of nodes in the $k$-th frame. We assign a variable to each node, which is either human (+1) or other content (-1). The target is to obtain a labeling $Y = \{Y_1, \dots, Y_K\}$, where $Y_k$ denotes the labels of the nodes belonging to the $k$-th frame. The edge set $E$ is defined as the set of spatial edges: a spatial edge exists between each neighboring pair of nodes in a given key frame. Finally, we use $S$ to denote the set of supervoxels, where each element $s \in S$ represents one supervoxel, and $Y_s$ denotes the set of labels assigned to the nodes within the supervoxel $s$. For each node, we compute its visual feature, i.e., the concatenation of bag-of-words histograms (75 dim) from RGB, Lab and HOG descriptors. The visual dissimilarity between two nodes is computed by the Euclidean distance.
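The per-node descriptor and dissimilarity can be sketched as follows. This is an illustrative assumption about the layout: the text says the concatenated bag-of-words descriptor is 75-dimensional, so the sketch takes three 25-bin histograms (one per RGB, Lab and HOG codebook); the actual split is not specified in the paper.

```python
import numpy as np

def node_feature(rgb_bow, lab_bow, hog_bow):
    """Concatenate the three bag-of-words histograms of a superpixel node
    into one descriptor (assumed 25 bins each, 75-D total)."""
    return np.concatenate([rgb_bow, lab_bow, hog_bow])

def node_dissimilarity(feat_a, feat_b):
    """Visual dissimilarity between two nodes: Euclidean distance."""
    return float(np.linalg.norm(feat_a - feat_b))
```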
To enforce the local label smoothness and long-range temporal coherence over the supervoxels, the energy function over $Y$ is defined as

$$E(Y) = \sum_{v \in V} \phi_v(y_v) + \sum_{(v, v') \in E} \phi_{v,v'}(y_v, y_{v'}) + \sum_{s \in S} \phi_s(Y_s). \quad (2)$$

The optimal human masks are obtained by minimizing Eqn. (2): $Y^* = \arg\min_Y E(Y)$. The unary potential $\phi_v$ accounts for the cost of assigning each node as human or other content. The pairwise potential $\phi_{v,v'}$ promotes smooth segmentation by penalizing neighboring nodes with different labels. The higher-order potential $\phi_s$ ensures long-term label consistency along each supervoxel.
Unary Potential: The unary potential of each node is computed based on the imperfect human detection results and the confidence maps predicted by the CNN updated in the previous iteration. First, to roughly locate the instances of interest, human detection is performed on each key frame. We use an off-the-shelf object detection method to detect human instances, and only the boxes with scores higher than a threshold are selected. For each detected box, an optimal region proposal is selected. The state-of-the-art region proposal method, i.e., Multiscale Combinatorial Grouping (MCG), is adopted to generate about 2,000 region proposals per image. Denote $p$ and $b$ as a candidate proposal and a detected box, respectively. For each box $b$, we expect to pick out a candidate proposal $p^*$ that has a large overlap with the box and also a high estimated confidence from the CNN prediction. The $p^*$ is computed by

$$p^* = \arg\max_{p} \; \mathrm{IoU}\big(b, \mathrm{bb}(p)\big) + c(p), \quad (3)$$

where $\mathrm{IoU}(b, \mathrm{bb}(p))$ is the intersection-over-union ratio computed from the box $b$ and the tight bounding box $\mathrm{bb}(p)$ of the region proposal $p$, and $c(p)$ denotes the mean of the predicted confidences within the region proposal $p$. Proposals with the same tight bounding box can thus be distinguished by their predicted confidences, and the optimal region proposal for each detected box can be gradually refined along with the predictions of the improved CNN. We use $M$ to denote the proposal mask generated for each frame by combining all selected proposals. The probability of each node $v$ to be human is computed as

$$P(y_v = +1) = \alpha \, o(v) + \lambda \, \frac{C(v)}{|v|}, \quad (4)$$

where $\alpha$ and $\lambda$ are set empirically, $o(v)$ represents the percentage of the spatial superpixel node $v$ contained within the mask $M$, $|v|$ is the pixel count within the node $v$, and $C(v)$ is the sum of the predicted probabilities over its pixels. The unary potential of each node is then computed from this probability:

$$\phi_v(y_v) = -\log P(y_v). \quad (5)$$
After the network is updated, the unary potential can be accordingly updated to generate better video-context derived human masks. Note that in the beginning of our learning process, the part will be eliminated in Eqn. (3) and (5) to infer the human masks, since the segmentation network is not yet trained. Based on the initialized bounding box generated by the imperfect object detector, our method can gradually segment all human masks in the video.
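The proposal-selection step of the unary term can be sketched directly. This is a minimal illustration of scoring each MCG proposal by box overlap plus mean CNN confidence; `pick_proposal` and its argument layout are hypothetical names, not the authors' API.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def pick_proposal(box, proposals, tight_boxes, mean_confidences):
    """Select the proposal maximizing IoU with the detected box plus the
    mean CNN confidence inside the proposal (the score of Eqn. (3))."""
    scores = [iou(box, tb) + c for tb, c in zip(tight_boxes, mean_confidences)]
    return proposals[int(np.argmax(scores))]
```

The confidence term is what lets two proposals sharing the same tight bounding box be told apart, as noted above.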
Pairwise Potential: We use the standard contrast-sensitive pairwise term for the spatial edges to ensure local label smoothness:

$$\phi_{v,v'}(y_v, y_{v'}) = [y_v \neq y_{v'}] \, \exp\big(-\beta \, d(v, v')\big),$$

where $d(v, v')$ is the visual dissimilarity between the two nodes and $\beta$ is set as the inverse of the mean of all individual distances, following common practice.
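A contrast-sensitive Potts term of this kind is straightforward to implement; the sketch below follows the convention above (zero cost for agreeing labels, a penalty that decays with visual dissimilarity otherwise).

```python
import math

def pairwise_potential(label_i, label_j, dist, beta):
    """Contrast-sensitive Potts edge cost: zero when the neighboring
    labels agree, exp(-beta * dist) when they disagree, so visually
    similar neighbors pay the highest price for a label boundary."""
    if label_i == label_j:
        return 0.0
    return math.exp(-beta * dist)
```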
Higher Order Potential: The supervoxel label consistency potential is defined to ensure long-term coherence within each supervoxel. We adopt the Robust $P^n$ model to define this potential:

$$\phi_s(Y_s) = \min\left( \frac{N_s(Y_s)}{Q} \, \gamma_{\max}, \; \gamma_{\max} \right),$$

where $Y_s$ denotes the labels of all nodes within the supervoxel $s$, and $N_s(Y_s)$ is the number of nodes within the supervoxel that are not assigned the dominant label. $Q$ is a truncation parameter that controls how rigidly we enforce the consistency within each supervoxel. A higher maximum penalty $\gamma_{\max}$ should be set for a supervoxel with more confidence, so that a less uniform supervoxel receives a smaller penalty for label inconsistencies. Following the Robust $P^n$ formulation, we set $\gamma_{\max} = |s| \exp(-\eta \, \sigma_s^2)$, where $\sigma_s^2$ is the RGB variance within the supervoxel $s$ and $\eta$ is set as the inverse of the mean of the variances of all supervoxels.
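The truncated linear shape of the Robust P^n term can be sketched for the binary human/other labeling used here:

```python
def higher_order_potential(labels, gamma_max, q):
    """Robust P^n supervoxel-consistency cost: grows linearly with the
    number of nodes deviating from the dominant label and is truncated
    at gamma_max once more than q nodes deviate.

    labels: list of +1/-1 node labels within one supervoxel.
    """
    n_dev = len(labels) - max(labels.count(+1), labels.count(-1))
    return min(n_dev * gamma_max / q, gamma_max)
```

A perfectly consistent supervoxel costs nothing, while heavily fragmented ones pay at most `gamma_max`, which keeps the potential robust to a few genuinely mixed supervoxels.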
The energy function defined in Eqn. (2) can be efficiently minimized using the α-expansion algorithm. We use the same parameter settings for all the videos. The optimal label assignments corresponding to the minimum energy yield our desired video-context derived human masks, one for each key frame. The video-context derived human masks can then be utilized to update the CNN. The labeling quality of each human mask is estimated as the mean of the predicted probabilities on the spatial superpixels that are assigned to be human. To ensure data diversity and reduce the effect of noisy labels during training, only up to a fixed number of key frames with the highest labeling qualities are selected from the initial candidates, and different key frames may be selected in different iterations of the learning process. We also select some negative images (i.e., with no humans) randomly from the frames in which no human instance is detected. The proposed framework can then iteratively refine the human masks by re-estimating the unary potentials based on the improved CNN. Although only the spatial superpixel nodes with highly confident detection results are assigned high probabilities, the strong spatial and motion coherence constraints incorporated by the pairwise and higher-order potentials can effectively facilitate mining more diverse instances with varying poses, views and background clutter.
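The quality-based frame filtering described above is a simple top-k selection by labeling quality; a minimal sketch (the cap on the number of kept key frames is a tunable constant in the paper):

```python
def select_training_frames(masks, qualities, k):
    """Keep the k key-frame masks with the highest labeling quality
    (mean predicted human probability over superpixels labeled human)."""
    order = sorted(range(len(masks)), key=lambda i: qualities[i], reverse=True)
    keep = order[:k]
    return [masks[i] for i in keep], [qualities[i] for i in keep]
```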
4 Experiments

Dataset. The proposed framework is evaluated on the PASCAL VOC 2012 segmentation benchmark. The performance is measured in terms of pixel intersection-over-union (IoU) on the person class. The segmentation part of the original VOC 2012 dataset contains 1,464 train, 1,449 val, and 1,456 test images. Our framework can be boosted by only using the weakly labeled YouTube videos and an imperfect human detector: in total, about 160k video-context derived human masks are produced from the extracted video shots, among which a portion of the images contain no human instances. We also report the results of our variants using the extra 10,582 bounding box annotations and mask annotations provided by the augmented VOC annotation set. Extensive evaluation of the proposed method is primarily conducted on the PASCAL VOC 2012 val set, and we also report the performance on the test set to compare with the state-of-the-art methods by submitting the results to the official PASCAL VOC 2012 server.
| Iteration | Ours (image-based) | Ours (w/o pair) | Ours (w/o higher) | Ours |
Training Strategies. We use the weakly-supervised object detector proposed in prior work for detecting human individuals, leading to imperfect detection results. That work proposed a weakly-supervised learning framework for training object detectors with weakly labeled YouTube videos, where only two annotated bounding boxes for the person label are used to initialize the object detectors, and massive YouTube videos are then used to enhance them. We borrow their trained object detectors in this work. Since the pre-trained detectors output the bounding boxes and confidences of all object classes, we only use the detection results for the person category and ignore the others. The segmentation network is initialized by the publicly released VGG-16 model, which is pre-trained on the ImageNet classification dataset; this model is also used by all competitors. We run 10 iterations of training the DCNN and refining the video-context derived human masks. For each iteration, we use a mini-batch size of 20 for the SGD training. The initial learning rate is divided by 10 after every 20 epochs, and the network training is performed for about 60 epochs. In each iteration of our learning process, fine-tuning the network with the refined video-context derived human masks takes about two days on an NVIDIA Tesla K40 GPU. Testing an image takes about 2 seconds.
4.1 Evaluation of Our Learning Framework
Table I reports the comparison results of the video-context guided inference and the image-based segmentation in different iterations. The version using image-based segmentation, i.e., “Ours (image-based)”, is obtained by only minimizing the unary potential of each node based on the extracted supervoxels in Eqn. (2); in this case, the appearance consistency and temporal continuity of the node assignments are not utilized to generate the human masks. For our full version (“Ours”), only a low IoU is obtained at the beginning by using the YouTube videos alone. After 10 iterations, we achieve a substantial increase in IoU. The proposed framework is run for 10 iterations because only a slight further increase is observed afterwards. The increases in IoU are largest in the early iterations (e.g., after the second iteration), since most of the easy human instances can already be recognized and segmented out by the updated network. After the network is gradually improved and the video-context derived human masks are refined, more diverse instances can be collected, which leads to better network capability. A large performance gap in IoU can be observed by comparing “Ours (image-based)” with “Ours” after 10 iterations. This significant inferiority demonstrates the effectiveness of the video-context guided inference. The effectiveness of using the pairwise and higher-order potentials can be demonstrated by comparing “Ours (w/o pair)” and “Ours (w/o higher)” with “Ours”, respectively.
| method | sample-weighted loss | negative images | IoU |
4.2 Comparisons of Supervision Strategies
Table III shows the comparison results of using different supervision strategies. Training with 160k video-context derived human masks, our method yields a competitive IoU score. We also report two results using the extra 10,582 training images with bounding box annotations and mask annotations, respectively, provided by the augmented VOC annotation set, which contains pixel-wise annotations for all 10,582 training images over the 20 object classes. The bounding box annotations and mask annotations are only used as extra data to train the network in the last iteration. For the bounding box annotations, we select the region proposals that have the largest overlaps with the ground-truth boxes and use them as human masks for training. To combine them with the video-context derived human masks for training, we set the labeling quality of each proposal mask derived from a bounding box annotation or mask annotation to the maximum value, and then fine-tune the network on the combined set. By using the extra annotated bounding boxes, only a slight increase in IoU is obtained. This insignificant change may be because the number (10k) of bounding boxes is small compared to our large number (160k) of video-context derived human masks. When replacing the box annotations with mask annotations, a significant increase can be observed. This suggests that carefully annotated masks contain more local detail or more difficult training samples (e.g., extremely small or heavily occluded instances) that may be lost in the generated video-context derived human masks.
| method | supervision | prediction type | mask | box | auto mask | auto box | training data | IoU on person | IoU on car |
| weakly (object class) | multi-class | - | 10k | - | - | VOC | 28.2 | 44.9 |
4.3 Comparisons of Network Training Strategies
In Table II, we evaluate different network training strategies using the video-context derived human masks with possibly noisy labels. For the version without the sample-weighted loss, all video-context derived human masks contribute equally to training the network. In this case, we observe a clear drop in IoU compared with our full training strategy. We also validate the effectiveness of using additional negative frames collected from the raw videos to train the network: a decrease in IoU can be observed when comparing the version without these negative frames with our full version.
4.4 Comparisons of Ways of Using videos
Here we adopt three simple strategies to evaluate alternative ways of using the video information to boost the segmentation network. First, we test the performance of directly using the frames of all videos with the image-level “person” label as training data. The segmentation network is thus trained with image-level supervision only, and achieves a higher IoU on the person label than the image-level supervised baseline trained on the limited VOC images. This verifies that the large-scale frames in videos can help train better segmentation networks than a limited number of images. Second, since many frames may not contain any person instance, we further evaluate whether an EM procedure can improve the capability of the segmentation network. Specifically, ten iterations are performed to progressively reduce the effect of noisy frames: after each iteration, we use the currently trained segmentation network to test all frames, and the frames predicted as containing fewer than a threshold number of foreground pixels are eliminated from the training in the next iteration. Employing such an EM procedure further improves the IoU on the person label. Third, we also test the result of using an image co-localization method to discover the bounding boxes of human instances, so that the segmentation network can be trained with the bounding box supervision of these mined instances. Its final result of 40.6% IoU on the person category is worse than that of the second strategy. These results further justify the effectiveness of the proposed procedure.
4.5 Comparisons with State-of-the-art Methods
In Table III, we compare our method with the state-of-the-art methods, including Hyper-SDS, CFM, FCN, TTI, DeepLab-CRF, CRF-RNN, WSSL and BoxSup, on the PASCAL VOC 2012 test set. All these methods use the pre-trained VGG model to initialize the network parameters. Among these competitors, WSSL and BoxSup use different supervision strategies (e.g., object class, bounding box or mask annotations) to train the network. We use the same network setting as WSSL and BoxSup for fair comparison. We also test the DeepLab-CRF method on the 2-class segmentation task (“DeepLab-CRF-person”), i.e., only person and background classes are predicted for each pixel, which is the same setting as used in this paper. Its decrease in IoU compared with the original DeepLab-CRF may be because the contextual information from the other object classes is not utilized when training the 2-class network. Our method, which only uses the weakly labeled videos and an imperfect human detector, outperforms the previous fully supervised methods, e.g., DeepLab-CRF and FCN, by a considerable margin on VOC 2012. Remarkably, all of them use all the 10k annotated masks on VOC 2012.
Moreover, we compare our results with other semi-supervised methods. The proposed method is significantly superior to the previous method supervised with object class annotations, and achieves clear gains over the methods using bounding box annotations on the VOC dataset, i.e., BoxSup and WSSL. The superiority over WSSL and BoxSup can also be observed when comparing with their semi-supervised variants, i.e., replacing a portion of the bounding box annotations with mask annotations on the VOC 2012 dataset. In addition, these previous methods reported results after augmenting the training data with the large-scale Microsoft COCO dataset, which provides 123,287 images with mask annotations. Our results obtained by only using weakly labeled videos and an imperfect human detector are comparable with the fully supervised baselines, e.g., CRF-RNN, using the extensive extra 123k COCO images, and slightly better than BoxSup using the 123k annotated bounding boxes.
Our semi-supervised variant using the 10k mask annotations on the VOC dataset achieves an IoU score higher than that of all previous human segmentation methods. Most recently, concurrent unpublished work reached a higher IoU by using all 10k pixel-wise annotations; that method utilized the piecewise training of CRFs instead of the simple fully-connected CRF used by our solution and the other state-of-the-art methods.
5 Result Visualization
We show the video-context derived human masks obtained by our method in Figure 3. All masks are generated in the last iteration of the learning process. Although the YouTube videos often have low resolution, diverse viewpoints and heavy background clutter, our method can successfully segment out human instances at different scales or under occlusions. In Figure 4, we show the results of our method and its image-based segmentation variant on the VOC 2012 validation set.
6 Conclusion and Future Work
In this paper, we presented a very-weakly supervised learning framework that learns to segment humans by watching YouTube videos, with the aid of an imperfect human detector. The video-context derived human masks are used to train and update the segmentation network, and the updated network in turn helps generate more precise masks. This process iterates to gradually improve the network. In future work, we plan to extend our framework to generic semantic segmentation.
This work was in part supported by the State Key Development Program under Grant No. 2016YFB1001000 and sponsored by the CCF-Tencent Open Fund.
-  L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs. arXiv preprint arXiv:1412.7062, 2014.
-  T. Chen, L. Lin, L. Liu, X. Luo, and X. Li. Disc: Deep image saliency computing via progressive representation learning. IEEE transactions on neural networks and learning systems, 27(6):1135–1149, 2016.
-  X. Chen, A. Shrivastava, and A. Gupta. Neil: Extracting visual knowledge from web data. In ICCV, 2013.
-  J. Choi, M. Rastegari, A. Farhadi, and L. S. Davis. Adding unlabeled samples to categories by learned attributes. In CVPR, pages 875–882, 2013.
-  J. Dai, K. He, and J. Sun. Convolutional feature masking for joint object and stuff segmentation. arXiv preprint arXiv:1412.1283, 2014.
-  J. Dai, K. He, and J. Sun. Boxsup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation. arXiv preprint arXiv:1503.01640, 2015.
-  S. K. Divvala, A. Farhadi, and C. Guestrin. Learning everything about anything: Webly-supervised visual concept learning. In CVPR, 2014.
-  M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. International Journal of Computer Vision, 88(2):303–338, 2010.
-  M. Guillaumin, D. Küttel, and V. Ferrari. Imagenet auto-annotation with segmentation propagation. International Journal of Computer Vision, 110(3):328–348, 2014.
-  B. Hariharan, P. Arbeláez, L. Bourdev, S. Maji, and J. Malik. Semantic contours from inverse detectors. In ICCV, pages 991–998, 2011.
-  B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. arXiv preprint arXiv:1411.5752, 2014.
-  S. D. Jain and K. Grauman. Supervoxel-consistent foreground propagation in video. In ECCV, pages 656–671. 2014.
-  A. Joulin, K. Tang, and L. Fei-Fei. Efficient image and video co-localization with frank-wolfe algorithm. In ECCV, pages 253–268. Springer, 2014.
-  P. Kohli, L. Ladický, and P. H. Torr. Robust higher order potentials for enforcing label consistency. International Journal of Computer Vision, 82(3):302–324, 2009.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
-  C. Li, L. Lin, W. Zuo, W. Wang, and J. Tang. An approach to streaming video segmentation with sub-optimal low-rank decomposition. IEEE Transactions on Image Processing, 25(5):1947–1960, 2016.
-  X. Liang, S. Liu, Y. Wei, L. Liu, L. Lin, and S. Yan. Towards computational baby learning: A weakly-supervised approach for object detection. In ICCV, 2015.
-  G. Lin, C. Shen, I. Reid, and A. v. d. Hengel. Efficient piecewise training of deep structured models for semantic segmentation. arXiv preprint arXiv:1504.01013, 2015.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In ECCV, pages 740–755. 2014.
-  J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, pages 3431–3440, 2015.
-  M. Mostajabi, P. Yadollahpour, and G. Shakhnarovich. Feedforward semantic segmentation with zoom-out features. arXiv preprint arXiv:1412.0774, 2014.
-  G. Papandreou, L.-C. Chen, K. Murphy, and A. L. Yuille. Weakly- and semi-supervised learning of a dcnn for semantic image segmentation. arXiv preprint arXiv:1502.02734, 2015.
-  J. Pont-Tuset, P. Arbeláez, J. Barron, F. Marques, and J. Malik. Multiscale combinatorial grouping for image segmentation and object proposal generation. arXiv preprint arXiv:1503.00848, 2015.
-  A. Prest, C. Leistner, J. Civera, C. Schmid, and V. Ferrari. Learning object class detectors from weakly annotated video. In CVPR, 2012.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. S. Bernstein, A. C. Berg, and L. Fei-Fei. Imagenet large scale visual recognition challenge. CoRR, abs/1409.0575, 2014.
-  O. Russakovsky, L.-J. Li, and L. Fei-Fei. Best of both worlds: human-machine collaboration for object annotation. In CVPR, pages 2121–2131, 2015.
-  A. Shrivastava, S. Singh, and A. Gupta. Constrained semi-supervised learning using attributes and comparative attributes. In ECCV, pages 369–383, 2012.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, arXiv:1409.1556, 2015.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
-  K. Tang, A. Joulin, L.-J. Li, and L. Fei-Fei. Co-localization in real-world images. In CVPR, 2014.
-  K. Tang, R. Sukthankar, J. Yagnik, and L. Fei-Fei. Discriminative segment annotation in weakly labeled video. In CVPR, pages 2483–2490, 2013.
-  K. Wang, D. Zhang, Y. Li, R. Zhang, and L. Lin. Cost-effective active learning for deep image classification. IEEE Transactions on Circuits and Systems for Video Technology, 2016.
-  X. Wang and A. Gupta. Unsupervised learning of visual representations using videos. In ICCV, pages 2794–2802, 2015.
-  C. Xu, C. Xiong, and J. J. Corso. Streaming hierarchical video segmentation. In ECCV, pages 626–639, 2012.
-  J. Xu, A. G. Schwing, and R. Urtasun. Learning to segment under various forms of weak supervision. In CVPR, 2015.
-  D. Zhang, O. Javed, and M. Shah. Video object segmentation through spatially accurate and temporally dense extraction of primary object regions. In CVPR, pages 628–635, 2013.
-  R. Zhang, L. Lin, R. Zhang, W. Zuo, and L. Zhang. Bit-scalable deep hashing with regularized similarity learning for image retrieval. IEEE Transactions on Image Processing, 24(12):4766–4779, 2015.
-  S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. Torr. Conditional random fields as recurrent neural networks. arXiv preprint arXiv:1502.03240, 2015.