Dealing with sequences in the RGBDT space

by Gabriel Moyà, et al.

Most of the current research in computer vision is focused on working with single images without taking into account temporal information. We present a probabilistic non-parametric model that mixes multiple information cues from devices to segment regions that contain moving objects in image sequences. We prepared an experimental setup to show the importance of using previous information for obtaining an accurate segmentation result, using a novel dataset that provides sequences in the RGBDT space. We label the detected regions with a state-of-the-art human detector. Each of the detected regions is marked as human at least once.






1 Introduction

Deep learning approaches have proven to solve per-frame problems accurately. Most of the current research in computer vision is focused on working with single images without taking into account temporal information. Our point of view is that the key to solving a wide range of computer vision problems lies in the concept of the sequence. In a sequence, part of the information necessary to solve the problem on the current frame is given by the previous ones.

Tasks that are considered simple, such as detecting changes in a scene, segmenting the foreground from the background, or detecting moving objects in image sequences, are still challenging.

1.1 The RGBDT space

Color information is the most used feature in the history of computer vision; standard cameras only capture this type of information. In [22], several important challenges of color information were described, such as shadows, changes in scene illumination, camouflage and foreground aperture. Classical problems based on color information continue to be challenging for modern approaches, as described in [20], where 29 different algorithms were evaluated and compared. A feasible solution to overcome the limitations of classical color-based approaches consists of adding new information to our proposed algorithms. Nowadays we can find devices that provide us with novel cues such as depth or thermal information.

Depth sensors provide geometrical information about the scene, where each pixel value represents the distance from the device to the point in the real world. The depth channel differs in its characteristics from color channels. In particular, it has a significant amount of missing information from instances in which the sensor is unable to obtain information at certain pixels. Depth devices suffer from several problems, such as depth camouflage, specular materials, near objects, remote parts of the scene, non-reachable areas and shadows, which were described in [18].

Thermal imagery comes from passive sensors that capture the infrared radiation emitted by objects, so instead of color or geometry it adds temperature information and eliminates the typical illumination problems of normal greyscale and RGB cameras. Problems of this type of information include reflections of the thermal radiation, and a halo effect can also be observed around warm items [9].

1.2 Background subtraction

There is a large body of literature on the subject of background subtraction; here we focus on approaches that fuse more than one information cue. Most of the techniques discussed below modify traditional background subtraction approaches by applying the same algorithm to depth or thermal data (in addition to the color channels) and suggesting some heuristics to address the heterogeneous characteristics of these different cues.

Harville et al. presented an approximation to Gaussian mixture modeling at each pixel [10]. A multidimensional Gaussian mixture distribution was constructed, with three components in a luminance-normalized color space and one depth channel. Special processing was performed to address absent depth pixels. No update phase was described; therefore, this algorithm can only be used in static scenes.

Hofmann and Rigoll also proposed a Mixture of Gaussians approach in which depth and infrared data are combined to detect foreground objects [12]. Each pixel was classified by binary combinations of foreground masks. The performance of this approach is limited because a failure of one of the models affects the final pixel classification.


Camplani et al. proposed a per-pixel background modeling approach that fuses different statistical classifiers based on depth and color data by means of a weighted average combination [2]. A mixture of Gaussian distributions was used to model the background, and a uniform distribution was used for modeling the foreground. The same authors presented another approach in [1], based on the fusion of multiple region-based classifiers. Foreground objects were detected by combining a region-based foreground depth data prediction with different background models. The information given by these modules is fused in a mixture-of-experts fashion to improve the foreground detection accuracy.

Leens et al. presented a new approach using RGB and ToF (Time-of-Flight) cameras based on a Parzen-windows-like process. Each model was processed independently, and the foreground masks were then combined using logical operations and post-processed with morphological operators [15].

Fernandez-Sanchez et al. proposed an adaptation of the Codebook [13] background subtraction algorithm using a four-channel codebook [8]. Depth information was also used to bias the distance in chromaticity space associated with a pixel according to the depth measurements. Therefore, when the depth value is invalid, the detection depends entirely on color information.

Clapés et al. presented a background subtraction technique in which a four-dimensional Gaussian distribution was used as the first step of a user identification and object recognition surveillance system. As they used a single Gaussian approximation, the algorithm was not able to manage multi-modal backgrounds [4]. A similar problem can be observed in other approaches, such as [10] and [14].

1.3 Recognizing people

Multiple cues are also used to detect objects in image sequences; a survey on this subject using a Kinect (RGBD camera) can be found in [11].

Liu et al. developed automatic detection and tracking of people in cluttered and dynamic environments using a single RGBD camera. The original RGBD pixels are transformed to a Point Ensemble Image (PEI), with which they demonstrate that human detection and tracking in 3D space can be performed. They report some missing detections, mainly caused by depth data loss [17].

Spinello and Arras created a new way to detect humans in images with the Histograms of Oriented Depth (HOD) detector [21]. The authors developed a human detector using depth data instead of color data and fused the Histograms of Oriented Gradients (HOG) and HOD detectors together. HOG and HOD descriptors were classified and then fused using a learned Support Vector Machine (SVM).

Choi et al. combined five observation models: HOG, shape from depth data, frontal face detection, skin color, and motion detection [3]. The image data was gathered with an RGBD camera, and a Reversible-Jump Markov Chain Monte Carlo (RJ-MCMC) algorithm was used to detect people in a frame. Their results showed that the combination of different detection cues provided more reliable results, and that the advantages of different cues could overcome the disadvantages of others.


Davis and Sahin presented a novel method to identify humans by combining features detected in RGB, depth, and thermal images [5]. They used HOG features extracted from the three image types and created a multi-layer classifier that did not outperform a simple SVM thermal HOG classifier.


Palmero et al. addressed the problem of human body segmentation from multi-modal visual cues and proposed a novel RGB-Depth-Thermal dataset [19]. In order to classify regions of images as human or non-human, they fused the features from each cue in a Gaussian Mixture Model (GMM) and classified patches of the image with a random forest.


1.4 Aim of the paper

Our aim is to create a unified model that mixes multiple information cues from the devices to segment foreground regions in image sequences. These regions are those that contain moving objects. We try to label those regions by applying a state-of-the-art people detector. In this paper we present an experimental setup using a dataset that provides sequences in the RGBDT space, adapting our previous work [18] to include thermal information.

The paper is organized as follows. In Section 1, we explain the context of the problem, the related work and our goal. In Section 2, we describe the proposed model to detect objects and the challenges of working with sequences in a multidimensional space. In Section 3, we describe an experimental configuration of the proposed method with three different sequences and the preliminary results we obtained. Finally, we present the conclusions.

2 Detecting objects in the RGBDT space

In order to select the regions with moving objects in an image sequence, we use a non-parametric algorithm that is capable of mixing the color, depth and thermal information in a low-level way, using the previous information as the reference to segment the current frame.

The scene modelling consists of a Kernel Density Estimation (KDE) process. Given the last n observations of a pixel, denoted by x_1, x_2, \ldots, x_n, in the d-dimensional observation space \mathbb{R}^d, which encloses the sensor data values, it is possible to estimate the probability density function (pdf) of each pixel with respect to all previously observed values:

P(x_t) = \frac{1}{n} \sum_{i=1}^{n} K_H(x_t - x_i),

where K_H is a multivariate kernel, satisfying \int K_H(x)\,dx = 1 and K_H(x) \geq 0. H is the bandwidth matrix, which is a symmetric positive-definite d \times d matrix.

The choice of the bandwidth matrix H is the single most important factor affecting the estimation accuracy because it controls the amount and orientation of smoothing induced [24].

Diagonal bandwidth matrices allow different amounts of smoothing in each of the dimensions and are the most widespread due to computational reasons [23]. The most commonly used kernel density function is the Normal function, which is the one selected in our approach:

K_H(x) = (2\pi)^{-d/2} |H|^{-1/2} \exp\left(-\tfrac{1}{2} x^T H^{-1} x\right).

The final probability density function can be written as

P(x_t) = \frac{1}{n} (2\pi)^{-d/2} |H|^{-1/2} \sum_{i=1}^{n} \exp\left(-\tfrac{1}{2} (x_t - x_i)^T H^{-1} (x_t - x_i)\right).
Given this estimate at each pixel, a pixel is considered foreground if its probability is under a certain threshold; see Fig. 1.
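As an illustration, the per-pixel KDE with a diagonal bandwidth matrix can be sketched in a few lines of NumPy. This is a minimal sketch under our own assumptions (the bandwidth is stored as a vector of per-channel variances, and all names, array shapes and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def kde_foreground(history, frame, bandwidth, threshold):
    """Mark a pixel as foreground when its kernel density estimate over
    the last n observations falls below `threshold`.

    history   : (n, H, W, d) last n observations per pixel
    frame     : (H, W, d) current observation per pixel
    bandwidth : (d,) diagonal entries (variances) of the bandwidth matrix H
    threshold : scalar probability threshold
    """
    n = history.shape[0]
    diff = frame[None] - history                         # (n, H, W, d)
    # Product of 1-D Normal kernels == multivariate kernel with diagonal H
    norm = np.prod(np.sqrt(2.0 * np.pi * bandwidth))     # (2*pi)^(d/2)*|H|^(1/2)
    expo = np.exp(-0.5 * np.sum(diff ** 2 / bandwidth, axis=-1))
    density = np.sum(expo, axis=0) / (n * norm)          # (H, W)
    return density < threshold

# Toy usage: 50 past observations per pixel, 3 channels; one pixel jumps.
rng = np.random.default_rng(0)
history = rng.normal(0.5, 0.02, size=(50, 4, 4, 3))
frame = np.full((4, 4, 3), 0.5)
frame[0, 0] = 0.9                                        # a "moving object" pixel
mask = kde_foreground(history, frame, np.full(3, 1e-3), threshold=1.0)
```

Pixels whose current value is well explained by their recent history get a high density and stay in the background; the outlier pixel falls under the threshold and is flagged as foreground.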

(a) RGB Image.
(b) Thermal image.
(c) Depth image.
(d) Segmentation result.
Figure 1: Result of the modelling algorithm after 500 frames. Figures (a), (b) and (c) represent the input cues and (d) depicts the foreground mask.

From the results of the scene modelling algorithm we extract the Regions Of Interest (ROI) of each frame. After applying a morphological opening operation, the ROI is defined as the bounding box of the remaining relevant blobs. Each region of interest should contain an individual object instance. In our case, different objects may overlap in space, resulting in a bigger region that can contain more than one item.
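The opening-plus-bounding-box step can be sketched with SciPy's `ndimage` module. The `min_area` filter and the 3x3 structuring element are our own assumptions for the sketch; the paper does not state these parameters:

```python
import numpy as np
from scipy import ndimage

def extract_rois(mask, min_area=20):
    """Apply a morphological opening to a binary foreground mask and
    return the bounding box (row0, col0, row1, col1) of each remaining
    blob with at least `min_area` pixels."""
    opened = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    labels, num = ndimage.label(opened)
    rois = []
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        if (labels[sl] == i).sum() >= min_area:        # keep relevant blobs only
            rois.append((sl[0].start, sl[1].start, sl[0].stop, sl[1].stop))
    return rois

# Toy mask: one solid blob plus an isolated noise pixel.
mask = np.zeros((20, 20), dtype=bool)
mask[5:11, 5:11] = True                 # real moving-object blob
mask[0, 0] = True                       # speckle removed by the opening
rois = extract_rois(mask)
```

The opening erases the isolated speckle while the solid blob survives, so a single bounding box is returned.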

2.1 RGBDT issues

Each type of information has its own particularities. We are not interested in creating a model for each cue, so we studied the characteristics of each one in order to include them properly into the unified model.

Usually color information is useful for suppressing shadows from detection by separating color information from lightness information. To construct a robust algorithm that is independent of illumination variations, we separated color information from luminance information using a non-luminance-dependent color space. Chromaticity is the description of a color ignoring its luminance, and it can be described as a combination of hue and saturation. Given the device's three color channels R, G, B, the chromaticity coordinates r, g and b are r = R/(R+G+B), g = G/(R+G+B) and b = B/(R+G+B), where r + g + b = 1 [16]. In our model we use two dimensions: r and g.
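As a small illustration, the r and g coordinates can be computed from an RGB image as follows (the epsilon guard against division by zero on black pixels is our own addition):

```python
import numpy as np

def chromaticity_rg(rgb):
    """Convert an (H, W, 3) RGB image to the two chromaticity coordinates
    r = R/(R+G+B) and g = G/(R+G+B), which discard luminance."""
    total = rgb.sum(axis=-1, keepdims=True) + 1e-8   # epsilon: avoid 0/0
    return rgb[..., :2] / total

# A dark gray and a brighter gray pixel share the same chromaticity.
img = np.array([[[0.2, 0.2, 0.2], [0.4, 0.4, 0.4]]])
rg = chromaticity_rg(img)
```

Because the coordinates are normalized by the total intensity, scaling a pixel's brightness leaves its (r, g) values unchanged, which is exactly the illumination invariance the model relies on.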

The depth channel, D, has a significant amount of missing information from instances in which the sensor is unable to estimate the depth at certain pixels. For this purpose, we properly defined the Absent Depth Observations (ADO) to include them in the scene model by constructing a probabilistic model. Therefore, absent observations can be handled in a unified manner [18].

The thermal channel, T, is similar to a grayscale image. The thermal cue appears to segment the human body more accurately, but it can include some undesired reflections and illuminated warm objects [19]. It can be added directly as a new dimension to our background model. In indoor scenarios, when people appear in the scene they provoke an effect similar to switching on a light, so we decided to use a high bandwidth value for this channel in order to smooth this effect.
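Putting the three cues together, one possible per-pixel observation vector is (r, g, D, T), with absent depth observations flagged separately; a larger bandwidth would then be assigned to the T dimension, as discussed above. This layout and the use of 0 as the ADO sentinel are illustrative assumptions rather than the paper's exact encoding:

```python
import numpy as np

def rgbdt_observation(rgb, depth, thermal):
    """Stack the cues into a per-pixel observation vector (r, g, D, T)
    and flag Absent Depth Observations (ADO).

    rgb     : (H, W, 3) floats in [0, 1]
    depth   : (H, W) floats, with 0 marking an absent depth observation
    thermal : (H, W) floats in [0, 1]
    Returns (obs, ado): (H, W, 4) observations and an (H, W) ADO mask.
    """
    total = rgb.sum(axis=-1) + 1e-8            # epsilon: avoid 0/0
    r = rgb[..., 0] / total
    g = rgb[..., 1] / total
    ado = depth == 0                           # pixels the sensor could not measure
    obs = np.stack([r, g, depth, thermal], axis=-1)
    return obs, ado

rgb = np.full((2, 2, 3), 0.3)
depth = np.array([[1.0, 0.0], [2.0, 3.0]])
thermal = np.full((2, 2), 0.5)
obs, ado = rgbdt_observation(rgb, depth, thermal)
```

The ADO mask lets the scene model treat missing depth as its own probabilistic event instead of a spurious distance, in the spirit of [18].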

Figure 2: Description of the whole process. The first image corresponds to the RGB frame. The second image depicts the foreground mask. The third image shows the area labeled as region of interest. The last image shows the detection of a human (yellow box) by the algorithm described in [7].

3 An initial experiment: recognizing people

To test our approach we used a novel dataset. This dataset, described in [19], features a total of 11,537 RGB-Depth-Thermal frames divided into three indoor scenes, of which 5,724 were annotated. It contains up to three individuals who appear concurrently in three indoor scenarios, performing diverse actions that involve interaction with objects. The RGBDT data stream was recorded using a Microsoft Kinect for XBOX360, which captures the RGB and depth image streams, and an AXIS Q1922 thermal camera.

Scenes 1 and 2 were situated in a closed meeting room with little natural light to disturb the sense of depth, while scene 3 was situated in an area with wide windows and a substantial amount of sunlight. The human subjects were walking, reading, using their phones or interacting with each other.

Results of the object detection algorithm for Scenes 2 and 3 are depicted in Fig. 3 and Fig. 4. We can observe that after applying the scene modelling process we obtained the ROIs with the moving objects.

The next step was to label the selected regions that correspond to human instances. A first approach to check the viability of the proposed solution for recognizing people in image sequences was to apply the state-of-the-art human detector described in [7] to the ROIs of each frame. See the whole process in Fig. 2.

Only one of the fifteen person instances that appear in the three sequences was not labeled, due to its short time in the scene and its unusual pose.

Figure 3: Results of our algorithm applied to the second sequence of the dataset.
Figure 4: Results of our algorithm applied to the third sequence of the dataset.

4 Conclusions

In this article we presented an approach to label regions that contain moving objects in image sequences by applying a probabilistic non-parametric model that mixes multiple information cues. We prepared an experimental setup using a dataset that provides sequences in the RGBDT space to show the importance of using previous information in order to obtain an accurate segmentation result. We adapted our previous model [18] to add a new cue: thermal information. We labeled people in the regions with moving objects using a state-of-the-art algorithm. All but one of the detected regions were labeled as human in at least one frame.


  • [1] M. Camplani, C. R. del Blanco, L. Salgado, F. Jaureguizar, and N. García. Multi-sensor background subtraction by fusing multiple region-based probabilistic classifiers. Pattern Recognition Letters, 50:23–33, Dec. 2014.
  • [2] M. Camplani and L. Salgado. Background foreground segmentation with RGB-D Kinect data: An efficient combination of classifiers. Journal of Visual Communication and Image Representation, 25(1):122–136, Jan. 2013.
  • [3] W. Choi, C. Pantofaru, and S. Savarese. Detecting and tracking people using an rgb-d camera via multiple detector fusion. In Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on, pages 1076–1083. IEEE, 2011.
  • [4] A. Clapés, M. Reyes, and S. Escalera. Multi-modal user identification and object recognition surveillance system. Pattern Recognition Letters, 34(7):799–808, May 2013.
  • [5] M. Davis and F. Sahin. Hog feature human detection system. In Systems, Man, and Cybernetics (SMC), 2016 IEEE International Conference on, pages 002878–002883. IEEE, 2016.
  • [6] A. Elgammal, D. Harwood, and L. Davis. Non-parametric model for background subtraction. In European conference on computer vision, pages 751–767. Springer, 2000.
  • [7] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. IEEE transactions on pattern analysis and machine intelligence, 32(9):1627–1645, 2010.
  • [8] E. J. Fernandez-Sanchez, J. Diaz, and E. Ros. Background subtraction based on color and depth using active sensors. Sensors (Basel, Switzerland), 13(7):8895–915, Jan. 2013.
  • [9] R. Gade and T. B. Moeslund. Thermal cameras and applications: a survey. Machine vision and applications, 25(1):245–262, 2014.
  • [10] G. Gordon and T. Darrell. Background estimation and removal based on range and color. In Computer Vision and Pattern Recognition, 1999. IEEE Computer Society Conference on, volume 2. IEEE, June 1999.
  • [11] J. Han, L. Shao, D. Xu, and J. Shotton. Enhanced computer vision with microsoft kinect sensor: A review. IEEE transactions on cybernetics, 43(5):1318–1334, 2013.
  • [12] M. Hofmann and G. Rigoll. Depth gradient based segmentation of overlapping foreground objects in range images. In 2010 13th International Conference on Information Fusion (2010), 2010.
  • [13] K. Kim, T. H. Chalidabhongse, D. Harwood, and L. Davis. Background modeling and subtraction by codebook construction. In Image Processing, 2004. ICIP'04. International Conference on, volume 5, pages 2–5. IEEE, 2004.
  • [14] V. Kolmogorov, A. Criminisi, A. Blake, G. Cross, and C. Rother. Bi-Layer Segmentation of Binocular Stereo Video. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), 2:407–414, 2005.
  • [15] J. Leens, S. Piérard, O. Barnich, M. Van Droogenbroeck, and J.-M. Wagner. Combining color, depth, and motion for video segmentation. In Computer Vision Systems, pages 104–113. Springer, 2009.
  • [16] M. D. Levine. Vision in Man and Machine, volume 574. McGraw-Hill, New York, 1985.
  • [17] J. Liu, Y. Liu, G. Zhang, P. Zhu, and Y. Q. Chen. Detecting and tracking people in real time with rgb-d camera. Pattern Recognition Letters, 53:16–23, 2015.
  • [18] G. Moya-Alcover, A. Elgammal, A. Jaume-i-Capó, and J. Varona. Modeling depth for nonparametric foreground segmentation using rgbd devices. Pattern Recognition Letters, 96:76–85, 2017.
  • [19] C. Palmero, A. Clapés, C. Bahnsen, A. Møgelmose, T. B. Moeslund, and S. Escalera. Multi-modal rgb–depth–thermal human body segmentation. International Journal of Computer Vision, 118(2):217–239, 2016.
  • [20] A. Sobral and A. Vacavant. A comprehensive review of background subtraction algorithms evaluated with synthetic and real videos. Computer Vision and Image Understanding, 122:4–21, May 2014.
  • [21] L. Spinello and K. O. Arras. People detection in rgb-d data. In Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on, pages 3838–3843. IEEE, 2011.
  • [22] K. Toyama, J. Krumm, B. Brumitt, and B. Meyers. Wallflower: Principles and practice of background maintenance. In Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on, September 1999.
  • [23] M. P. Wand and M. C. Jones. Comparison of smoothing parameterizations in bivariate kernel density estimation. Journal of the American Statistical Association, 88(422):520–528, 1993.
  • [24] M. P. Wand and M. C. Jones. Kernel smoothing, volume 60. Crc Press, 1994.