An image pyramid can extend many object detection algorithms to detection at multiple scales. However, interpolation during the resampling process of an image pyramid causes gradient variation, i.e., a difference between the gradients of the original image and those of the scaled images. Our key insight is that the increased variance of gradients makes it difficult for classifiers to assign categories correctly. We prove the existence of the gradient variation by formulating the ratio of gradient expectations between an original image and scaled images, then propose a simple and novel gradient normalization method to eliminate the effect of this variation. The proposed normalization method reduces the variance in an image pyramid and allows the classifier to focus on a smaller coverage. We show the improvement in three different visual recognition problems: pedestrian detection, pose estimation, and object detection. The method is generally applicable to many vision algorithms based on an image pyramid with gradients.
Gradients and image pyramids are essential components of computer vision. Well-known methods based on the magnitudes and orientations of gradients include Histogram of Oriented Gradients (HOG) [dalal2005histograms], Scale-Invariant Feature Transform (SIFT) keypoints [lowe1999object], and Aggregated Channel Features (ACF) [dollar2014fast]. An image pyramid [adelson1984pyramid] is a collection of images resampled from an original image; the pyramid is used to make a computer vision problem invariant over multiple scales. Many object detectors (e.g., ACF-AdaBoost [dollar2014fast] and Viola and Jones [viola2004robust]) scan a detection window of a fixed size over an image pyramid.
However, the interpolation used while constructing the image pyramid usually causes a difference between the gradients of the original image and those of the scaled image [ruderman1994statistics]. When pixels in a downsampled image are computed using a bilinear function over the corresponding pixels, the pixel intensities keep a similar distribution and magnitude, but the pixels skipped during downsampling increase the gradients (the first derivative of intensity). In contrast, the pixels inserted during upsampling decrease the gradients. We define this difference between the original image and a scaled image as gradient variation.
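This effect is easy to reproduce numerically. The sketch below (a synthetic 1-D signal, with decimation by 2 standing in for the skipped-pixel case of downsampling) compares mean absolute central-difference gradients before and after downsampling:

```python
import numpy as np

def central_gradient(x):
    # Central difference: g[i] = (x[i+1] - x[i-1]) / 2
    return (x[2:] - x[:-2]) / 2.0

rng = np.random.default_rng(0)
# A smooth 1-D "image" row: low-pass filtered noise
row = np.convolve(rng.standard_normal(4096), np.ones(16) / 16, mode="same")

# Decimate by 2: the skipped-pixel case of downsampling
down = row[::2]

g_orig = np.abs(central_gradient(row)).mean()
g_down = np.abs(central_gradient(down)).mean()
print(g_orig, g_down)  # gradient magnitudes grow after downsampling
```

Because each central difference in the downsampled row spans twice as many original pixels, the smooth signal accumulates a larger intensity change per step, matching the trend described above.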
Our method is motivated by the observation that gradient variation decreases classifier accuracy. The increased variance of gradients over the image pyramid widens the coverage that the classifier must handle, which in turn decreases the accuracy and precision of the classifiers [geman1992neural, jones2003fast, park2010multiresolution, yan2013robust].
Hence, we propose a simple and novel gradient normalization method derived by analyzing the gradient variation from the viewpoint of the classifier (Fig. 1). The proposed method takes the original image as the reference and normalizes the gradients of the other resampled images to this reference. The normalized gradients, which resemble the gradients of the original images, reduce the variance and increase the performance of the classifiers with a negligible increase in computing time.
In this section, we discuss the change of gradients that occurs in an image pyramid, which is used to apply the fixed-size detector to multi-scale detection.
We compared original images with scaled images containing objects of the same identity, and observed that the first derivatives of intensity are greater in downsampled images than in the original images, even though the intensity distributions are similar [dollar2014fast, ruderman1994statistics, dollar2010fastest].
We theoretically show the gradient difference by computing the ratio of gradient expectations between the two images under three conditions: computing gradient using a central difference method [gonzalez2009digital], sampling images using a bilinear interpolation [gonzalez2009digital], and decomposing the problem to a one-dimensional form.
Let an image be a sequence consisting of the pixels of an upsampled image with scale . The reference image is the original image, the only natural data in the image pyramid. Linear interpolation computes an upsampled image by inserting new pixels between two adjacent pixels of the original image, so the pixels of the upsampled image are partitioned into inherited pixels and interpolated pixels. The number of pixels inserted between two adjacent inherited pixels is and is therefore an integer . The pixels in an upsampled image consist of a set of inherited pixels and sets of inserted pixels, so the total number of pixels is . The pixels in an upsampled image are approximated as
where is the distance between and the nearest inherited pixel leftward.
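As a sketch of this construction (1-D, integer scale s as in the text), the upsampled sequence keeps the inherited pixels in place and fills in s-1 linearly interpolated pixels between each adjacent pair:

```python
import numpy as np

def upsample_linear(x, s):
    # Keep inherited pixels every s positions; insert s-1 interpolated
    # pixels between each adjacent pair (1-D linear interpolation).
    n = len(x)
    out = np.empty((n - 1) * s + 1)
    out[::s] = x                          # inherited pixels
    for d in range(1, s):                 # inserted pixels at offset d/s
        t = d / s
        out[d::s] = (1 - t) * x[:-1] + t * x[1:]
    return out

row = np.array([0.0, 3.0, 6.0])
print(upsample_linear(row, 3))  # [0. 1. 2. 3. 4. 5. 6.]
```

Each inserted pixel is a convex combination of its two neighboring inherited pixels, weighted by its distance to the nearest inherited pixel, exactly as in the approximation above.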
We use a central difference function as the gradient function; an intermediate difference function appears in the calculation of the gradient expectation by substitution. When a gradient is computed at , the intermediate difference subtracts the pixel at from the adjacent pixel at , and the central difference subtracts the neighboring pixels at and :
where is an interval of a differential.
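The two difference operators can be sketched as follows (h is the differential interval; the printed values are hand-checked on x[i] = i**2):

```python
import numpy as np

def central_diff(x, h=1):
    # Central difference: (x[i+h] - x[i-h]) / (2h)
    return (x[2 * h:] - x[:-2 * h]) / (2.0 * h)

def intermediate_diff(x, h=1):
    # Intermediate (forward) difference: (x[i+h] - x[i]) / h
    return (x[h:] - x[:-h]) / float(h)

x = np.array([0.0, 1.0, 4.0, 9.0, 16.0])  # x[i] = i**2
print(central_diff(x))       # [2. 4. 6.]  -- exact derivative 2i at i = 1, 2, 3
print(intermediate_diff(x))  # [1. 3. 5. 7.]
```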
To prove the existence of the gradient difference, we compute the gradient expectation at scale :
The input images used in object detection typically have enough pixels to assume that the number of pixels is infinite. The gradient expectation is then approximated as :
Eq. 4 reveals that a gradient difference between the upsampled and reference image exists, and is determined by scale and the gradient expectation of an intermediate difference function .
We define gradient variation as the difference of gradient expectations between an original image and a scaled image, and formulate the variation as the ratio of the gradient expectations. We formulate the equations for the integer variable ; the practical algorithm, however, estimates real values of through nearest-neighbor or linear approximation, and with the same concept we extend the equations of the gradient variation to real values. The gradient variation between the upsampled image and the reference image is computed as
where is a constant. Because Eq. 5 applies only to upsampled images due to the definition of , we swap the roles of the reference image and the scaled image to represent downsampling. We invert to re-define it on the range , then calculate the inverse of as
The practical interval of for upsampling has a smaller rate of change than the interval of for downsampling, and the constant is close enough to to allow degree reduction ( and in the INRIA dataset). To simplify the gradient variation , we approximate the last term in the numerator as for upsampling.
The gradient variation for resampled images is computed as
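The trend can be checked empirically. The sketch below (synthetic smooth signal, linear resampling; all values illustrative) measures the ratio of mean absolute gradients between resampled and reference signals at a downsampling and an upsampling scale:

```python
import numpy as np

def mean_abs_grad(x):
    # Mean absolute central-difference gradient
    return np.abs((x[2:] - x[:-2]) / 2.0).mean()

def resample(x, s):
    # Linear interpolation to scale s (s > 1 upsamples, s < 1 downsamples)
    n = len(x)
    coords = np.linspace(0, n - 1, int(round(n * s)))
    return np.interp(coords, np.arange(n), x)

rng = np.random.default_rng(2)
row = np.convolve(rng.standard_normal(8192), np.ones(8) / 8, mode="same")
ref = mean_abs_grad(row)
r_down = mean_abs_grad(resample(row, 0.5)) / ref
r_up = mean_abs_grad(resample(row, 2.0)) / ref
print(r_down, r_up)  # downsampling raises gradients, upsampling lowers them
```

The ratio is above 1 for downsampling and below 1 for upsampling, consistent with the decreasing variation function derived above.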
Eq. 7 shows that is a decreasing function: upsampling decreases gradients and downsampling increases them. Consequently, the gradient distribution of the resampled images differs from that of the reference images, and the increased variance increases the difficulty of training the classifiers [geman1992neural, james2013introduction].
We propose a normalization method to eliminate the gradient variation. The proposed method normalizes the gradients of the resampled image to the gradients of the reference image to reduce the variance of gradients. The reduced variance makes the classifier concentrate on a small coverage, and improves overall precision and accuracy of detection [geman1992neural, jones2003fast, park2010multiresolution, yan2013robust]. We obtain the gradient normalization function as the inverse of the gradient variation as
with a bias term: and .
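As a concrete sketch (with hypothetical coefficients, not the fitted values from the paper), the normalization multiplier can be implemented as a polynomial in the scale, degree 1 for upsampling and degree 2 for downsampling, constructed so that gradients at the reference scale are left unchanged:

```python
import numpy as np

# Hypothetical, illustrative coefficients; the real values are fitted
# to training data as described in the text.
up_coef = np.array([0.2, 0.8])          # degree 1 for upsampling (s > 1)
down_coef = np.array([-0.4, 1.2, 0.2])  # degree 2 for downsampling (s < 1)

def normalize_gradients(grad_mag, s):
    # Multiply gradient magnitudes by the scale-dependent factor;
    # both polynomials evaluate to 1 at the reference scale s = 1,
    # so reference-image gradients are left unchanged.
    coef = up_coef if s >= 1.0 else down_coef
    return grad_mag * np.polyval(coef, s)

g = np.array([0.5, 1.0, 2.0])
print(normalize_gradients(g, 1.0))  # [0.5 1. 2.] -- unchanged at reference
```

With these example coefficients the multiplier is above 1 for upsampled images (boosting their weakened gradients) and below 1 for downsampled images (damping their inflated gradients).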
The normalization function consists of polynomials of degree 1 for upsampling and of degree 2 for downsampling. We compute the optimal coefficients of for the training set. Given a training image , we define an error criterion , which is a mean squared error to minimize the difference between the normalized gradient and the reference gradient:
where is a set of scales and is a set of training images.
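A minimal, unconstrained version of this fit, with synthetic ratio samples standing in for the gradient expectations gathered from real training images, can be done by ordinary least squares:

```python
import numpy as np

# Synthetic ratio samples (illustrative): ratios[i] approximates
# E[|g| at reference] / E[|g| at scale s_i], averaged over training images.
scales = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
ratios = np.array([1.0, 1.28, 1.55, 1.81, 2.05])

coef = np.polyfit(scales, ratios, deg=1)   # degree-1 polynomial (upsampling)
pred = np.polyval(coef, scales)
mse = np.mean((pred - ratios) ** 2)        # the mean-squared-error criterion
print(coef, mse)
```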
The separate training of the normalization functions for upsampling and downsampling requires an equality constraint between the original and normalized gradients at the reference scale. This constraint prevents normalization from altering the gradients of reference images and keeps the gradient normalization function continuous at the reference image; it is defined as
The error criterion and the equality constraint are combined into a Lagrangian
where subscripts and represent downsampled, upsampled and reference, respectively, are Vandermonde matrices of scales, are coefficients of the proposed polynomial equation,
are vectors of the ratios of gradients, and are Lagrange multipliers. The optimal coefficients of are computed by minimizing the Lagrangian [duda2012pattern].
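A small sketch of this constrained fit (synthetic ratio samples, degree 1, upsampling branch only): the stationary point of the Lagrangian is the solution of a KKT linear system, which enforces the equality constraint exactly:

```python
import numpy as np

# Synthetic ratio samples for the upsampling branch (illustrative values)
scales = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
ratios = np.array([1.0, 1.3, 1.6, 1.85, 2.1])

V = np.vander(scales, 2)      # Vandermonde matrix, columns [s, 1]
c = np.array([1.0, 1.0])      # constraint row: a*1 + b = 1

# KKT system for  min ||V w - r||^2  s.t.  c.w = 1:
# [ 2 V^T V   c ] [ w      ]   [ 2 V^T r ]
# [ c^T       0 ] [ lambda ] = [ 1       ]
K = np.zeros((3, 3))
K[:2, :2] = 2 * V.T @ V
K[:2, 2] = c
K[2, :2] = c
rhs = np.concatenate([2 * V.T @ ratios, [1.0]])
w = np.linalg.solve(K, rhs)[:2]
print(w, np.polyval(w, 1.0))  # the fitted line satisfies N(1) = 1
```

Unlike the unconstrained fit, the solution here passes exactly through 1 at the reference scale, which is what keeps reference images untouched and the two branches continuous.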
We compared the fitting accuracy of the gradient normalization between the proposed function and a power-law function (Fig. 2). The power law was chosen because it represents the natural-image statistics studied by Ruderman and Bialek [ruderman1994statistics] and Dollar et al. [dollar2014fast].
We show the effectiveness of gradient normalization in three applications: pedestrian detection, pose estimation, and object detection.
ACF [dollar2014fast, acfcode] is widely used for pedestrian detection [nam2014local, zhang2015filtered]. In this paper, we build ACF++, a simplified version of the filtered-channel-features detector [zhang2015filtered]: we combine the original ACF with the differences between two neighboring features, which are part of the checkerboards filters. Approximated ACF++ is a version of ACF++ with a fast feature pyramid. We evaluate ACF++ with normalized gradients (N-ACF++) on the INRIA dataset [dalal2005histograms]. N-ACF++ is trained in the same way as ACF++ except for the gradient normalization. To train the gradient normalization function, we collected the gradient expectations of both positive and negative images over scales from to in increments of . We applied our normalization method in both training and testing, and normalized only the gradient magnitudes so that the normalization propagates naturally to gradient-based features such as HOG. N-ACF++(PowerLaw) is a version of N-ACF++ trained with a power-law function. The proposed normalization method with ACF++ improves the log-average miss rate from 12.51% to 9.73% (Fig. 4).
Yang et al. [yang2013articulated, articulated] proposed the flexible mixtures-of-parts model (FMM) to estimate human poses. Each appearance model is trained as a filter of HOG-based features [dalal2005histograms] that consist of contrast-sensitive HOG, contrast-insensitive HOG, and gradient magnitudes. We evaluate the normalized FMM (N-FMM) on the PARSE dataset [ramanan2006learning], using the INRIA dataset [dalal2005histograms] as negative images. We achieved a 2%p overall improvement in the probability of correct keypoint (Table 1).
The deformable part model (DPM) of Felzenszwalb et al. [felzenszwalb2010object, voc-release5] is a representative approach to object detection. DPM consists of mixtures of multiscale deformable part models trained from partially labeled data; each part model includes an appearance model and a spatial model. The appearance models are trained as filters of HOG-based features [dalal2005histograms] that consist of contrast-sensitive HOG, contrast-insensitive HOG, and gradient magnitudes. We evaluate the normalized DPM (N-DPM) on the PASCAL 2007 dataset [pascal-voc-2007]. We achieve a 1%p overall improvement and a 4.4%p maximum improvement in average precision [pascal-voc-2007] (Table 2).
Our research reinterprets gradient variation from the viewpoint of the classifier. Unlike conventional approaches that concentrate on computing resized images, ours concentrates on decreasing the coverage of the classifier to sharpen its focus. We prove the existence of the gradient variation by formulating the ratio of gradient expectations between an original image and scaled images, then estimate a normalization function to eliminate the effect of this variation. Our calculations and experiments confirm the validity of the gradient normalization function. The proposed method is not restricted to object detection; it can be applied in many gradient-based methods with a negligible cost in computing time. In future work, we will adapt our method to deep-learning-based features.
This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (2014-0-00059, Development of Predictive Visual Intelligence Technology); by MSIP (Ministry of Science, ICT and Future Planning), Korea, under the “ICT Consilience Creative Program” (IITP-R0346-16-1007) supervised by the IITP; and by MSIP, Korea, under the ITRC (Information Technology Research Center) support program (IITP-2017-2016-0-00464) supervised by the IITP.