Detector With Focus: Normalizing Gradient In Image Pyramid

by Yonghyun Kim, et al.

An image pyramid can extend many object detection algorithms to detection at multiple scales. However, interpolation during the resampling process of an image pyramid causes gradient variation, i.e., a difference between the gradients of the original image and those of the scaled images. Our key insight is that the increased variance of gradients makes it difficult for classifiers to assign categories correctly. We prove the existence of the gradient variation by formulating the ratio of gradient expectations between an original image and scaled images, then propose a simple and novel gradient normalization method to eliminate the effect of this variation. The proposed normalization method reduces the variance in an image pyramid and allows the classifier to focus on a smaller coverage. We show the improvement in three different visual recognition problems: pedestrian detection, pose estimation, and object detection. The method is generally applicable to many vision algorithms based on an image pyramid with gradients.




1 Introduction

Gradients and image pyramids are essential components of computer vision. Well-known methods based on the magnitudes and orientations of gradients are Histogram of Oriented Gradients (HOG)

[dalal2005histograms], Scale-Invariant Feature Transform (SIFT) keypoints [lowe1999object], and Aggregated Channel Features (ACF) [dollar2014fast]. An image pyramid [adelson1984pyramid] is a collection of images resampled from an original image; the pyramid is used to make a computer vision problem invariant over multiple scales. Many object detectors (e.g., ACF-AdaBoost [dollar2014fast] and Viola-Jones [viola2004robust]) scan a fixed-size detection window over an image pyramid.

However, interpolation while constructing the image pyramid usually causes a difference between the gradients of the original image and those of the scaled image [ruderman1994statistics]. When pixels in a downsampled image are computed with a bilinear function over the corresponding original pixels, the pixel intensities keep a similar distribution and magnitude, but the pixels skipped during downsampling increase the gradients (the first derivative of intensity). In contrast, the pixels inserted during upsampling decrease the gradients. We define this difference between the original image and a scaled image as gradient variation.
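This effect is easy to reproduce numerically. The sketch below is our illustration (not the paper's code): it builds a 1-D random-walk signal, whose smooth statistics loosely mimic natural images, resamples it with linear interpolation, and compares the mean central-difference gradient magnitude at each scale.

```python
import numpy as np

def resample(x, scale):
    """Linearly resample a 1-D signal to round(len(x) * scale) samples."""
    n = len(x)
    m = max(int(round(n * scale)), 2)
    return np.interp(np.linspace(0, n - 1, m), np.arange(n), x)

def mean_gradient(x):
    """Mean magnitude of the central-difference gradient."""
    return np.abs((x[2:] - x[:-2]) / 2.0).mean()

rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(10_000))  # smooth random-walk test signal

g_ref = mean_gradient(signal)
g_up = mean_gradient(resample(signal, 2.0))    # upsampling: gradients shrink
g_down = mean_gradient(resample(signal, 0.5))  # downsampling: gradients grow

print(f"up x2: {g_up / g_ref:.2f}, reference: 1.00, down x0.5: {g_down / g_ref:.2f}")
```

For this signal the upsampled ratio falls clearly below 1 and the downsampled ratio rises clearly above it, which is exactly the gradient variation described above.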

Figure 1: The proposed method constructs an image pyramid, and computes normalized gradients using the proposed normalization function. Unlike an image pyramid and a fast feature pyramid, the proposed method enhances the quality of samples in both training and testing to improve the accuracy of classifiers.

Our method is motivated by the observation that gradient variation decreases the accuracy of classifiers. The increased variance of gradients over the image pyramid widens the coverage the classifier must handle, and the increased coverage decreases the accuracy and precision of the classifiers [geman1992neural, jones2003fast, park2010multiresolution, yan2013robust].

Hence, we propose a simple and novel gradient normalization method by analyzing the gradient variation from the viewpoint of the classifier (Fig. 1). The proposed method takes the original image as the reference and normalizes the gradients of the other resampled images toward it. The normalized gradients, which resemble the gradients of the original images, reduce the variance and increase the performance of the classifiers with a negligible increase in computing time.

2 Gradient Variation in Multi-Scale

In this section, we discuss the change of gradients that occurs in an image pyramid, which is used to apply a fixed-size detector to multi-scale detection.

2.1 Analysis of Gradient in Multi-Scale

We compared original images with scaled images containing objects of the same identity, and observed that the first derivatives of intensity are greater in downsampled images than in the original images, even though the intensity distributions are similar [dollar2014fast, ruderman1994statistics, dollar2010fastest].

We show the gradient difference theoretically by computing the ratio of gradient expectations between the two images under three conditions: gradients are computed with a central difference method [gonzalez2009digital], images are resampled with bilinear interpolation [gonzalez2009digital], and the problem is decomposed into a one-dimensional form.

Let an image be a sequence that consists of the pixels from an upsampled image with scale . The reference image is the original image and the only natural data in the image pyramid. Linear interpolation computes an upsampled image by inserting new pixels between two adjacent pixels of the original image, so the pixels of the upsampled image are partitioned into inherited pixels and interpolated pixels. The number of pixels inserted between two adjacent inherited pixels is therefore a nonnegative integer. The pixels in an upsampled image consist of a set of inherited pixels and sets of inserted pixels, so the total number of pixels is . The pixels in an upsampled image are approximated as


where the offset is the distance from the pixel to the nearest inherited pixel to its left.

We use a central difference function as the gradient function; an intermediate difference function appears in the calculation of the gradient expectation by substitution. When a gradient is computed at a pixel, the intermediate difference subtracts the pixel from its adjacent pixel, and the central difference subtracts the two neighboring pixels:


where is an interval of a differential.
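As a reminder of the standard definitions used here (with f denoting the 1-D signal and h the differencing interval; this notation is ours):

```latex
g_{\mathrm{central}}(x) = \frac{f(x+h) - f(x-h)}{2h},
\qquad
g_{\mathrm{intermediate}}(x) = \frac{f(x+h) - f(x)}{h}.
```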

To prove the existence of the gradient difference, we compute the gradient expectation  at scale :


The input images used in object detection typically have enough pixels for the pixel count to be treated as infinite, so the expectation is approximated as:


Eq. 4 reveals that a gradient difference between the upsampled image and the reference image exists, determined by the scale and by the gradient expectation of the intermediate difference function.

2.2 Formulation of Gradient Variation

We define gradient variation as the difference of gradient expectations between an original image and a scaled image, and formulate the variation as the ratio of the gradient expectations. We formulate the equations for an integer scale variable; in practice, however, the algorithm estimates a real-valued scale through nearest-neighbor or linear approximation, and we extend the equations of the gradient variation to real values in the same way. Gradient variation between the upsampled image and the reference image is computed as


where the remaining term is a constant. Because Eq. 5 applies only to upsampled images, by the definition of the scale, we swap the roles of the reference image and the scaled image to represent downsampling. We invert the scale to redefine it on the downsampling range, then calculate the inverse of the variation as


The practical scale interval for upsampling has a smaller rate of change than the interval for downsampling, and the constant is close enough for degree reduction (measured on the INRIA dataset). To simplify the gradient variation, we approximate the last term in the numerator for upsampling.

The gradient variation for resampled images is computed as


Eq. 7 shows that the gradient variation is a decreasing function of scale: upsampling decreases gradients and downsampling increases them. Consequently, the gradient distribution of resampled images differs from that of the reference images, and the increased variance increases the difficulty of training the classifiers [geman1992neural, james2013introduction].
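This decreasing trend can be checked empirically. The snippet below is an illustrative check under our own 1-D model (a random-walk signal and linear resampling), not the paper's derivation: it tabulates the ratio of mean gradient magnitudes against the reference over a range of scales and shows it decreasing monotonically in the scale factor.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(20_000))  # smooth random-walk test signal

def mean_grad(y):
    """Mean magnitude of the central-difference gradient."""
    return np.abs((y[2:] - y[:-2]) / 2.0).mean()

def grad_ratio(scale):
    """Empirical gradient variation: E[|grad|] at `scale` over E[|grad|] at scale 1."""
    n = len(x)
    y = np.interp(np.linspace(0, n - 1, int(n * scale)), np.arange(n), x)
    return mean_grad(y) / mean_grad(x)

scales = [0.25, 0.5, 1.0, 2.0, 4.0]
ratios = [grad_ratio(s) for s in scales]
for s, r in zip(scales, ratios):
    print(f"scale {s:4.2f}: ratio {r:.2f}")
```

The ratio is above 1 for every downsampling scale, exactly 1 at the reference, and below 1 for every upsampling scale, matching the trend stated for Eq. 7.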

3 Gradient Normalization

We propose a normalization method to eliminate the gradient variation. The proposed method normalizes the gradients of the resampled image to the gradients of the reference image to reduce the variance of gradients. The reduced variance lets the classifier concentrate on a smaller coverage, and improves the overall precision and accuracy of detection [geman1992neural, jones2003fast, park2010multiresolution, yan2013robust]. We obtain the gradient normalization function as the inverse of the gradient variation as


with a bias term: and .

The normalization function consists of a polynomial of degree 1 for upsampling and of degree 2 for downsampling. We compute the optimal coefficients on the training set. Given a training image, we define an error criterion, a mean squared error that minimizes the difference between the normalized gradient and the reference gradient:


where is a set of scales and is a set of training images.

Training the normalization functions separately for upsampling and downsampling requires an equality constraint between the original and normalized gradients at the reference scale. The constraint prevents normalization of the reference image and keeps the gradient normalization function continuous at the reference scale; it is defined as


The error criterion and the equality constraint are combined into a Lagrangian


where the subscripts denote downsampled, upsampled, and reference, respectively; the Vandermonde matrices are built from the scales; the unknowns are the coefficients of the proposed polynomials; the target vectors contain the ratios of gradients; and the remaining variables are Lagrange multipliers. The optimal coefficients are computed by minimizing the Lagrangian [duda2012pattern].
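A constrained fit of this form can be sketched as follows. The snippet is our illustration (the function name, the toy square-root targets, and the variable names are ours, not the paper's): it fits a degree-1 polynomial in scale for upsampling and a degree-2 polynomial for downsampling, subject to the constraint that both evaluate to 1 at the reference scale, by solving the KKT system of the Lagrangian.

```python
import numpy as np

def fit_constrained(scales, ratios, degree):
    """Least-squares fit of a polynomial n(s) to (scale, ratio) pairs,
    subject to the equality constraint n(1) = 1 (no normalization at the
    reference scale), solved through the KKT system of the Lagrangian."""
    V = np.vander(np.asarray(scales, dtype=float), degree + 1)  # s^d ... s^0
    r = np.asarray(ratios, dtype=float)
    c = np.ones(degree + 1)            # row evaluating the polynomial at s = 1
    k = degree + 1
    # KKT system: [2 V^T V, c^T; c, 0] [a; lam] = [2 V^T r; 1]
    A = np.zeros((k + 1, k + 1))
    A[:k, :k] = 2.0 * V.T @ V
    A[:k, k] = c
    A[k, :k] = c
    b = np.concatenate([2.0 * V.T @ r, [1.0]])
    return np.linalg.solve(A, b)[:k]   # drop the Lagrange multiplier

# Toy targets roughly following sqrt(s) (ours, for illustration only).
up_scales = np.array([1.0, 1.5, 2.0, 3.0, 4.0])
coef_up = fit_constrained(up_scales, np.sqrt(up_scales), degree=1)

down_scales = np.array([0.25, 0.4, 0.5, 0.75, 1.0])
coef_down = fit_constrained(down_scales, np.sqrt(down_scales), degree=2)

print("upsampling coefficients:", coef_up)
print("downsampling coefficients:", coef_down)
```

Because the constraint row enters the KKT system exactly, both fitted polynomials pass through 1 at the reference scale regardless of the data, which is the continuity property the equality constraint is meant to enforce.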

Figure 2: Illustration of the data collected for the normalization function across scales, and of the values estimated by the proposed function and by a power law. Our normalization function fits the data at every scale, whereas the power law fails at the extremes; our function also has a smaller RMSE than the power law. The data were collected on the INRIA dataset.

We compared the fitting accuracy of the gradient normalization between the proposed function and a power-law function (Fig. 2). The power law was chosen to represent the natural image statistics studied by Ruderman and Bialek [ruderman1994statistics] and by Dollár et al. [dollar2014fast].

4 Experiments

We show the effectiveness of the gradient normalization in object detection with three applications: pedestrian detection, pose estimation, and object detection.

4.1 Pedestrian Detection

Figure 4: The log-average miss rate of ACF++, Approximated ACF++, N-ACF++(PowerLaw), N-ACF++ on INRIA dataset.

ACF [dollar2014fast, acfcode] is widely used for pedestrian detection [nam2014local, zhang2015filtered]. In this paper, we build ACF++, a simplified version of the filtered-channel-features detector [zhang2015filtered]: we combine the original ACF with the differences between two neighboring features, which are part of the checkerboards filters. Approximated ACF++ is a version of ACF++ with a fast feature pyramid. We evaluate ACF++ with normalized gradients (N-ACF++) on the INRIA dataset [dalal2005histograms]. N-ACF++ is trained in the same way as ACF++, except that its gradients are normalized. To train the gradient normalization function, we collected the gradient expectations of both positive and negative images over a range of scales. We applied our normalization in both training and testing, and we normalized only gradient magnitudes so that the effect spreads naturally to gradient-based features such as HOG. N-ACF++(PowerLaw) is a version of N-ACF++ trained with a power-law function. The proposed normalization with ACF++ improves the log-average miss rate from 12.51% to 9.73% (Fig. 4).
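At run time, the normalization amounts to one per-scale multiplication of the gradient-magnitude channel before features such as HOG are built, which is why the added computing cost is negligible. The sketch below is our illustration only: the resampling scheme, function names, and coefficient values are ours (the coefficients are hypothetical placeholders chosen so that the function equals 1 at the reference scale; real values come from the fit on the training set described in Sec. 3).

```python
import numpy as np

def gradient_magnitude(img):
    """Central-difference gradient magnitude of a 2-D grayscale image."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy)

def normalized_pyramid(img, scales, coef_up, coef_down):
    """Per-scale gradient-magnitude channels, each multiplied by the
    normalization polynomial N(s) (coef_up for s > 1, coef_down otherwise).

    Resampling uses nearest-neighbour indexing for self-containment; a real
    detector would use bilinear interpolation."""
    h, w = img.shape
    channels = {}
    for s in scales:
        ys = (np.arange(int(h * s)) / s).astype(int).clip(0, h - 1)
        xs = (np.arange(int(w * s)) / s).astype(int).clip(0, w - 1)
        resized = img[np.ix_(ys, xs)]
        coef = coef_up if s > 1.0 else coef_down
        n = np.polyval(coef, s)          # scalar normalization factor N(s)
        channels[s] = n * gradient_magnitude(resized)
    return channels

# Hypothetical coefficients (illustration only): N(1) = 1, N(s) > 1 for
# upsampling (gradients shrink), N(s) < 1 for downsampling (gradients grow).
coef_up = [0.4, 0.6]             # degree-1 polynomial in scale
coef_down = [-0.5, 1.3, 0.2]     # degree-2 polynomial in scale

rng = np.random.default_rng(2)
img = rng.random((64, 64))
pyr = normalized_pyramid(img, [0.5, 1.0, 2.0], coef_up, coef_down)
print({s: c.shape for s, c in pyr.items()})
```

Because the multiplication is applied to the magnitude channel before any histogramming, downstream gradient-based features inherit the normalization without further changes.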

4.2 Pose Estimation

Yang et al. [yang2013articulated, articulated] proposed the flexible mixtures-of-parts model (FMM) to estimate human poses. Each appearance model is trained as a filter of HOG-based features [dalal2005histograms] that consist of contrast-sensitive HOG, contrast-insensitive HOG, and magnitudes. We evaluate the normalized FMM (N-FMM) on the PARSE dataset [ramanan2006learning], using the INRIA dataset [dalal2005histograms] as negative images. We achieve a 2%p overall improvement in probability of correct keypoint (Table 1).


Method | Avg | Head | Shou | Elbo | Wris | Hip | Knee | Ankle
FMM [yang2013articulated] | 72.3 | 89.0 | 85.3 | 66.0 | 46.3 | 76.5 | 76.3 | 66.3
N-FMM | 74.2 | 91.0 | 86.8 | 67.6 | 49.5 | 80.2 | 77.6 | 66.8

Table 1: Probability of correct keypoint for FMM and N-FMM (using normalized gradients) on PARSE dataset.

4.3 Object Detection

The deformable part model (DPM) of Felzenszwalb et al. [felzenszwalb2010object, voc-release5] is a representative approach to object detection. DPM consists of mixtures of multiscale deformable part models trained from partially labeled data; each part model includes an appearance model and a spatial model. Appearance models are trained as filters of HOG-based features [dalal2005histograms] that consist of contrast-sensitive HOG, contrast-insensitive HOG, and magnitudes. We evaluate the normalized DPM (N-DPM) on the PASCAL VOC 2007 dataset [pascal-voc-2007]. We achieve a 1%p overall improvement and a 4.4%p maximum improvement in average precision [pascal-voc-2007] (Table 2).

Category | DPM | N-DPM | Category | DPM | N-DPM
plane | 33.3 | 34.2 | table | 24.6 | 27.3
bike | 59.7 | 60.7 | dog | 12.2 | 12.5
bird | 10.4 | 10.8 | horse | 56.4 | 57.0
boat | 15.5 | 16.6 | mbike | 47.7 | 48.9
bottle | 27.1 | 27.2 | person | 42.6 | 43.2
bus | 51.2 | 52.8 | plant | 14.3 | 14.5
car | 58.2 | 58.2 | sheep | 18.6 | 23.0
cat | 23.9 | 25.5 | sofa | 37.6 | 37.8
chair | 19.9 | 21.3 | train | 45.5 | 46.8
cow | 25.1 | 25.7 | tv | 43.4 | 43.5

Table 2: Average precision scores for DPM and N-DPM (using normalized gradients) on PASCAL VOC 2007.

5 Conclusion

Our research reinterprets the gradient variation from the viewpoint of the classifier. Unlike conventional approaches that concentrate on computing the resized images, our approach concentrates on decreasing the coverage of the classifier to enhance its focus. We prove the existence of the gradient variation by formulating the ratio of gradient expectations between an original image and scaled images, then estimate a normalization function to eliminate the effect of this variation. Our calculations and experiments demonstrate the validity of the gradient normalization function. The proposed method is not restricted to object detection, but can be applied to many gradient-based methods with negligible computing cost. In future work, we will extend our study to deep-learning-based features.

6 Acknowledgement

This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (2014-0-00059, Development of Predictive Visual Intelligence Technology); by MSIP (Ministry of Science, ICT and Future Planning), Korea, under the “ICT Consilience Creative Program” (IITP-R0346-16-1007) supervised by the IITP; and by MSIP, Korea, under the ITRC (Information Technology Research Center) support program (IITP-2017-2016-0-00464) supervised by the IITP.