Mid-level Deep Pattern Mining
Mid-level visual element discovery aims to find clusters of image patches that are both representative and discriminative. In this work, we study this problem from the perspective of pattern mining while relying on the recently popularized Convolutional Neural Networks (CNNs). Specifically, we find that for an image patch, activations extracted from the first fully-connected layer of CNNs have two appealing properties which enable their seamless integration with pattern mining. Patterns are then discovered from a large number of CNN activations of image patches through the well-known association rule mining. When we retrieve and visualize image patches with the same pattern, surprisingly, they are not only visually similar but also semantically consistent. We apply our approach to scene and object classification tasks, and demonstrate that it outperforms all previous works on mid-level visual element discovery by a sizeable margin with far fewer elements being used. Our approach also outperforms or matches recent works using CNNs for these tasks. Source code of the complete system is available online.
Mid-level visual elements, which are clusters of image patches rich in semantic meaning, were proposed by Singh et al.  with the aim of replacing low-level visual words (play the game in Fig. 1 and then check your answers below¹. ¹Answer key: 1. aeroplane, 2. train, 3. cow, 4. motorbike, 5. bike, 6. sofa). In this pioneering work, mid-level visual elements must meet two requirements, that is, representativeness and discriminativeness. Representativeness means mid-level visual elements should frequently occur in the target category, while discriminativeness implies that they should be visually discriminative against the natural world. The discovery of mid-level visual elements has boosted performance in a variety of vision tasks, including image classification [1, 2, 3, 4, 5, 6, 7], action recognition [8, 9], discovering stylistic elements [10, 11], geometry estimation and 2D-3D alignment [13, 14].
Originally motivated by market basket analysis, association rule mining is a well-known pattern mining algorithm that aims to discover a collection of if-then rules (i.e., association rules) from a large number of records named transactions. The main advantage of association rule mining lies in its ability to process “Big Data”: association rules can be discovered from millions of transactions efficiently. In the context of mid-level visual element discovery, as noted by Doersch et al. , finding discriminative patches usually involves searching through tens of thousands of patches, which is a bottleneck in recent works. In this sense, if appropriately used, association rule mining can be an appealing solution for handling “Big Data” in mid-level visual element discovery.
In this paper, building on the well-known association rule mining, we propose a pattern mining algorithm, Mid-level Deep Pattern Mining (MDPM), to study the problem of mid-level visual element discovery. This approach is particularly appealing because the specific properties of activations extracted from the fully-connected layer of a Convolutional Neural Network (CNN) allow them to be seamlessly integrated with association rule mining, which enables the discovery of category-specific patterns from a large number of image patches. Moreover, we find that the two requirements of mid-level visual elements, representativeness and discriminativeness, can be easily fulfilled by association rule mining. When we visualize image patches with the same pattern (a mid-level visual element in our scenario), it turns out that they are not only visually similar, but also semantically consistent (see Fig. 1).
To our knowledge, hand-crafted features, typically HOG , are used as feature descriptors for image patches in all current methods of mid-level visual element discovery. Vondrick et al. , however, have illustrated the limitations of HOG, implying that HOG may be too lossy a descriptor to achieve high recognition performance. In this sense, an extra bonus of our formulation lies in that we are now relying on CNN activations, a more appealing alternative than the hand-crafted HOG, as indicated in recent works [17, 18, 19, 20, 21, 22].
One issue must be considered before using any pattern mining algorithms, that is, they have two strict requirements for the transactions that they can process (Sec. 4.1). Thanks to two appealing properties of CNN activations (Sec. 3), these two requirements are effortlessly fulfilled in the proposed MDPM algorithm (Sec. 4).
To show the effectiveness of the proposed MDPM algorithm, we apply it to scene and generic object classification tasks (Sec. 5). Specifically, after retrieving visual elements from the discovered patterns, we train element detectors and generate new feature representations using these detectors. We demonstrate that we achieve classification results which not only outperform all current methods in mid-level visual element discovery by a noticeable margin with far fewer elements used, but also outperform or match the performance of state-of-the-art methods using CNNs for the same tasks.
In summary, our contributions are twofold.
We formulate mid-level visual element discovery from the perspective of pattern mining, finding that its two requirements, representativeness and discriminativeness, can be easily fulfilled by the well-known association rule mining algorithm.
We present two properties of CNN activations that allow seamless integration with association rule mining, avoiding the limitations of pattern mining algorithms.
The source code of the complete system is available at http://goo.gl/u5q8ZX.
CNN activation extraction. We use the publicly available caffe reference model, which was pre-trained on ImageNet. More specifically, given a mean-subtracted patch or image, we resize it to 227×227 pixels and pass it through the caffe CNN. We extract the non-negative 4096-dimensional CNN activations from the sixth layer, fc6.
Mid-level visual element discovery.
Mid-level visual element discovery aims to discover clusters of image patches that are both representative and discriminative.
Recent studies on this topic have shown that mid-level visual elements can be used for image classification [1, 2, 3, 4, 5, 6, 7].
The process typically proceeds in two steps: first, mine visual elements and train element detectors; second, generate new feature representations using these element detectors.
Various methods have been proposed for the first step, such as cross validation training of element detectors , ranking and selecting exemplar detectors on the validation set  and discriminative mode seeking .
Convolutional Neural Networks. Although proposed by LeCun et al.  for solving handwritten digit recognition in the late ’80s, CNNs have regained popularity after showing very promising results in the ILSVRC challenge . In the benchmark CNN architecture of Krizhevsky et al. , raw pixels first pass through five convolutional layers, where the responses of filters are max-pooled in sequence, before producing a 4096-dimensional activation at each of the two fully-connected layers. Recent studies [18, 27] have demonstrated that the 4096-dimensional activation extracted from the fully-connected layer is an excellent representation for general recognition tasks.
In this section we provide a detailed analysis of the performance of CNN activations on the MIT Indoor dataset , from which we are able to conclude two important properties thereof. These two properties are key ingredients in the proposed MDPM algorithm in Sec. 4.
We first sample 128×128 patches with a stride of 32 pixels from each image. Then, for each image patch, we extract its 4096-dimensional CNN activation using caffe. To generate the final feature representation for an image, we consider the following two methods.
CNN-Sparsified. For each 4096-dimensional CNN activation of an image patch, keep only the $k$ largest dimensions (in terms of magnitude) and set the remaining elements to zero. The final feature representation for an image is the outcome of max pooling on the revised CNN activations.
CNN-Binarized. For each 4096-dimensional CNN activation of an image patch, set its $k$ largest dimensions to one and the remaining elements to zero. The final feature representation for an image is the outcome of max pooling on these binarized CNN activations.
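The two encoding strategies above can be sketched in a few lines of pure Python. The short vectors below are toy stand-ins for 4096-dimensional fc6 activations, and the helper names are ours, not the paper's.

```python
# Toy sketch of the "CNN-Sparsified" and "CNN-Binarized" representations.

def top_k_indices(vec, k):
    """Indices of the k entries with largest magnitude."""
    return sorted(range(len(vec)), key=lambda i: abs(vec[i]), reverse=True)[:k]

def sparsify(vec, k):
    """Keep the k largest-magnitude entries, zero out the rest."""
    keep = set(top_k_indices(vec, k))
    return [v if i in keep else 0.0 for i, v in enumerate(vec)]

def binarize(vec, k):
    """Set the k largest-magnitude entries to one, the rest to zero."""
    keep = set(top_k_indices(vec, k))
    return [1.0 if i in keep else 0.0 for i in range(len(vec))]

def max_pool(vectors):
    """Element-wise max over the (revised) activations of all patches."""
    return [max(col) for col in zip(*vectors)]

patches = [[0.1, 3.0, 0.0, 1.2], [2.5, 0.2, 0.0, 0.9]]
image_feat = max_pool([sparsify(p, k=2) for p in patches])
print(image_feat)        # [2.5, 3.0, 0.0, 1.2]
binary_feat = max_pool([binarize(p, k=2) for p in patches])
print(binary_feat)       # [1.0, 1.0, 0.0, 1.0]
```

In both cases the per-image feature keeps only the strongest responses of each patch before pooling, which is exactly what Table 1 evaluates.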
For each of the above cases, we train a multi-class linear SVM classifier in the one-vs-all fashion. The classification accuracy achieved by each of the above strategies for a range of $k$ values is summarized in Table 1, alongside our baseline feature, the outcome of max pooling on the unmodified CNN activations of all patches in an image. Analysing the results in Table 1 leads to two conclusions:
Sparse. Comparing the performance of “CNN-Sparsified” with that of the baseline feature, it is clear that accuracy remains reasonably high when using sparsified CNN activations with only $k$ non-zero magnitudes out of 4096.
Binary. Comparing the “CNN-Binarized” case with its “CNN-Sparsified” counterpart, we observe that CNN activations do not suffer from binarization when $k$ is small; in fact, accuracy even increases slightly.
Conclusion. The above two properties imply that for an image patch, the discriminative information within its CNN activation is mostly embedded in the dimension indices of the largest magnitudes.
In this section, we give details of the proposed MDPM algorithm, an overview of which is provided in Fig. 2. We start by introducing some important concepts and terminology in pattern mining.
Frequent itemset. Let $A = \{a_1, a_2, \ldots, a_K\}$ denote a set of items. A transaction $T$ is a subset of $A$, i.e., $T \subseteq A$. We also define a transaction database $\mathcal{D} = \{T_1, T_2, \ldots, T_N\}$ containing $N$ transactions ($N$ is usually very large). Given $X$, a subset of $A$, we are interested in the fraction of transactions which contain $X$. The support value of $X$ reflects this quantity:
$$\mathrm{supp}(X) = \frac{|\{T \mid T \in \mathcal{D},\ X \subseteq T\}|}{N},$$
where $|\cdot|$ measures cardinality. $X$ is called a frequent itemset when $\mathrm{supp}(X)$ is larger than a predefined threshold.
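The support definition above is trivial to compute directly; a minimal pure-Python sketch over a toy transaction database (our own example data):

```python
# supp(X) = |{T in D : X is a subset of T}| / N, over a toy database.

def supp(X, D):
    """Fraction of transactions in D that contain the itemset X."""
    return sum(1 for T in D if X <= T) / len(D)

D = [{1, 2, 3}, {1, 2}, {2, 3}, {1, 2, 4}]
print(supp({1, 2}, D))   # 0.75 -> {1, 2} appears in 3 of the 4 transactions
print(supp({2, 3}, D))   # 0.5
```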
Association rule. An association rule $X \to a$ implies a relationship between an itemset $X$ and an item $a$. We are interested in how likely it is that $a$ is present in the transactions which contain $X$ within $\mathcal{D}$. In a typical pattern mining application this might be taken to imply that customers who bought the items in $X$ are also likely to buy item $a$, for instance. The confidence of an association rule can be taken to reflect this probability:
$$\mathrm{conf}(X \to a) = \frac{\mathrm{supp}(X \cup \{a\})}{\mathrm{supp}(X)}.$$
In practice, we are interested in “good” rules, that is, rules whose confidence is reasonably high.
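Confidence follows directly from support; a small sketch on toy transactions that already carry a class-label item (the 'pos'/'neg' labels anticipate the transactions built later in the paper; the data here is our own example):

```python
# conf(X -> a) = supp(X ∪ {a}) / supp(X), over a toy database.

def supp(X, D):
    return sum(1 for T in D if X <= T) / len(D)

def conf(X, a, D):
    """How likely item a appears in transactions that contain X."""
    return supp(X | {a}, D) / supp(X, D)

D = [{1, 2, 'pos'}, {1, 2, 'pos'}, {1, 2, 'neg'}, {3, 4, 'neg'}]
print(conf({1, 2}, 'pos', D))   # 2/3 of the transactions containing {1, 2} are 'pos'
```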
Two strict requirements of pattern mining. As noted in , there are two strict requirements that must be met if we use pattern mining algorithms.
Each transaction can only have a small number of items, as the potential search space grows exponentially with the number of items in each transaction.
What is recorded in a transaction is a set of integers, as opposed to the real-valued elements of most vision features (such as SIFT and HOG for example).
Transactions must be created before any pattern mining algorithm can be applied. In our work, as we aim to discover patterns from image patches through pattern mining, each image patch is used to create one transaction.
The most critical question now is how to transform an image patch into a transaction while maintaining most of its discriminative information. In this work, we rely on its CNN activation, which has two appealing properties (Sec. 3). More specifically, we treat each dimension index of the CNN activation as an item (4096 items in total). Thanks to the two properties in Sec. 3, each transaction is then represented by the dimension indices of the $k$ largest magnitudes of the corresponding image patch.
Thus each transaction, calculated as described above, contains only $k$ items, and $k$ can be set to be small ($k = 20$ in all of our experiments).
Following the work of , at the end of each transaction we add a pos (resp. neg) item if the corresponding image patch comes from the target category (resp. the natural world). Therefore, each complete transaction has $k+1$ items, consisting of the indices of the $k$ largest CNN magnitudes plus one class label. For example, if we set $k$ equal to three, given a CNN activation of an image patch from the target category which has its largest magnitudes in its 3rd, 100th and 4096th dimensions, the corresponding transaction will be $\{3, 100, 4096, pos\}$.
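The patch-to-transaction step can be sketched as follows; the 8-dimensional activation is a toy stand-in for a 4096-dimensional fc6 vector, and the function name is ours:

```python
# Turn a patch's CNN activation into a transaction: the dimension indices of
# the k largest magnitudes, plus a class-label item ('pos' or 'neg').

def to_transaction(activation, k, positive):
    # fc6 activations are non-negative, so magnitude == value here.
    top_k = sorted(range(len(activation)),
                   key=lambda i: activation[i], reverse=True)[:k]
    label = 'pos' if positive else 'neg'
    return set(top_k) | {label}

# Toy 8-d activation, k = 3: the largest magnitudes sit at indices 3, 5, 1.
act = [0.0, 0.9, 0.1, 2.5, 0.0, 1.7, 0.3, 0.0]
print(to_transaction(act, k=3, positive=True))  # contains 1, 3, 5 and 'pos'
```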
In practice, we first sample a large number of patches from images in both the target category and the natural world. After extracting their CNN activations using caffe, a transaction database $\mathcal{D}$ is created, containing a large number of transactions created using the technique proposed above. Note that the class labels pos and neg are represented by the integers 4097 and 4098 respectively in the transactions.
Given the transaction database $\mathcal{D}$, we use the Apriori algorithm  to discover a set of patterns through association rule mining. Each pattern $P$ must satisfy the following two criteria:
$$\mathrm{supp}(P) > supp_{\min}, \qquad \mathrm{conf}(P \to pos) > conf_{\min},$$
where $supp_{\min}$ and $conf_{\min}$ are thresholds for the support value and confidence respectively.
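A brute-force stand-in for Apriori makes the two criteria concrete: enumerate small itemsets and keep those passing both the support and the confidence thresholds. This is only a sketch on toy data (a real Apriori prunes the search space instead of enumerating everything):

```python
# Keep itemsets P with supp(P) > supp_min and conf(P -> 'pos') > conf_min.

from itertools import combinations

def supp(X, D):
    return sum(1 for T in D if X <= T) / len(D)

def mine(D, max_len, supp_min, conf_min):
    items = set().union(*D) - {'pos', 'neg'}
    patterns = []
    for r in range(1, max_len + 1):
        for combo in combinations(sorted(items), r):
            P = set(combo)
            s = supp(P, D)
            # Short-circuit: confidence is only computed when supp(P) passes.
            if s > supp_min and supp(P | {'pos'}, D) / s > conf_min:
                patterns.append(P)
    return patterns

D = [{1, 2, 'pos'}, {1, 2, 'pos'}, {1, 3, 'neg'}, {2, 3, 'neg'}]
print(mine(D, max_len=2, supp_min=0.25, conf_min=0.8))   # [{1, 2}]
```

Only {1, 2} survives: it is frequent (half the transactions) and every transaction containing it is labelled 'pos'.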
Representativeness and discriminativeness. We now demonstrate how association rule mining implicitly satisfies the two requirements of mid-level visual element discovery, i.e., representativeness and discriminativeness. Specifically, based on Eq. 3 and Eq. 4, we can rewrite Eq. 2 thus:
$$\mathrm{supp}(P \cup \{pos\}) = \mathrm{supp}(P) \times \mathrm{conf}(P \to pos) > supp_{\min} \times conf_{\min},$$
where $\mathrm{supp}(P \cup \{pos\})$ measures the fraction of transactions containing both pattern $P$ and the target-category label among all the transactions. Therefore, values of $\mathrm{supp}(P)$ and $\mathrm{conf}(P \to pos)$ above their thresholds ensure that pattern $P$ is found frequently in the target category, akin to the representativeness requirement. A high value of $\mathrm{conf}(P \to pos)$ (Eq. 4) will also ensure that pattern $P$ is more likely to be found in the target category than in the natural world, reflecting the discriminativeness requirement.
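The identity used in this argument, supp(P ∪ {pos}) = supp(P) · conf(P → pos), follows directly from the confidence definition; a quick numeric check on toy data:

```python
# Check supp(P ∪ {pos}) == supp(P) * conf(P -> pos) on a toy database.

def supp(X, D):
    return sum(1 for T in D if X <= T) / len(D)

D = [{1, 2, 'pos'}, {1, 2, 'pos'}, {1, 2, 'neg'}, {2, 3, 'neg'}]
P = {1, 2}
confidence = supp(P | {'pos'}, D) / supp(P, D)
# supp(P) = 3/4, conf = 2/3, so supp(P ∪ {pos}) = 1/2.
assert abs(supp(P | {'pos'}, D) - supp(P, D) * confidence) < 1e-12
print(supp(P | {'pos'}, D))   # 0.5
```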
We now apply our MDPM algorithm to the image classification task. To discover patterns from a particular class, this class is treated as the target category while all other classes in the dataset are treated as the natural world. Note that only training images are used to discover patterns.
Given the pattern set discovered in Sec. 4, finding mid-level visual elements is straightforward. A mid-level visual element contains the image patches sharing the same pattern $P$, which can be retrieved efficiently through an inverted index. This process gives rise to a set of mid-level visual elements.
We provide a visualization of some of the visual elements discovered by the MDPM in Fig. 3. It is clear that image patches in each visual element are visually similar and depict similar semantic concepts. An interesting observation is that visual elements discovered by the MDPM are invariant to horizontal flipping.
We note that patches belonging to different elements may overlap or describe the same visual concept. To remove this redundancy, we propose to merge elements in an iterative procedure while training element detectors.
Algorithm 1 summarizes the proposed ensemble merging procedure. At each iteration, we greedily merge overlapping mid-level elements and train the corresponding detector through the MergingTrain function in Algorithm 1. In the MergingTrain function, we begin by selecting the element covering the maximum number of training images, followed by training a Linear Discriminant Analysis (LDA) detector . We then incrementally revise this detector. At each step, we run the current detector on the patches of all the remaining visual elements, and retrain it by augmenting the positive training set with the positive detections. We repeat this iterative procedure until no more elements can be added into the positive training set. The idea behind this process is to use the detection score as a similarity metric, an approach much inspired by Exemplar SVM [37, 38].
The final output of the ensemble merging step is a clean set of visual elements and their corresponding detectors.
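A highly simplified sketch of the merging idea follows. A faithful implementation would score patches with the incrementally retrained LDA detector; here, overlap between the training images covered by two elements serves as a crude stand-in for that detection-score similarity, and all names and thresholds are ours:

```python
# Greedy ensemble-merging sketch: seed with the element covering the most
# training images, then absorb elements that overlap its coverage enough.

def ensemble_merge(elements, overlap_thresh=0.3):
    """elements: dict name -> set of covered training-image ids."""
    remaining = dict(elements)
    merged = []
    while remaining:
        seed = max(remaining, key=lambda e: len(remaining[e]))
        cluster, cover = [seed], set(remaining.pop(seed))
        changed = True
        while changed:                       # keep absorbing until stable
            changed = False
            for name in list(remaining):
                inter = len(cover & remaining[name])
                if inter / len(remaining[name]) > overlap_thresh:
                    cover |= remaining.pop(name)
                    cluster.append(name)
                    changed = True
        merged.append((cluster, cover))
    return merged

elements = {'a': {1, 2, 3, 4}, 'b': {3, 4, 5}, 'c': {9, 10}}
for cluster, cover in ensemble_merge(elements):
    print(cluster, sorted(cover))
# ['a', 'b'] [1, 2, 3, 4, 5]
# ['c'] [9, 10]
```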
We can now use the learned element detectors to encode a new image. There is a computational cost, however, associated with applying each successive learned element, and particular elements may be more informative when applied to particular tasks. We thus now seek to identify those elements of most value to the task at hand.
In practice, we rank all of the elements in a class based on the number of training images that they cover. We then select the detectors corresponding to the elements which cover the maximum number of training images, akin to “maximizing coverage” in . This process is then repeated such that the same number of detectors are selected from each class and stacked together.
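The coverage-based selection above amounts to a greedy set-cover heuristic; a minimal sketch under our own toy data and naming:

```python
# Greedy "maximize coverage" selection: repeatedly pick the detector whose
# element covers the most training images not yet covered.

def select_detectors(coverage, n_select):
    """coverage: dict detector -> set of training-image ids it covers."""
    covered, chosen = set(), []
    for _ in range(n_select):
        candidates = [d for d in coverage if d not in chosen]
        best = max(candidates, key=lambda d: len(coverage[d] - covered))
        chosen.append(best)
        covered |= coverage[best]
    return chosen

coverage = {'d1': {1, 2, 3}, 'd2': {3, 4}, 'd3': {5, 6, 7, 8}}
print(select_detectors(coverage, n_select=2))   # ['d3', 'd1']
```

Running this per class with the same n_select yields the equally sized, stacked detector sets described above.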
To generate a feature representation for a new image, we evaluate all of the selected detectors at three scales. For each scale, we take the max score per detector per region encoded in a 2-level (1×1 and 2×2) spatial pyramid. The final feature vector is the outcome of max pooling on the features from all three scales.
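The per-scale encoding can be sketched as follows for a single detector; detections are hypothetical (x, y, score) firings, and a 2-level (1×1 and 2×2) pyramid is assumed as in the text:

```python
# Max detector score per cell of a 2-level (1x1 and 2x2) spatial pyramid.

def pyramid_encode(detections, width, height):
    """detections: list of (x, y, score) firings for a single detector."""
    cells = [(0, 0, width, height)]                    # 1x1 level
    for i in range(2):                                 # 2x2 level
        for j in range(2):
            cells.append((i * width / 2, j * height / 2,
                          (i + 1) * width / 2, (j + 1) * height / 2))
    feat = []
    for (x0, y0, x1, y1) in cells:
        inside = [s for (x, y, s) in detections
                  if x0 <= x < x1 and y0 <= y < y1]
        feat.append(max(inside) if inside else 0.0)
    return feat                                        # 5 values per detector

dets = [(10, 10, 0.9), (80, 90, 0.4)]
print(pyramid_encode(dets, width=100, height=100))
# [0.9, 0.9, 0.0, 0.0, 0.4]
```

A full encoding would concatenate such per-detector features and take the element-wise max over the three scales.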
We test our algorithm on two image classification tasks, scene classification and generic object classification.
Implementation details. For each image, we resize its smaller dimension to 256 while maintaining its aspect ratio, then sample 128×128 patches with a stride of 32 pixels and calculate the CNN activations using caffe. Because the number of patches sampled varies across datasets, the two parameters $supp_{\min}$ and $conf_{\min}$ in the association rule mining (Sec. 4.3) are set per dataset with the goal that a sufficient number of patterns are discovered for each category. The merging threshold in Algorithm 1 (Sec. 5.2) is fixed across all experiments. For training classifiers, we use the Liblinear toolbox  with cross validation.
The MIT Indoor dataset  contains 67 classes of indoor scenes. As verified by recent works on mid-level visual element discovery, indoor scenes are better characterized by the unique objects they contain (e.g., computers are more likely to be found in a computer room than in a laundry). We follow the standard partition of , i.e., approximately 80 training and 20 test images per class. The thresholds $supp_{\min}$ and $conf_{\min}$ are set accordingly for this dataset.
Comparison to methods using mid-level visual elements. We first compare our approach against recent works on mid-level visual element discovery (see Table 2). Using only a small number of visual elements per class, our approach already yields competitive accuracy; increasing the number of visual elements improves performance further, outperforming all previous mid-level visual element discovery methods by a sizeable margin. As shown in Table 2, compared with the work of Doersch et al. , which achieved the best performance among previous mid-level visual element algorithms, our approach uses an order of magnitude fewer elements while outperforming it in accuracy. Our approach also surpasses the very recent RFDC  in the same setting. Thanks to the fact that CNN activations from caffe are invariant to horizontal flipping , we avoid adding left-right flipped images (c.f. [3, 2]).
Ablation study. As we are the first to use CNN activations for mid-level visual element discovery, a natural question is what the performance of previous works would be if CNN activations were adopted. To answer this question, we implement two baselines that use CNN activations as image patch representations. “LDA-Retrained” initially trains an Exemplar LDA detector using a sampled patch and then re-trains the detector iteratively, adding the top positive detections as positive training samples at each iteration. This is quite similar to the “Expansion” step of . Another baseline, “LDA-KNN”, retrieves the 5 nearest neighbors of an image patch and trains an LDA detector using the retrieved patches (including the patch itself) as positive training data. For both baselines, discriminative detectors are selected based on the Entropy-Rank Curves proposed in . As shown in Table 2, we report much better results than both baselines in the same setting, which verifies that the proposed MDPM is an essential step for achieving good performance when using CNN activations for mid-level visual element discovery.
Table 3 methods (method: description):
CNN-G: CNN for whole image
CNN-SVM: OverFeat toolbox
VLAD level2: VLAD encoding
VLAD level3: VLAD encoding
VLAD level1&2: concatenation
CNN-jittered: jittered CNN
CNN-finetune: fine-tuned CNN
Places-CNN: Places dataset used
SCFVC: new Fisher encoding
CL-45C: cross-layer pooling
Ours: MDPM
Comparison to methods using CNN activation. In Table 3, we compare our approach with others in which CNN activation is involved. Our baseline method uses fc6 CNN activations extracted from the whole image; our approach (based on the MDPM algorithm) achieves a significant improvement over all the baselines. Our work is most closely related to MOP-CNN  and SCFVC  in the sense that all of these works rely on off-the-shelf CNN activations of image patches. To encode these local CNN activations, MOP-CNN  relies on the classical VLAD encoding, whereas SCFVC  uses a new Fisher vector encoding strategy designed for high-dimensional local features. Our encoding method, which is based on the discovered visual elements, not only outperforms MOP-CNN by a noticeable margin, but also slightly surpasses SCFVC.
Fine-tuning has been shown to be beneficial when transferring CNN models pre-trained on ImageNet to another dataset [19, 20, 21]. Jittered CNN features (e.g., crops, flips) extracted from the fine-tuned network of Azizpour et al.  still fall below our accuracy.
After concatenating with the CNN activations of the whole image (both normalized to unit norm), our performance increases further, outperforming all previous works using CNNs on this dataset.
Computational complexity. Given pre-computed CNN activations from about 0.2 million patches, the baseline method “LDA-Retrained” takes about 9 hours to find the visual elements of one class. Our approach, however, takes only about 3 minutes (writing the transaction file and running association rule mining) to discover representative and discriminative patterns.
Visualization. We visualize some of the discovered visual elements and their firings on test images in Fig. 4. It is intuitive that the discovered mid-level visual elements capture patterns which are often repeated within a scene category. Some of the mid-level visual elements refer to frequently occurring object configurations, e.g., the configuration of tables and chairs in the meeting room category. Others instead capture a particular object in the scene, such as the baby cot in the nursery and the screen in the movie theater.
Table 4 methods (method: description):
CNN-G: CNN for whole image
CNN-SVM: OverFeat toolbox
PRE-1000C: bounding box used
CNN-jittered: jittered CNN
SCFVC: new Fisher encoding
CL-45C: cross-layer pooling
Ours: MDPM (50 elements)
The Pascal VOC 2007 dataset  contains images from 20 object classes. For this dataset, the training and validation sets are used to discover patterns and to train the final classifiers. The thresholds $supp_{\min}$ and $conf_{\min}$ are set accordingly for this dataset.
Comparison to methods using CNN activation. Table 4 reports our results along with those of other recent methods based on CNN activation. On this dataset, when using 50 visual elements per class, the proposed method significantly outperforms, in mean average precision (mAP), the baseline that uses CNN activations as a global feature, as well as its average-pooling and max-pooling counterparts. Compared with the state of the art, Oquab et al.  fine-tune the pre-trained network on ImageNet; however, their method relies on bounding-box annotations, which make the task easier, so it is not surprising that it outperforms ours, which uses no bounding-box annotations. The best result on PASCAL VOC 2007 is achieved when the proposed MDPM feature and the global CNN activation are concatenated, marginally outperforming fine-tuning with bounding-box annotations . This is despite the fact that the bounding-box annotations constitute extra information which is time-consuming to gather.
Visualization. We visualize some of the discovered visual elements and their firings on the test images of the VOC 2007 dataset in Fig. 5. It is clear that the discovered mid-level visual elements capture discriminative parts of objects (e.g., dog faces). It is worth noting here that “parts” have been shown to be extremely important for state-of-the-art object recognition, such as Deformable Part Models  and Poselets .
We have addressed mid-level visual element discovery from the perspective of pattern mining. In the process we have shown not only that it is profitable to apply pattern mining techniques to mid-level visual element discovery, but also that, from the right perspective, CNN activations are particularly well suited to the task. This is significant because CNNs underpin many current state-of-the-art methods in vision, and pattern mining underpins significant elements of the state-of-the-art in Big Data processing.
L. Bossard, M. Guillaumin, and L. Van Gool, “Food-101 – mining discriminative components with random forests,” in Proc. Eur. Conf. Comp. Vis., 2014, pp. 446–461.
A. Shrivastava, T. Malisiewicz, A. Gupta, and A. A. Efros, “Data-driven visual similarity for cross-domain image matching,” ACM Trans. Graph., vol. 30, no. 6, p. 154, 2011.
B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva, “Learning deep features for scene recognition using places database,” in Proc. Adv. Neural Inf. Process. Syst., 2014.