SVS-JOIN: Efficient Spatial Visual Similarity Join over Multimedia Data

October 1, 2018 · Chengyuan Zhang, et al. · Central South University

In the big data era, massive amounts of multimedia data with geo-tags have been generated and collected by mobile smart devices equipped with mobile communication modules and position sensor modules. This trend has put forward higher requirements on large-scale geo-multimedia data retrieval. The spatial similarity join is one of the important problems in the area of spatial databases. Previous works focused on textual documents with geo-tags rather than geo-multimedia data such as geo-images. In this paper, we study a novel search problem named spatial visual similarity join (SVS-JOIN for short), which aims to find similar geo-image pairs in both the aspects of geo-location and visual content. We propose the definition of SVS-JOIN for the first time and present how to measure geographical similarity and visual similarity. Then we introduce a baseline inspired by the method for textual similarity join and an extension named SVS-JOIN_G which applies a spatial grid strategy to improve the efficiency. To further improve the search performance, we develop a novel approach called SVS-JOIN_Q which utilizes a quadtree and a global inverted index. Experimental evaluations on real geo-image datasets demonstrate that our solution has high performance.


1 Introduction

In the big data era, Internet techniques and cloud services such as online social networking services, search engines and multimedia sharing services are developing rapidly, generating and storing large-scale multimedia data DBLP:conf/cikm/WangLZ13 ; InfYang13 ; KAISYang16 ; DBLP:conf/mm/WangLWZZ14 ; DBLP:conf/mm/WangLWZ15 ; DBLP:journals/tip/WangLWZ17 , e.g., text, images, audio and video. For example, more and more people use online social networking services such as Facebook (https://facebook.com/), Twitter (http://www.twitter.com/), LinkedIn (https://www.linkedin.com/), Weibo (https://weibo.com/), etc. to make friends and share their hobbies or work information by uploading texts, images, or short videos. On the other hand, consider multimedia data DBLP:journals/corr/abs-1804-11013 sharing services: on Flickr (https://www.flickr.com/), the most famous photo sharing web site, more than 3.5 million new images were uploaded every day as of March 2013. In addition, every minute 100 hours of video are uploaded to YouTube (https://www.youtube.com/), the largest video sharing service in the world, and more than 2 billion videos in total were stored on this platform by the end of 2013. In China, IQIYI (http://www.iqiyi.com/) is the largest video sharing web site; the total monthly watch time of this online video service exceeded 42 billion minutes. These multimedia web services not only provide great convenience for our daily life, but also create possibilities for the generation, collection, storage and sharing of large-scale multimedia data DBLP:conf/sigir/WangLWZZ15 ; DBLP:journals/tip/WangLWZZH15 ; DBLP:conf/ijcai/WangZWLFP16 . Moreover, this trend has put forward greater challenges for massive multimedia data retrieval DBLP:journals/cviu/WuWGHL18 ; DBLP:journals/corr/abs-1708-02288 ; DBLP:conf/pakdd/WangLZW14 .

Mobile smart devices equipped with mobile communication modules (e.g., WiFi and 4G modules) and position sensor modules (e.g., GPS modules), such as smartphones and tablets, collect huge amounts of multimedia data DBLP:journals/pr/WuWGL18 ; DBLP:journals/tnn/WangZWLZ17 with geo-tags. For example, users can take photos or videos NNLS2018 together with the geo-location information of the shooting place. Many mobile applications such as WeChat, Twitter and Instagram support uploading and sharing images and text with geo-tags. Other location-based services such as Google Places, Yahoo!Local, and Dianping provide query services for geo-multimedia data by taking into account both geographical proximity and multimedia data similarity.

Figure 1: An example of spatial visual similarity join

Motivation. Spatial textual search has become a hot spot in the areas of spatial databases and information retrieval due to the wide application of mobile devices and location-based services. Many spatial indexing techniques have been developed, such as the R-tree DBLP:conf/sigmod/Guttman84 , the R*-tree DBLP:conf/sigmod/BeckmannKSS90 , the IL-Quadtree DBLP:journals/tkde/ZhangZZL16 , the KR*-tree DBLP:conf/ssdbm/HariharanHLM07 and the IR-tree DBLP:conf/icde/FelipeHR08 . Deng et al. DBLP:journals/tkde/DengLLZ15 studied a generic version of closest keywords search called best keyword cover. Cao et al. DBLP:conf/sigmod/CaoCJO11 proposed the problem of collective spatial keyword querying, and Fan et al. Fan2012Seal studied the problem of spatio-textual similarity search for regions-of-interest queries. However, these studies only consider textual data such as keywords together with spatial information; they do not take other multimedia data, such as images, into account. An important problem in spatial keyword search is the spatial textual similarity join, which has been studied by many researchers: it finds spatial textual object pairs that are similar in both geo-location and textual content. However, no existing work pays attention to multimedia data with geo-tags for this task. In this paper, we investigate a novel paradigm named spatial visual similarity join and develop an efficient solution to overcome the challenges of geo-multimedia query processing. Figure 1 is a simple but intuitive example that describes this problem.

Example 1

As illustrated in Fig. 1, the spatial visual similarity join can be applied to friend recommendation services on an online social networking platform. According to the geo-images posted by the users, the social networking system can find geo-image pairs that are similar in both geo-location and visual content. It is easy to understand that two people may become friends if they share the same hobbies and their positions are very close. As illustrated in Fig. 1, four similar geo-image pairs are found. For pair 1, two users who took photos about basketball at two nearby places are very likely to become good friends.

To the best of our knowledge, we are the first to propose the problem of spatial visual similarity join. To solve this problem effectively and efficiently, we present the definition of spatial visual similarity join for the first time, together with the relevant notions. Besides, we introduce how to measure geographical similarity and visual similarity to find the similar geo-image pairs. A baseline named SVS-JOIN, inspired by the techniques used in textual similarity joins, is introduced. Based on it, we propose an extension called SVS-JOIN_G which uses a spatial grid partition strategy to improve the efficiency. In order to further improve the search performance, we develop a novel method named SVS-JOIN_Q, which uses the quadtree technique to partition the spatial region of the input; a global inverted index is applied to enhance the search efficiency.

Contributions. Our main contributions can be summarized as follows:

  • To the best of our knowledge, we are the first to study the problem of spatial visual similarity join. We propose the definition of geo-image and of spatial visual similarity join together with the relevant notions, and design the visual similarity function and the geographical similarity function for similar geo-image pair search.

  • We introduce a baseline named SVS-JOIN inspired by the techniques used for the problem of textual similarity joins. An extension of SVS-JOIN called SVS-JOIN_G is developed, which applies a spatial grid partition strategy and achieves higher efficiency.

  • To further improve the search performance, we present a novel method named SVS-JOIN_Q based on a quadtree partition technique and a global inverted index.

  • We have conducted extensive experiments on real geo-image datasets. The experimental results demonstrate that our solution achieves high performance.

Roadmap. In the remainder of this paper, Section 2 presents the related work on content-based image retrieval, spatial textual search, spatial queries on road networks and set similarity joins. In Section 3 we propose the definition of spatial visual similarity join and the related concepts. We introduce a baseline named SVS-JOIN and an extension named SVS-JOIN_G in Section 4. In Section 5, we propose a novel technique named SVS-JOIN_Q which utilizes the quadtree and a global inverted indexing structure to solve the spatial visual similarity join efficiently. Section 6 presents the experimental results, and we conclude the paper in Section 7.

2 Related Work

In this section, we introduce the previous studies on content-based image retrieval, spatial textual search, spatial queries on road networks and set similarity joins, which are relevant to this work. To the best of our knowledge, there is no prior work on our problem.

Content-Based Image Retrieval. As one of the most important problems, content-based image retrieval (CBIR for short) has gained much attention from researchers due to the benefits it provides for various multimedia analysis tasks TC2018 ; DBLP:journals/pr/WuWLG18 ; DBLP:journals/ivc/WuW17 . The Scale-Invariant Feature Transform (SIFT for short), proposed by Lowe DBLP:conf/iccv/Lowe99 , is one of the classical methods for visual feature extraction. It transforms an image into a collection of local feature vectors. These features are invariant to translation, scaling and rotation, and partially invariant to illumination changes. In another work of Lowe DBLP:journals/ijcv/Lowe04 , four main stages to generate the image features are proposed, i.e., (I) scale-space extrema detection, (II) keypoint localization, (III) orientation assignment, and (IV) keypoint descriptor. Bag-of-Visual-Words (BoVW for short) DBLP:conf/iccv/SivicZ03 is a traditional image representation model which markedly improves the performance of image feature matching. This type of model was initially used to model texts, where it is called Bag-of-Words (BoW for short). Specifically, this technique treats a textual document as a collection of words, ignoring word order, syntax, etc.; the appearance of each word in the collection is independent. For image retrieval, it generates visual words by utilizing the k-means method to cluster SIFT features. In recent years, many works based on SIFT and BoVW have been proposed. For example, Mortensen et al. DBLP:conf/cvpr/MortensenDS05 proposed a feature descriptor which augments SIFT with a global context vector that adds curvilinear shape information from a much larger neighborhood; this technique can improve the accuracy of image feature matching. Ke et al. DBLP:conf/cvpr/KeS04 proposed a descriptor based on SIFT to encode the salient aspects of the image gradient in the neighborhood of a feature point. Rather than using the smoothed weighted histograms of SIFT, their method applies Principal Components Analysis (PCA for short) to the normalized gradient patch. Su et al. Su2017MBR presented a horizontal/vertical mirror reflection invariant binary descriptor named MBR-SIFT to solve the problem of image matching. In order to enhance the performance of matching, a fast matching algorithm is developed, which includes a coarse-to-fine two-step matching strategy in addition to two similarity measures. To gain sufficient distinctiveness and robustness in the task of image feature matching, Li et al. DBLP:journals/prl/LiM09 designed a novel SIFT-based framework for feature description by integrating color and global information. Liao et al. DBLP:journals/prl/LiaoLH13 presented an improvement to the SIFT descriptor for image matching and retrieval, which includes normalizing the elliptical neighboring region, transforming to affine scale-space, improving the SIFT descriptor with polar histogram orientation bins, and integrating mirror reflection invariance. Zhu et al. Zhu2013Image proposed an image registration algorithm called BP-SIFT using belief propagation, which achieves a significant improvement for the keypoint matching problem.

Originating from text retrieval and mining, BoVW is an important visual representation method in multimedia retrieval and computer vision DBLP:journals/tnn/WangZWLZ17 ; DBLP:journals/corr/abs-1804-11013 ; LINYANGARXIV ; DBLP:conf/mm/WuWS13 . Escalante et al. DBLP:journals/nca/EscalantePEBMM17 presented an evolutionary algorithm to automatically learn weighting schemes of this model for computer vision tasks. In order to improve the construction of the visual word dictionary over large image databases, Dimitrovski et al. DBLP:journals/isci/DimitrovskiKLD16 proposed to use predictive clustering trees (PCTs) to improve BoVW image retrieval; PCTs can be constructed and executed efficiently and have good predictive performance. For the task of multi-script document retrieval, Mandal et al. DBLP:journals/corr/abs-1807-06772 proposed a patch-based framework using the SIFT descriptor and the bag-of-visual-words model to improve the performance of handwritten signature detection. Santos et al. DBLP:journals/mta/SantosMST17 proposed a novel method based on the S-BoVW paradigm that considers texture information to generate textual signatures of image blocks; moreover, they presented a strategy which represents image blocks with words generated from both color and texture information. For medical image retrieval, Zhang et al. DBLP:journals/ijon/ZhangSCHLPKFFC16 proposed a novel approach named PD-LST retrieval, which is based on BoVW and identifies discriminative characteristics between different medical images with a Pruned Dictionary based on Latent Semantic Topic description. Utilizing the BoVW representation, Karakasis et al. Karakasis2015Image proposed a novel framework for image retrieval which uses affine image moment invariants as descriptors of local image areas.

There is no doubt that these solutions, based on SIFT and the BoVW model, significantly improve the performance of image retrieval and visual feature matching. However, these works cannot solve the problem of geo-multimedia data retrieval, as they have no effective mechanism for geographical distance measurement.

Spatial Textual Search. Due to the collection and storage of large-scale spatial textual data, there has been increasing interest in the spatial textual search problem in the spatial database community. Spatial textual search DBLP:conf/er/CaoCCJQSWY12 ; DBLP:journals/tkde/ZhangZZL16 aims to find textual objects or documents with geo-tags by textual similarity and geographical proximity. For top-k spatial keyword queries, Rocha-Junior et al. DBLP:conf/ssd/RochaGJN11 proposed a novel spatial index named Spatial Inverted Index (S2I for short) to enhance search efficiency. This technique maps each term in the vocabulary into a distinct aR-tree or block that stores all objects with the given term. Li et al. DBLP:journals/tkde/LiLZLLW11 proposed an efficient indexing structure named IR-tree, which indexes both the textual and spatial contents of documents and enables spatial pruning and textual filtering to be performed at the same time; furthermore, they developed a top-k document search algorithm. Zhang et al. DBLP:conf/edbt/ZhangTT13 presented a scalable integrated inverted index called I3 which uses the quadtree structure to hierarchically partition the data space into cells. In order to improve retrieval efficiency, they designed a novel storage mechanism and preserve additional summary information to facilitate pruning, which outperforms the IR-tree and S2I in terms of construction time, index storage cost, updating speed and scalability. Zhang et al. DBLP:conf/icde/ZhangZZL13 proposed a novel index structure named inverted linear quadtree (IL-Quadtree for short), which is based on the inverted index and the linear quadtree; moreover, they designed a novel algorithm to improve query performance. Li et al. Li2012Keyword presented a novel spatial textual indexing technique named BR-tree to solve the problem of keyword-based k-nearest neighbor (kNN for short) queries, which utilizes the R-tree to maintain the spatial information of objects and the B-tree to maintain the terms in the objects. Fan et al. Fan2012Seal studied a novel search problem named spatio-textual similarity search, which finds similar RoIs by considering spatial overlap and textual similarity; to improve system performance, they proposed grid-based signatures and threshold-aware pruning techniques. Zhang et al. DBLP:conf/sigir/ZhangCT14 proposed a novel method based on modeling the spatial keyword search problem as a top-k aggregation problem. They developed a rank-aware CA algorithm which works well on inverted lists sorted by textual relevance and spatial curving order. Lin et al. DBLP:journals/tkde/LinXH15 proposed a novel spatial textual query paradigm called reverse keyword search for spatio-textual top-k queries (RSTQ for short). They developed a novel hybrid indexing structure named KcR-tree to store and summarize the spatial and textual information of objects, and proposed three query optimization techniques, i.e., KcR-tree enhancement, lazy upper-bound updating, and keyword set filtering, to improve search performance. For the problem of continuous spatial-keyword queries over streaming data, Wang et al. DBLP:conf/icde/WangZZLW15 proposed a highly efficient indexing technique named AP-Tree which adaptively groups registered queries utilizing keyword and spatial partitions based on a cost model, and indexes ordered keyword combinations; besides, they devised an index construction algorithm that seamlessly and effectively combines keyword and spatial partitions. Zheng et al. DBLP:conf/icde/ZhengSZSXLZ15 investigated another type of spatial textual query, namely interactive top-k spatial keyword queries. They introduced a three-phase solution: the first phase quickly narrows the search space from the entire database; the second, interactive phase develops several strategies to select a subset of candidates and present them to users at each round; and the third phase terminates the interaction automatically. Zhang et al. DBLP:conf/icde/ZhangCMTK09 introduced the m-closest keywords (mCK for short) query, which aims to find the spatially closest tuples that match user-specified keywords. To solve this problem more efficiently, they developed a novel spatial index called the bR*-tree, extending the R*-tree, and proposed a priori-based search strategies to effectively reduce the search space. Guo et al. DBLP:conf/sigmod/GuoCC15 proposed another solution to the mCK search problem. They devised a novel greedy algorithm that has an approximation ratio of 2 and, in addition, developed two further approximation algorithms to improve the efficiency; besides, a novel exact algorithm is introduced to reduce the search space considerably.

The research aforementioned focuses on textual search based on Euclidean distance and does not consider queries on road networks, which are a more realistic setting. Moreover, these solutions cannot be applied to the problem of multimedia retrieval.

Spatial Queries on Road Networks. In order to better simulate real-life situations, more and more researchers have begun to pay attention to the problem of spatial queries on road networks. For example, Lee et al. DBLP:conf/edbt/LeeLZ09 presented a general framework named ROAD to evaluate Location-Dependent Spatial Queries (LDSQs) on road networks. Li et al. DBLP:conf/dasfaa/LiGZ14 studied the problem of range-constrained spatial keyword queries on road networks and proposed three approaches, i.e., the expansion-based approach (EA), the Euclidean heuristic approach (EHA) and the Rnet Hierarchy based approach (RHA). Zhang et al. DBLP:conf/edbt/ZhangZZLCW14 presented a signature-based inverted indexing technique and an efficient diversified spatial keyword search algorithm to solve the problem of diversified spatial keyword search on road networks. Rocha-Junior et al. DBLP:conf/edbt/Rocha-JuniorN12 were the first to study the problem of top-k spatial keyword queries on road networks; they devised novel indexing structures and algorithms to handle this problem efficiently. For the problem of collective spatial keyword query (CSKQ for short) processing on road networks, Gao et al. DBLP:journals/tits/GaoZZC16 proved for the first time that it is NP-complete and designed two approximate algorithms with provable approximation bounds as well as one exact algorithm. For k nearest neighbor (kNN) queries, Zhong et al. DBLP:journals/tkde/ZhongLTZG15 presented a novel indexing structure named G-tree, which is a height-balanced and scalable index inspired by the R-tree, and then developed efficient search algorithms for shortest-path, kNN and keyword-based kNN queries. Abeywickrama et al. Abeywickrama2016k studied the problem of k nearest neighbor queries on road networks and presented efficient implementations of five of the most notable methods, i.e., IER, INE, Distance Browsing, ROAD and G-tree. For the aggregate nearest neighbor (ANN for short) query problem, Sun et al. Sun2015On proposed effective pruning strategies for the sum and max aggregate functions and developed an efficient NVD-based algorithm. In another work, Gao et al. Gao2015Efficient investigated reverse top-k boolean spatial keyword (RkBSK for short) queries on road networks; they formalized the RkBSK query and presented filter-and-refinement based algorithms to solve this problem.

Set Similarity Joins. In recent years, many researchers have paid attention to the problem of spatial textual similarity joins. A spatial similarity join of two spatial databases aims to find pairs of objects that are simultaneously similar in both the textual and the spatial aspect. Ballesteros et al. Ballesteros2011SpSJoin proposed an algorithm based on the MapReduce parallel programming model to solve this problem on large-scale spatial databases.

3 Preliminaries

In this section, we first propose the definition of geo-image, then present the notion of spatial visual similarity join and the similarity measurement. Besides, we review the image retrieval techniques on which our work is based. Table 1 summarizes the notations frequently used throughout this paper to facilitate the discussion.

Notation              Definition
D                     A given database of geo-images
|D|                   The number of geo-images in D
o_i                   The i-th geo-tagged image
o_i.loc               The geographical information component of o_i
o_i.V                 The visual component of o_i
x                     A longitude
y                     A latitude
v                     A visual word
D'                    A dataset of geo-images
o'_j                  The j-th geo-tagged image in dataset D'
w(v)                  The weight of visual word v
ε_g                   The geographical similarity threshold
ε_v                   The visual similarity threshold
S                     A result set
Sim_G(o_i, o'_j)      The geographical similarity between o_i and o'_j
Sim_V(o_i, o'_j)      The visual similarity between o_i and o'_j
dist(o_i, o'_j)       The Euclidean distance between o_i and o'_j
maxdist(D, D')        The maximum Euclidean distance between any two geo-images from D and D'
W                     A global word set
I_v                   An inverted list of word v
A[x, y]               The number of overlapping words of x with y
O                     A global word ordering
pre(x)                The prefix of x
c_i                   A cell with id i
n_i                   A quadtree node with id i
I                     An inverted index set
Table 1: The summary of notations

3.1 Problem Definition

Definition 1 (Geo-Image)

Let D be a geo-image dataset and |D| denote the size of D. A geo-image o_i ∈ D is defined as a tuple o_i = (id, loc, V), where loc is the geographical information component, which is generated from the geo-tag of this image. More specifically, it consists of a longitude x and a latitude y, i.e., loc = (x, y). The other part, V, is the visual information component, which is a visual word set modeled by SIFT and BoVW. id is the identifier of this geo-image.

Consider two geo-image datasets D and D'. Similar to a spatial textual similarity join, a spatial visual similarity join aims to find all pairs of geo-images from D and D' respectively which are similar enough in both geo-location and visual content. We introduce two thresholds, i.e., a geographical similarity threshold ε_g and a visual similarity threshold ε_v, to measure these two similarities. Specifically, for each result pair, both the geographical similarity and the visual similarity of the two geo-images must be no less than the corresponding thresholds. To clarify our work, we propose the definition of the spatial visual similarity join as follows.

Definition 2 (Spatial Visual Similarity Join (SVS-JOIN))

Given two geo-image datasets D and D', a geographical similarity threshold ε_g and a visual similarity threshold ε_v, a spatial visual similarity join, denoted as SVS-JOIN(D, D', ε_g, ε_v), returns a set S of geo-image pairs in which each pair contains two highly similar geo-images in both geo-location and visual content, i.e.,

    S = { (o_i, o'_j) | Sim_G(o_i, o'_j) ≥ ε_g ∧ Sim_V(o_i, o'_j) ≥ ε_v, o_i ∈ D, o'_j ∈ D' }

where Sim_G and Sim_V are the geographical similarity function and the visual similarity function respectively.

To measure these two similarities quantitatively, in this work we utilize the Euclidean distance and a weighted Jaccard measurement to construct our functions, which are formally described as follows.

Definition 3 (Geographical Similarity Function)

Given two geo-image datasets D and D' and geo-images o_i ∈ D, o'_j ∈ D', the geographical similarity between o_i and o'_j is measured by the following similarity function:

    Sim_G(o_i, o'_j) = 1 − dist(o_i, o'_j) / maxdist(D, D')    (1)

where dist(o_i, o'_j) is the Euclidean distance between o_i and o'_j, measured by the following function:

    dist(o_i, o'_j) = sqrt( (o_i.x − o'_j.x)² + (o_i.y − o'_j.y)² )    (2)

and the function maxdist(D, D') returns the maximum Euclidean distance between any two geo-images from D and D' respectively, described formally as follows:

    maxdist(D, D') = max{ dist(o_i, o'_j) | o_i ∈ D, o'_j ∈ D' }    (3)

where the function max returns the maximum element of a set.

Definition 4 (Visual Similarity Function)

Given two geo-image datasets D and D' and geo-images o_i ∈ D, o'_j ∈ D', the visual similarity between o_i and o'_j is measured by the following similarity function:

    Sim_V(o_i, o'_j) = Σ_{v ∈ o_i.V ∩ o'_j.V} w(v) / Σ_{v ∈ o_i.V ∪ o'_j.V} w(v)    (4)

where w(v) represents the weight of the visual word v. In this work, we measure the weight of a visual word by its inverse document frequency (IDF).

Assumption. For ease of discussion, in this work we assume that D = D'. Our approach can be applied equally well in the case D ≠ D'. Therefore, for a geo-image dataset D, we denote a spatial visual similarity join as SVS-JOIN(D, ε_g, ε_v).
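The two similarity functions are straightforward to implement. Below is a minimal Java sketch of Equations (1)-(4) (Java being the language our experiments in Section 6 use); the class and method names here are ours for illustration only, not the names used in our implementation.

import java.util.*;

class GeoImage {
    final double x, y;            // longitude and latitude, i.e., o.loc
    final Set<Integer> words;     // visual word ids o.V, from SIFT + BoVW
    GeoImage(double x, double y, Set<Integer> words) {
        this.x = x; this.y = y; this.words = words;
    }
}

class Similarity {
    // Eq. (2): Euclidean distance between two geo-images.
    static double dist(GeoImage a, GeoImage b) {
        return Math.hypot(a.x - b.x, a.y - b.y);
    }

    // Eq. (1): geographical similarity, normalized by maxdist(D, D') (Eq. (3)).
    static double simG(GeoImage a, GeoImage b, double maxDist) {
        return 1.0 - dist(a, b) / maxDist;
    }

    // Eq. (4): weighted Jaccard similarity over visual words with IDF weights w(v).
    static double simV(GeoImage a, GeoImage b, Map<Integer, Double> idf) {
        double inter = 0.0, union = 0.0;
        Set<Integer> all = new HashSet<>(a.words);
        all.addAll(b.words);
        for (int v : all) {
            double w = idf.getOrDefault(v, 0.0);
            union += w;
            if (a.words.contains(v) && b.words.contains(v)) inter += w;
        }
        return union == 0.0 ? 0.0 : inter / union;
    }
}

A pair (o_i, o'_j) then joins when simG(...) ≥ ε_g and simV(...) ≥ ε_v. Computing maxdist exactly takes one pass over all pairs; a cheaper upper bound is the diagonal of the data space.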

Figure 2: An example of geo-images and spatial visual similarity join
Example 2

Consider the geo-image dataset D shown in Fig. 2. Given the geographical similarity threshold ε_g and the visual similarity threshold ε_v, SVS-JOIN(D, ε_g, ε_v) returns the set of geo-image pairs whose geographical and visual similarities both reach the thresholds.

3.2 Visual Feature Extraction and Image Representation

Visual Feature Extraction. In this work, we utilize SIFT DBLP:journals/ijcv/Lowe04 , an important traditional technique for visual feature extraction. SIFT transforms an image into a large set of local feature vectors. These feature vectors are invariant to image translation, scaling and rotation, and partially invariant to illumination changes and affine or 3D projection. There are four main phases of visual feature extraction by SIFT, listed below; a worked example and a BoVW quantization sketch follow the list:

  • Scale-space extrema detection. The first phase searches over all scales and image locations in scale space to identify potential interest points that are invariant to scale and orientation, utilizing the difference-of-Gaussians (DoG) function:

    D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y)    (5)

    where I(x, y) represents an image, the operator * represents the convolution operation, and G(x, y, σ) is the Gaussian kernel function:

    G(x, y, σ) = (1 / (2πσ²)) · e^{−(x² + y²) / (2σ²)}    (6)
  • Keypoint localization. The second phase selects and localizes keypoints according to their stability. At each candidate location, a fine fitting model is used to determine the location and scale. In this process, two types of keypoints should be rejected: (I) low-contrast feature points and (II) unstable edge response points. For the first type, let X = (x, y, σ)^T be the offset from a candidate keypoint; the method uses the Taylor expansion of D(x, y, σ):

    D(X) = D + (∂D/∂X)^T X + (1/2) X^T (∂²D/∂X²) X    (7)

    The extremum X̂ is determined by taking the derivative of this function with respect to X and setting it to zero:

    X̂ = −(∂²D/∂X²)^{−1} (∂D/∂X)    (8)

    Thus we can use the function value at the extremum to reject unstable extrema with low contrast:

    D(X̂) = D + (1/2) (∂D/∂X)^T X̂    (9)

    Let T_c be the contrast threshold; if |D(X̂)| < T_c, the keypoint is rejected. For type (II) keypoints, which are unstable edge response points, the method computes the eigenvalues of the Hessian matrix H of D at the candidate feature point, which are in direct proportion to its principal curvatures:

    H = [ D_xx  D_xy ; D_xy  D_yy ]    (10)

    We can compute the trace and determinant of H as follows. Let α be the maximum eigenvalue of H and β the minimum eigenvalue; then

    Tr(H) = D_xx + D_yy = α + β    (11)

    Det(H) = D_xx · D_yy − (D_xy)² = α · β    (12)

    Let r = α/β be the ratio of the maximum eigenvalue to the minimum eigenvalue; then

    Tr(H)² / Det(H) = (α + β)² / (α · β) = (rβ + β)² / (r · β²) = (r + 1)² / r    (13)

    Thus, to check whether the principal curvature ratio is below a threshold r, we only need to check the following inequality:

    Tr(H)² / Det(H) < (r + 1)² / r    (14)

    If this inequality does not hold, the keypoint is rejected.

  • Orientation assignment. In the orientation assignment phase, each keypoint is assigned one or more directions according to the local gradient directions of the image, and all subsequent operations are performed relative to the direction, scale and position of the keypoint, providing invariance of the features to these transformations. The direction parameter is determined from the gradient distribution of the pixels in the neighborhood of the keypoint, and the gradient histogram of the image is then used to obtain the stable direction of the local structure around the keypoint. The scale image of a feature point is computed by

    L(x, y) = G(x, y, σ) * I(x, y)    (15)

    Thus, for each image L(x, y), the gradient magnitude m(x, y) and the orientation θ(x, y) can be calculated respectively by the following equations:

    m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )    (16)

    θ(x, y) = arctan( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )    (17)

    After calculating the gradient directions, an orientation histogram is generated from the gradient orientations of sample points within a region around the keypoint, which summarizes the gradient directions and magnitudes of the pixels in the neighborhood of the feature point. The peaks of the histogram are the dominant directions of the keypoint.

  • Keypoint descriptor. In the last phase, the local gradients of the image are measured around each feature point at the selected scales, and these gradients are transformed into a representation that allows for significant local shape distortion and illumination change.
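To make the edge-response test of Equation (14) concrete: Lowe suggests r = 10, so a keypoint survives only if Tr(H)² / Det(H) < (10 + 1)² / 10 = 12.1. A point lying on an edge has one large and one small principal curvature, so this ratio blows up and the point is rejected.

The output of these four phases feeds the BoVW step reviewed in Section 2: each 128-dimensional SIFT descriptor is quantized to the id of its nearest k-means centroid, and the set of ids becomes the visual component o.V. A minimal Java sketch of this quantization is given below; it assumes the descriptors and the k-means codebook come from an off-the-shelf SIFT/k-means implementation, and all names are illustrative.

import java.util.*;

class BoVW {
    // Return the index of the centroid closest (in squared Euclidean
    // distance) to the given 128-d SIFT descriptor.
    static int nearestCentroid(double[] desc, double[][] centroids) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int c = 0; c < centroids.length; c++) {
            double d = 0.0;
            for (int i = 0; i < desc.length; i++) {
                double diff = desc[i] - centroids[c][i];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = c; }
        }
        return best;
    }

    // The visual component o.V of a geo-image: the set of visual word ids.
    static Set<Integer> visualWords(double[][] descriptors, double[][] centroids) {
        Set<Integer> words = new HashSet<>();
        for (double[] desc : descriptors)
            words.add(nearestCentroid(desc, centroids));
        return words;
    }
}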

4 The Baseline for Spatial Visual Similarity Joins

In this section, we propose the baseline for the problem of spatial visual similarity joins. First, we introduce the state-of-the-art algorithm named PPJOIN DBLP:journals/tods/XiaoWLYW11 for textual similarity joins, which is utilized in our baseline. Then we present our baseline named SVS-JOIN in detail.

4.1 The Method for Textual Similarity Joins

Inverted Index Based Method. The traditional way to solve textual similarity joins efficiently is to construct an inverted index for the target object dataset R, which associates each word w in the global word set W (built beforehand) with an inverted list I_w of objects. For each object x ∈ R, the inverted list I_w of each word w contained in x is traversed. Then we count the number of overlapping words of x with every other object and save these counts in a map A; we denote by A[x, y] the number of overlapping words of x with y. Apparently, the candidate pair set can be generated from A directly: for an object pair (x, y), if A[x, y] > 0 and the textual similarity of x and y reaches the threshold, the pair belongs to the results. Formally, a textual similarity join by this method returns a result set S,

    S = { (x, y) | Sim_T(x, y) ≥ τ, x, y ∈ R }    (18)

where Sim_T is the textual similarity function and τ is the textual similarity threshold.

Prefix Filtering Principle. When we use the inverted index based method, the inverted list of a word will be quite long if the word is very frequent in the dataset. This becomes a major challenge, as a large number of candidate pairs are generated in this situation. In order to reduce the size of the candidate set, an efficient method called the prefix filtering principle was devised by DBLP:conf/icde/ChaudhuriGK06 . According to this technique, we generate a global word ordering O which sorts words by increasing frequency (rare words first), and then, for every object x, order the keywords in x by O. After the ordering, the prefix of x is denoted as pre(x), and its length |pre(x)| is measured by the following equation:

    |pre(x)| = |x| − ⌈τ · |x|⌉ + 1    (19)

where |x| represents the number of keywords in x and τ is the textual similarity threshold. It is obvious that the length of the prefix of an object is determined by the number of keywords contained in the object and the similarity threshold given in advance. Based on this principle, we can obtain the following theorem:

Theorem 4.1

Given two objects x, y and a textual similarity threshold τ, if Sim_T(x, y) ≥ τ, then pre(x) ∩ pre(y) ≠ ∅.

The basic idea of Theorem 4.1 is that if the textual similarity between two objects reaches the threshold, their prefixes must share at least one keyword. Therefore, this theorem can be used to prune the candidate pair set effectively: for each object x, we only need to probe the inverted lists of the keywords contained in the prefix of x.
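As a sketch of how the principle is used, the following Java fragment sorts a record by the global ordering O, takes the prefix of Equation (19), and tests prefix intersection; it uses the plain (unweighted) Jaccard form with 0 < τ ≤ 1, the rank map (word id to position in O) is assumed to be built in a first pass over the dataset, and all names are illustrative.

import java.util.*;

class PrefixFilter {
    // Sort the words of a record by the global ordering O (rare words first).
    static List<Integer> sortByGlobalOrder(Set<Integer> words, Map<Integer, Integer> rank) {
        List<Integer> sorted = new ArrayList<>(words);
        sorted.sort(Comparator.comparingInt(rank::get));
        return sorted;
    }

    // Eq. (19): |pre(x)| = |x| - ceil(tau * |x|) + 1, assuming 0 < tau <= 1.
    static int prefixLength(int size, double tau) {
        return size - (int) Math.ceil(tau * size) + 1;
    }

    // Contrapositive of Theorem 4.1: if the prefixes do not intersect,
    // the pair can be pruned without computing the full similarity.
    static boolean prefixesIntersect(List<Integer> a, List<Integer> b, double tau) {
        Set<Integer> prefA = new HashSet<>(a.subList(0, prefixLength(a.size(), tau)));
        for (int v : b.subList(0, prefixLength(b.size(), tau)))
            if (prefA.contains(v)) return true;
        return false;
    }
}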

The PPJOIN Algorithm. PPJOIN, developed by Xiao et al. DBLP:journals/tods/XiaoWLYW11 , is one of the most efficient algorithms for the textual similarity join problem.

0:  Input: R is an object dataset sorted by a global ordering O and by increasing object size, τ is a textual similarity threshold.
0:  Output: S is the result pairs set.
1:  for each word w in the global word set W do
2:      I_w ← ∅;
3:  end for
4:  for each x ∈ R do
5:      A ← an empty map from object to overlap count;
6:      p ← |x| − ⌈τ · |x|⌉ + 1;
7:      for i = 1 to p do
8:          w ← the i-th keyword in x;
9:          for each (y, j) ∈ I_w such that |y| ≥ τ · |x| do
10:              if the positional and suffix filters qualify (x, y) then
11:                  A[x, y] ← A[x, y] + 1;
12:              else
13:                  A[x, y] ← −∞;
14:              end if
15:          end for
16:          if i is within the indexing prefix of x then
17:              I_w ← I_w ∪ {(x, i)};
18:          end if
19:      end for
20:      S ← S ∪ Verify(x, A, τ);
21:  end for
22:  return S;
Algorithm 1 PPJOIN Algorithm

Algorithm 1 demonstrates the pseudo-code of the PPJOIN algorithm. The input of the algorithm is a textual similarity threshold τ and an object dataset R sorted in ascending order of object size. At first it creates an empty inverted list for each word in the global word set. Then, for each object x, the probing prefix length p is calculated. From the first position to the p-th position, it scans the prefix of x, gets the word at the current position, and generates candidate pairs from that word's inverted list. After that, the algorithm filters the candidate pairs: the positional and suffix filters are applied by calling two procedures, and the overlap count is increased only if the pair qualifies these filters. At last, the algorithm generates the result set by executing the Verify procedure.

4.2 The Baselines for Spatial Visual Similarity Joins

In this subsection, we introduce our baseline approach. Inspired by the prefix filtering principle and the PPJOIN algorithm, we propose a baseline called SVS-JOIN for the problem of spatial visual similarity joins. Different from textual similarity joins, our method considers two aspects of information, i.e., geographical information and visual information. We use the two thresholds ε_g and ε_v for the measurement of geographical similarity and visual similarity, and, according to the definition of the spatial visual similarity join, we implement two procedures to calculate these two similarities.

SVS-JOIN Algorithm. Algorithm 2 demonstrates the computing process in pseudo-code. The input is a geo-image dataset D and the two thresholds ε_g and ε_v. Different from Algorithm 1, in Line 9 the geographical similarity is used as a filter to prune geo-image pairs whose spatial distance is not short enough. Like PPJOIN, the Verify procedure generates the final result set from the candidate set.
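Before the pseudo-code, note that the geographical filter of Line 9 reduces to a cheap distance check obtained by inverting Equation (1), so no similarity value has to be materialized; a one-method Java sketch, reusing the illustrative Similarity class from Section 3:

class GeoFilter {
    // Sim_G(a, b) >= eps_g  is equivalent to
    // dist(a, b) <= (1 - eps_g) * maxdist(D, D'),
    // so a candidate pair can be pruned from the raw distance alone.
    static boolean pass(GeoImage a, GeoImage b, double epsG, double maxDist) {
        return Similarity.dist(a, b) <= (1.0 - epsG) * maxDist;
    }
}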

0:  Input: D is a geo-image dataset sorted by a global ordering O, ε_v is the visual similarity threshold, ε_g is the geographical similarity threshold.
0:  Output: S is the result pairs set.
1:  for each visual word v in the global visual word set W do
2:      I_v ← ∅;
3:  end for
4:  for each o ∈ D do
5:      A ← an empty map from geo-image to overlap count;
6:      p ← |o.V| − ⌈ε_v · |o.V|⌉ + 1;
7:      for i = 1 to p do
8:          v ← the i-th visual word in o.V;
9:          for each (o', j) ∈ I_v such that |o'.V| ≥ ε_v · |o.V| and Sim_G(o, o') ≥ ε_g do
10:              if the positional and suffix filters qualify (o, o') then
11:                  A[o, o'] ← A[o, o'] + 1;
12:              else
13:                  A[o, o'] ← −∞;
14:              end if
15:          end for
16:          if i is within the indexing prefix of o then
17:              I_v ← I_v ∪ {(o, i)};
18:          end if
19:      end for
20:      S ← S ∪ Verify(o, A, ε_v);
21:  end for
22:  return S;
Algorithm 2 SVS-JOIN Algorithm

Although the SVS-JOIN algorithm can effectively handle the problem of spatial visual similarity joins, its efficiency can still be improved significantly. Thanks to the filter condition in Line 9, we only verify geo-images whose geographical similarity is high enough. Unfortunately, this is also the main limitation: for each visual word contained in o.V, the SVS-JOIN algorithm still scans all the geo-images in that word's inverted list, regardless of their location. To overcome this drawback, in the next part we present a grid based spatial partition strategy and develop a more efficient baseline named SVS-JOIN_G extending SVS-JOIN.

Spatial Grid. For the task of spatial visual similarity joins, we propose a grid based spatial partition strategy named spatial grid to improve the performance of the algorithm. This strategy models the two-dimensional spatial area of a geo-image dataset as a grid G consisting of cells whose side length equals the geographical similarity threshold ε_g in each dimension; thus the area of each cell equals ε_g². It is clear that the spatial grid is fully determined by the dataset D and the threshold ε_g of a given spatial visual similarity join. To put it another way, for a given dataset D, the grid does not need to be pre-computed.

Fig. 3(a) shows how to generate candidate pairs based on the spatial grid. The number in a cell is the cell id. Assume a geo-image o is located in cell 57, colored yellow and denoted c_57. In order to retrieve the candidate pairs for o, only c_57 and its eight neighbor cells, colored light yellow, need to be accessed, due to the restriction of the geographical similarity threshold. Therefore, for one geo-image we only need to check nine cells in total to find the partners forming candidate pairs. If the currently accessed cell lies near the edge of the grid, even fewer cells (e.g., six) have to be checked. Thus, this strategy reduces the search space significantly. We then utilize the spatial similarity filter to find the results from the cells mentioned above.
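A minimal Java sketch of the grid arithmetic follows; it assumes the spatial extent of the dataset is known and uses a row-major cell numbering as in Fig. 3(a), with all names illustrative.

import java.util.*;

class SpatialGrid {
    final double minX, minY, cellSize;   // cellSize equals eps_g, as in the text
    final int cols, rows;

    SpatialGrid(double minX, double minY, double maxX, double maxY, double cellSize) {
        this.minX = minX; this.minY = minY; this.cellSize = cellSize;
        this.cols = (int) Math.ceil((maxX - minX) / cellSize);
        this.rows = (int) Math.ceil((maxY - minY) / cellSize);
    }

    // Row-major cell id of a point, clamped to the grid boundary.
    int cellId(double x, double y) {
        int col = Math.min(cols - 1, (int) ((x - minX) / cellSize));
        int row = Math.min(rows - 1, (int) ((y - minY) / cellSize));
        return row * cols + col;
    }

    // The cell itself plus its existing neighbors: at most nine cells,
    // fewer near the border, exactly as described above.
    List<Integer> candidateCells(int cellId) {
        List<Integer> cells = new ArrayList<>();
        int row = cellId / cols, col = cellId % cols;
        for (int dr = -1; dr <= 1; dr++)
            for (int dc = -1; dc <= 1; dc++) {
                int r = row + dr, c = col + dc;
                if (r >= 0 && r < rows && c >= 0 && c < cols)
                    cells.add(r * cols + c);
            }
        return cells;
    }
}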

(a) the use of spatial grid
Figure 3: An example of spatial grid

SVS-JOIN_G Algorithm. Based on the spatial grid method, we develop an extension of SVS-JOIN called the SVS-JOIN_G algorithm. On top of the filtering framework of SVS-JOIN, this algorithm utilizes the spatial grid strategy: a spatial grid is constructed for the input dataset D as the basic spatial data structure, and the geo-images in D are then accessed in ascending order of their cell id. For each cell c, the algorithm obtains a cell set N(c), and each geo-image in c is joined with all of the geo-images in N(c). In N(c), the neighbor cells of c have smaller ids than c itself.

There are some differences between SVS-JOIN_G and SVS-JOIN. For example, the SVS-JOIN_G algorithm constructs an inverted index for every cell in the grid rather than a single global index: for each visual word v in the global visual dictionary, every cell c has its own inverted list I_{c,v}.

0:  Input: D is a geo-image dataset sorted by a global ordering O, ε_v is the visual similarity threshold, ε_g is the geographical similarity threshold.
0:  Output: S is the result pairs set.
1:  G ← ConstructGrid(D, ε_g);
2:  for each cell c ∈ G in ascending order of cell id do
3:      N(c) ← NeighborCells(c);
4:      for each cell c' ∈ N(c) do
5:          S ← S ∪ CellJoin(c, c', ε_v, ε_g);
6:      end for
7:  end for
8:  return S;
Algorithm 3 SVS-JOIN_G Algorithm

Algorithm 3 demonstrates the computation process of the SVS-JOIN_G algorithm. Similar to the SVS-JOIN algorithm, the input consists of a geo-image dataset D sorted by a global ordering O, a visual similarity threshold ε_v and a geographical similarity threshold ε_g. The first step is to construct a spatial grid for D, shown as Line 1; the geo-images are ordered according to cell id. After this step, the algorithm traverses the grid to perform the join cell by cell. For each cell c, the NeighborCells procedure is executed to get the cell set N(c), and for every cell c' ∈ N(c) the algorithm executes CellJoin to accumulate the final result set. It is worth noting that the geo-images located in each cell are checked several times, which means additional buffers need to be created to store the cells for later processing.
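A hedged Java sketch of this cell-by-cell traversal is given below, reusing the illustrative GeoImage and SpatialGrid classes from the earlier sketches; the similar predicate stands in for the Sim_G / Sim_V checks of Section 3, and within one cell each unordered pair appears in both orders, which a real implementation would deduplicate.

import java.util.*;
import java.util.function.BiPredicate;

class GridJoin {
    // Group geo-images by cell id, then join each cell only against itself
    // and neighbor cells with smaller id, as described in the text above.
    static List<GeoImage[]> joinByCells(List<GeoImage> dataset, SpatialGrid grid,
                                        BiPredicate<GeoImage, GeoImage> similar) {
        Map<Integer, List<GeoImage>> buckets = new TreeMap<>();
        for (GeoImage o : dataset)
            buckets.computeIfAbsent(grid.cellId(o.x, o.y),
                                    k -> new ArrayList<>()).add(o);
        List<GeoImage[]> results = new ArrayList<>();
        for (Map.Entry<Integer, List<GeoImage>> e : buckets.entrySet())
            for (int c : grid.candidateCells(e.getKey())) {
                if (c > e.getKey() || !buckets.containsKey(c)) continue;
                for (GeoImage a : e.getValue())
                    for (GeoImage b : buckets.get(c))
                        if (a != b && similar.test(a, b))
                            results.add(new GeoImage[] {a, b});
            }
        return results;
    }
}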

5 The Quadtree Based Global Index Method

In the last section, we introduced the baselines, the second of which utilizes a spatial grid to improve the performance of the spatial search: for SVS-JOIN_G, we first build a spatial grid for D and then construct a local inverted index for each cell of the grid. In this section, we propose a novel method to solve the problem of spatial visual similarity joins efficiently, based on a global inverted index and a quadtree partition strategy.

(a) A quadtree (b) Quadtree partition
Figure 4: An example of quadtree partition

5.1 Quadtree Partition and Global Index

Quadtree Partition. The quadtree is one of the most popular spatial indexing structures and is used in many applications. It partitions a 2-dimensional spatial region into 4 subregions in a recursive manner. Fig. 4(a) illustrates an example of a quadtree which partitions the spatial region into several levels. At the l-th level, the region is split into 4^l equal subregions, and each node of the quadtree corresponds to a subregion. The root node of the quadtree is located at the 0-th level and represents the whole spatial region. Four subnodes at level 1 are partitioned from the root node, and the subnodes at level l+1 are split from the nodes at level l in the same manner. In Fig. 4(a) the nodes are drawn in different colors: the light gray nodes are the root and the intermediate nodes, while the dark gray nodes, which may appear at any level of the quadtree, are the leaf nodes according to the split condition. Each leaf node stores a list of the geo-images inside it. In general, the whole spatial region is partitioned into several nodes, and the geo-images are distributed over these nodes.

Fig. 4(b) shows the partition of Example 2 by a quadtree. The red numbers in the quadtree are node ids. Apparently, the 9 geo-images are distributed over the subregions. Node 1, denoted as n_1, contains two geo-images. As the number of geo-images in this example is really small, the other nodes contain at most one geo-image each.
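A minimal Java quadtree sketch follows; it assumes a fixed leaf capacity as the split condition, which is one common choice rather than the specific split condition used in our index, and all names are illustrative.

import java.util.*;

class QuadNode {
    static final int CAPACITY = 4;   // illustrative split threshold
    final double x, y, w, h;         // subregion: origin and extent
    List<double[]> points = new ArrayList<>();
    QuadNode[] children;             // null while this node is a leaf

    QuadNode(double x, double y, double w, double h) {
        this.x = x; this.y = y; this.w = w; this.h = h;
    }

    // Insert a point; a leaf that exceeds its capacity is split into
    // four equal subregions, exactly one level deeper.
    void insert(double[] p) {
        if (children != null) { childFor(p).insert(p); return; }
        points.add(p);
        if (points.size() > CAPACITY) split();
    }

    private void split() {
        double hw = w / 2, hh = h / 2;
        children = new QuadNode[] {
            new QuadNode(x, y, hw, hh),      new QuadNode(x + hw, y, hw, hh),
            new QuadNode(x, y + hh, hw, hh), new QuadNode(x + hw, y + hh, hw, hh)
        };
        for (double[] p : points) childFor(p).insert(p);
        points.clear();
    }

    private QuadNode childFor(double[] p) {
        int i = (p[0] >= x + w / 2 ? 1 : 0) + (p[1] >= y + h / 2 ? 2 : 0);
        return children[i];
    }
}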

Z-Order Curve. In this paper, we utilize the Z-order curve to encode each node of the quadtree according to its partition sequence. There is a direct relationship between the Z-order curve and the quadtree: the Z-order curve describes the path from the root to a node of the quadtree. Fig. 5(a) demonstrates how to generate the Morton code of a subregion based on the spatial partition sequence within a region. Following the Z-order curve, we number these 16 subregions from 0 to 15 in decimal, or from 0000 to 1111 in binary. Fig. 5(b) illustrates the Morton codes in the quadtree partition of Example 2; we use the binary code as the node id.
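Morton codes come from bit interleaving; a small Java sketch for grids of up to 2^16 × 2^16 cells, matching the numbering of Fig. 5(a):

class ZOrder {
    // Interleave the bits of (col, row): the resulting Morton code spells
    // out the quadrant choices on the path from the quadtree root.
    static long morton(int col, int row) {
        long code = 0;
        for (int i = 0; i < 16; i++) {
            code |= (long) ((col >> i) & 1) << (2 * i);
            code |= (long) ((row >> i) & 1) << (2 * i + 1);
        }
        return code;
    }
}

For the 4x4 partition of Fig. 5(a), morton enumerates the 16 subregions as 0 to 15 (0000 to 1111 in binary) along the Z-shaped curve.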

(a) Z-order encoding of the subregions (b) Morton codes of the quadtree partition
Figure 5: An example of Z-order

5.2 SVS-JOIN_Q Algorithm

Based on the quadtree partition and a global inverted index, we develop a novel algorithm called SVS-JOIN_Q to solve the spatial visual similarity join problem efficiently. Algorithm 4 shows the pseudo-code of this algorithm, and Algorithm 5 and Algorithm 6 demonstrate the two key procedures applied in SVS-JOIN_Q. The first step of SVS-JOIN_Q is to construct a quadtree to partition the whole spatial region of the input dataset D. After that, it executes the GlobalIndexConstructor procedure to build the global inverted index for each visual word. When building the inverted list entries for a geo-image o, only the geo-images in the same or neighboring quadtree nodes need to be considered. For each inverted list, the algorithm additionally records the start position and end position of each node as well as the exact positions of the geo-images, to speed up searching. Finally, JoinSearch is invoked to search out all the similar geo-image pairs as the result.

0:  Input: a geo-image dataset D, a visual similarity threshold ε_v, a geographical similarity threshold ε_g.
0:  Output: a result pairs set S.
1:  S ← ∅;
2:  QT ← ConstructQuadtree(D);
3:  I ← GlobalIndexConstructor(D, ε_v);
4:  for each o ∈ D do
5:      S ← S ∪ JoinSearch(o, I, ε_v, ε_g);
6:  end for
7:  return S;
Algorithm 4 SVS-JOIN_Q Algorithm
0:  Input: a geo-image dataset D, a visual similarity threshold ε_v.
0:  Output: an inverted index set I.
1:  Initializing: sort the visual words in descending order of their number of non-zero entries;
2:  Initializing: denote the maximum of w(v_i) over all geo-images as the maxweight of the i-th visual word v_i;
3:  Initializing: denote the maximum of the maxweights of the words from 1 to i as the maxweight of the first i words;
4:  Initializing: I_v ← ∅ for each visual word v;
5:  Initializing: QT ← the quadtree partition of D;
6:  Initializing: the neighbor node list of each node of QT;
7:  for each o ∈ D do
8:      C ← the set of geo-images in the same node as o or in its neighbor nodes;
9:      denote the maximum of w(v) over v ∈ o.V as the maxweight of o;
10:      for each v ∈ o.V in ascending order of word position do
11:          b ← the similarity upper bound accumulated from the maxweights so far;
12:          if b ≥ ε_v then
13:              I_v ← I_v ∪ {o};
14:          end if
15:      end for
16:  end for
17:  for each inverted list I_v ∈ I do
18:      record the start position and the end position of each node in I_v;
19:      record the exact position of each geo-image in I_v;
20:  end for
21:  return I;
Algorithm 5 GlobalIndexConstructor(D, ε_v)
0:  Input: a geo-image o, a global index set I, a visual similarity threshold ε_v, a geographical similarity threshold ε_g.
0:  Output: a result pairs set S.
1:  Initializing: S ← ∅;
2:  Initializing: A ← an empty map from geo-image to accumulated overlap;
3:  Initializing: b ← the maxweight of o;
4:  for each v ∈ o.V such that b ≥ ε_v do
5:      for each node n among the candidate nodes of o do
6:          b' ← the bound b accumulated with the maxweight of v;
7:          if b' ≥ ε_v then
8:              s ← the start position of node n in I_v;
9:              e ← the end position of node n in I_v;
10:          else
11:              s ← the exact position of o in I_v;
12:              e ← the end position of node n in I_v;
13:          end if
14:          for each geo-image o' ∈ I_v[s..e] do
15:              if o' equals o then
16:                  continue;
17:              end if
18:              if Sim_G(o, o') ≥ ε_g then
19:                  A[o'] ← A[o'] + w(v);
20:              end if
21:              update the bound b with the maxweight of v;
22:          end for
23:      end for
24:  end for
25:  S ← Verify(o, A, ε_v, ε_g);
26:  return S;
Algorithm 6 JoinSearch(o, I, ε_v, ε_g)

6 Performance Evaluation

In this section, we present the results of a comprehensive performance evaluation on real geo-image datasets to verify the efficiency and scalability of the proposed approaches. Specifically, we evaluate the efficiency of the following methods.

  • SVS-JOIN. The baseline technique introduced in Section 4.

  • SVS-JOIN_G. The spatial grid based extension introduced in Section 4.

  • SVS-JOIN_Q. The quadtree and global inverted index based technique introduced in Section 5.

Datasets. The performance of the three algorithms is evaluated on both real spatial and image datasets. The following two datasets are deployed in our experiments. The real image dataset Flickr is obtained by crawling millions of images from the popular photo-sharing platform Flickr (http://www.flickr.com/). To evaluate the scalability of our proposed algorithms, the dataset size varies from 100K to 500K; the geo-location information is obtained from the geo-tag of each image. Similarly, the real dataset ImageNet is obtained from ImageNet, the largest image dataset, which is widely used in image processing and computer vision; it includes 14,197,122 images, of which 1.2 million come with SIFT features. We generate the ImageNet dataset with size varying from 100K to 500K. The geographical information of these images is randomly generated from the spatial datasets at Rtree-Portal (http://www.rtreeportal.org).

Workload. The geo-image dataset size increases from 100K to 500K; the number of visual words contained in a geo-image grows from 20 to 100; the geographical similarity threshold ε_g and the visual similarity threshold ε_v vary from 0.02 to 0.10 and from 0.5 to 0.9 respectively. By default, the image dataset size, the number of visual words, the geographical similarity threshold and the visual similarity threshold are set to 300K, 60, 0.06 and 0.7 respectively.

All the experiments are run on a PC with an Intel(R) Xeon 2.60GHz dual CPU and 16GB memory running Ubuntu 16.04 LTS. All algorithms in the experiments are implemented in Java. Note that the quadtree of the SVS-JOIN_Q method is maintained in memory.

(a) Evaluation on Flickr (b) Evaluation on ImageNet
Figure 6: Evaluation on various dataset size on Flickr and ImageNet

Evaluation on the size of the dataset. We evaluate the effect of the dataset size on Flickr and ImageNet, shown in Fig. 6. It is obvious that the response times of SVS-JOIN, SVS-JOIN_G and SVS-JOIN_Q increase gradually in Fig. 6(a). Specifically, the performance of SVS-JOIN is the worst of the three, as there is no effective spatial search technique in this solution. The time cost of SVS-JOIN_G fluctuates from about 14 seconds to 23 seconds, which is higher than SVS-JOIN_Q because the quadtree and global inverted index based solution is more efficient. Fig. 6(b) illustrates the evaluation on the ImageNet dataset. Similar to the situation on the Flickr dataset, the efficiency of SVS-JOIN_Q is the highest; however, as the dataset size rises from 100K to 500K, the time cost of SVS-JOIN_Q increases faster than on Flickr. On the other hand, the performance of SVS-JOIN is still the worst.

(a) Evaluation on Flickr (b) Evaluation on ImageNet
Figure 7: Evaluation on the number of visual words on Flickr and ImageNet

Evaluation on the number of visual words. We evaluate the effect of the number of visual words on the Flickr and ImageNet datasets, shown in Figure 7. We can see from Fig. 7(a) that the response times of all three methods grow step by step with the increase in the number of visual words. For SVS-JOIN, when the number of visual words is larger than 40, its growth is a little faster; apparently, its response time is the highest of the three, while SVS-JOIN_Q is the most efficient algorithm on this dataset. The evaluation on the ImageNet dataset is shown in Fig. 7(b). Here the growth of SVS-JOIN and SVS-JOIN_G is faster, a situation that does not appear for SVS-JOIN_Q. There is no doubt that the performance of SVS-JOIN_Q is the best, just like in the evaluations mentioned above.

(a) Evaluation on Flickr (b) Evaluation on ImageNet
Figure 8: Evaluation on the geographical similarity threshold on Flickr and ImageNet

Evaluation on the geographical similarity threshold. We evaluate the effect of the geographical similarity threshold on the Flickr and ImageNet datasets, shown in Figure 8. In Figure 8(a), with the increase of the geographical similarity threshold, the response times of all three algorithms are almost unchanged. For SVS-JOIN, the time cost fluctuates slightly over the tested interval and is the highest among the three. For the SVS-JOIN_Q algorithm, the range of its fluctuation is very small, and this method has the lowest response time. On the other hand, the trend of SVS-JOIN_G is similar to SVS-JOIN_Q, although its efficiency is lower. We can find from Fig. 8(b) that on the ImageNet dataset the trends are slightly different from the situation on Flickr: at the larger threshold values, both SVS-JOIN and SVS-JOIN_G show an obvious rise, whereas the SVS-JOIN_Q algorithm seems to be unaffected by the increase of the threshold.

(a) Evaluation on Flickr (b) Evaluation on ImageNet
Figure 9: Evaluation on the visual similarity threshold on Flickr and ImageNet

Evaluation on the visual similarity threshold. We evaluate the effect of the visual similarity threshold on the Flickr and ImageNet datasets, shown in Figure 9. We can see from Figure 9(a) that with the rise of the visual similarity threshold, the response times of the three algorithms drop rapidly. The efficiency of SVS-JOIN_Q is higher than SVS-JOIN and SVS-JOIN_G from start to finish. In Fig. 9(b), the response times decline gradually, but the speed of decrease is a little slower than on the Flickr dataset. As in the situations above, the performance of SVS-JOIN_Q is the best.

7 Conclusion

In this paper, we study a novel problem named spatial visual similarity join (SVS-JOIN for short). Given a set of geo-images which contain geographical information and visual content information, SVS-JOIN aims to search out all the geo-image pairs from the dataset which are similar to each other in the aspects of both geographical similarity and visual similarity. To solve this problem efficiently, we define SVS-JOIN formally for the first time and then propose the geographical and visual similarity functions. A baseline named SVS-JOIN is developed, inspired by the approaches applied to spatial similarity joins. In order to improve the efficiency of searching, we extend this method and propose a novel algorithm called SVS-JOIN_G which utilizes a spatial grid strategy to enhance the performance of spatial retrieval. Besides, we introduce an alternative algorithm named SVS-JOIN_Q which applies the quadtree technique and a global inverted indexing structure. The experimental evaluation on real geo-multimedia datasets shows that our method achieves high performance.

Acknowledgments: This work was supported in part by the National Natural Science Foundation of China (61702560), project (2018JJ3691, 2016JC2011) of Science and Technology Plan of Hunan Province, and the Research and Innovation Project of Central South University Graduate Students(2018zzts177,2018zzts588).

References

  • (1) Abeywickrama, T., Cheema, M.A., Taniar, D.: k-nearest neighbors on road networks: a journey in experimentation and in-memory implementation. VLDB Endowment (2016)
  • (2) Ballesteros, J., Cary, A., Rishe, N.: SpSJoin: parallel spatial similarity joins. In: ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, pp. 481–484 (2011)
  • (3) Beckmann, N., Kriegel, H., Schneider, R., Seeger, B.: The r*-tree: An efficient and robust access method for points and rectangles. In: Proceedings of the 1990 ACM SIGMOD International Conference on Management of Data, Atlantic City, NJ, May 23-25, 1990., pp. 322–331 (1990)
  • (4) Cao, X., Chen, L., Cong, G., Jensen, C.S., Qu, Q., Skovsgaard, A., Wu, D., Yiu, M.L.: Spatial keyword querying. In: Conceptual Modeling - 31st International Conference ER 2012, Florence, Italy, October 15-18, 2012. Proceedings, pp. 16–29 (2012)
  • (5) Cao, X., Cong, G., Jensen, C.S., Ooi, B.C.: Collective spatial keyword querying. In: Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2011, Athens, Greece, June 12-16, 2011, pp. 373–384 (2011)
  • (6) Chaudhuri, S., Ganti, V., Kaushik, R.: A primitive operator for similarity joins in data cleaning. In: Proceedings of the 22nd International Conference on Data Engineering, ICDE 2006, 3-8 April 2006, Atlanta, GA, USA, p. 5 (2006)
  • (7) Deng, K., Li, X., Lu, J., Zhou, X.: Best keyword cover search. IEEE Trans. Knowl. Data Eng. 27(1), 61–73 (2015)
  • (8) Dimitrovski, I., Kocev, D., Loskovska, S., Dzeroski, S.: Improving bag-of-visual-words image retrieval with predictive clustering trees. Inf. Sci. 329, 851–865 (2016)
  • (9) Escalante, H.J., Ponce-López, V., Escalera, S., Baró, X., Morales-Reyes, A., Martínez-Carranza, J.: Evolving weighting schemes for the bag of visual words. Neural Computing and Applications 28(5), 925–939 (2017)
  • (10) Fan, J., Li, G., Zhou, L., Chen, S., Hu, J.: Seal: spatio-textual similarity search. Proceedings of the Vldb Endowment 5(9), 824–835 (2012)
  • (11) Felipe, I.D., Hristidis, V., Rishe, N.: Keyword search on spatial databases. In: Proceedings of the 24th International Conference on Data Engineering, ICDE 2008, April 7-12, 2008, Cancún, Mexico, pp. 656–665 (2008)
  • (12) Gao, Y., Qin, X., Zheng, B., Chen, G.: Efficient reverse top-k boolean spatial keyword queries on road networks. IEEE Transactions on Knowledge & Data Engineering 27(5), 1205–1218 (2015)
  • (13) Gao, Y., Zhao, J., Zheng, B., Chen, G.: Efficient collective spatial keyword query processing on road networks. IEEE Trans. Intelligent Transportation Systems 17(2), 469–480 (2016)
  • (14) Guo, T., Cao, X., Cong, G.: Efficient algorithms for answering the m-closest keywords query. In: Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, Melbourne, Victoria, Australia, May 31 - June 4, 2015, pp. 405–418 (2015)
  • (15) Guttman, A.: R-trees: A dynamic index structure for spatial searching. In: SIGMOD’84, Proceedings of Annual Meeting, Boston, Massachusetts, June 18-21, 1984, pp. 47–57 (1984)
  • (16) Hariharan, R., Hore, B., Li, C., Mehrotra, S.: Processing spatial-keyword (SK) queries in geographic information retrieval (GIR) systems. In: 19th International Conference on Scientific and Statistical Database Management, SSDBM 2007, 9-11 July 2007, Banff, Canada, Proceedings, p. 16 (2007)
  • (17) Karakasis, E.G., Amanatiadis, A., Gasteratos, A., Chatzichristofis, S.A.: Image moment invariants as local features for content based image retrieval using the bag-of-visual-words model. Pattern Recognition Letters 55(C), 22–27 (2015)
  • (18) Ke, Y., Sukthankar, R.: PCA-SIFT: A more distinctive representation for local image descriptors. In: 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), with CD-ROM, 27 June - 2 July 2004, Washington, DC, USA, pp. 506–513 (2004)
  • (19) Lee, K.C.K., Lee, W., Zheng, B.: Fast object search on road networks. In: EDBT 2009, 12th International Conference on Extending Database Technology, Saint Petersburg, Russia, March 24-26, 2009, Proceedings, pp. 1018–1029 (2009)
  • (20) Li, C., Ma, L.: A new framework for feature descriptor based on SIFT. Pattern Recognition Letters 30(5), 544–557 (2009)
  • (21) Li, G., Xu, J., Feng, J.: Keyword-based k-nearest neighbor search in spatial databases. In: ACM International Conference on Information and Knowledge Management, pp. 2144–2148 (2012)
  • (22) Li, W., Guan, J., Zhou, S.: Efficiently evaluating range-constrained spatial keyword query on road networks. In: Database Systems for Advanced Applications - 19th International Conference, DASFAA 2014, International Workshops: BDMA, DaMEN, SIM3, UnCrowd; Bali, Indonesia, April 21-24, 2014, Revised Selected Papers, pp. 283–295 (2014)
  • (23) Li, Z., Lee, K.C.K., Zheng, B., Lee, W., Lee, D.L., Wang, X.: IR-tree: An efficient index for geographic document search. IEEE Trans. Knowl. Data Eng. 23(4), 585–599 (2011)
  • (24) Liao, K., Liu, G., Hui, Y.: An improvement to the SIFT descriptor for image representation and matching. Pattern Recognition Letters 34(11), 1211–1220 (2013)
  • (25) Lin, X., Xu, J., Hu, H.: Reverse keyword search for spatio-textual top-k queries in location-based services. IEEE Trans. Knowl. Data Eng. 27(11), 3056–3069 (2015)
  • (26) Lowe, D.G.: Object recognition from local scale-invariant features. In: ICCV, pp. 1150–1157 (1999)
  • (27) Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60(2), 91–110 (2004)
  • (28) Mandal, R., Roy, P.P., Pal, U., Blumenstein, M.: Bag-of-visual-words for signature-based multi-script document retrieval. CoRR abs/1807.06772 (2018)
  • (29) Mortensen, E.N., Deng, H., Shapiro, L.G.: A SIFT descriptor with global context. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), 20-26 June 2005, San Diego, CA, USA, pp. 184–190 (2005)
  • (30) Rocha-Junior, J.B., Gkorgkas, O., Jonassen, S., Nørvåg, K.: Efficient processing of top-k spatial keyword queries. In: Advances in Spatial and Temporal Databases - 12th International Symposium, SSTD 2011, Minneapolis, MN, USA, August 24-26, 2011, Proceedings, pp. 205–222 (2011)
  • (31) Rocha-Junior, J.B., Nørvåg, K.: Top-k spatial keyword queries on road networks. In: 15th International Conference on Extending Database Technology, EDBT ’12, Berlin, Germany, March 27-30, 2012, Proceedings, pp. 168–179 (2012)
  • (32) dos Santos, J.M., de Moura, E.S., da Silva, A.S., da Silva Torres, R.: Color and texture applied to a signature-based bag of visual words method for image retrieval. Multimedia Tools Appl. 76(15), 16855–16872 (2017)
  • (33) Sivic, J., Zisserman, A.: Video google: A text retrieval approach to object matching in videos. In: 9th IEEE International Conference on Computer Vision (ICCV 2003), 14-17 October 2003, Nice, France, pp. 1470–1477 (2003)
  • (34) Su, M., Ma, Y., Zhang, X., Wang, Y., Zhang, Y.: MBR-SIFT: A mirror reflected invariant feature descriptor using a binary representation for image matching. PLoS ONE 12(5) (2017)
  • (35) Sun, W.W., Chen, C.N., Zhu, L., Gao, Y.J., Jing, Y.N., Li, Q.: On efficient aggregate nearest neighbor query processing in road networks. Journal of Computer Science and Technology 30(4), 781–798 (2015)
  • (36) Wang, X., Zhang, Y., Zhang, W., Lin, X., Wang, W.: AP-tree: Efficiently support continuous spatial-keyword queries over stream. In: 31st IEEE International Conference on Data Engineering, ICDE 2015, Seoul, South Korea, April 13-17, 2015, pp. 1107–1118 (2015)
  • (37) Wang, Y., Huang, X., Wu, L.: Clustering via geometric median shift over riemannian manifolds. Information Sciences 220, 292–305 (2013)
  • (38) Wang, Y., Lin, X., Wu, L., Zhang, Q., Zhang, W.: Shifting multi-hypergraphs via collaborative probabilistic voting. Knowledge and Information Systems 46, 515–536 (2016)
  • (39) Wang, Y., Lin, X., Wu, L., Zhang, W.: Effective multi-query expansions: Robust landmark retrieval. In: Proceedings of the 23rd Annual ACM Conference on Multimedia Conference, MM ’15, Brisbane, Australia, October 26 - 30, 2015, pp. 79–88 (2015)
  • (40) Wang, Y., Lin, X., Wu, L., Zhang, W.: Effective multi-query expansions: Collaborative deep networks for robust landmark retrieval. IEEE Trans. Image Processing 26(3), 1393–1404 (2017)
  • (41) Wang, Y., Lin, X., Wu, L., Zhang, W., Zhang, Q.: Exploiting correlation consensus: Towards subspace clustering for multi-modal data. In: Proceedings of the ACM International Conference on Multimedia, MM ’14, Orlando, FL, USA, November 03 - 07, 2014, pp. 981–984 (2014)
  • (42) Wang, Y., Lin, X., Wu, L., Zhang, W., Zhang, Q.: LBMCH: learning bridging mapping for cross-modal hashing. In: Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, Santiago, Chile, August 9-13, 2015, pp. 999–1002 (2015)
  • (43) Wang, Y., Lin, X., Wu, L., Zhang, W., Zhang, Q., Huang, X.: Robust subspace clustering for multi-view data by exploiting correlation consensus. IEEE Trans. Image Processing 24(11), 3939–3949 (2015)
  • (44) Wang, Y., Lin, X., Zhang, Q.: Towards metric fusion on multi-view data: a cross-view based graph random walk approach. In: 22nd ACM International Conference on Information and Knowledge Management, CIKM’13, San Francisco, CA, USA, October 27 - November 1, 2013, pp. 805–810 (2013)
  • (45) Wang, Y., Lin, X., Zhang, Q., Wu, L.: Shifting hypergraphs by probabilistic voting. In: Advances in Knowledge Discovery and Data Mining - 18th Pacific-Asia Conference, PAKDD 2014, Tainan, Taiwan, May 13-16, 2014. Proceedings, Part II, pp. 234–246 (2014)
  • (46) Wang, Y., Wu, L.: Beyond low-rank representations: Orthogonal clustering basis reconstruction with optimized graph structure for multi-view spectral clustering. Neural Networks 103, 1–8 (2018)
  • (47) Wang, Y., Wu, L., Lin, X., Gao, J.: Multiview spectral clustering via structured low-rank matrix factorization. IEEE Trans. Neural Networks and Learning Systems 29(10), 4833–4843 (2018)
  • (48) Wang, Y., Zhang, W., Wu, L., Lin, X., Fang, M., Pan, S.: Iterative views agreement: An iterative low-rank based structured optimization method to multi-view spectral clustering. In: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, pp. 2153–2159 (2016)
  • (49) Wang, Y., Zhang, W., Wu, L., Lin, X., Zhao, X.: Unsupervised metric fusion over multiview data by graph random walk-based cross-view diffusion. IEEE Trans. Neural Netw. Learning Syst. 28(1), 57–70 (2017)
  • (50) Wu, L., Wang, Y.: Robust hashing for multi-view data: Jointly learning low-rank kernelized similarity consensus and hash functions. Image Vision Comput. 57, 58–66 (2017)
  • (51) Wu, L., Wang, Y., Gao, J., Li, X.: Deep adaptive feature embedding with local sample distributions for person re-identification. Pattern Recognition 73, 275–288 (2018)
  • (52) Wu, L., Wang, Y., Gao, J., Li, X.: Where-and-when to look: Deep siamese attention networks for video-based person re-identification. CoRR abs/1808.01911 (2018)
  • (53) Wu, L., Wang, Y., Ge, Z., Hu, Q., Li, X.: Structured deep hashing with convolutional neural networks for fast person re-identification. Computer Vision and Image Understanding 167, 63–73 (2018)
  • (54) Wu, L., Wang, Y., Li, X., Gao, J.: Deep attention-based spatially recursive networks for fine-grained visual recognition. IEEE Trans. Cybernetics (2018)
  • (55) Wu, L., Wang, Y., Li, X., Gao, J.: What-and-where to match: Deep spatially multiplicative integration networks for person re-identification. Pattern Recognition 76, 727–738 (2018)
  • (56) Wu, L., Wang, Y., Shao, L.: Cycle-consistent deep generative hashing for cross-modal retrieval. CoRR abs/1804.11013 (2018)
  • (57) Wu, L., Wang, Y., Shepherd, J.: Efficient image and tag co-ranking: a bregman divergence optimization method. In: ACM Multimedia Conference, MM ’13, Barcelona, Spain, October 21-25, 2013, pp. 593–596 (2013)
  • (58) Xiao, C., Wang, W., Lin, X., Yu, J.X., Wang, G.: Efficient similarity joins for near-duplicate detection. ACM Trans. Database Syst. 36(3), 15:1–15:41 (2011)
  • (59) Zhang, C., Zhang, Y., Zhang, W., Lin, X.: Inverted linear quadtree: Efficient top k spatial keyword search. In: 29th IEEE International Conference on Data Engineering, ICDE 2013, Brisbane, Australia, April 8-12, 2013, pp. 901–912 (2013)
  • (60) Zhang, C., Zhang, Y., Zhang, W., Lin, X.: Inverted linear quadtree: Efficient top K spatial keyword search. IEEE Trans. Knowl. Data Eng. 28(7), 1706–1721 (2016)
  • (61) Zhang, C., Zhang, Y., Zhang, W., Lin, X., Cheema, M.A., Wang, X.: Diversified spatial keyword search on road networks. In: Proceedings of the 17th International Conference on Extending Database Technology, EDBT 2014, Athens, Greece, March 24-28, 2014, pp. 367–378 (2014)
  • (62) Zhang, D., Chan, C., Tan, K.: Processing spatial keyword query as a top-k aggregation query. In: The 37th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '14, Gold Coast, QLD, Australia, July 06-11, 2014, pp. 355–364 (2014)
  • (63) Zhang, D., Chee, Y.M., Mondal, A., Tung, A.K.H., Kitsuregawa, M.: Keyword search in spatial databases: Towards searching by document. In: Proceedings of the 25th International Conference on Data Engineering, ICDE 2009, March 29 2009 - April 2 2009, Shanghai, China, pp. 688–699 (2009)
  • (64) Zhang, D., Tan, K., Tung, A.K.H.: Scalable top-k spatial keyword search. In: Joint 2013 EDBT/ICDT Conferences, EDBT ’13 Proceedings, Genoa, Italy, March 18-22, 2013, pp. 359–370 (2013)
  • (65) Zhang, F., Song, Y., Cai, W., Hauptmann, A.G., Liu, S., Pujol, S., Kikinis, R., Fulham, M.J., Feng, D.D., Chen, M.: Dictionary pruning with visual word significance for medical image retrieval. Neurocomputing 177, 75–88 (2016)
  • (66) Zheng, K., Su, H., Zheng, B., Shang, S., Xu, J., Liu, J., Zhou, X.: Interactive top-k spatial keyword queries. In: 31st IEEE International Conference on Data Engineering, ICDE 2015, Seoul, South Korea, April 13-17, 2015, pp. 423–434 (2015)
  • (67) Zhong, R., Li, G., Tan, K., Zhou, L., Gong, Z.: G-tree: An efficient and scalable index for spatial search on road networks. IEEE Trans. Knowl. Data Eng. 27(8), 2175–2189 (2015)
  • (68) Zhu, Y., Cheng, S., Stanković, V., Stanković, L.: Image registration using BP-SIFT. Journal of Visual Communication and Image Representation 24(4), 448–457 (2013)