Image Classification Based on Quantum KNN Algorithm

05/16/2018 ∙ by Yijie Dang, et al. ∙ Beijing University of Technology

Image classification is an important task in the field of machine learning and image processing. However, the commonly used classification method, the K-Nearest-Neighbor (KNN) algorithm, has high complexity, because its two main processes, similarity computing and searching, are time-consuming. Especially in the era of big data, the problem is prominent when the number of images to be classified is large. In this paper, we try to use the powerful parallel computing ability of quantum computers to optimize the efficiency of image classification. The scheme is based on the quantum K-Nearest-Neighbor algorithm. Firstly, the feature vectors of the images are extracted on classical computers. Then the feature vectors are input into a quantum superposition state, which is used to achieve parallel computing of similarity. Next, the quantum minimum search algorithm is used to speed up the search over the similarities. Finally, the image is classified by quantum measurement. The complexity of the quantum algorithm is only O(√(kM)), which is superior to the classical algorithms. Moreover, the measurement step is executed only once to ensure the validity of the scheme. The experimental results show that the classification accuracy is 83.1%, which is almost the same as that of existing classical algorithms. Hence, our quantum scheme has good classification performance while greatly improving the efficiency.


1 Introduction

In 1982, Feynman proposed a novel computation model named quantum computation. Due to the superposition and entanglement properties of quantum states, this computation model can efficiently solve some problems that are believed to be intractable on classical computers [1]. After that, many researchers devoted themselves to research on quantum computation. In particular, in 1994, Shor designed a quantum integer factoring algorithm that runs in polynomial time, achieving an exponential speedup over the classical algorithm [2]. In 1996, Grover's search algorithm achieved a quadratic speedup over any possible classical algorithm [3]. In addition, quantum image processing (QIP) is an important and rapidly developing sub-domain of quantum computation. Various quantum image representations have been proposed, such as the flexible representation of quantum images [4], the RGB multi-channel representation [5], the novel enhanced quantum representation [6], the generalized quantum image representation [7] and the Red-Green-Blue multi-channel quantum representation [8]. Some quantum image processing algorithms have been developed based on these representation schemes, such as quantum image scrambling [9,10], quantum image steganography [11,12], quantum image matching [13], a quantum binary image thinning algorithm [14], quantum watermarking [15,16], quantum image edge detection [17], quantum image motion detection [18,19], quantum image searching [20], a quantum image metric [21] and so on.

Image classification is an important task in the field of machine learning and image processing, and it is widely used in many fields, such as computer vision, network image retrieval and military automation target identification. The K-Nearest-Neighbor (KNN) algorithm is one of the most typical and simplest image classification methods. KNN's basic idea is that if the majority of the k nearest samples of an image in the feature space belong to a certain category, the image also belongs to this category. It has two core processes: similarity computing and nearest-sample searching. Since KNN requires no learning and training phases and avoids overfitting of parameters, it has good accuracy on classification tasks with many samples and few classes. Researchers have proposed many improved KNN algorithms [22,23,24,25,26,27,28,29,30]. However, KNN and its improved variants involve a large amount of computation and have high complexity. In particular, the complexity of the similarity computing process is O(M) and the complexity of the searching process is O(M log M), where M is the number of training images.

Recent progress implies that a crossover between machine learning and quantum information processing benefits both fields [31,32,33,34]. Quantum mechanics offers tantalizing prospects to enhance machine learning, ranging from reduced computational complexity to improved generalization performance. The most notable examples include quantum-enhanced algorithms for principal component analysis [35], quantum support vector machines [36], quantum Boltzmann machines [37], and so on [38,39,40]. Ruan proposed a global quantum feature extraction method based on Schmidt decomposition, and also proposed a revised quantum learning algorithm that classifies images by computing the Hamming distance of these features [41]. However, the features and distances used by this algorithm limit the classification effect. Chen proposed the quantum K-Nearest-Neighbor (QKNN) algorithm [42], which realizes KNN on quantum computers. QKNN uses quantum superposition states to achieve parallel computing of similarity and uses the quantum minimum search algorithm to speed up the searching process. Compared to classical algorithms, the complexity of the similarity computing process is reduced to a constant O(1) and the complexity of the searching process is reduced to O(√(kM)). Ref. [43] realized another QKNN algorithm based on the metric of Hamming distance; its complexity is O(√M). Although Refs. [42,43] show the process of QKNN and give the complexity analysis, they report experiments only on an image dataset of ten handwritten digits (0 to 9), not on natural image datasets.

In this paper, we propose an efficient natural image classification scheme based on QKNN. Firstly, the feature vectors of the images are extracted. Then the feature vectors are stored in a quantum superposition state, which is used to achieve parallel computing of similarity. Next, the quantum minimum search algorithm is used to speed up the search over the similarities. Finally, the image is classified by quantum measurement. Moreover, the measurement step is executed only once to ensure the validity of the scheme.

The rest of this paper is organized as follows. Firstly, our scheme is described. Then, complexity analysis and accuracy analysis are given respectively. Finally, we draw conclusions and outline possible future work.

2 Image classification

In this section, we show how to apply the QKNN algorithm to image classification and describe our solution.

2.1 Basic idea

Image classification is the process of assigning an image to a corresponding class according to certain rules with a high degree of confidence. The basic idea of our image classification scheme based on QKNN is shown in Fig. 1. Images that have already been classified are the training images, and the unclassified image is the test image. The test image is classified according to the training images. Firstly, the feature vectors of all the images, consisting of color features and texture features, are extracted on a classical computer. Then, the feature vectors are input into a quantum computer by a quantum state preparation process. Next, the distances between the test image and the training images, which describe their similarity, are computed in parallel by a quantum circuit and are stored in the amplitudes by applying the amplitude estimation algorithm. Then, Dürr's minimum finding algorithm [44] (described in detail in Step 4 of Section 2.2) is applied to get the k minimum distances from the quantum superposition states [32]. Finally, the indexes of the most similar images are obtained by measurement and the final classification result is produced by majority voting.

Figure 1: Algorithm flow chart.

2.2 Algorithm

Assume that all the images are RGB images. The complete dataset of the whole task consists of M training images and one test image, where M is the number of training images.

Step 1 Extract features of images.

In this step, feature vectors (including color features and texture features) of the test image and the training images are extracted. The process of feature extraction is divided into three sub-steps.

Step 1.1 Extract color features of images.

Each image is transformed from the RGB (red, green, blue) color model to the HSB model, where h is the hue, s is the saturation and b is the brightness. Then, according to human visual characteristics, the h space is divided into 8 levels and the s and b spaces are divided into 3 levels each based on Eq. (1), where H, S and B are the quantized values of h, s and b. A commonly used non-uniform quantization is:

H = 0 if h ∈ (315, 360] ∪ [0, 20], 1 if h ∈ (20, 40], 2 if h ∈ (40, 75], 3 if h ∈ (75, 155], 4 if h ∈ (155, 190], 5 if h ∈ (190, 270], 6 if h ∈ (270, 295], 7 if h ∈ (295, 315];
S = 0 if s ∈ [0, 0.2], 1 if s ∈ (0.2, 0.7], 2 if s ∈ (0.7, 1];
B = 0 if b ∈ [0, 0.2], 1 if b ∈ (0.2, 0.7], 2 if b ∈ (0.7, 1].   (1)

A one-dimensional feature value G is synthesized from the 3 components:

G = Q_S Q_B H + Q_B S + B,   (2)

where Q_S and Q_B are the weights of S and B, respectively. The larger the values of Q_S and Q_B, the higher the accuracy of the quantization, but the longer the classification time. In the experiments of Ref. [45], the effect is better when Q_S = 3 and Q_B = 3. Thus, G = 9H + 3S + B, with G ∈ {0, 1, …, 71}.

Then, by calculating the histogram of G over all pixels, the 72-dimensional color feature vector is obtained. In order to eliminate the difference in the sizes of images, the histogram is normalized.
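To make Step 1.1 concrete, the following Python sketch (our illustration, not part of the original scheme) computes the 72-bin color histogram, assuming the non-uniform bin boundaries of Eq. (1); colorsys treats HSB as HSV.

```python
import numpy as np
import colorsys

def quant_h(h):  # hue in degrees -> 8 levels
    if h <= 20 or h > 315:
        return 0
    for level, upper in enumerate([40, 75, 155, 190, 270, 295, 315], start=1):
        if h <= upper:
            return level

def quant_3(x):  # saturation or brightness in [0, 1] -> 3 levels
    return 0 if x <= 0.2 else (1 if x <= 0.7 else 2)

def color_feature(rgb):
    """72-bin color histogram; rgb is a (height, width, 3) float array in [0, 1]."""
    hist = np.zeros(72)
    for r, g, b in rgb.reshape(-1, 3):
        h, s, v = colorsys.rgb_to_hsv(r, g, b)   # HSB and HSV coincide
        G = 9 * quant_h(h * 360) + 3 * quant_3(s) + quant_3(v)  # Eq. (2)
        hist[G] += 1
    return hist / hist.sum()   # normalize to remove the effect of image size
```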

Step 1.2 Extract texture features of images.

Each image is converted to a gray-scale image and the gray-level co-occurrence matrix (GLCM) is computed in four directions: 0°, 45°, 90° and 135°. Then the means and the variances over the four directions of the contrast, correlation, energy and entropy compose the 8-dimensional texture feature vector. Since the physical meaning of each value is different and their magnitudes vary, the vector is normalized, which yields the texture feature vector.
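A hedged sketch of Step 1.2, assuming scikit-image's graycomatrix/graycoprops for the GLCM; entropy is not provided by graycoprops, so it is computed directly from the matrix.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_feature(gray):
    """8 texture statistics; gray is a 2-D uint8 array."""
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]   # the four GLCM directions
    glcm = graycomatrix(gray, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)
    feats = []
    for prop in ('contrast', 'correlation', 'energy'):
        vals = graycoprops(glcm, prop)[0]               # one value per direction
        feats += [vals.mean(), vals.var()]
    p = glcm[:, :, 0, :]                                # shape (256, 256, 4)
    ent = -np.sum(p * np.log2(p, where=p > 0, out=np.zeros_like(p)), axis=(0, 1))
    feats += [ent.mean(), ent.var()]
    return np.array(feats)   # mean and variance of 4 statistics = 8 values
```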

Step 1.3 Combine texture feature vector and color feature vector.

For the sake of simplicity, the color feature vector and the texture feature vector are combined into a one-dimensional vector. Hence, u denotes the feature vector of the test image and v_i denotes the feature vector of the i-th training image.
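Putting Step 1 together (reusing color_feature and texture_feature from the sketches above; the exact normalization strategy is our assumption):

```python
import numpy as np

def image_feature(rgb, gray):
    color = color_feature(rgb)                     # 72-dimensional, already normalized
    texture = texture_feature(gray)
    texture = texture / np.linalg.norm(texture)    # rescale: statistics have mixed units
    v = np.concatenate([color, texture])           # 80-dimensional feature vector
    return v / np.linalg.norm(v)                   # unit norm, as Step 3 requires
```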

Step 2 Store feature vectors to quantum state.

This step stores the feature vectors v_i and u into the quantum states |V⟩ and |U⟩ respectively [46,47,48]:

|U⟩ = (1/√N) Σ_{j=0}^{N−1} |j⟩ (√(1 − u_j²)|0⟩ + u_j|1⟩),   (3)

|V⟩ = (1/√(MN)) Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} |i⟩|j⟩ (√(1 − v_{ij}²)|0⟩ + v_{ij}|1⟩),   (4)

where N is the dimension of the feature vectors.

The preparation consists of two main stages: prepare a number of initial qubits and then store the feature vectors. We describe the preparation of |V⟩ first; from Eqs. (3) and (4), it is obvious that the preparation of |U⟩ is similar to, and easier than, that of |V⟩. The complete preparation process is as follows.

For the index register, an initial superposition state is prepared by applying Hadamard (H) gates to n = ⌈log₂ M⌉ qubits initialized to |0⟩, as shown in Fig. 2, which yields (1/√(2ⁿ)) Σ_{i=0}^{2ⁿ−1} |i⟩. Due to the fact that M may not always be a power of 2, a judgement is needed:

|i⟩|0⟩ → |i⟩|f(i)⟩,  where f(i) = 1 if i < M and f(i) = 0 otherwise,   (5)

where the register |i⟩ stores the number i in binary. The judgement is realized by the QCMP (Quantum Comparator [49]) module shown in Fig. 2. QCMP determines whether i is smaller or bigger than M, and the last two qubits are flag qubits that store the comparison results. Only if the two flags indicate i < M does the index satisfy i ∈ {0, …, M−1}, i.e., |i⟩ is meaningful. The probability of this is M/2ⁿ. For more details on QCMP, please refer to Ref. [49].
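As a classical sanity check of this preparation, the hedged NumPy sketch below mimics the circuit: Hadamard gates give a uniform superposition over 2^n basis states, the comparator flags the indexes below M, and post-selecting on the flag leaves a uniform superposition over exactly M indexes with success probability M/2^n.

```python
import numpy as np

M, n = 10, 4                                  # M training images, n = ceil(log2(M))
amp = np.full(2 ** n, 1 / np.sqrt(2 ** n))    # state after H gates on |0...0>
flag = np.arange(2 ** n) < M                  # QCMP flag: valid iff index i < M
p_valid = np.sum(amp[flag] ** 2)              # success probability M / 2**n
state = np.where(flag, amp, 0) / np.sqrt(p_valid)   # post-selected state
assert np.allclose(state[:M], 1 / np.sqrt(M))       # uniform over the M indexes
```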

Figure 2: Quantum circuit of preparing the initial superposition state .

The initial superposition state of the feature register |j⟩ is prepared by the same procedure as that of the index register |i⟩. After the above steps, the initial quantum state is prepared:

|ψ₀⟩ = (1/√(MN)) Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} |i⟩|j⟩|0⟩.   (6)

Next, the feature value is set by using the rotation matrix R and the controlled rotation gate CR:

R(θ) = [cos θ, −sin θ; sin θ, cos θ],   (7)

CR = Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} |i⟩⟨i| ⊗ |j⟩⟨j| ⊗ R(θ_{ij}).   (8)

By defining θ_{ij} = arcsin(v_{ij}), we can get

R(θ_{ij})|0⟩ = √(1 − v_{ij}²)|0⟩ + v_{ij}|1⟩.   (9)

It is denoted as

|v_{ij}⟩ = R(θ_{ij})|0⟩,   (10)

and denote the inverse transformation of QCMP as QCMP†.

Then, by acting CR on the last qubit of |ψ₀⟩, we can get

CR|ψ₀⟩ = (1/√(MN)) Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} |i⟩|j⟩|v_{ij}⟩.   (11)

This process is denoted as U_CR, where R is a unitary operation which rotates around the y axis of the Bloch sphere:

U_CR : |i⟩|j⟩|0⟩ → |i⟩|j⟩|v_{ij}⟩.   (12)

Finally, apply QCMP† to clear the auxiliary flag qubits and call the resulting state |V⟩, which is exactly Eq. (4).

The preparation of |U⟩ is similar to that of |V⟩. Firstly, an initial superposition state over the feature index j is prepared using Hadamard (H) gates as shown in Fig. 2. Then apply QCMP, CR and QCMP† to the initial superposition state in turn, where θ_j = arcsin(u_j) and the controlled rotation acts on the last qubit.
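The net effect of Step 2 can be verified numerically. The following NumPy sketch builds the amplitude vector of |V⟩ as reconstructed in Eq. (4); it is a state-vector simulation, not a gate-level circuit.

```python
import numpy as np

def encode_V(V):
    """Amplitude vector of Eq. (4); V is an (M, N) array of values in [0, 1]."""
    M, N = V.shape
    state = np.zeros((M, N, 2))            # registers |i>|j>|ancilla>
    state[:, :, 0] = np.sqrt(1 - V ** 2)   # amplitude on ancilla |0>
    state[:, :, 1] = V                     # amplitude on ancilla |1>
    return state.reshape(-1) / np.sqrt(M * N)

V = np.random.rand(10, 80)                 # M = 10 vectors of dimension N = 80
psi = encode_V(V)
assert np.isclose(np.linalg.norm(psi), 1.0)   # a valid normalized quantum state
```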

Step 3 Compute distances.

This step computes the distances between the test image and the training images by applying the controlled swap gate [50,51].

Fig. 3 shows the quantum circuit, in which the controlled swap gate implements the operation |1⟩|φ⟩|ψ⟩ → |1⟩|ψ⟩|φ⟩, i.e., the two data registers are exchanged when the control qubit is |1⟩. Firstly, the auxiliary qubit |0⟩ is mapped to (|0⟩ + |1⟩)/√2 by a Hadamard gate. Then this superposition state acts as the control qubit of the swap gate, which gives (1/√2)(|0⟩|U⟩|V⟩ + |1⟩|V⟩|U⟩). Finally, after the auxiliary qubit passes through the H gate again, the quantum state changes to

(1/2)|0⟩(|U⟩|V⟩ + |V⟩|U⟩) + (1/2)|1⟩(|U⟩|V⟩ − |V⟩|U⟩).   (13)
Figure 3: Controlled swap gate.

The probability that the auxiliary qubit is measured in |0⟩ is denoted as P(0). Thus,

P(0) = 1/2 + (1/2)|⟨U|V⟩|².   (14)

The feature vectors are normalized so that ∥u∥ = 1 and ∥v_i∥ = 1. Hence, when the index register takes a particular value i,

P(0)_i = 1/2 + (1/2)⟨u, v_i⟩²,   (15)

where ⟨u, v_i⟩ is the inner product of the vectors u and v_i. We use D_i = 1 − P(0)_i = 1/2 − (1/2)⟨u, v_i⟩² to represent the similarity between the vectors, and call it the distance. Hence, the larger the inner product of the two vectors, the smaller D_i and the more similar the two images.

Thus, as shown in Fig. 3, by acting the controlled swap gate on |U⟩ and |V⟩, we can get

(1/√M) Σ_{i=0}^{M−1} |i⟩ (√(1 − D_i)|0⟩|g_i⟩ + √(D_i)|1⟩|g'_i⟩),   (16)

where |g_i⟩ and |g'_i⟩ denote the corresponding normalized states of the data registers.

Then we use the Amplitude Estimation (AE) [52] algorithm to transfer the distance information into qubits. Due to space limitations, please refer to Ref. [52] for the quantum circuit, the theoretical proof, and more details about AE. This process uses P iterations of the Grover operator and the estimation error is less than ε, where P and ε satisfy P = O(1/ε).

AE gives the quantum state that stores the similarity information:

(1/√M) Σ_{i=0}^{M−1} |i⟩|D_i⟩.   (17)
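A numerical check of Eqs. (14) and (15): the hedged sketch below simulates the swap-test ancilla measurement for two normalized feature vectors and compares the observed frequency of |0⟩ with 1/2 + ⟨u, v⟩²/2.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.random(80); u /= np.linalg.norm(u)     # normalized feature vectors
v = rng.random(80); v /= np.linalg.norm(v)

p0 = 0.5 + 0.5 * np.dot(u, v) ** 2             # Eq. (15): swap-test statistics
d = 1 - p0                                     # the distance D, in [0, 0.5]

samples = rng.random(100_000) < p0             # simulated ancilla measurements
print(abs(samples.mean() - p0) < 0.01, d)      # frequency matches P(0)
```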

Step 4 Search for minimum distances.

In this step, we apply Dürr's algorithm [44,53] to the state of Eq. (17) to find the minimum distances. Ref. [44] shows that this algorithm returns the minimum after O(√M) iterations, and the k smallest values can be found after O(√(kM)) iterations [53]. Fig. 4 shows the circuit model of this process. The searching process is as follows.

1) Define y₁, …, y_k as the indexes of k images currently regarded as similar to the test image. Initially, the indexes are selected randomly.

2) Use Grover's algorithm to find a quantum state |i⟩ which satisfies D_i < D_y, where y is the candidate index with the largest distance. After a finite number of Grover iterations, we can get an index i which satisfies this condition. That is to say, the training image i is more similar to the test image than the image with index y.

Figure 4: Circuit model of search process.

3) Set y to i, i.e., replace the candidate y whose distance D_y is the largest.

4) Repeat 2) and 3), and interrupt after O(√(kM)) iterations.

In each iteration, one of the indexes stored in the qubits is replaced by an index with a smaller distance value. Finally, the indexes of the k images that are most similar to the test image are stored in the auxiliary qubits.
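The search logic of Step 4 can be simulated classically, as in the hedged sketch below: the Grover subroutine is replaced by uniform sampling among the indexes that beat the current worst candidate (which is exactly what the oracle marks); only the iteration count, O(√(kM)) on a quantum device, differs.

```python
import random

def qknn_min_search(distances, k, iterations):
    """Classically simulate Durr-style search for the k smallest distances."""
    M = len(distances)
    chosen = random.sample(range(M), k)          # 1) random initial indexes
    for _ in range(iterations):                  # 4) bounded number of iterations
        worst = max(chosen, key=lambda i: distances[i])
        better = [i for i in range(M)
                  if distances[i] < distances[worst] and i not in chosen]
        if not better:                           # current k indexes are minimal
            break
        chosen[chosen.index(worst)] = random.choice(better)   # 2) and 3)
    return chosen
```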

Step 5 Measure and classify.

Measure the index register to obtain the search results, which are the indexes of the k images most similar to the test image. According to the basic idea of the KNN algorithm, the test image is classified into the class to which the majority of these samples belong. If more than one class attains the maximum count, choose the first. In this step, the measurement is executed only once.
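A sketch of the voting rule of Step 5, reusing qknn_min_search from above on the (rounded) distances of Table 2 with k = 3, as in Section 4.1:

```python
from collections import Counter

labels = ['airplanes'] * 5 + ['Leopards'] * 5
distances = [0.0349, 0.0228, 0.0524, 0.0897, 0.1270,
             0.4784, 0.4749, 0.4703, 0.4749, 0.4852]   # Table 2, rounded

nearest = qknn_min_search(distances, k=3, iterations=100)
votes = Counter(labels[i] for i in nearest)
print(votes.most_common(1)[0][0])   # 'airplanes'; ties resolve to the first class
```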

3 Complexity Analysis

Our scheme consists of two processes: classical feature extraction and quantum classification based on QKNN, in which the latter is the main part.

In the classical part, the complexity of extracting color features, the complexity of extracting texture features and the complexity of combining the features are each linear in the number of pixels. Thus the complexity of Step 1 is linear in the image size.

We focus on the complexity of the quantum part from the preparation of quantum feature vectors to searching for the minimum.

Some steps are based on oracles. In order to discuss the complexity uniformly, we regard one oracle call as the basic unit. The complexity of each step is analyzed as follows.

Step 2: the preparation of |U⟩ and |V⟩ consists of the initialization, QCMP, CR and QCMP†. The initialization uses some H gates in parallel, one CNOT gate and one quantum comparator; hence its complexity is 3. QCMP, CR and QCMP† are regarded as 3 Oracles. This step prepares two quantum states, so it needs 6 Oracles. Hence the time complexity is O(1), which is a constant.

Step 3: computing the distance requires one controlled swap gate, whose complexity is O(1). Ref. [52] indicates that AE needs P iterations of the Grover operator and one Grover operator needs 12 Oracles. Thus the complexity of this process is O(P), which is a constant determined by the desired precision.

Step 4: according to the previous description and the conclusions of Refs. [44,53], the complexity of this step is O(√(kM)).

So the total complexity of the quantum part is

O(1) + O(P) + O(√(kM)) = O(√(kM)).   (18)

Established classical image classification algorithms also contain the feature extraction step, which is the same as Step 1 of our scheme; that is to say, their complexities are the same. Hence, in the following, we compare the quantum algorithm and the classical algorithm only on the two core processes: similarity computing and searching.

  1. If the effect of the dimension of the feature vector is not taken into account and one distance computation is viewed as a unit, the complexity of the classical similarity computation is O(M). Correspondingly, the quantum algorithm only uses one controlled swap gate, whose complexity is the constant O(1). This shows that our algorithm achieves an acceleration from linear complexity to constant complexity.

  2. The complexity of searching based on a sorting algorithm is not less than O(M log M) on classical computers. However, the complexity of the quantum search process is O(√(kM)).

We make a detailed comparison between the classical search complexity O(M log M) and the quantum search complexity O(√(kM)). Considering the effect of k on the experimental accuracy, several values of k are compared. The result is shown in Fig. 5. Whatever value k takes, the complexity of the classical search process is higher than that of the quantum one. Moreover, the larger M is, the greater the quantum advantage. Therefore, the quantum search algorithm achieves a speedup.
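As a quick numerical illustration of this gap (the constants are illustrative only):

```python
import math

for M in (10 ** 3, 10 ** 6, 10 ** 9):
    for k in (1, 5, 10):
        classical = M * math.log2(M)        # sorting-based search, M log M
        quantum = math.sqrt(k * M)          # Durr-style search, sqrt(kM)
        print(f"M={M:>10}, k={k:>2}: ratio = {classical / quantum:,.0f}")
```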

Figure 5: Complexity comparison between classical and quantum algorithms in the search process. M is the number of training images.

In addition, quantum KNN has two more processes than KNN: quantum state preparation and AE. These two processes have only constant complexity; hence, they are not taken into account.

To sum up, our algorithm significantly reduces the complexity of image classification based on KNN algorithm.

4 Simulation-based experiments

Section 4.1 gives a simple experiment with 10 images to further show the details of our algorithm. Sections 4.2 and 4.3 use two widely used image sets, Graz-01 and Caltech-101, to demonstrate the accuracy of the algorithm; the former has 833 images in 2 classes and the latter has 2921 images in 9 classes.

4.1 A simple experiment

Caltech-101 is a dataset of digital images that is intended to facilitate computer vision research, and it is most applicable to techniques involving image recognition, classification and categorization [54]. Ten RGB images, five from the airplanes class and five from the Leopards class, make up the training dataset shown in Fig. 6. The test image is shown in Fig. 7.

Figure 6: Training images
Figure 7: Test image

Firstly, the feature vectors of all the images are extracted according to Step 1. Considering the high dimensionality of the vectors (80), only four representative components are shown in Table 1 for brevity.

Image Index Four representative feature components
Test image 0 0.00002633 0.00078124 0 0.09610999
airplanes_1 1 0.00008357 0.00005014 0 0.11966605
airplanes_2 2 0.00492092 0.00180055 0 0
airplanes_3 3 0.00196011 0.00013930 0 0.06067670
airplanes_4 4 0.00199697 0.00004851 0 0.09119511
airplanes_5 5 0.00000771 0.00006166 0 0.18334765
Leopards_1 6 0.00085182 0 0.00518836 0.59927777
Leopards_2 7 0.00110141 0 0.01474394 0.57039998
Leopards_3 8 0.00043356 0 0.01250811 0.53275436
Leopards_4 9 0.00127659 0 0.01514383 0.57050397
Leopards_5 10 0.00089379 0 0.01117236 0.56351601
Table 1: Representative part of the feature vectors.

In Step 2, the feature vectors of the 10 training images are stored in the quantum state |V⟩ and the feature vector of the test image is stored in the quantum state |U⟩. Since each feature vector has dimension 80 and there are 10 training images, N = 80 and M = 10. That is to say, in this example the states are obtained by substituting M = 10 and N = 80 into Eq. (3) and Eq. (4).

|U⟩ and |V⟩ are the inputs of the controlled swap gate in Step 3, and we can get the distances between the test image and the training images, which are stored as in Eq. (19), the instance of Eq. (16) with M = 10:

(1/√10) Σ_{i=1}^{10} |i⟩ (√(1 − D_i)|0⟩|g_i⟩ + √(D_i)|1⟩|g'_i⟩),   (19)

where D_i is the distance between the test image and the i-th training image. The actual data are shown in Table 2.

The distance values are stored in the quantum state of Eq. (20) by acting AE on the state of Eq. (19):

(1/√10) Σ_{i=1}^{10} |i⟩|D_i⟩.   (20)
Image Ranking Distance
airplanes_1 2 0.0349247032450856
airplanes_2 1 0.0227572679900773
airplanes_3 3 0.0524175219266468
airplanes_4 4 0.0896693757206786
airplanes_5 5 0.126960495310553
Leopards_1 9 0.478449921552929
Leopards_2 8 0.474949680247898
Leopards_3 6 0.470301163393825
Leopards_4 7 0.474904076167589
Leopards_5 10 0.485184863755358
Table 2: Distances between the test image and the training images.

In Step 4, by supposing k = 3, the initial quantum state is prepared with three randomly selected candidate indexes. After O(√(kM)) iterations of Dürr's algorithm, the k = 3 smallest distances are found and their indexes are stored in the auxiliary qubits.

Through measuring, the results are the binary strings 0010, 0001 and 0011, i.e., the decimal numbers 2, 1 and 3 respectively, which are exactly the indexes of the three smallest distances in Table 2. Thus, the training images most similar to the test image are airplanes_2.jpg, airplanes_1.jpg and airplanes_3.jpg. Since these three training images belong to the airplanes class, the test image is also classified into the airplanes class.

4.2 Experiment on Graz-01 dataset

The Graz-01 database [55] has two object classes (bikes and persons) and one background class. It is widely used in image classification tasks to compare the accuracy and effectiveness of different methods. We apply our algorithm to this database to demonstrate its performance. Due to the characteristics of KNN, we remove the background class of the database; thus the experimental dataset contains the two object classes (bikes and persons) with 833 images in total. The simulation results are shown in Fig. 8.

Figure 8: Performance on the Graz-01 dataset.

In the experiments, the maximum accuracy is 83.1%, obtained when 90% of the data is used for training. The average accuracy reaches 75.3%, and it is not less than 70% even if the training data is relatively small. This indicates that the algorithm has advantages even with little training data.

We compare the accuracy of our scheme with the experimental data in Refs. [29,30] because their hypothetical conditions are close to ours. Table 3 shows that the accuracy on Bikes with our algorithm is lower than that of the other two algorithms, while the accuracy on People is higher. Overall, the accuracy of our algorithm is almost the same as that of the other algorithms and is acceptable. This shows that our algorithm improves efficiency while ensuring acceptable accuracy.

Class QKNN Opelt [29] Lazebnik [30]
Bikes 77.0 86.5 86.3
People 84.8 80.8 82.3
Table 3: Comparison of the accuracy (%) of different algorithms.

4.3 Experiment on Caltech-101 dataset

Caltech-101 has 101 classes (animals, furniture, flowers, etc.) [54]. Because KNN is suitable for classification tasks with many samples and few classes, we choose the 9 classes which have more than 100 samples each. Hence our dataset contains 9 classes (airplanes, bonsai, chandelier, etc.) with 2921 images. Fig. 9 shows the results, in which the accuracy reaches 78% when the training ratio, i.e., the proportion of training data to all data, is 90%. Similar to the previous experiment, the accuracy increases with the training proportion.

Figure 9: Performance on the Caltech-101 dataset.

Since quantum machine learning is a new research field and existing QKNN studies have not reported experiments on natural images, comparison data in the quantum field is lacking. Hence, our experimental data is compared only with that of classical algorithms. Through the simulation experiments on the Graz-01 and Caltech-101 datasets, the quantum scheme maintains good accuracy while greatly improving the efficiency, and it has acceptable accuracy even if the training ratio is low.

5 Conclusions

This paper uses the powerful parallel computing ability of quantum computers to optimize the efficiency of image classification. The scheme is based on quantum KNN. Firstly, the feature vectors of the images are extracted on classical computers. Then the feature vectors are input into a quantum superposition state, which is used to achieve parallel computing of similarity. Next, the quantum minimum search algorithm is used to speed up the search over the similarities. Finally, the image is classified by quantum measurement. The complexity of the quantum algorithm is only O(√(kM)), which is superior to the classical algorithms. The measurement step is executed only once to ensure the validity of the scheme. Moreover, our quantum scheme has good classification performance while greatly improving the efficiency.

Several directions remain for future work. Firstly, QKNN is not the only solution for image classification; classification schemes based on more advanced and more subtle machine learning algorithms should be studied to improve the accuracy and the range of application. Secondly, the feature extraction method in our scheme is relatively simple and is done on classical computers. Hence, optimizing feature extraction algorithms and designing quantum implementations of them is another piece of future work.

References

  • (1) Feynman, R.P.:Simulating physics with computers. Int. J. Theor. Phys. 21(6), 467-488 (1982)
  • (2) Shor, P.W.:Algorithms for quantum computation: discrete logarithms and factoring. IEEE Computer Society, 124-134 (1994)
  • (3) Grover, L.K.:A fast quantum mechanical algorithm for database search. ACM. 212-219 (1996)
  • (4) Le, P.Q., Dong, F., Hirota, K.:A flexible representation of quantum images for polynomial preparation, image compression, and processing operations. Quantum Inf. Process. 10(1), 63 (2011)
  • (5) Sun, B., Iliyasu, A.M., Yan, F., Dong, F., Hirota, K.:An RGB multi-channel representation for images on quantum computers. JACIII. 17(3), 404-417 (2013)
  • (6) Zhang, Y., Lu, K., Gao, Y.H., Wang, M.:NEQR: a novel enhanced quantum representation of digital images. Quantum Inf. Process. 12(8), 2833-2860 (2013)
  • (7) Jiang, N., Wang, J., Mu, Y.:Quantum image scaling up based on nearest-neighbor interpolation with integer scaling ratio. Quantum Inf. Process. 14(11), 1-26 (2015)
  • (8) Abdolmaleky, M., Naseri, M., Batle, J., Farouk, A., Gong, L.H.:Red-Green-Blue multi-channel quantum representation of digital images. Optik 128, 121-132 (2017)
  • (9) Jiang, N., Wang, L., Wu, W.Y.:Quantum Hilbert image scrambling. Int. J. Theor. Phys. 53(7), 2463-2484 (2014)
  • (10) Beheri, M.H., Amin, M., Song, X.H., El-Latif, A.A.A.:Quantum image encryption based on scrambling-diffusion (SD) approach. International Conference on Frontiers of Signal Processing. IEEE, 43-47 (2017)
  • (11) Jiang, N., Zhao, N., Wang, L.:LSB based quantum image steganography algorithm. Int. J. Theor. Phys. 55(1), 107-123 (2016)
  • (12) Al-Salhi, Y.E.A., Lu, S.F.:Quantum image steganography and steganalysis based on LSQu-Blocks image information concealing algorithm. Int. J. Theor. Phys. 55(8), 3722-3736 (2016)
  • (13) Jiang, N., Dang, Y.J., Wang, J.:Quantum image matching. Quantum Inf. Process. 15(9), 3543-3572 (2016)
  • (14) Naseri, M., Heidari, S., Gheibi, R., Gong, L.H., Rajii, M.A., Sadri, A.:A novel quantum binary images thinning algorithm: A quantum version of the Hilditch’s algorithm. Optik, 131 (2017)
  • (15) Heidari, S., Naseri, M.:A novel LSB based quantum watermarking. Int. J. Theor. Phys. 55(10), 4205-4218 (2016)
  • (16) Naseri, M., Heidari, S., Baghfalaki, M., fatahi, N., Gheibi, R., Batle, J., Farouk, A., Habibi, A.:A new secure quantum watermarking scheme. Optik - International Journal for Light and Electron Optics 139, 77-86 (2017)
  • (17) Yao, X.W., Wang, H., Liao, Z., Chen, M.C., Pan, J., Li, J., Zhang, K., Lin, X., Wang, Z., Luo, Z.:Quantum image processing and its application to edge detection: Theory and experiment. Phys. Rev. X. 7(3), (2017)
  • (18) Yan, F., Iliyasu, A.M., Khan, A.R, Yang, H.:Measurements-based moving target detection in quantum video. Int. J. Theor. Phys. 55(4), 2162-2173 (2016)
  • (19) Pan, J.S., Tsai, P.W., Huang, H.C.:Advances in intelligent information hiding and multimedia signal processing. Smart Innovation Systems and Technologies 81 (2017)
  • (20) Yan, F., Iliyasu, A.M., Fatichah, C., Tangel, M.L., Betancourt, J.P., Dong, F., Hirota, K.:Quantum image searching based on probability distributions. Journal of Quantum Information Science 2(3), 55-60 (2012)
  • (21) Iliyasu, A.M., Yan, F., Hirota, K.:Metric for estimating congruity between quantum images. Entropy 18(10), 360 (2016)
  • (22) Fukunaga, K., Narendra, P.M.:A branch and bound algorithm for computing K-nearest neighbors. IEEE Transactions on Computers C-24(7), 750 (1975)
  • (23) Beckmann, N., Kriegel, H.P., Schneider, R., Seeger, B.:The R*-tree: An efficient and robust access method for points and rectangles, SIGMOD Rec. 19(2), 322-331 (1990)
  • (24) White, D.A., Jain, R.:Similarity indexing with the SS-tree. Proceedings of the Twelfth International Conference on Data Engineering, 516-523 (1996)
  • (25) Katayama, N., Satoh, S.:The SR-tree: An index structure for high-dimensional nearest neighbor queries. SIGMOD Rec. 26(2), 369-380 (1997)
  • (26) Goodsell, G.:On finding P-th nearest neighbours of scattered points in two dimensions for small p. Computer Aided Geometric Design 17(4), 387-392 (2000)
  • (27) Piegl, L.A., Tiller, W.:Algorithm for finding all k nearest neighbors. Computer-Aided Design 34(2), 167 (2002)
  • (28) Boiman, O., Shechtman, E., Irani, M.:In defense of nearest-neighbor based image classification. 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1-8 (2008)
  • (29) Opelt, A., Fussenegger, M., Pinz, A., Auer, P.:Weak hypotheses and boosting for generic object detection and recognition. European Conference on Computer Vision, 71-84 (2004)
  • (30) Lazebnik, S., Schmid, C., Ponce, J.:Beyond bags of features: spatial pyramid matching for recognizing natural scene categories. Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 2, 2169-2178 (2006)
  • (31) Sokolova, M., Lapalme, G.:A systematic analysis of performance measures for classification tasks. Inform. Process. Manag. 45(4), 427-437 (2009)
  • (32) Schuld, M., Sinayskiy, I., Petruccione, F.:An introduction to quantum machine learning. Contemp. Phys. 56(2), 172-185 (2015)
  • (33) Biamonte, J., Wittek, P., Pancotti, N., Rebentrost, P., Wiebe, N., Lloyd, S.:Quantum machine learning. Nature 549(7671), 195 (2017)
  • (34) Iliyasu, A.M., Fatichah, C.:A quantum hybrid PSO combined with fuzzy k-NN approach to feature selection and cell classification in cervical cancer detection. Sensors 17(12), 2935 (2017)
  • (35) Lloyd, S., Mohseni, M., Rebentrost, P.:Quantum principal component analysis. Nat. Phys. 10(9), 631-633 (2014)
  • (36) Rebentrost, P., Mohseni, M., Lloyd, S.:Quantum support vector machine for big data classification. Phys. Rev. Lett. 113(13), 130503 (2014)
  • (37) Kulchytskyy, B., Andriyash, E., Amin, M., Melko, R.:Quantum Boltzmann machine. arXiv preprint (2016)
  • (38) Aïmeur, E., Brassard, G., Gambs, S.:Machine learning in a quantum world. International Conference on Advances in Artificial Intelligence: Canadian Society for Computational Studies of Intelligence. Springer-Verlag, 431-442 (2006)
  • (39) Dong, D., Chen, C., Li, H., Tarn, T.J.:Quantum reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 38(5), 1207-1220 (2008)
  • (40) Lloyd, S., Mohseni, M., Rebentrost, P.:Quantum algorithms for supervised and unsupervised machine learning. arXiv:1307.0411 (2013)
  • (41) Ruan, Y., Chen, H.W., Tan, J., Li, X.:Quantum computation for large-scale image classification. Quantum Inf. Process. 15(10), 4049-4069 (2016).
  • (42) Chen, H., Gao, Y., Zhang, J.:Quantum k-nearest neighbor algorithm. Dongnan Daxue Xuebao 45(4), 647-651 (2015)
  • (43) Ruan, Y., Xue, X., Liu, H., Tan, J., Li, X.:Quantum algorithm for k-nearest neighbors classification based on the metric of Hamming distance. Int. J. Theor. Phys. 56(11), 3496-3507 (2017)
  • (44) Dürr, C., Høyer, P.:A quantum algorithm for finding the minimum. Computer Science (1999)
  • (45) Chen, T.S.:Comparison and application of image classification. Beijing University of Posts and Telecommunications (2011)
  • (46) Aharonov, D., Ta-Shma, A.:Adiabatic quantum state generation and statistical zero knowledge. Proceedings of the thirty-fifth annual ACM Symposium on Theory of Computing, 20-29 (2003)
  • (47) Childs, A.M., Cleve, R., Deotto, E., Farhi, E., Gutmann, S., Spielman, D.A.:Exponential algorithmic speedup by a quantum walk. Proceedings of the thirty-fifth annual ACM Symposium on Theory of Computing, 59-68 (2003)
  • (48) Wiebe, N., Berry, D., Høyer, P., Sanders, B.:Simulating quantum dynamics on a quantum computer. Journal of Physics A Mathematical and Theoretical 44(44), 3096-3100 (2010)
  • (49) Wang, D., Liu, Z.H.:Design of quantum comparator based on extended general Toffoli gates with multiple targets. Comput. Sci. 39(9), 302-306 (2012)
  • (50) Bennett, C.H.:Logical reversibility of computation. IBM Journal of Research and Development 17(6), 525-532 (1973)
  • (51) Buhrman, H., Cleve, R., Watrous, J., De, W.R.:Quantum fingerprinting. Phys. Rev. Lett. 87(16), 167902 (2001)
  • (52) Brassard, G., Høyer, P., Mosca, M., Tapp, A.:Quantum amplitude amplification and estimation. Contemporary Mathematics 305, 53-74 (2002)
  • (53) Dürr, C., Heiligman, M., Høyer, P., Mhalla, M.:Quantum query complexity of some graph problems. Siam Journal on Computing 35(6), 1310-1328 (2004)
  • (54) Li, F.F., Rob, F., Pietro, P.:Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. Computer Vision and Image Understanding 106(1), 59-70 (2007)
  • (55) Arya, S., Mount, D.M., Netanyahu, N.S., Silverman, R., Wu, A.Y.:An optimal algorithm for approximate nearest neighbor searching fixed dimensions. J. ACM 45(6), 891-923 (1998)