Revisiting k-Nearest Neighbor Graph Construction on High-Dimensional Data: Experiments and Analyses

12/04/2021
by Liu Yingfan, et al.

The k-nearest neighbor graph (KNNG) on high-dimensional data is a data structure widely used in many applications such as similarity search, dimension reduction, and clustering. Due to its increasing popularity, several methods under the same framework have been proposed in the past decade. This framework contains two steps: building an initial KNNG (the initialization step) and then refining it by neighborhood propagation (the propagation step). However, several questions remain to be answered. First, there is no comprehensive experimental comparison among the representative solutions in the literature. Second, some recently proposed indexing structures, e.g., SW and HNSW, have not been used or tested for building an initial KNNG. Third, the relationship between data properties and the effectiveness of the propagation step is still not clear. To address these issues, we comprehensively compare the representative approaches on real-world high-dimensional data sets to provide practical and insightful suggestions for users. As a first attempt, we take SW and HNSW as alternatives for the initialization step in our experiments. Moreover, we investigate the effectiveness of the propagation step and find a strong correlation between the hubness phenomenon and its performance.
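To illustrate the two-step framework the abstract describes, here is a minimal Python sketch: an initial KNNG is built crudely (random neighbors here, whereas the paper considers indexing structures such as SW and HNSW for this step), and is then refined by neighborhood propagation, i.e., treating neighbors of neighbors as candidate nearest neighbors. All function and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def build_initial_graph(data, k, rng):
    """Initialization step: give each point k random neighbors, sorted by distance."""
    n = len(data)
    graph = []
    for i in range(n):
        cand = rng.choice([j for j in range(n) if j != i], size=k, replace=False)
        graph.append(sorted(cand, key=lambda j: np.linalg.norm(data[i] - data[j])))
    return graph

def propagate(data, graph, k, iters=5):
    """Propagation step: a neighbor's neighbors are likely to be neighbors too."""
    n = len(data)
    for _ in range(iters):
        updated = False
        for i in range(n):
            candidates = set(graph[i])
            for j in graph[i]:
                candidates.update(graph[j])   # expand with neighbors of neighbors
            candidates.discard(i)
            best = sorted(candidates,
                          key=lambda j: np.linalg.norm(data[i] - data[j]))[:k]
            if best != graph[i]:
                graph[i] = best
                updated = True
        if not updated:                       # stop early once the graph is stable
            break
    return graph

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))            # toy high-dimensional data
    g = build_initial_graph(X, k=10, rng=rng)
    g = propagate(X, g, k=10)
```

The quality of the final graph depends heavily on how good the initialization is, which is why the choice of initialization structure (and its interaction with data properties such as hubness) is the focus of the paper's experiments.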

