Achieving the time of 1-NN, but the accuracy of k-NN

12/06/2017
by Lirong Xue, et al.

We propose a simple approach which, given distributed computing resources, can nearly achieve the accuracy of k-NN prediction, while matching (or improving) the faster prediction time of 1-NN. The approach consists of aggregating denoised 1-NN predictors over a small number of distributed subsamples. We show, both theoretically and experimentally, that small subsample sizes suffice to attain similar performance as k-NN, without sacrificing the computational efficiency of 1-NN.
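The general idea can be sketched in a few lines: train a 1-NN predictor on each of a small number of random subsamples, then aggregate the predictions by majority vote. The sketch below (plain NumPy, binary labels) is an illustration of this aggregation scheme only; the function names, subsample sizes, and the omission of the paper's label-denoising step are simplifications for exposition, not the authors' exact procedure.

```python
import numpy as np

def nn1_predict(X_train, y_train, X_test):
    # Brute-force 1-NN: each test point takes the label of its
    # closest training point under squared Euclidean distance.
    d = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    return y_train[d.argmin(axis=1)]

def subsampled_nn_ensemble(X, y, X_test, n_subsamples=5,
                           subsample_frac=0.3, rng=None):
    # Aggregate 1-NN predictors fit on random subsamples.
    # Each subsample's 1-NN is cheap to query (small index),
    # and the vote recovers much of k-NN's noise robustness.
    rng = np.random.default_rng(rng)
    n = len(X)
    m = max(1, int(subsample_frac * n))
    votes = []
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=m, replace=False)
        votes.append(nn1_predict(X[idx], y[idx], X_test))
    # Majority vote over the ensemble, assuming labels in {0, 1}.
    return (np.stack(votes).mean(axis=0) >= 0.5).astype(int)
```

Each base predictor searches only a fraction of the data, and the subsamples can be queried in parallel on distributed resources, so prediction time stays close to (or below) that of a single 1-NN over the full sample.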

