Note: An alternative proof of the vulnerability of k-NN classifiers in high intrinsic dimensionality regions

10/02/2020
by Teddy Furon et al.

This document proposes an alternative proof of the result in the article "High intrinsic dimensionality facilitates adversarial attack: Theoretical evidence" by Amsaleg et al. The proof is simpler to understand (I believe) and leads to a more precise statement about the asymptotic distribution of the relative amount of perturbation.
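The claim concerns how little relative perturbation is needed to flip a k-NN decision as dimensionality grows. As a rough empirical illustration (not the paper's proof — the sampling setup, the 1-NN restriction, and all names below are my own assumptions), one can measure the relative perturbation that flips a 1-NN classifier on Gaussian data and watch it shrink with the dimension, a consequence of distance concentration:

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two points given as lists of floats."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def relative_flip_perturbation(train, labels, query):
    """Upper bound on the relative perturbation flipping a 1-NN decision.

    Moving the query straight toward the nearest opposite-class training
    point by (d_opp - d_nn) / 2 puts it past the bisecting hyperplane, so
    that point becomes the new nearest neighbor. The bound is normalized
    by the distance to the current nearest neighbor.
    """
    d_all = [(dist(query, p), y) for p, y in zip(train, labels)]
    d_nn, y_nn = min(d_all)                        # nearest neighbor overall
    d_opp = min(d for d, y in d_all if y != y_nn)  # nearest opposite-class point
    return ((d_opp - d_nn) / 2) / d_nn

if __name__ == "__main__":
    random.seed(0)
    for d in (2, 8, 32, 128):
        n = 500
        train = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
        labels = [random.randrange(2) for _ in range(n)]
        query = [random.gauss(0, 1) for _ in range(d)]
        print(d, round(relative_flip_perturbation(train, labels, query), 3))
```

The printed ratio tends to decrease as `d` grows: in high dimension the nearest same-class and opposite-class distances concentrate, so a proportionally tiny push suffices — the qualitative phenomenon the note's asymptotic statement makes precise.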


