Interpreting Representation Quality of DNNs for 3D Point Cloud Processing

11/05/2021
by Wen Shen et al.

In this paper, we evaluate the quality of knowledge representations encoded in deep neural networks (DNNs) for 3D point cloud processing. We propose a method to disentangle the overall model vulnerability into sensitivity to rotation, translation, scale, and local 3D structures. In addition, we propose metrics to evaluate the spatial smoothness of encoding 3D structures and the representation complexity of the DNN. Based on this analysis, experiments expose representation problems in classic DNNs and explain the utility of adversarial training.
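The disentanglement idea can be illustrated with a minimal sketch: measure how much a model's output changes under random rotations, translations, and scalings, each applied separately. The toy feature extractor and helper names below are hypothetical stand-ins for illustration, not the paper's actual method.

```python
import numpy as np

def rotate_z(points, theta):
    # Rotate an (N, 3) point cloud about the z-axis by angle theta.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ R.T

def toy_feature(points):
    # Toy stand-in for a DNN: sorted distances from the centroid.
    # By construction this is invariant to translation and rotation,
    # but sensitive to scale.
    centered = points - points.mean(axis=0)
    return np.sort(np.linalg.norm(centered, axis=1))

def sensitivity(model, points, transform, n_trials=20, seed=0):
    # Mean output change under random applications of `transform`.
    rng = np.random.default_rng(seed)
    base = model(points)
    diffs = [np.linalg.norm(model(transform(points, rng)) - base)
             for _ in range(n_trials)]
    return float(np.mean(diffs))

cloud = np.random.default_rng(1).normal(size=(128, 3))

s_rot = sensitivity(toy_feature, cloud,
                    lambda p, rng: rotate_z(p, rng.uniform(0.0, 2 * np.pi)))
s_trans = sensitivity(toy_feature, cloud,
                      lambda p, rng: p + rng.normal(size=3))
s_scale = sensitivity(toy_feature, cloud,
                      lambda p, rng: p * rng.uniform(0.5, 1.5))
```

For this toy feature, the rotation and translation sensitivities are near zero while the scale sensitivity is large; for a real point-cloud DNN, the same probes yield a nontrivial decomposition of its vulnerability.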

Related research

- Utility Analysis of Network Architectures for 3D Point Cloud Processing (11/20/2019): In this paper, we diagnose deep neural networks for 3D point cloud proce...
- Rotation Transformation Network: Learning View-Invariant Point Cloud for Classification and Segmentation (07/07/2021): Many recent works show that a spatial manipulation module could boost th...
- Noise Injection-based Regularization for Point Cloud Processing (03/28/2021): Noise injection-based regularization, such as Dropout, has been widely u...
- Towards interpreting computer vision based on transformation invariant optimization (06/18/2021): Interpreting how deep neural networks (DNNs) make predictions is a ...
- Towards Theoretical Analysis of Transformation Complexity of ReLU DNNs (05/04/2022): This paper aims to theoretically analyze the complexity of feature trans...
- 3DeformRS: Certifying Spatial Deformations on Point Clouds (04/12/2022): 3D computer vision models are commonly used in security-critical applica...
- i-Algebra: Towards Interactive Interpretability of Deep Neural Networks (01/22/2021): Providing explanations for deep neural networks (DNNs) is essential for ...