Training Data Generating Networks: Linking 3D Shapes and Few-Shot Classification

10/16/2020
by   Biao Zhang, et al.

We propose a novel 3D shape representation for 3D shape reconstruction from a single image. Rather than predicting a shape directly, we train a network to generate a training set that is then fed into another learning algorithm, which in turn defines the shape. Training data generating networks establish a link between few-shot learning and 3D shape analysis. We propose a novel meta-learning framework to jointly train the data generating network and the other components. Not only do we improve on recent work on standard benchmarks for 3D shape reconstruction, but our novel shape representation also lends itself to many further applications.
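The core idea, a network that outputs a labeled training set which a second learner turns into a shape, can be sketched in a toy form. In this hypothetical sketch (not the paper's code), the generating network is replaced by a stand-in function that maps an image embedding to labeled 3D points (coordinates plus inside/outside occupancy), and the downstream learner is a 1-nearest-neighbour classifier whose decision function serves as the shape; the sphere toy shape and all names are illustrative assumptions.

```python
import numpy as np

def generate_training_set(embedding):
    """Stand-in for the data generating network: emit a labeled 3D point set.

    Here a sphere's radius is (arbitrarily) derived from the embedding;
    the real network would be trained jointly via meta-learning.
    """
    radius = 0.5 + 0.1 * np.tanh(embedding.mean())
    axis = np.linspace(-1.0, 1.0, 7)
    pts = np.stack(np.meshgrid(axis, axis, axis), -1).reshape(-1, 3)
    labels = (np.linalg.norm(pts, axis=1) < radius).astype(int)  # 1 = inside
    return pts, labels

def fit_shape_classifier(pts, labels):
    """The downstream learner: 1-NN whose decision boundary is the surface."""
    def occupancy(query):
        dists = np.linalg.norm(pts - query, axis=1)
        return int(labels[np.argmin(dists)])
    return occupancy

embedding = np.zeros(64)  # pretend image encoding
pts, labels = generate_training_set(embedding)
occupancy = fit_shape_classifier(pts, labels)

print(occupancy(np.array([0.0, 0.0, 0.0])))  # origin lies inside -> 1
print(occupancy(np.array([2.0, 2.0, 2.0])))  # far point lies outside -> 0
```

The point of the construction is that the shape is never predicted explicitly: it exists only as the decision boundary of a classifier fit on the generated set, which is what links the representation to few-shot learning.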


research
06/14/2020

3D Reconstruction of Novel Object Shapes from Single Images

The key challenge in single image 3D shape reconstruction is to ensure t...
research
06/04/2018

Diffeomorphic Learning

We introduce in this paper a learning paradigm in which the training dat...
research
06/11/2021

Learning Compositional Shape Priors for Few-Shot 3D Reconstruction

The impressive performance of deep convolutional neural networks in sing...
research
07/31/2019

Few-Shot Meta-Denoising

We study the problem of learning-based denoising where the training set ...
research
09/17/2019

Single-shot 3D shape reconstruction using deep convolutional neural networks

A robust single-shot 3D shape reconstruction technique integrating the f...
research
11/30/2021

A Dataset-Dispersion Perspective on Reconstruction Versus Recognition in Single-View 3D Reconstruction Networks

Neural networks (NN) for single-view 3D reconstruction (SVR) have gained...
research
03/27/2019

BAE-NET: Branched Autoencoder for Shape Co-Segmentation

We treat shape co-segmentation as a representation learning problem and ...
