NDDR-CNN: Layer-wise Feature Fusing in Multi-Task CNN by Neural Discriminative Dimensionality Reduction

01/25/2018
by Yuan Gao, et al.

State-of-the-art Convolutional Neural Networks (CNNs) benefit greatly from multi-task learning (MTL), which learns multiple related tasks simultaneously to obtain shared or mutually related representations for the different tasks. The most widely used MTL CNN structure is based on an empirical or heuristic split at a specific layer (e.g., the last convolutional layer) to minimize the different task-specific losses. However, this heuristic sharing/splitting strategy may harm the final performance of one or more tasks. In this paper, we propose a novel CNN structure for MTL that enables automatic feature fusing at every layer. Specifically, we first concatenate features from different tasks along their channel dimension, and then formulate the feature fusing problem as discriminative dimensionality reduction. We show that this discriminative dimensionality reduction can be realized within a CNN by a 1x1 convolution, batch normalization, and weight decay, which we refer to as Neural Discriminative Dimensionality Reduction (NDDR). We perform a detailed ablation analysis of different configurations for training the network. Experiments on different network structures and different task sets demonstrate the promising performance and desirable generalizability of the proposed method.
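The NDDR operation described in the abstract is straightforward to implement. Below is a minimal sketch in PyTorch (an assumption; the paper does not prescribe a framework, and the class name NDDRLayer and its hyperparameters are illustrative rather than the authors' reference implementation): features from two task branches are concatenated along the channel dimension and reduced back to each branch's original channel count by a 1x1 convolution followed by batch normalization, while weight decay on the NDDR parameters is applied through the optimizer.

import torch
import torch.nn as nn

class NDDRLayer(nn.Module):
    """Fuses same-resolution feature maps from two task branches.

    Features are concatenated along the channel dimension, then reduced
    back to each task's channel count with a 1x1 convolution followed by
    batch normalization (the discriminative dimensionality reduction step).
    Weight decay on these parameters is applied by the optimizer.
    """

    def __init__(self, channels_a, channels_b):
        super().__init__()
        total = channels_a + channels_b
        # One 1x1 conv + BN per task branch, mapping the concatenated
        # features back to that branch's original channel count.
        self.reduce_a = nn.Sequential(
            nn.Conv2d(total, channels_a, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels_a),
        )
        self.reduce_b = nn.Sequential(
            nn.Conv2d(total, channels_b, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels_b),
        )

    def forward(self, feat_a, feat_b):
        # Concatenate along the channel dimension of (N, C, H, W) tensors.
        fused = torch.cat([feat_a, feat_b], dim=1)
        return self.reduce_a(fused), self.reduce_b(fused)


if __name__ == "__main__":
    layer = NDDRLayer(channels_a=64, channels_b=64)
    a = torch.randn(2, 64, 32, 32)
    b = torch.randn(2, 64, 32, 32)
    out_a, out_b = layer(a, b)
    print(out_a.shape, out_b.shape)  # torch.Size([2, 64, 32, 32]) for both
    # Weight decay on the NDDR parameters would be set in the optimizer, e.g.:
    # torch.optim.SGD(layer.parameters(), lr=0.01, weight_decay=1e-4)

In a full multi-task network, one such layer would be inserted after each shared stage so that the two task branches can exchange information at every level of the feature hierarchy.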


Related Research

04/09/2020
TensorProjection Layer: A Tensor-Based Dimensionality Reduction Method in CNN
In this paper, we propose a dimensionality reduction method applied to t...

08/26/2019
Stochastic Filter Groups for Multi-Task CNNs: Learning Specialist and Generalist Convolution Kernels
The performance of multi-task learning in Convolutional Neural Networks ...

12/17/2020
Exploiting Learnable Joint Groups for Hand Pose Estimation
In this paper, we propose to estimate 3D hand pose by recovering the 3D ...

11/18/2018
Neural Multi-Task Learning for Citation Function and Provenance
Citation function and provenance are two cornerstone tasks in citation a...

10/16/2018
Learning Inward Scaled Hypersphere Embedding: Exploring Projections in Higher Dimensions
Majority of the current dimensionality reduction or retrieval techniques...

01/25/2023
Modelling Long Range Dependencies in N-D: From Task-Specific to a General Purpose CNN
Performant Convolutional Neural Network (CNN) architectures must be tail...

06/05/2023
Kinodynamic FMT* with Dimensionality Reduction Heuristics and Neural Network Controllers
This paper proposes a new sampling-based kinodynamic motion planning alg...
