When Neural Networks Using Different Sensors Create Similar Features

11/04/2021
by   Hugues Moreau, et al.

Multimodal problems are omnipresent in the real world: autonomous driving, robotic grasping, scene understanding, etc. We draw from the well-developed analysis of similarity to provide an example of a problem where neural networks are trained on different sensors, and where the features extracted from these sensors still carry similar information. More precisely, we demonstrate that, for each sensor, the linear combination of last-layer features that correlates most strongly with the other sensors corresponds to the classification components of the classification layer.
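A standard tool for finding the linear combination of one set of features that correlates most strongly with another set is canonical correlation analysis (CCA). The sketch below is illustrative only, not the authors' code: the feature matrices `X` and `Y` stand in for last-layer features from two hypothetical sensors, and the synthetic labels are an assumption used to mimic a shared class signal.

```python
import numpy as np

def cca_directions(X, Y, eps=1e-8):
    """Return the first pair of canonical directions (a, b) and their
    correlation rho for two feature matrices X (n x dx) and Y (n x dy)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Orthonormalise each view via its SVD, then correlate the bases:
    # the singular values of Ux.T @ Uy are the canonical correlations.
    Ux, Sx, Vxt = np.linalg.svd(X, full_matrices=False)
    Uy, Sy, Vyt = np.linalg.svd(Y, full_matrices=False)
    U, rho, Vt = np.linalg.svd(Ux.T @ Uy)
    # Map the top singular vectors back into the original feature spaces.
    a = Vxt.T @ (U[:, 0] / np.maximum(Sx, eps))
    b = Vyt.T @ (Vt[0] / np.maximum(Sy, eps))
    return a, b, rho[0]

# Synthetic stand-in for two sensors: each view mixes a shared binary
# class signal with its own noise (the paper uses learned features).
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=500).astype(float)
X = np.outer(labels, rng.normal(size=64)) + 0.5 * rng.normal(size=(500, 64))
Y = np.outer(labels, rng.normal(size=32)) + 0.5 * rng.normal(size=(500, 32))

a, b, rho = cca_directions(X, Y)
# X @ a is the combination of X's features most correlated with Y's;
# here it recovers the class signal shared by both views.
```

In this toy setting the top canonical direction aligns with the class signal, which mirrors the paper's observation that the most cross-sensor-correlated combination of features matches the classification components.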


Related research

08/15/2019  A Multimodal Vision Sensor for Autonomous Driving
This paper describes a multimodal vision sensor that integrates three ty...

02/26/2019  Capsule Neural Network based Height Classification using Low-Cost Automotive Ultrasonic Sensors
High performance ultrasonic sensor hardware is mainly used in medical ap...

11/28/2018  Shared Representational Geometry Across Neural Networks
Different neural networks trained on the same dataset often learn simila...

05/27/2019  Radial Prediction Layer
For a broad variety of critical applications, it is essential to know ho...

11/14/2022  Do Neural Networks Trained with Topological Features Learn Different Internal Representations?
There is a growing body of work that leverages features extracted via to...

06/18/2022  A Dynamic Data Driven Approach for Explainable Scene Understanding
Scene-understanding is an important topic in the area of Computer Vision...
