Neural Similarity Learning

10/28/2019
by Weiyang Liu et al.

Inner product-based convolution has been the cornerstone of convolutional neural networks (CNNs), enabling end-to-end learning of visual representations. By generalizing the inner product with a bilinear matrix, we propose the neural similarity, which serves as a learnable parametric similarity measure for CNNs. Neural similarity naturally generalizes convolution and enhances flexibility. Further, we consider neural similarity learning (NSL) in order to learn the neural similarity adaptively from training data. Specifically, we propose two different ways of learning the neural similarity: static NSL and dynamic NSL. Interestingly, dynamic neural similarity turns the CNN into a dynamic inference network. By regularizing the bilinear matrix, NSL can be viewed as learning the kernel shape and the similarity measure simultaneously. We further justify the effectiveness of NSL from a theoretical viewpoint. Most importantly, NSL shows promising performance in visual recognition and few-shot learning, validating its superiority over inner product-based convolution counterparts.
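To make the core idea concrete, below is a minimal PyTorch-style sketch of a static-NSL convolution layer. It assumes the neural similarity scores each flattened patch x against a kernel w as w^T M x, with a single learnable bilinear matrix M shared across all kernels; the class name, initialization, and weight-sharing choice are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NeuralSimilarityConv2d(nn.Module):
    """Sketch of a bilinear-similarity convolution (static NSL).

    Standard convolution scores a patch x against a kernel w with the
    inner product w^T x; here the score is generalized to w^T M x,
    where M is a learnable (d x d) bilinear matrix.
    """

    def __init__(self, in_channels: int, out_channels: int, kernel_size: int):
        super().__init__()
        d = in_channels * kernel_size * kernel_size  # flattened patch dimension
        self.weight = nn.Parameter(torch.randn(out_channels, d) * 0.01)  # kernels w
        # M initialized to the identity, so the layer starts as plain convolution.
        self.bilinear = nn.Parameter(torch.eye(d))
        self.kernel_size = kernel_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Extract sliding patches: (N, d, L), L = number of spatial positions.
        patches = F.unfold(x, self.kernel_size)
        projected = self.bilinear @ patches  # apply M to every patch: (N, d, L)
        # Inner product of every kernel with every projected patch: (N, out, L).
        scores = torch.einsum('od,ndl->nol', self.weight, projected)
        n, _, l = scores.shape
        side = int(l ** 0.5)  # assumes a square input, stride 1, no padding
        return scores.view(n, -1, side, side)
```

A dynamic-NSL variant would presumably replace the fixed parameter M with the output of a small auxiliary network conditioned on the input, which is what makes the resulting CNN a dynamic inference network; regularizing M (e.g., toward sparsity or the identity) is what the abstract describes as jointly learning the kernel shape and the similarity measure.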


Related research

11/08/2017
Deep Hyperspherical Learning
Convolution as inner product has been the founding basis of convolutiona...

04/22/2018
Decoupled Networks
Inner product-based convolution has been a central component of convolut...

07/14/2017
Generalizing the Convolution Operator in Convolutional Neural Networks
Convolutional neural networks have become a main tool for solving many m...

05/24/2018
Backpropagation with N-D Vector-Valued Neurons Using Arbitrary Bilinear Products
Vector-valued neural learning has emerged as a promising direction in de...

02/27/2019
Representation Learning with Weighted Inner Product for Universal Approximation of General Similarities
We propose weighted inner product similarity (WIPS) for neural-network b...

10/04/2018
Graph Embedding with Shifted Inner Product Similarity and Its Improved Approximation Capability
We propose shifted inner-product similarity (SIPS), which is a novel yet...

03/16/2022
Represent, Compare, and Learn: A Similarity-Aware Framework for Class-Agnostic Counting
Class-agnostic counting (CAC) aims to count all instances in a query ima...
