Improving Siamese Networks for One Shot Learning using Kernel Based Activation functions

10/22/2019
by Shruti Jadon et al.

The scarcity of training data has always been a constraining factor for many machine learning problems, which makes One Shot Learning one of the most intriguing ideas in the field: it aims to learn information about object categories from one, or only a few, training examples. In deep learning, this is usually accomplished through a suitable objective function (the loss) and a suitable embedding extractor (the architecture). In this paper, we discuss metric-based deep learning architectures for one shot learning, such as Siamese neural networks, and present a method to improve their accuracy using Kafnets (kernel-based non-parametric activation functions for neural networks) by learning better embeddings in relatively few epochs. Using kernel activation functions, we achieve strong results that exceed those of ReLU-based deep learning models in terms of embedding structure, loss convergence, and accuracy.
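To make the idea concrete, here is a minimal sketch of the two ingredients the abstract combines: a kernel activation function (KAF) layer, i.e. a per-unit mixture of Gaussian kernels over a fixed dictionary with trainable mixing coefficients, and a small Siamese embedding branch trained with a contrastive loss. The layer sizes, dictionary size, bandwidth heuristic, and margin below are illustrative assumptions rather than the exact configuration used in the paper; PyTorch is used only for convenience.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KAF(nn.Module):
    """Kernel activation function: each unit applies a learned mixture of
    Gaussian kernels evaluated on a fixed dictionary of points (after
    Scardapane et al., "Kafnets"). Only the mixing coefficients are trained."""

    def __init__(self, num_units, dict_size=20, boundary=3.0):
        super().__init__()
        # Fixed dictionary spread uniformly over [-boundary, boundary].
        d = torch.linspace(-boundary, boundary, dict_size)
        self.register_buffer("d_fixed", d.view(1, 1, dict_size))
        # Kernel bandwidth from the dictionary spacing (a common heuristic;
        # treat the exact rule as an assumption, not the paper's setting).
        step = float(d[1] - d[0])
        self.gamma = 1.0 / (6.0 * step ** 2)
        # One trainable mixing vector per unit.
        self.alpha = nn.Parameter(0.3 * torch.randn(1, num_units, dict_size))

    def forward(self, x):
        # x: (batch, num_units) -> Gaussian kernels against the dictionary.
        k = torch.exp(-self.gamma * (x.unsqueeze(-1) - self.d_fixed) ** 2)
        return (k * self.alpha).sum(dim=-1)


class SiameseEmbedding(nn.Module):
    """Toy fully connected embedding branch with KAF non-linearities;
    both inputs of a pair are passed through the same (shared) branch."""

    def __init__(self, in_dim=784, emb_dim=64):
        super().__init__()
        self.fc1, self.act1 = nn.Linear(in_dim, 256), KAF(256)
        self.fc2, self.act2 = nn.Linear(256, emb_dim), KAF(emb_dim)

    def forward(self, x):
        return self.act2(self.fc2(self.act1(self.fc1(x))))


def contrastive_loss(emb_a, emb_b, same_class, margin=1.0):
    """Pull embeddings of matching pairs together and push mismatched
    pairs apart until they are at least `margin` away."""
    dist = F.pairwise_distance(emb_a, emb_b)
    pos = same_class * dist.pow(2)
    neg = (1.0 - same_class) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()


# Usage sketch: one training step on a batch of (flattened) image pairs.
net = SiameseEmbedding()
x1, x2 = torch.rand(32, 784), torch.rand(32, 784)   # placeholder pair batch
y = torch.randint(0, 2, (32,)).float()               # 1 = same class, 0 = different
loss = contrastive_loss(net(x1), net(x2), y)
loss.backward()
```

In a one-shot setting, the trained embedding branch is then used to compare a query example against a single labelled example per class, for instance by nearest-neighbour distance in the embedding space.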

Related research

07/13/2017 · Kafnets: kernel-based non-parametric activation functions for neural networks
04/17/2019 · Few Shot Speaker Recognition using Deep Neural Networks
12/14/2020 · One-Shot Learning with Triplet Loss for Vegetation Classification Tasks
09/28/2020 · EIS – a family of activation functions combining Exponential, ISRU, and Softplus
02/26/2021 · Neural Generalization of Multiple Kernel Learning
11/23/2020 · A Use of Even Activation Functions in Neural Networks
07/16/2017 · Comparative Performance Analysis of Neural Networks Architectures on H2O Platform for Various Activation Functions
