Swim: A General-Purpose, High-Performing, and Efficient Activation Function for Locomotion Control Tasks

03/05/2023
by Maryam Abdool, et al.

Activation functions play a significant role in the performance of deep learning algorithms. In particular, the Swish activation function tends to outperform ReLU on deeper models, including deep reinforcement learning models, across challenging tasks. Despite this progress, ReLU remains the preferred function, partly because it is more efficient than Swish. Furthermore, in contrast to the fields of computer vision and natural language processing, the deep reinforcement learning and robotics domains have been slower to adopt new activation functions such as Swish, and instead continue to use more traditional functions like ReLU. To tackle these issues, we propose Swim, a general-purpose, efficient, and high-performing alternative to Swish, and then provide an analysis of its properties as well as an explanation for its high performance relative to Swish, in terms of both reward achievement and efficiency. We focus on testing Swim on MuJoCo's locomotion continuous control tasks, since they exhibit more complex dynamics and would therefore benefit most from a high-performing and efficient activation function. We also use the TD3 algorithm in conjunction with Swim and explain this choice in the context of the robot locomotion domain. We conclude that Swim is a state-of-the-art activation function for continuous control locomotion tasks and recommend using it with TD3 as a working framework.
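For context, Swish is defined as swish(x) = x · sigmoid(x), and its cost is dominated by the exponential inside the sigmoid. The sketch below contrasts Swish with a Swim implementation in PyTorch. Note that the Swim formula used here, x/2 · (1 + x/√(1 + x²)), is an assumption based on the paper's stated goal of an exponential-free, Swish-like function, and should be checked against the full text.

```python
import torch

def swish(x: torch.Tensor) -> torch.Tensor:
    # Swish (Ramachandran et al., 2017): x * sigmoid(x).
    # The sigmoid's exponential dominates its evaluation cost.
    return x * torch.sigmoid(x)

def swim(x: torch.Tensor) -> torch.Tensor:
    # ASSUMPTION: this formula is one plausible reading of the paper's
    # definition, not a confirmed reproduction of it. It replaces the
    # sigmoid gate with the algebraic x / sqrt(1 + x^2), which requires
    # no exponential and is therefore cheaper to evaluate.
    return 0.5 * x * (1.0 + x * torch.rsqrt(1.0 + x * x))

# Quick comparison on a few sample inputs.
x = torch.linspace(-3.0, 3.0, 7)
print(swish(x))
print(swim(x))
```

If the formula holds, dropping Swim into a TD3 agent amounts to replacing the ReLU (or Swish) calls in the actor and critic networks' hidden layers; the rest of the algorithm is unchanged.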

Related Research

11/08/2021 · SMU: smooth activation function for deep networks using smoothing maximum technique
Deep learning researchers have a keen interest in proposing two new nove...

08/21/2021 · SERF: Towards better training of deep neural networks using log-Softplus ERror activation Function
Activation functions play a pivotal role in determining the training dyn...

10/16/2017 · Searching for Activation Functions
The choice of activation functions in deep networks has a significant ef...

09/06/2018 · ANS: Adaptive Network Scaling for Deep Rectifier Reinforcement Learning Models
This work provides a thorough study on how reward scaling can affect per...

01/17/2019 · Activation Functions for Generalized Learning Vector Quantization - A Performance Comparison
An appropriate choice of the activation function (like ReLU, sigmoid or ...

12/15/2022 · Sim-to-Real Transfer for Quadrupedal Locomotion via Terrain Transformer
Deep reinforcement learning has recently emerged as an appealing alterna...

04/27/2020 · Learning for Microrobot Exploration: Model-based Locomotion, Sparse-robust Navigation, and Low-power Deep Classification
Building intelligent autonomous systems at any scale is challenging. The...
