Memory-based Controllers for Efficient Data-driven Control of Soft Robots

09/19/2023
by Yuzhe Wu, et al.

Controller design for soft robots is challenging due to the nonlinear deformation and high degrees of freedom of the flexible material. Data-driven approaches are a promising solution to the controller design problem for soft robots. However, existing data-driven controller design methods for soft robots suffer from two drawbacks: (i) they require excessively long training times, and (ii) they may result in potentially inefficient controllers. This paper addresses these issues by developing two memory-based controllers for soft robots that can be trained in a data-driven fashion: the finite memory controller (FMC) approach and the long short-term memory (LSTM) based approach. An FMC stores the tracking errors at different time instances and computes the actuation signal as a weighted sum of the stored tracking errors. We develop three reinforcement learning algorithms for computing the optimal weights of an FMC, based on the Q-learning, soft actor-critic (SAC), and deep deterministic policy gradient (DDPG) methods. An LSTM-based controller is composed of an LSTM network whose inputs are the robot's desired and current configurations; the network computes the actuation signal required for the soft robot to follow the desired configuration. We study the performance of the proposed approaches in controlling a soft finger, using existing reinforcement learning (RL) based controllers and a proportional-integral-derivative (PID) controller as benchmarks. Our numerical results show that the training time of the proposed memory-based controllers is significantly shorter than that of the classical RL-based controllers. Moreover, the proposed controllers achieve a smaller tracking error than the classical RL algorithms and the PID controller.
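To make the FMC structure concrete, here is a minimal sketch, assuming a scalar tracking error and a scalar actuation signal with hypothetical, pre-learned weights. It buffers the N most recent tracking errors and outputs their weighted sum; in the paper the weights would instead be obtained via Q-learning, SAC, or DDPG.

```python
from collections import deque

import numpy as np


class FiniteMemoryController:
    """Finite memory controller (FMC) sketch: the actuation signal is a
    weighted sum of the N most recent tracking errors."""

    def __init__(self, weights):
        # Hypothetical weights; in the paper they are learned with
        # Q-learning, SAC, or DDPG.
        self.weights = np.asarray(weights, dtype=float)
        # Buffer of the last N tracking errors, newest first.
        self.errors = deque([0.0] * len(self.weights), maxlen=len(self.weights))

    def act(self, desired, measured):
        # Store the newest tracking error; the oldest one is dropped.
        self.errors.appendleft(desired - measured)
        # Actuation = weighted sum over the stored tracking errors.
        return float(np.dot(self.weights, np.asarray(self.errors)))


# Example use with made-up weights and a made-up reference/measurement.
fmc = FiniteMemoryController(weights=[1.2, -0.4, 0.1])
u = fmc.act(desired=0.5, measured=0.3)
```

Similarly, a minimal PyTorch sketch of an LSTM-based controller of the kind described above, assuming the desired and current configurations are concatenated at each time step and mapped to the actuation signal by a linear output layer (the network sizes and training procedure here are assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn


class LSTMController(nn.Module):
    """LSTM-based controller sketch: desired and current configurations
    are concatenated and mapped to an actuation signal."""

    def __init__(self, config_dim, action_dim, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2 * config_dim,
                            hidden_size=hidden_dim,
                            batch_first=True)
        self.head = nn.Linear(hidden_dim, action_dim)

    def forward(self, desired, current, state=None):
        # desired, current: (batch, time, config_dim)
        x = torch.cat([desired, current], dim=-1)
        out, state = self.lstm(x, state)
        # Actuation signal at every time step.
        return self.head(out), state
```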
