Optimality Principles in Spacecraft Neural Guidance and Control

05/22/2023
by Dario Izzo, et al.

Spacecraft and drones aimed at exploring our solar system are designed to operate in conditions where the smart use of onboard resources can determine the success or failure of the mission. Sensorimotor actions are thus often derived from high-level, quantifiable optimality principles assigned to each task, using consolidated tools from optimal control theory. The planned actions are computed on the ground and transferred onboard, where controllers have the task of tracking the uploaded guidance profile. Here we argue that end-to-end neural guidance and control architectures (here called G&CNets) allow the burden of acting upon these optimality principles to be transferred onboard. In this way, sensor information is transformed in real time into optimal plans, thus increasing mission autonomy and robustness. We discuss the main results obtained in training such neural architectures in simulation for interplanetary transfers, landings and close-proximity operations, highlighting the successful learning of optimality principles by the neural model. We then suggest drone racing as an ideal gym environment to test these architectures on real robotic platforms, thus increasing confidence in their utilization on future space exploration missions. Drone racing shares with spacecraft missions both limited onboard computational capabilities and similar control structures induced by the optimality principle sought, but it also entails different levels of uncertainty and unmodelled effects. Furthermore, the success of G&CNets on extremely resource-restricted drones illustrates their potential to bring real-time optimal control within reach of a wider variety of robotic systems, both in space and on Earth.
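To make the idea of an end-to-end G&CNet concrete, the following is a minimal sketch, assuming PyTorch, a simplified seven-dimensional spacecraft state (position, velocity, mass) and a thrust command split into a throttle magnitude and a unit direction. The network size, the loss terms and the randomly generated training data are illustrative placeholders standing in for state-action pairs extracted from optimal trajectories computed on the ground; they are not the architecture or dataset used in the paper.

    # Sketch of a G&CNet-style policy trained by behavioural cloning.
    # Assumptions: 7-dim state [r (3), v (3), m], action = throttle + thrust direction.
    import torch
    import torch.nn as nn

    class GCNet(nn.Module):
        """Feedforward policy mapping a spacecraft state to a thrust command."""
        def __init__(self, state_dim=7, hidden=128):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.Softplus(),
                nn.Linear(hidden, hidden), nn.Softplus(),
            )
            self.throttle_head = nn.Linear(hidden, 1)   # thrust magnitude in [0, 1]
            self.direction_head = nn.Linear(hidden, 3)  # unnormalized thrust direction

        def forward(self, state):
            h = self.body(state)
            throttle = torch.sigmoid(self.throttle_head(h))
            direction = nn.functional.normalize(self.direction_head(h), dim=-1)
            return throttle, direction

    # Placeholder data: random tensors standing in for (state, optimal action)
    # pairs sampled from many optimal-control solutions computed offline.
    states = torch.randn(1024, 7)
    opt_throttle = torch.rand(1024, 1)
    opt_direction = nn.functional.normalize(torch.randn(1024, 3), dim=-1)

    net = GCNet()
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    for epoch in range(10):
        pred_throttle, pred_direction = net(states)
        # Match the optimal throttle (MSE) and thrust direction (cosine distance).
        loss = nn.functional.mse_loss(pred_throttle, opt_throttle) \
             + (1.0 - (pred_direction * opt_direction).sum(dim=-1)).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

In this scheme, the trained network would stand in for the uploaded guidance profile: at each control step the current state estimate is fed through the network and the predicted throttle and thrust direction are sent to the actuators, turning sensor information into (approximately) optimal actions in real time.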


