The Final Frontier: Deep Learning in Space

01/27/2020
by   Vivek Kothari, et al.

Machine learning, particularly deep learning, is being increasingly utilised in space applications, mirroring its groundbreaking success on many earthbound problems. Deploying a space device, e.g. a satellite, is becoming more accessible to small actors due to the development of modular satellites and commercial space launches, which fuels further growth of this area. Deep learning's ability to deliver sophisticated computational intelligence makes it an attractive option for facilitating various tasks on space devices and reducing operational costs. In this work, we identify deep learning in space as one of the development directions for mobile and embedded machine learning. We collate various applications of machine learning to space data, such as satellite imaging, and describe how on-device deep learning can meaningfully improve the operation of a spacecraft, such as by reducing communication costs or facilitating navigation. We detail and contextualise the compute platforms of satellites and draw parallels with embedded systems and current research in deep learning for resource-constrained environments.


1. Introduction

Machine learning scales to and thrives in data-abundant environments, making it well suited to applications in space. Satellite imagery is abundant: produced by imaging both the Earth and space (telescope satellites), it allows learned models to power a range of monitoring tasks. Machine learning can also play an active role in the operation of a spacecraft, allowing for precise automated control and facilitating on-board tasks such as docking or navigation.

Machine learning’s impact in space applications will continue to grow as cheaper satellite platforms mature and become more accessible to small actors, thus widening the range of possible activities in space. Similarly, hardware innovations from terrestrial systems, such as multi-core designs and special-purpose accelerators, are increasing the compute power available to spacecraft.

This increased accessibility of space platforms and space data is fuelling excitement in “space activity” as a new direction for applied machine learning researchers, with potentially transformative results for future applications and satellite hardware. In particular, recent advances in machine learning (ML) and deep learning (DL) in constrained environments (Liberis and Lane, 2019; Chowdhery et al., 2019; Wang, 2018; Sze, 2017) would enable running neural networks on the spacecraft itself. Doing so will enable many “smart” applications to run in space autonomously (as the communication channel between the spacecraft and ground station is often limited), which is likely to have a similar impact and development trajectory to that of terrestrial smart devices, homes, embedded systems and mobile phones (N. Lane et al., 2017).

In addition to challenges faced by DL for embedded systems, space imposes extra requirements: the need for radiation-hardened hardware, robustness, and extensive verification. We discuss those challenges, demonstrating how space is a similar and yet unique environment for DL applications, which makes applying developments from embedded systems non-trivial.

In this work, we argue that ML/DL in space is an important direction for mobile and embedded computing (MEC) moving forward. We summarise prominent applications of ML in space, give an indication of the range of compute and sensor capabilities of space hardware, and present pointers for future work, thus providing the reader with the necessary information to start exploring this area. We quantitatively illustrate how DL-based solutions can offer double the power efficiency compared to the current state of the art.

2. Deep Learning Meets Space

Spacecraft (vehicles designed for operation outside the Earth’s atmosphere) and satellites (objects that orbit a natural body) have two types of systems: the payload, which comprises instruments that facilitate the spacecraft’s primary purpose; and operations systems, which support the payload and allow it to reach, stay, and work in space. This section analyses current applications of ML pertaining to both systems.

2.1. Analysis of Payload Data

Earth observation satellites in either geostationary (GEO) or Low Earth orbit (LEO) carry sensors ranging from RGB imagers for cloud cover detection to more specialised sensors for other atmospheric properties: temperature, humidity, wind vectors, and gaseous composition. Data, often from radiometric or spectral sensors, has traditionally been processed at the ground station, primarily using classical ML and hand-crafted specialised algorithms.

The similarity of terrestrial and space sensor modalities makes ML/DL well-suited for payload data. The following areas are only a few that have seen remarkable success from such methods. We will later (Sec. 4) quantitatively demonstrate how DL methods can help save power on satellites.

Weather & atmospheric monitoring. Cloud detection (Lewis et al., 1997), precipitation estimation (Ba and Gruber, 2001), and green-house gas concentration estimation use classical ML. Wimmers et al. (2019) use Faster-RCNNs to achieve remarkable results in estimating tropical storm intensity, but the required data is limited and has to be aligned to microwave wavelengths.

Vegetation and ground cover classification. Hyperspectral data (HSD) is used to identify land cover (Baker et al., 2007), with data from the MODIS and Landsat satellites successfully used to map diminishing wetlands.

DL models naturally excel at the challenges presented by HSD. In a hyperspectral image, a pixel represents a large spatial area which may contain several types of vegetation. This leads to the entanglement of spectral signatures and, when compounded with the high dimensionality and large intra-class variability of HSD, makes ground cover classification a formidable problem. 3D-CNNs and ResNets (Manning et al., 2018) successfully address these challenges on ISS data to achieve 96.4% classification accuracy. Paoletti et al. (2019) review a number of DL architectures and demonstrate their effectiveness in semi-supervised and sparse-data settings.
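To ground this, the following is a minimal sketch of the kind of 3D-CNN used for per-pixel hyperspectral land-cover classification. The patch size, band count, class count, and layer widths are illustrative assumptions, not the architecture of Manning et al. (2018).

```python
# Minimal sketch of a 3D-CNN for per-pixel hyperspectral classification.
# All hyperparameters below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

PATCH, BANDS, CLASSES = 9, 30, 10  # assumed patch size, bands, classes

model = models.Sequential([
    layers.Input(shape=(PATCH, PATCH, BANDS, 1)),
    # 3D kernels convolve the two spatial axes and the spectral axis
    # jointly, helping disentangle mixed spectral signatures in a pixel.
    layers.Conv3D(16, kernel_size=(3, 3, 7), activation="relu"),
    layers.Conv3D(32, kernel_size=(3, 3, 5), activation="relu"),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.4),
    layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```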

Object detection and tracking. When pointed towards Earth, HSD has been used to detect humans during natural disasters, track endangered animals (Guirado et al., 2019), military troops, and ships, and monitor oil spills (Salem et al., 2001). When pointed at the sky, sensors leverage the lack of atmospheric interference to detect galactic phenomena. The James Webb Space Telescope project is one of the first to use DL in data post-processing to detect galaxy clusters (Chan and Stott, 2019).

2.2. General Spacecraft Operation

Operations systems include the Guidance, Navigation and Control (GNC), communication, power, and propulsion systems. NASA has defined four levels of spacecraft autonomy (Drabbe and Drabbe, 2008). The lowest level corresponds to a primarily ground-controlled mission, whereas the highest expects an ability to independently re-evaluate goals. Currently, automation is provided by pervasive on-board control procedures (OBCPs). Used in satellites such as Rosetta, Venus Express, Herschel and Planck (Ferraguto et al., 2008), OBCPs initiate a predefined series of actions when an event is detected. DL can improve not only OBCPs, through better event detection and subsequent planning, but also the systems described below.

Communication. Software-defined radio (SDR) is replacing multiple-antenna designs. Its communication protocols can depend on many parameters, such as packet re-transmission rate and band. The use of DL, such as reinforcement learning (RL), to dynamically optimise these parameters has been proposed for space applications but not yet adopted (Ferreira et al., 2019; Ortíz-Gómez et al., [n.d.]).
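As a toy illustration of the idea, the sketch below tunes SDR parameters online with an epsilon-greedy bandit. The configuration grid and simulated link are invented stand-ins; published proposals (Ferreira et al., 2019) use considerably richer RL formulations.

```python
# Toy epsilon-greedy bandit for online SDR parameter selection.
# CONFIGS and link_throughput() are illustrative stand-ins.
import random

CONFIGS = [  # (modulation, coding rate) candidates -- assumed values
    ("BPSK", 0.5), ("QPSK", 0.5), ("QPSK", 0.75), ("8PSK", 0.75),
]

def link_throughput(config):
    """Stand-in for measured goodput; on a real link this is telemetry."""
    base = {"BPSK": 1.0, "QPSK": 2.0, "8PSK": 3.0}[config[0]] * config[1]
    return max(0.0, random.gauss(base, 0.3))  # noisy channel

def best_config(steps=1000, eps=0.1):
    counts = [0] * len(CONFIGS)
    values = [0.0] * len(CONFIGS)  # running mean reward per config
    for _ in range(steps):
        explore = random.random() < eps
        i = (random.randrange(len(CONFIGS)) if explore
             else max(range(len(CONFIGS)), key=lambda j: values[j]))
        reward = link_throughput(CONFIGS[i])
        counts[i] += 1
        values[i] += (reward - values[i]) / counts[i]  # incremental mean
    return CONFIGS[max(range(len(CONFIGS)), key=lambda j: values[j])]

print(best_config())  # converges on the highest-goodput configuration
```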

Automated control and navigation. The GNC and propulsion systems control the movement of a spacecraft. Historically, the majority of these manoeuvring operations were directed by a human, which is cumbersome and only feasible for near-Earth missions. On-board DL systems would bring a much-needed degree of autonomy and robustness to GNCs, which is particularly important for deep space missions that suffer from lag and gaps in communication.

Positioning becomes important during docking or landing. Such systems almost exclusively rely on LIDAR, but newer techniques, such as natural feature tracking (NFT), use optical systems. Evers (2019) uses the YOLO vision model to estimate pose and relative distance, achieving 98% accuracy on the author’s dataset and laying the groundwork for real-time models. Despite impressive accuracy, DL systems would require large banks of images to train on (Lorenz et al., 2017).

Spacecraft are particularly sensitive during docking and landing: several tonnes need to be moved with centimetre-level precision. While traditionally the domain of control systems, preliminary work applies reinforcement learning methods to 6-DOF cold gas thruster systems (Nanjangud et al., 2018).

While spacecraft travel through space, landers and rovers must traverse non-Earth surfaces. While they do not use deep models, conventional ML/control systems like AEGIS have been used on the MER project (Estlin et al., 2012). Projects like the Surrey Rover Autonomy Software & Hardware Testbed (SMART) (Gao et al., 2012) provide terrestrial simulation facilities and are looking at modular (deep) systems. Blacker et al. (2019) propose a yet-to-be-deployed CNN-based system which judges the navigability of each part of the terrain and then plans a safe path based on the results. The system can be tuned to run at different latency and memory budgets. GNC’s similarity to the terrestrial problems of vision and autonomous driving makes it a particularly attractive area of development.

3. Space Hardware and Software

Computational resources in space have traditionally been highly specialised, tightly-integrated monoliths. In contrast with terrestrial hardware systems, the harsh and remote environment of space requires compute systems (incl. the processor and memory chips) to be simultaneously efficient, radiation-resistant, and fault-tolerant. In addition, systems sent into space have to be thoroughly verified. As a result, space systems, especially hardware, lag considerably behind modern compute.

3.1. System Platforms

Spacecraft have highly mission-dependent designs (Tab. 1), with purpose-designed hardware (ASICs) or high-end microcontrollers powering older space missions. However, such systems are high-cost, non-resilient, and large. With time, these specialisations became infeasible along several axes (power, cost, weight, volume); thus, to reduce development costs, systems are increasingly being assembled from commercial off-the-shelf (COTS) components. This applies to both smaller spacecraft, such as CubeSats, and large multi-million dollar satellites.

System Software and Operating Systems. Due to their specificity, large satellites have historically had minimal software interfaces: either embedded real-time operating systems, such as VxWorks or RTEMS, or purpose-made minimal software with a tightly coupled software/hardware interface (Torelli, 2019). These systems varied dramatically and often did not support floating-point calculations, integer multiplication or division units, interrupts, or dynamic memory allocation. Contemporary embedded systems still in use include the core Flight System (cFS) and core Flight Executive (cFE) from Goddard Space Flight Centre, and COSMOS by Ball Aerospace. Projects such as SPINAS (Notebaert, [n.d.]) are attempting to create a more uniform (equal-capability) and open-source platform for smaller spacecraft with more sophisticated compute. Some newer spacecraft also run Linux-based OSes (NASA, 2018).

Memory and Compute Capabilities. Modern space compute systems are moving towards shared/re-configurable (George and Wilson, 2018), multi-core systems which would be capable of running DL models. For example, the recent JUICE mission to map Jupiter’s moons uses a common digital processing unit (DPU) and software packages, which are shared between 10 of its instruments (Torelli, 2019). Other systems utilise multicore processors, e.g. the LEON GR740 (32-bit, quad-core, rad-hard SoC), with cores preferentially assigned to specific tasks (such as navigation). Recently, some workloads, such as linear algebra, are being accelerated with field programmable gate arrays (FPGAs) or low-power GPU-like accelerators (Notebaert, [n.d.]). Special low-power accelerators can be incorporated into an FPGA-based on-board computer or added as a separate chip, e.g. the Movidius compute stick, which has been tested for deployment in space. Tab. 1 lists some of these processing elements and their specifications.

Type            | Compute platform    | Specifications                                    | Power budget
microcontroller | TI MSP430F2618      | 12 MHz, 8 KB SRAM, 116 KB flash, X-band (8 kbps)  | 35 W
processor SoC   | BAE RAD750          | 200 MHz, 2 GB flash, 256 MB DRAM                  | 5 W
accelerator     | Intel Movidius NCS  | VPU, 4 GB RAM                                     | 1 W
microcontroller | VA41630 (Cortex-M4) | 100 MHz, 64 KB SRAM, 256 KB flash                 | —
FPGA            | Xilinx Virtex-5QV   | 81920 LUT6, 596 RAMB16, 320 DSPs, 65 nm SRAM      | 5–10 W
Table 1. Different types of space hardware and their configurations. The 1st entry was used on the Mars Cube One (CubeSat) mission, while the 2nd is slated to be used on the Mars 2020 interplanetary rover. The power envelope of these devices is orders of magnitude lower than that of the WorldView-3 imaging satellite (3100 W).

Power budget. The largest limiting factor for on-board compute is power. Power generation, storage, and distribution are facilitated by the electrical power system (EPS). The wattage of supplied power is typically adjusted to payload requirements, which range from 20 W to 95 W. In small satellites, power is generated through multi-junction solar cells, which have 28–38% efficiency and so need to be quite large to sustain the required power output. Power is most commonly stored in rechargeable Li-ion or Li-Po batteries with specific energies of 58–243 Wh/kg (NASA, 2018).
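A back-of-the-envelope sketch of what these numbers imply for array sizing follows; the solar constant at 1 AU is a standard physical value, while the 95 W payload figure and the assumption of ideal incidence are illustrative.

```python
# Rough solar array sizing from the efficiency range quoted above.
# Ignores eclipse periods, incidence angle, and packing losses.
SOLAR_CONSTANT = 1361.0  # W/m^2 at 1 AU (standard value)

def array_area(power_w, cell_efficiency):
    """Panel area (m^2) needed to supply power_w under ideal incidence."""
    return power_w / (SOLAR_CONSTANT * cell_efficiency)

for eff in (0.28, 0.38):  # multi-junction efficiency range from the text
    print(f"95 W payload at {eff:.0%} cells: {array_area(95, eff):.2f} m^2")
```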

3.2. Radiation Hardening

In space, devices are no longer protected from the Sun’s radiation by the Earth’s atmosphere, which can cause spurious errors or stuck transistors in a device’s circuitry. Radiation damages the hardware either through its cumulative effects (total ionizing dose, TID) or through single event effects (SEE). Recoverable SEEs are called single event upsets (SEU) and can affect the logic state of memory. Radiation hardening (rad-hard) allows a compute component to withstand such errors. Rad-hard components are twice as slow and many times as expensive as their regular counterparts (Gretok, 2019). Overheads incurred by space-grade CPUs are typically much larger than those incurred by DSPs and FPGAs because they require more significant decreases in operating frequency (Gretok, 2019).

Physical hardening. This involves using different materials, for example, insulating substrates such as silicon on sapphire (Trivedi and Mehta, 2016). Other approaches involve shielding the circuit and alternative doping mechanisms.

Circuit-based hardening. This involves adding extra circuitry/logic to correct for the effects of SEEs, including watchdog protection, overcurrent circuits, power control, and error-correcting circuits (e.g. CRC and forward error correction in communication boards). Error correction is implemented at both the hardware and software levels, such as in main memory, where hardware-level ECC and software EDAC are used in synergy.
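As a software-level illustration of the EDAC principle, the sketch below implements a Hamming(7,4) code that corrects any single-bit upset in a stored nibble. Deployed memory EDAC typically uses SECDED codes over full words, but the mechanism is the same.

```python
# Hamming(7,4) encode/decode: corrects any single-bit upset (SEU).
# Illustrative of software EDAC; real schemes protect full memory words.

def encode(nibble):
    """Encode 4 data bits into a 7-bit Hamming codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]  # d0..d3
    p1 = d[0] ^ d[1] ^ d[3]  # parity over codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]  # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]  # parity over positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # positions 1..7

def decode(code):
    """Return (corrected nibble, upset position or None)."""
    s1 = code[0] ^ code[2] ^ code[4] ^ code[6]
    s2 = code[1] ^ code[2] ^ code[5] ^ code[6]
    s3 = code[3] ^ code[4] ^ code[5] ^ code[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)  # 1-based error position
    if syndrome:
        code = code[:]
        code[syndrome - 1] ^= 1            # correct the flipped bit
    data = [code[2], code[4], code[5], code[6]]
    return sum(b << i for i, b in enumerate(data)), syndrome or None

word = encode(0b1011)
word[5] ^= 1                               # simulate an SEU in memory
value, pos = decode(word)
assert value == 0b1011                     # the upset was corrected
```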

3.3. Suitability for DL Workloads

According to Dennehy ([n.d.]), an ML-assisted optical navigation system would require: 2–5 Gbps sensor I/O, a 1–10 GOPS CPU, 1 GB/s memory bandwidth, and 250 Mbps cross-link bandwidth to Earth. As the previous sections showed, we are almost in a position to put such capability into a small satellite.

Space hardware is becoming increasingly close to terrestrial multicore MEC hardware. While rad-hard components do decrease performance and raise costs, cheaper off-the-shelf components are finding their way into CubeSats. Progress in hardware and research in deep learning algorithms for constrained environments have begun to meet at a point which allows deep models to be deployed in space.

Several DL systems are first being tested on Earth. Schartel (2017) trains a SqueezeNet model with the intent of transferring it to a space embedded system; Buonaiuto et al. (2017) consider Nvidia TX1 hardware with the CUDA Deep Neural Network (cuDNN) library and TensorRT software. FPGAs, like the Xilinx Artix-7 and the Xilinx Zynq-7020, have been used for neuromorphic chips and image analysis.

The first space systems specifically geared towards DL workloads are making their way to production. CloudScout (Esposito, 2019), a cloud identification algorithm, uses a specialised Vision Processing Unit to identify cloud patterns.

4. Case Study: On-device Satellite Imagery

Obtaining satellite imagery is one of the most widespread uses of spaceborne hardware. However, sending and receiving large volumes of data is power-consuming. In some situations, such as with rovers, high-latency and low-bandwidth communication channels make this prohibitive. As seen in Section 3, we are approaching the hardware, software, and algorithmic capability required to use DL methods on board the highly constrained environment of a spacecraft. In this section, we describe how they can both select relevant imaging data and compress it. We then quantitatively show that DL can save at least half the power.

4.1. Efficient Satellite Imaging

Most imaging sensors capture several bands (commonly between IR and UV) spread across the electromagnetic spectrum. Modern sensors can capture very high resolution (VHR) images at up to 31 cm of ground per pixel (in panchromatic mode) (Satellite Imaging Corporation, 2019). Even higher-resolution images can be obtained using synthetic-aperture radar (SAR), which uses the motion of a radio antenna over the surface to map it in three dimensions at a resolution of just a few centimetres per pixel (Moreira, 2019). A number of both raw (USGS, [n.d.]) and preprocessed (Mohajerani and Saeedi, 2019) hyperspectral datasets are readily available.

Captured data needs to be transmitted to the ground station for aggregation and analysis, which can be expensive. A satellite can reduce the amount of data transmitted by employing deep learning: on-board pre-processing can discard parts of the image that are of no interest, e.g. those occluded by clouds. Global annual cloud coverage is estimated at 66%, so excluding cloudy images would drastically reduce the amount of data transmitted (Jeppesen et al., 2019). For satellites deployed for a particular purpose, such as boat or whale detection, neural networks can also be used to facilitate the primary task of the satellite and only transmit regions of interest.
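A minimal sketch of such an on-board filter follows, under stated assumptions: the brightness-threshold stand-in for a real segmentation model, the tile size, and the 30% cloud-cover cut-off are all placeholders; any network producing a per-pixel cloud mask would slot into cloud_mask().

```python
# On-board cloud filtering before downlink: keep only mostly-clear tiles.
# cloud_mask() is a naive stand-in for a segmentation model; the 30%
# threshold is an assumed transmission policy.
import numpy as np

CLOUD_THRESHOLD = 0.3  # drop tiles with more than 30% cloud cover

def cloud_mask(tile):
    """Placeholder for a per-pixel cloud segmentation model."""
    return tile.mean(axis=-1) > 0.8  # naive brightness heuristic

def select_for_downlink(tiles):
    """Return only the tiles whose cloud fraction is acceptable."""
    return [t for t in tiles if cloud_mask(t).mean() <= CLOUD_THRESHOLD]

tiles = [np.random.rand(512, 512, 4) for _ in range(8)]  # fake 4-band data
print(f"{len(select_for_downlink(tiles))} of {len(tiles)} tiles kept")
```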

Transmission costs can be further reduced by employing a neural network to compress image data. While the following models offer spectacular gains, training models specifically on satellite data would yield considerably better results. Near-lossless compression (Qian et al., 2006) achieved a 20:1 compression ratio (CR) on hyperspectral data. In lossy compression, the Feb 2019 CCSDS standard for on-board lossy compression of hyperspectral images uses predictive coding on board and a residual hyperspectral CNN back on Earth to de-quantize the results (Valsesia and Magli, 2019), achieving a rate of 0.1 bits per pixel and far surpassing classical compression standards, e.g. JPEG.
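The sketch below illustrates the predictive-coding idea in its simplest closed-loop form: each band is predicted from the reconstruction of the previous band and only the quantized residual is kept, so the per-pixel error stays within half the quantization step. The trivial inter-band predictor and uniform quantizer are deliberately naive stand-ins for the far more elaborate machinery of the CCSDS standard.

```python
# Toy closed-loop predictive coding for a hyperspectral cube.
# Predictor and quantizer are illustrative, not the CCSDS algorithms.
import numpy as np

def compress(cube, step=16):
    """cube: (bands, H, W) int array -> quantized inter-band residuals."""
    q = np.empty(cube.shape, dtype=np.int32)
    prev_rec = np.zeros(cube.shape[1:], dtype=np.int64)
    for i in range(cube.shape[0]):
        residual = cube[i] - prev_rec       # predict from reconstruction
        q[i] = np.round(residual / step)
        prev_rec = prev_rec + q[i] * step   # mirror the decoder's state
    return q

def decompress(q, step=16):
    return np.cumsum(q.astype(np.int64) * step, axis=0)

cube = np.random.randint(0, 4096, (30, 64, 64))  # fake 12-bit data
rec = decompress(compress(cube))
print("max error:", np.abs(rec - cube).max())    # bounded by step / 2
```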

4.2. Potential Efficiency Gains

Here we present a typical system and quantify the bare-minimum power saving an out-of-the-box DL system would be able to provide. Note that this is an approximate calculation: a finely tuned, end-to-end designed system could yield considerably better power gains.

Model. The MobileNet-V2 family of models was shown to run successfully on microcontroller-sized hardware (with 8-bit quantization) (Chowdhery et al., 2019) and to power some image segmentation models (e.g. the MobileNet-V2-backed DeepLabV3+ model (Chen et al., 2018)), which can be used for cloud detection.
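A minimal sketch of the 8-bit post-training quantization step this relies on, using the standard TensorFlow Lite converter. The untrained MobileNet-V2 stand-in and the random calibration data are illustrative; a real deployment calibrates on representative satellite imagery.

```python
# Post-training 8-bit quantization of a MobileNet-V2-class model with
# TFLite. Random calibration data is a placeholder for sample imagery.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), weights=None)  # untrained stand-in

def representative_data():
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # fully integer model
converter.inference_output_type = tf.int8

with open("mobilenet_v2_int8.tflite", "wb") as f:
    f.write(converter.convert())
```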

Communication. Assuming the use of an S-band transmitter operating at 13 W (for 33 dBm output power) with a 4.3 Mbps data rate (ISIS Space, 2018), transferring a 512×512 patch of data at 12 bits per pixel would take approximately 0.73 s and consume around 9.5 J of energy (ignoring communication protocol overheads).

Compute. If, instead, we perform 3 s worth of inference on a LEON3 processor (estimated at under 300 MAdds for a MobileNet-V2 on a 512×512 input, running on a 100 MHz processor (COBHAM, 2018)) to achieve at least a 20:1 compression ratio (incl. cloud removal), we spend 4.5 J on computation and 0.2 J on transmission.

Power saving. The above shows a nearly 2× power saving. Different model architectures would result in other computation-versus-transmission energy trade-offs.
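The calculation above can be reproduced in a few lines. The 1.5 W LEON3 power draw is our assumption, inferred from the 4.5 J over 3 s figure, and we read the 0.2 J transmission cost as 20:1 compression applied to the roughly 34% cloud-free fraction of the data.

```python
# Worked version of the back-of-the-envelope comparison above.
TX_POWER_W = 13.0            # S-band transmitter at 33 dBm output
DATA_RATE = 4.3e6            # bits per second
PATCH_BITS = 512 * 512 * 12  # one 12-bit 512x512 patch

# Baseline: downlink the raw patch.
raw_time = PATCH_BITS / DATA_RATE            # ~0.73 s
raw_energy = TX_POWER_W * raw_time           # ~9.5 J

# DL pipeline: 3 s of on-board inference, then send the compressed,
# cloud-filtered residue (20:1 compression on the ~34% cloud-free part).
COMPUTE_W, COMPUTE_S = 1.5, 3.0              # assumed LEON3 power draw
compute_energy = COMPUTE_W * COMPUTE_S       # ~4.5 J
tx_energy = raw_energy * (1 - 0.66) / 20     # ~0.16 J

print(f"raw downlink: {raw_energy:.2f} J")
print(f"DL pipeline : {compute_energy + tx_energy:.2f} J")  # ~2x saving
```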

Performing neural network compression before data transmission can considerably improve transmission latency and power usage, allowing longer mission lengths and the use of less costly transmission hardware. Space devices present an interesting constraint space for ultra-compact computer vision models.

5. Challenges and Opportunities Ahead

We have barely scratched the surface of what is possible with deep learning in space. We outline the challenges and applications which hold the greatest potential.

5.1. New Applications in Space

Terrestrial DL vision models are built for optical (narrow-band) data, whereas most image data from space is hyperspectral. Having HSD is especially important because several optical artefacts (e.g. metal ground cover, X-ray star bursts) are often only present within a subset of the spectrum and require models capable of searching through deeper data cubes.

Not only have individual modalities not been completely exploited, but, due to the lack of compute, multi-modal DL systems have yet to make their way to space. Multimodal approaches would ubiquitously improve spacecraft and payload operation, e.g. fusing magnetometer, horizon, and sun sensor data for GNC operation, and fusing SAR and HSD for terrain characterisation.

There are also numerous applications which are just waiting to see DL methods adapted, ranging from DL/RL robotic construction in zero-gravity environments to DL for crew health monitoring.

5.2. Improved Compute Paradigms for Space

The characterisation of DL models on a spacecraft must encompass more than just accuracy. The ability of a model to perform depends not only on its construction but also on an environment constrained in terms of memory, power, compute, and reliability. Thus, the definition of efficiency must be expanded to include hardware- and context-aware characterisation.

The uptake of off-the-shelf components would allow us to leverage recent developments in compute-efficient (quantized) (Sze, 2017), memory-sensitive (compressed) (Liberis and Lane, 2019), and energy-aware (Wang, 2018) terrestrial DL models. While made significantly easier, the adaptation process would still need to accommodate formidable hurdles endemic to space hardware, such as the higher error rates and the increased memory latency of rad-hard components.

Not only must the efficiency of DL models on a compute unit be measured along multiple axes, but it must also be characterised in the context of the overall spacecraft’s operation. This becomes increasingly important as compute components in newer spacecraft are shared between various subsystems. Real-time systems, e.g. navigation, may be sensitive to interrupts and I/O bottlenecks.

As powerful hardware becomes common in space, it may be possible to leverage more than a single satellite for computation. Such networks would offer not only more computational power but also greater fault tolerance.

5.3. Redefining Robustness and Reliability

Developing a comprehensive Validation and Verification (V&V) framework for embedded and robotics systems in critical areas such as healthcare and civil infrastructure is an area of current research. However, the remoteness and expense of space make it radically more risk-averse, and often with higher costs of failure, than MEC systems.

The current validation standards for space-based systems and software, listed in ECSS-E-ST-10-02C Rev.1 and ECSS-E-ST-40C (ECS, 2014), are inadequate for automated DL systems on spacecraft. Core to V&V is the qualification process, which has four components: analysis, testing, inspection, and demonstration. Each of these differs significantly when applied to ML systems, and especially to DL systems. DL systems are less deterministic, less amenable to isolation and component testing, are data-driven, and suffer from the lack of a testing oracle. Adding to the challenge, DL systems in space need to be measured along multiple axes: correctness, robustness, efficiency, and interpretability. The rigour and span of V&V in space set it apart from methods used for MEC systems.

A few preliminary approaches would combine formal verification (analysis) with simulation (testing) and interpretability mechanisms (inspection). Analysis methods utilise sample perturbation or mixed integer linear programming (Dutta et al., [n.d.]) to characterise individual components. Xiang et al. (2018) survey the adaptation of more formal verification methods. These methods are only able to characterise a finite set of cases. Inspection of DL using interpretability models such as LIME, or contextuality models (Carvalho et al., 2019), allows for humans-in-the-loop systems both in V&V and in mission control. Finally, DL sub-systems are tested through simulations before the entire component is tested in the field.

6. Conclusion

As space devices become more affordable to launch and their hardware becomes powerful enough to run non-trivial workloads, deep learning in space will continue to grow as a topic within mobile and embedded machine learning. In this work, we presented how machine learning can be applied to space data, drew a parallel between space and terrestrial embedded hardware, and showed how on-device deep learning can improve the operation of a space device.

Acknowledgements.
This work was supported by the EPSRC through Grant #3 and Grant #3, and by Samsung AI. We would also like to thank Dr. Aakanksha Chowdhery for her input while shepherding the paper.

References

  • Ba and Gruber (2001) B. Mamoudou et.al.. GOES multispectral rainfall algorithm (GMSRA). Journal of Applied Meteorology 40, 8 (2001), 1500–1514.
  • Baker et al. (2007) C. Baker et.al. Change detection of wetland ecosystems using Landsat imagery and change vector analysis. Wetlands 27, 3 (2007), 610.
  • Blacker et al. (2019) P. Blacker et.al. Rapid Prototyping of Deep Learning Models on Radiation Hardened CPUs. In 2019 NASA/ESA Conference on Adaptive Hardware and Systems (AHS). IEEE, 25–32.
  • Buonaiuto et al. (2017) N. Buonaiuto et.al. Satellite identification imaging for small satellites using NVIDIA. (2017).
  • Chan and Stott (2019) M. Chan et.al. Deep-CEE I: Fishing for Galaxy Clusters with Deep Neural Nets. arXiv:1906.08784.
  • Chen et al. (2018) L. Chen et.al. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV. 801–818.
  • Chowdhery et al. (2019) A. Chowdhery et.al. Visual Wake Words Dataset. arXiv:1906.05721
  • COBHAM (2018) COBHAM. GR712RC Dual-Core LEON3-FT SPARC V8 Processor Data Sheet. https://www.gaisler.com/doc/gr712rc-datasheet.pdf.
  • Dennehy ([n.d.]) C. Dennehy. A NASA GN&C Viewpoint on On-Board Processing Challenges to Support Optical Navigation and Other GN&C Critical Functions. https://indico.esa.int/event/225/contributions/4249/
  • Drabbe and Drabbe (2008) J. Drabbe et.al. ECSS-E-ST-70-11C – Space segment operability. https://ecss.nl/standard/ecss-e-st-70-11c-space-segment-operability/
  • Esposito (2019) M. Esposito. CloudScout: In Orbit Demonstration of Machine Learning applied on hyperspectral and multispectral thermal imaging. https://indico.esa.int/event/225/timetable/#20190225.detailed
  • Estlin et al. (2012) T. Estlin et.al. Aegis automated science targeting for the mer opportunity rover. ACM TIST 3, 3 (2012), 50.
  • N. Lane et. al. (2017) N. Lane et. al. 2017. Squeezing deep learning into mobile and embedded devices. IEEE Pervasive Computing 16, 3 (2017), 82–88.
  • Evers (2019) N. Evers. Deep learning in Space. https://towardsdatascience.com/deep-learning-in-space-964566f09dcd.
  • Ferraguto et al. (2008) M. Ferraguto et.al. The on-board control procedures subsystem for the Herschel and Planck satellites. In 2008 32nd Annual IEEE International Computer Software and Applications Conference. IEEE, 1366–1371.
  • Ferreira et al. (2019) P. Victor et.al. Reinforcement Learning for Satellite Communications: From LEO to Deep Space Operations. IEEE Communications Magazine 57, 5 (2019), 70–75.
  • Gao et al. (2012) Y. Gao et.al. Modular design for planetary rover autonomous navigation software using ROS. Acta Futura 5 (2012), 9–16.
  • George and Wilson (2018) A. George et.al. Onboard processing with hybrid and reconfigurable computing on small satellites. Proc. IEEE 106, 3 (2018), 458–470.
  • Guirado et al. (2019) E. Guirado et.al. Whale counting in satellite and aerial images with deep learning. Scientific reports 9, 1 (2019), 1–12.
  • ISIS Space (2018) ISIS Space. ISIS High Data Rate S-Band Transmitter. https://www.isispace.nl/product/isis-txs-s-band-transmitter/.
  • Jeppesen et al. (2019) J. Jeppesen et.al. A cloud detection algorithm for satellite imagery based on deep learning. Remote Sensing of Env. 229 (2019), 247–259.
  • Lewis et al. (1997) H. Lewis et.al. Determination of spatial and temporal characteristics as an aid to neural network cloud classification. International Journal of Remote Sensing 18, 4 (1997), 899–915.
  • Liberis and Lane (2019) E. Liberis et.al. Neural networks on microcontrollers: saving memory at inference via operator reordering. arXiv:1910.05110.
  • Lorenz et al. (2017) D. Lorenz et.al. Lessons learned from OSIRIS-Rex autonomous navigation using natural feature tracking. In IEEE Aerospa. Conf. 17. 1–12.
  • Manning et al. (2018) J. Manning et al. Machine-learning space applications on smallsat platforms with TensorFlow. In Proceedings of the 32nd Annual AIAA/USU Conference on Small Satellites, Logan, UT, USA. 4–9.
  • Moreira (2019) A. Moreira. Synthetic Aperture Radar (SAR): Principles and Applications. https://earth.esa.int/documents/10174/642943/6-LTC2013-SAR-Moreira.pdf.
  • Nanjangud et al. (2018) A. Nanjangud et.al. Robotics and AI-enabled on-orbit operations with future generation of small satellites. Proc. IEEE 106, 3 (2018), 429–439.
  • NASA (2018) NASA. State of the Art Small Spacecraft Technology. https://www.nasa.gov/sites/default/files/atoms/files/soa2018_final_doc.pdf.
  • Notebaert ([n.d.]) O. Notebaert. On-Board Payload Data Processing requirements. https://indico.esa.int/event/225/contributions/4298/
  • Ortíz-Gómez et al. ([n.d.]) F. Ortíz-Gómez et.al. On the use of neural networks for flexible payload management in VHTS systems.
  • Qian et al. (2006) S. Qian et.al. Near lossless data compression onboard a hyperspectral satellite. IEEE Trans. Aerospace Electron. Systems 42, 3 (2006), 851–866.
  • Salem et al. (2001) F. Salem et.al. Hyperspectral image analysis for oil spill detection. In Summaries of NASA/JPL Airborne Earth Science Workshop. 5–9.
  • Satellite Imaging Corporation (2019) Satellite Imaging Corporation. WorldView-4 Satellite Sensor. https://www.satimagingcorp.com/satellite-sensors/geoeye-2/.
  • Schartel (2017) A. Schartel. Increasing Spacecraft Autonomy through Embedded Neural Networks for Semantic Image Analysis.
  • Sze (2017) V. Sze et. al. 2017. Efficient processing of deep neural networks: A tutorial and survey. Proc. IEEE 105, 12 (2017), 2295–2329.
  • Torelli (2019) F. Torelli. Common DPU and Basic SW for JUICE instruments. https://indico.esa.int/event/225/contributions/3688/
  • Valsesia and Magli (2019) D. Valsesia et al. Image dequantization for hyperspectral lossy compression with convolutional neural networks. https://indico.esa.int/event/225/contributions/4254/
  • Carvalho (2019) D. Carvalho  et. al. 2019. Machine Learning Interpretability: A Survey on Methods and Metrics. Electronics 8, 8 (2019), 832.
  • Dutta ([n.d.]) S. Dutta et al. Sherlock: A Tool for Verification of Deep Neural Networks.
  • Gretok (2019) E. W. Gretok et al. 2019. Comparative Benchmarking Analysis of Next-Generation Space Processors. In 2019 IEEE Aerospace Conference.
  • Mohajerani and Saeedi (2019) S. Mohajerani and P. Saeedi. 2019. Cloud-Net: An End-To-End Cloud Detection Algorithm for Landsat 8 Imagery. In IGARSS 2019.
  • Paoletti (2019) M. E. Paoletti et al. 2019. Deep learning classifiers for hyperspectral imaging: A review. ISPRS Journal of Photogrammetry and Remote Sensing 158 (2019), 279–317.
  • Trivedi and Mehta (2016) R. Trivedi and U. S Mehta. 2016. A survey of radiation hardening by design (rhbd) techniques for electronic systems for space application. IJECET 7, 1 (2016), 75.
  • USGS ([n.d.]) USGS. Earth Explorer. https://earthexplorer.usgs.gov/
  • Wang (2018) Y. Wang et al. 2018. Towards ultra-high performance and energy efficiency of deep learning systems: an algorithm-hardware co-optimization framework. In 32nd AAAI Conference on AI.
  • Wimmers et al. (2019) A. Wimmers et. al. 2019. Using deep learning to estimate tropical cyclone intensity from satellite passive microwave imagery. Monthly Weather Review 147, 6 (2019).
  • Xiang (2018) W. Xiang  et. al. 2018. Verification for machine learning, autonomy, and neural networks survey. arXiv preprint arXiv:1810.01989 (2018).
  • ECS (2014) ECSS Active Standards. https://ecss.nl/active-standards/