Physically Interpretable Neural Networks for the Geosciences: Applications to Earth System Variability

12/04/2019
by Benjamin A. Toms, et al.

Neural networks have become increasingly prevalent within the geosciences, with applications ranging from numerical model parameterizations to the prediction of extreme weather. A common limitation of neural networks has been the lack of methods to interpret what the networks learn and how they make their decisions. As such, neural networks have typically been used within the geosciences to accurately identify a desired output given a set of inputs, with the interpretation of what the network learns serving, if considered at all, as a secondary check that the network is making the right decision for the right reason. Network interpretation techniques have advanced considerably in recent years, however, and we therefore propose that the ultimate objective of using a neural network can also be the interpretation of what the network has learned, rather than the output itself. We show that interpreting a neural network can enable the discovery of scientifically meaningful connections within geoscientific data. By training a neural network to identify one component of the Earth system from one or more others, interpretation methods can then be used to gain scientific insight into how and why the two components are related. In particular, we use two interpretation methods, "optimal input" and layerwise relevance propagation (LRP), both of which project the decision pathways of a network back onto its original input dimensions. We then show that these techniques can reliably infer scientifically meaningful information from neural networks by applying them to common climate patterns. These results suggest that combining interpretable neural networks with novel scientific hypotheses will open the door to many new avenues of neural-network research in the geosciences.
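To make the two interpretation methods concrete, the sketch below illustrates both on a toy fully connected classifier in PyTorch. Everything here is an illustrative assumption rather than the paper's implementation: the architecture, the input size of 100, the function names, and the use of the epsilon propagation rule for LRP (the paper's exact LRP variant may differ). Optimal input performs gradient ascent on the input itself to find the pattern that most strongly activates a chosen output neuron, while LRP takes a single prediction and redistributes its score backward, layer by layer, onto the input features.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained classifier: a small fully connected
# network mapping 100 input grid points to two climate-pattern classes.
model = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

def optimal_input(model, target_class, n_features=100, steps=200, lr=0.1):
    """Gradient ascent on the input to find the pattern that maximizes
    the output neuron for `target_class` (the "optimal input")."""
    x = torch.zeros(1, n_features, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        score = model(x)[0, target_class]
        (-score).backward()  # minimizing the negative score ascends it
        opt.step()
    return x.detach()

def lrp_epsilon(model, x, target_class, eps=1e-6):
    """Layerwise relevance propagation (epsilon rule) through a
    Sequential stack of Linear and ReLU layers."""
    activations, out = [x], x
    for layer in model:                # forward pass, caching layer inputs
        out = layer(out)
        activations.append(out)
    relevance = torch.zeros_like(out)
    relevance[0, target_class] = out[0, target_class]  # seed at the output
    for layer, a in zip(reversed(list(model)), reversed(activations[:-1])):
        if isinstance(layer, nn.Linear):
            z = layer(a)
            # epsilon stabilizer keeps the denominators away from zero
            z = z + eps * torch.where(z >= 0, torch.ones_like(z), -torch.ones_like(z))
            relevance = a * ((relevance / z) @ layer.weight)
        # ReLU layers pass relevance through unchanged under this rule
    return relevance

pattern = optimal_input(model, target_class=0)         # class-0 optimal input
heatmap = lrp_epsilon(model, pattern, target_class=0)  # per-input relevances
```

Applied to a trained network rather than this random toy model, the optimal input would reveal the composite pattern the network associates with a class, and the LRP heatmap would show which input regions drove a specific decision; these are the two complementary views of a network's decision pathways described above.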

