Bridging observation, theory and numerical simulation of the ocean using Machine Learning

04/26/2021 · by Maike Sonnewald, et al. · Princeton University

Progress within physical oceanography has been concurrent with the increasing sophistication of tools available for its study. The incorporation of machine learning (ML) techniques offers exciting possibilities for advancing the capacity and speed of established methods, and also for making substantial and serendipitous discoveries. Beyond the vast amounts of complex data ubiquitous in many modern scientific fields, the study of the ocean poses a combination of unique challenges that ML can help address. The observational data available are largely spatially sparse, limited to the surface, with few time series spanning more than a handful of decades. Important timescales span seconds to millennia, with strong scale interactions, and numerical modelling efforts are complicated by details such as coastlines. This review covers the current scientific insight offered by applying ML and points to where there is imminent potential. We cover the three main branches of the field: observations, theory, and numerical modelling. Highlighting both challenges and opportunities, we discuss the historical context and salient ML tools. We focus on the use of ML in in situ sampling and satellite observations, and the extent to which ML applications can advance theoretical oceanographic exploration as well as aid numerical simulations. Applications covered also include model error and bias correction and current and potential use within data assimilation. While not without risk, there is great interest in the potential benefits of oceanographic ML applications; this review caters to this interest within the research community.




1 Introduction

1.1 Oceanography: observations, theory, and numerical simulation

The physics of the ocean has been a subject of crucial importance, curiosity, and interest since prehistoric times, and today remains an essential element in our understanding of weather and climate, and a key driver of biogeochemistry and overall marine resources. The eras of progress within oceanography have gone hand in hand with the tools available for its study. Here, the current progress and potential future role of machine learning (ML) techniques are reviewed and briefly put into historical context. ML adoption is not without risk, but is here put forward as having the potential to accelerate scientific insight once again, performing tasks better and faster, along with allowing avenues of serendipitous discovery.

Perhaps the principal interest in oceanography was originally that of navigation, for exploration, commercial, and military purposes. Knowledge of the ocean as a dynamical entity with predictable features – the regularity of its currents and tides – must have existed for millennia: knowledge of oceanography likely helped the successful colonization of Oceania [165], and similarly Viking and Inuit navigation [110]; the oldest known dock was constructed in Lothal with knowledge of the tides, dating back to 2500–1500 BCE [48]; and Abu Ma’shar of Baghdad in the 8th century CE correctly attributed the existence of tides to the Moon’s pull.

The ocean measurement era, determining temperature and salinity at depth from ships, began in the late 18th century CE. While the tools for a theory of the ocean circulation started to become available in the early 19th century CE with the Navier-Stokes equation, observations remained at the core of oceanographic discovery. The first modern oceanographic textbook was published in 1855 by M. F. Maury, whose work in oceanography and politics served the slave trade across the Atlantic; around the same time, CO2’s role in climate was recognized [90, 223]. The first major global observational synthesis of the ocean can be traced to the Challenger expeditions of 1873–75 CE [64], which gave a first look at the global distribution of temperature and salinity, including at depth, revealing the 3-dimensional structure of the ocean.

Quantifying the time-mean ocean circulation remains challenging, as the circulation features strong local and instantaneous fluctuations. Improvements in measurement techniques allowed the Swedish oceanographer Ekman to elucidate the nature of the wind-driven boundary layer [81]. Ekman used observations taken while intentionally frozen into the Arctic ice on the Fram expedition with the Norwegian oceanographer and explorer Nansen. The “dynamic method” was introduced by the Swedish oceanographer Sandström and the Norwegian oceanographer Helland-Hansen [198], allowing the indirect computation of ocean currents from density estimates under the assumption of a largely laminar flow. This theory was developed further by the Norwegian meteorologist Bjerknes into the concept of geostrophy, from the Greek geo for earth and strophe for turning. The theory was put to the test in the extensive Meteor expedition in the Atlantic from 1925–27 CE, which uncovered a view of the horizontal and vertical ocean structure and circulation that is strikingly similar to our present view of the Atlantic meridional overturning circulation [163, 191].

While the origins of Geophysical Fluid Dynamics (GFD) can be traced back to Laplace or Archimedes, the era of modern GFD can be seen to stem from linearizing the Navier-Stokes equations, which enabled progress in understanding meteorology and atmospheric circulation. For the ocean, pioneering dynamicists include Sverdrup, Stommel, and Munk, whose theoretical work still has relevance today [209, 167]. As compared to the atmosphere, the ocean circulation exhibits variability over a much larger range of timescales, as noted by [168], likely spanning thousands of years rather than the few decades of detailed ocean observations available at the time. Yet there are phenomena at intermediate timescales (that is, months to years) which seem to involve both atmosphere and ocean, e.g. [171], and indeed Sverdrup suggested the importance of the coupled atmosphere-ocean system in [211]. In the 1940s, much progress within GFD was also driven by the Second World War (WWII). Accurate navigation through radar, introduced during WWII, brought about a revolution for observational oceanography, together with the bathythermographs used intensively for submarine detection. Beyond in situ observations, the launch of Sputnik, the first artificial satellite, in 1957 heralded the era of ocean observations from satellites. Seasat, launched on the 27th of June 1978, was the first satellite dedicated to ocean observation.

Oceanography remains a subject that must be understood with an appreciation of the available tools: observational and theoretical, but also numerical. While numerical GFD can be traced back to the early 1900s [2, 29, 190], it became practical with the advent of numerical computing in the late 1940s, complementing the elegant deduction and more heuristic methods that one could call “pattern recognition” that had prevailed before [10]. The first ocean general circulation models with specified global geometry were developed by Bryan and Cox [44, 43] using finite-difference methods. This work paved the way for what is now a major component of contemporary oceanography. The first coupled ocean-atmosphere model of [153] eventually led to the use of such models for studies of the coupled Earth system, including its changing climate. The low-power integrated circuit that gave rise to computers in the 1970s also revolutionized observational oceanography, enabling instruments to record reliably and autonomously. This has enabled instruments ranging from moored current meters and profilers, drifters, and floats through to hydrographic and velocity profiling devices that gave rise to microstructure measurements. Of note is the fleet of free-drifting Argo floats, beginning in 2002, which gives an extraordinary global dataset of profiles [193]. Data assimilation (DA) is the important branch of modern oceanography that combines often sparse observational data with either numerical or statistical ocean models to produce observationally-constrained estimates with no gaps. Such an estimate is referred to as an “ocean state”, which is especially important for understanding locations and times with no available observations.

Together the innovations within observations, theory, and numerical models have produced distinctly different pictures of the ocean as a dynamical system, revealing it as an intrinsically turbulent and topographically influenced circulation [239, 94]. Key large scale features of the circulation depend on very small scale phenomena, which for a typical model resolution remain parameterized rather than explicitly calculated. For instance, fully accounting for the subtropical wind-driven gyre circulation and associated western boundary currents relies on an understanding of the vertical transport of vorticity input by the wind and output at the sea floor, which is intimately linked to mesoscale (ca. 100km) flow interactions with topography [123, 79]. It has become apparent that localized small-scale turbulence (0-100km) can also impact the larger-scale, time-mean overturning and lateral circulation by affecting how the upper ocean interacts with the atmosphere [219, 89, 115]. The prominent role of the small scales on the large scale circulation has important implications for understanding the ocean in a climate context, and its representation still hinges on the further development of our fundamental understanding, observational capacity, and advances in numerical approaches.

The development of both modern oceanography and ML techniques has happened concurrently, as illustrated in Fig. 1. This review summarizes the current state of the art in ML applications for physical oceanography and points towards exciting future avenues. We wish to highlight certain areas where the emerging techniques emanating from the domain of ML demonstrate potential to be transformative. ML methods are also being used in closely-related fields such as atmospheric science. However, within oceanography one is faced with a unique set of challenges rooted in the lack of long-term and spatially dense data coverage. While in recent years the surface of the ocean has become well observed, there remains a considerable problem of sparse data, particularly in the deep ocean. Temporally, the ocean operates on timescales from seconds to millennia, and very few long-term time series exist. There is also considerable scale interaction.

There remains a healthy skepticism towards some ML applications, and calls for “trustworthy” ML are also coming forth from both the European Union and the United States government (Assessment List for Trustworthy Artificial Intelligence [ALTAI], and mandate E.O. 13960 of Dec 3, 2020). Within the physical sciences and beyond, trust can be fostered through transparency. For ML, this means moving beyond the “black box” approach for certain applications. Adopting a more transparent approach involves gaining insight into the learned mechanisms that gave rise to ML predictive skill. This is facilitated either by building a priori interpretable ML applications or by retrospectively explaining the source of predictive skill, coined interpretable and explainable artificial intelligence (IAI and XAI, respectively [195, 124]). With such insights from transparent ML, a synthesis between the theoretical and observational branches of oceanography could become possible. Traditionally, theoretical models tend towards oversimplification, while data can be overwhelmingly complicated. For advancement in the fundamental understanding of ocean physics, ML is ideally placed to identify salient features in the data that are comprehensible to the human brain. With this approach, ML could significantly facilitate a generalization beyond the limits of the data, letting data reveal possible structural errors in theory. With such insight, a hierarchy of conceptual models of ocean structure and circulation could be developed, signifying an important advance in our understanding of the ocean.

In this review, we introduce ML concepts (Section 1.2) and some of their current roles in the atmospheric and Earth system sciences (Section 1.3), highlighting particular areas of note for ocean applications. The review follows the structure outlined in Fig. 2, with the ample overlap noted through cross-referencing in the text. We review ocean observations (Section 2), sparse for much of history but now yielding increasingly clear insight into the ocean and its 3D structure. In Section 3 we examine a potential synergy between ML and theory, with the intent to distill expressions of theoretical understanding from dataset analysis of both numerical and observational efforts. We then progress from theory to models, and the encoding of theory and observations in numerical models (Section 4). We highlight some issues involved with ML-based prediction efforts (Section 5), and end with a discussion of challenges and opportunities for ML in the ocean sciences (Section 6). These challenges and opportunities include the need for transparent ML, ways to support decision makers, and a general outlook. Appendix 1 has a list of acronyms.

Figure 1: Timeline sketch of oceanography (blue) and ML (orange). The timelines of oceanography and ML are moving towards each other, and interaction between the fields, where ML tools are incorporated into oceanography, has the potential to accelerate discovery in the future. Distinct ‘events’ are marked in grey. Each field has gone through stages (black), with progress that can be attributed to the available tools. With the advent of computing, the fields moved closer together in the sense that ML methods became more directly applicable. Modern ML is seeing a very fast increase in innovation, with much potential for adoption by oceanographers. See Table 1 for acronyms.
Figure 2: Machine learning within the components of oceanography. A diagram capturing the general flow of knowledge, highlighting the components covered in this review. Separating the categories (arrows) is artificial, with ubiquitous feed-backs between most components, but serves as an illustration.

1.2 Concepts in ML

Throughout this article, we will mention a number of concepts from the ML literature. It is therefore natural to start with a brief introduction to some of the main ideas that shaped the field of ML.

ML, a sub-domain of Artificial Intelligence (AI), is the science of providing mathematical algorithms and computational tools to machines, allowing them to perform selected tasks by “learning” from data. The field has undergone a series of impressive breakthroughs over recent years thanks to the increasing availability of data and recent developments in computational and data storage capabilities. Several classes of algorithms are associated with the different applications of ML. They can be categorized into three main classes: supervised learning, unsupervised learning, and reinforcement learning (RL). In this review, we focus on the first two classes, which are the most commonly used to date in the ocean sciences.

1.2.1 Supervised learning

Supervised learning refers to the task of inferring a relationship between a set of inputs and their corresponding outputs. In order to establish this relationship, a “labeled” dataset is used to constrain the learning process and assess the performance of the ML algorithm. Given a dataset of $N$ input-output training pairs $\{(x_i, y_i)\}_{i=1}^{N}$ and a loss function $\mathcal{L}$ that represents the discrepancy between the ML model prediction $f_\theta(x_i)$ and the actual output $y_i$, the parameters $\theta$ of the ML model are found by solving the following optimization problem:

$$\theta^{\star} = \arg\min_{\theta} \frac{1}{N} \sum_{i=1}^{N} \mathcal{L}\left(f_\theta(x_i), y_i\right) \qquad (1)$$

If the loss function is differentiable, then gradient descent based algorithms can be used to solve equation 1. These methods rely on an iterative tuning of the model’s parameters in the direction of the negative gradient of the loss function. At each iteration $k$, the parameters are updated as follows:

$$\theta_{k+1} = \theta_k - \eta \, \nabla_{\theta} \mathcal{L}(\theta_k) \qquad (2)$$

where $\eta$ is the learning rate and $\nabla_{\theta}$ is the gradient operator.
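As a minimal illustrative sketch (synthetic data and a hand-picked learning rate, not from any oceanographic application), the gradient descent update can be written for a linear model with an MSE loss in a few lines of NumPy:

```python
import numpy as np

# Synthetic supervised-learning problem: y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * x + 1.0 + 0.05 * rng.normal(size=(200, 1))

# Model f_theta(x) = w*x + b, with parameters theta = (w, b).
w, b = 0.0, 0.0
eta = 0.1  # learning rate

for k in range(500):
    err = (w * x + b) - y
    # Gradients of the MSE loss L = mean(err^2) with respect to w and b.
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    # Gradient descent step: theta_{k+1} = theta_k - eta * grad L(theta_k).
    w -= eta * grad_w
    b -= eta * grad_b

print(round(w, 2), round(b, 2))  # close to the true parameters (2.0, 1.0)
```

The same loop structure underlies the training of far larger models; only the model, the loss, and the way gradients are computed change.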

Two important applications of supervised learning are regression and classification. Popular statistical techniques such as Least Squares or Ridge Regression, which have been around for a long time, are special cases of a popular supervised learning technique called Linear Regression (in a sense, we may consider a large number of oceanographers to be early ML practitioners). For regression problems, we aim to infer continuous outputs, and usually use the mean squared error (MSE) or the mean absolute error (MAE) to assess the performance of the regression. In contrast, for supervised classification problems we sort the inputs into a number of pre-defined classes or categories. In practice, we often transform the categories into probability values of belonging to some class and use distribution-based distances such as the cross-entropy to evaluate the performance of the classification algorithm.
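To make these metrics concrete, here is a small illustrative computation (hypothetical values, not tied to any dataset in this review) of the MSE, MAE, and cross-entropy using NumPy:

```python
import numpy as np

# Regression metrics: discrepancy between continuous predictions and targets.
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 1.5, 3.0])
mse = np.mean((y_pred - y_true) ** 2)   # mean squared error
mae = np.mean(np.abs(y_pred - y_true))  # mean absolute error

# Classification metric: cross-entropy between predicted class probabilities
# and one-hot encoded labels (two classes, two samples here).
p_pred = np.array([[0.9, 0.1], [0.2, 0.8]])
p_true = np.array([[1.0, 0.0], [0.0, 1.0]])
cross_entropy = -np.mean(np.sum(p_true * np.log(p_pred), axis=1))

print(mse, mae, cross_entropy)
```

The cross-entropy is small when the predicted probability assigned to the correct class is close to one, and grows without bound as that probability approaches zero.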

Numerous types of supervised ML algorithms have been used in the context of ocean research, as detailed in the following sections. Notable methods include:

  • Linear univariate (or multivariate) regression (LR), where the output is a linear combination of some explanatory input variables. LR is one of the first ML algorithms to be studied extensively and used for its ease of optimization and its simple statistical properties [166].

  • k-Nearest Neighbors (KNN), where we consider an input vector, find its k closest points with regard to a specified metric, then classify it by a plurality vote of these k points. For regression, we usually take the average of the values of the k neighbors. KNN is also known as the “analog method” in the numerical weather prediction community.


  • Support Vector Machines (SVM) [57], where the classification is done by finding a linear separating hyperplane with the maximal margin between two classes (the term “margin” here denotes the space between the hyperplane and the nearest points in either class). For data which cannot be separated linearly, the kernel trick projects the data into a higher dimension where the linear separation can be done. Support Vector Regression (SVR) is an adaptation of SVMs for regression problems.

  • Random Forests (RF), which are a composition of a multitude of Decision Trees (DT). DTs are constructed as a tree-like composition of simple decision rules.


  • Gaussian Process Regression (GPR) [237], also called kriging, a generalization of the optimal interpolation algorithm, which has been used in the oceanographic community for a number of years.

  • Neural Networks (NN), a powerful class of universal approximators based on compositions of interconnected nodes that apply geometric (affine) transformations to their inputs followed by a nonlinearity called an “activation function”.


The recent ML revolution, i.e. the so-called Deep Learning (DL) era that began in the early 2010s, was sparked by scientific and engineering breakthroughs in training neural networks (NN), combined with the proliferation of data sources and increasing computational power and storage capacity. The simplest example of this advancement is the efficient use of the backpropagation algorithm (known in the geoscience community as the adjoint method) combined with stochastic gradient descent for the training of multi-layer NNs, i.e. NNs with multiple layers, where each layer takes the result of the previous layer as an input, applies its mathematical transformations, and yields an input for the next layer. DL research is a field receiving intense focus and making fast progress through its use both commercially and scientifically, resulting in new types of “architectures” of NNs, each adapted to particular classes of data (text, images, time series, etc.). Multilayer perceptrons (MLP) for tabular data, Convolutional NNs (ConvNet) for imagery, and Recurrent NNs (RNN) for temporal data are ubiquitous in the recent ML literature [200, 142].
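As a hedged sketch of multi-layer NN training (synthetic one-dimensional data, hand-written gradients; in practice one would use a DL framework that computes these gradients automatically), backpropagation combined with gradient descent might look like:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 200).reshape(-1, 1)
y = np.tanh(2.0 * x)  # a nonlinear target function to learn

# One hidden layer of 16 units: affine map -> activation -> affine map.
W1 = 0.5 * rng.normal(size=(1, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.normal(size=(16, 1)); b2 = np.zeros(1)
eta = 0.05  # learning rate

def predict(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

mse_before = np.mean((predict(x) - y) ** 2)

for _ in range(2000):
    # Forward pass: each layer feeds its output to the next.
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2
    # Backward pass (backpropagation): chain rule applied layer by layer.
    d_out = 2.0 * (y_hat - y) / len(x)     # dL/dy_hat for the MSE loss
    dW2 = h.T @ d_out; db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = x.T @ d_h; db1 = d_h.sum(axis=0)
    # Gradient descent update of all parameters.
    W2 -= eta * dW2; b2 -= eta * db2
    W1 -= eta * dW1; b1 -= eta * db1

mse_after = np.mean((predict(x) - y) ** 2)
print(mse_before, mse_after)  # training reduces the loss
```

The backward pass here is exactly the adjoint (chain rule) computation mentioned above, propagated from the loss back through each layer.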

1.2.2 Unsupervised learning

Unsupervised learning is another major class of ML. In these applications, the datasets are typically unlabelled. The goal is then to discover patterns in the data that can be used to solve particular problems. One way to say this is that unsupervised classification algorithms identify sub-populations in data distributions, allowing users to identify structures and potential relationships among a set of inputs (sometimes called “features” in ML language). Unsupervised learning is somewhat closer to what humans expect from an intelligent algorithm, as it aims to identify latent representations in the structure of the data while filtering out unstructured noise. At the NeurIPS 2016 conference, Yann LeCun, a pioneering DL researcher, highlighted the importance of unsupervised learning using his cake analogy: “If machine learning is a cake, then unsupervised learning is the actual cake, supervised learning is the icing, and RL is the cherry on the top.”

Unsupervised learning is achieving considerable success in both clustering and dimensionality reduction applications. Some of the unsupervised techniques that are mentioned throughout this review are:

  • k-means, a popular and simple space-partitioning clustering algorithm that finds classes in a dataset by minimizing within-cluster variances. Gaussian Mixture Models (GMMs) can be seen as a generalization of the k-means algorithm that assumes the data can be represented by a mixture (i.e. linear combination) of a number of multi-dimensional Gaussian distributions.


  • Kohonen maps [also called Self Organizing Maps (SOM)], an NN-based clustering algorithm that leverages the topology of the data; nearby locations in a learned map are placed in the same class [136]. k-means can be seen as a special case of SOM with no information about the neighborhood of clusters.

  • t-SNE and UMAP [225, 161], two other clustering algorithms often used not only for finding clusters but also for their attractive data visualization properties. These methods are useful for representing the structure of a high-dimensional dataset in a small number of dimensions that can be plotted, and they rely on a measure of the distance between points.

  • Principal Component Analysis (PCA) [176], the simplest and most popular dimensionality reduction algorithm. Another term for PCA is Empirical Orthogonal Function analysis (EOF), which has been used by physical oceanographers for many years.

  • Autoencoders (AE), NN-based dimensionality reduction algorithms consisting of a bottleneck-like architecture that learns to reconstruct its own input by minimizing the error between the output and the input (i.e. the same data is fed into both ends of the autoencoder). The central layer, with a lower dimension than the original input, is called a “code” and represents a compressed representation of the input.


  • Generative modeling: a powerful paradigm that learns the latent features and distributions of a dataset and then proceeds to generate new samples that are plausible enough to belong to the initial dataset. Variational Auto-encoders (VAEs) and Generative Adversarial Networks (GANs) are two popular generative modeling techniques that benefited greatly from the DL revolution [133, 103].
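As an illustrative sketch of the PCA/EOF connection (synthetic data constructed so that one spatial pattern dominates; nothing here is drawn from an actual ocean dataset), the decomposition can be computed with a single singular value decomposition of the anomaly matrix:

```python
import numpy as np

# Synthetic "observations": 500 time steps of a 3-point field in which
# most of the variance lies along a single spatial pattern (mode).
rng = np.random.default_rng(0)
pattern = np.array([1.0, -1.0, 0.5])
amplitude = rng.normal(size=(500, 1))
data = amplitude * pattern + 0.1 * rng.normal(size=(500, 3))

# PCA / EOF analysis: remove the time mean, then take the SVD.
anomalies = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)

explained = s ** 2 / np.sum(s ** 2)  # fraction of variance per mode
eof1 = vt[0]                          # leading spatial pattern (EOF 1)
pc1 = anomalies @ eof1                # its time series (principal component)

print(explained[0])  # the leading mode dominates the variance
```

The rows of `vt` are the EOFs (spatial patterns) and the projections of the anomalies onto them are the principal components, which is exactly the EOF analysis long familiar to physical oceanographers.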

Between supervised and unsupervised learning lies semi-supervised learning. It is a special case where one has access to both labeled and unlabeled data. A classical example is when labeling is expensive, leading to a small percentage of labeled data and a high percentage of unlabeled data.

Reinforcement learning is the third paradigm of ML; it is based on the idea of creating algorithms where an agent explores an environment with the aim of reaching some goal. The agent learns through a trial and error mechanism: it performs an action, receives a response (reward or punishment), and learns by maximizing the expected sum of rewards [215]. The DL revolution also affected this field, leading to the creation of a new field called deep reinforcement learning (Deep RL) [210]. A popular example of Deep RL that received huge media attention is the algorithm AlphaGo, developed by DeepMind, which beat human champions at the game of Go [204].

The goal of this review is not to delve into the definitions of ML techniques, but only to briefly introduce them to the reader and recommend references for further investigation. The textbook by Christopher Bishop [28] covers the essentials of the fields of pattern recognition and ML. William Hsieh’s book [121] is probably one of the earliest attempts at a comprehensive review of ML methods targeted at Earth scientists. Another notable review of statistical methods for physical oceanography is the paper by Wikle et al. [235]. We also refer the interested reader to the book by Goodfellow et al. [24] to learn more about the theoretical foundations of DL and some of its applications in science and engineering.

1.3 ML in atmospheric and the wider Earth system sciences

Precursors to modern ML methods, such as regression and principal component analysis, have of course been used in many fields of Earth system science for decades. The use of PCA, for example, was popularized in meteorology by [149] as a method of dimensionality reduction of large geospatial datasets, and Lorenz also speculates there on the possibility of purely statistical methods of long-term weather prediction based on a representation of data using PCA. Methods of discovering correlations and links, including possible causal links, between dataset features using formal methods have also long been in use in Earth system science, e.g. [17]. Walker [231], tasked with discovering the cause of the interannual fluctuation of the Indian monsoon (whose failure meant widespread drought in India, and in colonial times famine as well [63]), put to work an army of Indian clerks to carry out a vast computation by hand to discover possible correlations across all available data. This led to the discovery of the Southern Oscillation, the seesaw in the West-East temperature gradient in the Pacific, which we now know by its modern name, El Niño Southern Oscillation (ENSO). Beyond observed correlations, theories of ENSO and its emergence from coupled atmosphere-ocean dynamics appeared decades later [244]. Walker speaks of statistical methods of discovering “weather connections in distant parts of the earth”, or teleconnections. The ENSO-monsoon teleconnection remains a key element in the diagnosis and prediction of the Indian monsoon [214, 213]. These and other data-driven methods of the pre-ML era are surveyed in [41]. ML-based predictive methods targeted at ENSO are also being established [111]. Here, the learning is not directly from observations but from models and reanalysis data, and such methods outperform some dynamical models in forecasting ENSO.

There is an interplay between data-driven methods and physics-driven methods, both of which strive to create insight into complex systems, of which the ocean and the wider Earth system are examples. As an example of physics-driven methods [10], Bjerknes and the other pioneers discussed in Section 1.1 formulated accurate theories of the general circulation that were put into practice for forecasting with the advent of digital computing. Advances in numerical methods led to the first practical physics-based atmospheric forecast [182]. Until that time, forecasting often used data-driven methods “that were neither algorithmic nor based on the laws of physics” [172]. ML offers avenues to a synthesis of data-driven and physics-driven methods. In recent years, as outlined below in Section 4.2, new processors and architectures within computing have allowed much progress within forecasting and numerical modelling overall. ML methods are poised to allow Earth system modellers to increase the efficient use of modern hardware even further. It should be noted, however, that “classical” methods of forecasting such as analogues have also become more computationally feasible and demonstrate equivalent skill, e.g. [68]. The search for analogues has become more computationally tractable, although there may be limits here as well [70].

Advances in numerical modeling brought additional understanding of elements in Earth system science that are difficult to derive or represent from first principles. Examples include cloud microphysics and interactions with the land surface and biosphere. For capturing cloud processes within models, the actual processes governing clouds take place at scales too fine to model and will remain out of reach of computing for the foreseeable future [201]. A practical solution is finding a representation of the aggregate behavior of clouds at the resolution of a model grid cell. This has proved quite difficult, and progress over many decades has been halting [35]. The use of ML in deriving representations of clouds is now an entire field of its own. Early results include those of [98], using NNs to emulate a “super-parameterized” model. In the super-parameterized model, there is a clear (albeit artificial) separation of scales between the “cloud scale” and the large-scale flow. When this scale separation assumption is relaxed, some of the stability problems associated with ML re-emerge [40]. There is also a fundamental issue of whether learned relationships respect basic physical constraints, such as conservation laws [147]. Recent advances ([241], [25]) focus on formulating the problem in a basis where invariances are automatically maintained. But this still remains a challenge in cases where the physics is not fully understood.

There are at least two major efforts for the systematic use of ML methods to constrain the cloud model representations in GCMs. First, the calibrate-emulate-sample (CES [54, 75]) approach uses a forward model for a broad calibration of parameters, also referred to as “tuning” [119]. This is followed by an emulator (as the forward model is computationally expensive) for further calibration and uncertainty quantification. It is important to retain the uncertainty quantification aspect in the ML context, as it is likely that the data in a chaotic system only imperfectly constrain the loss function. Second, emulators can be used to eliminate implausible parameters from a calibration process, as demonstrated by the HighTune project [59, 120]. This process can also identify “structural error”, indicating that the model formulation itself is incorrect, when no parameter choices can yield a plausible solution. Model errors are discussed in Section 5.1. In an ocean context, the methods discussed here can be a challenge due to the necessary forward model component. Note also that ML algorithms such as GPR are ubiquitous in emulation problems thanks to their built-in uncertainty quantification. GPR methods are also popular because they require only a low number of training samples, and they function as inexpensive substitutes for a forward model.

Model resolution that is inadequate for many practical purposes has led to the development of data-driven methods of “downscaling”, for example to support climate change adaptation decision-making at the local level when climate simulations are too coarse to feature enough detail. Most often, a coarse-resolution model output is mapped onto a high-resolution reference truth, for example given by observations [226, 4]. Empirical-statistical downscaling (ESD, [23]) is an example of such methods. While ESD emphasizes the downscaling aspect, all of these downscaling methods include a substantial element of bias correction. This is highlighted in the names of some popular methods, such as Bias Correction and Spatial Downscaling [238] and Bias Corrected Constructed Analogue [157]. These are trend-preserving statistical downscaling algorithms that combine bias correction with the analogue method of Lorenz (1969) [151]. ML methods are rapidly coming to dominate the field, as discussed in Section 5.1, with examples ranging from precipitation (e.g. [227]) to unresolved river transport [101]. Downscaling methods continue to make the assumption that transfer functions learned from the present-day climate continue to hold in the future. This stationarity assumption is a potential weakness of data-driven methods ([177, 69]) that calls for a synthesis of data-driven and physics-based methods as well.

2 Ocean observations

Observations continue to be key to oceanographic progress, with ML increasingly being recognised as a tool that can enable and enhance what can be learned from observational data, performing conventional tasks better and faster, as well as bringing together different forms of observations and facilitating comparison with model results. ML offers many exciting opportunities for use with observations, some of which are covered in this section and in Section 5 in the context of supporting predictions and decision support.

The onset of the satellite observation era brought with it the availability of a large volume of effectively global data, challenging the research community to use and analyze this unprecedented data stream. Applications of ML intended to develop more accurate satellite-driven products go back to the 1990s [218]. These early developments were driven by the data availability, distributed in standardized formats by the space agencies, and also by the fact that models describing the data were either empirical (e.g. marine biogeochemistry [199]) or too computationally costly and complex (e.g. radiative transfer [132]). More recently, ML algorithms have been used to fuse several satellite products [107] and also satellite and in-situ data [170, 49, 156, 131]. For the processing of satellite data, ML has proven to be a valuable tool for extracting geophysical information from remotely sensed data (e.g. [76]), whereas a risk of using only conventional tools is to exploit only a more limited subset of the mass of data available. These applications are based mostly on instantaneous or very short-term relationships and do not address the problem of how these products can be used to improve our ability to understand and forecast the oceanic system. Further uses are also being explored, including current reconstruction [155], heat fluxes [99], the 3-dimensional circulation [206], and ocean heat content [125].

There is also an increasingly rich body of literature mining ocean in-situ observations. These studies leverage a range of data, including Argo data, to study diverse ocean phenomena. Examples include assessing North Atlantic mixed layers [158], describing spatial variability in the Southern Ocean [128], detecting El Niño events [118], assessing how North Atlantic circulation shifts impact heat content [66], and finding mixing hot spots [194].

Modern in-situ classification efforts are often property-driven, carrying on long traditions within physical oceanography. For example, characteristic groups or “clusters” of salinity, temperature, density or potential vorticity have typically been used to delineate important water masses and to assess their spatial extent, movement, and mixing [117, 112]. However, conventional identification/classification techniques assume that these properties stay fixed over time. The techniques largely do not take interannual and longer timescale variability into account. The prescribed ranges used to define water masses are often somewhat ad-hoc and specific (e.g. mode waters are often tied to very restrictive density ranges) and do not generalize well between basins or across longer timescales [8]. Although conventional identification/classification techniques will continue to be useful well into the future, unsupervised ML offers a robust, alternative approach for objectively identifying structures in oceanographic observations [128, 194, 180, 31].

To analyze data, dimensionality and noise reduction methods have a long history within oceanography. PCA is one such method, which has had a profound influence on oceanography since Lorenz first introduced it to the geosciences in 1956 [149]. Despite the method’s shortcomings related to strong statistical assumptions and misleading applications, it remains a popular approach [164]. PCA can be seen as a super sparse rendition of k-means clustering [67], with the assumption of an underlying normal distribution in its commonly used form. Overall, different forms of ML can offer excellent advantages over more commonly used techniques. For example, many clustering algorithms can be used to reduce dimensionality according to how many significant clusters are identifiable in the data. In fact, unsupervised ML can sidestep statistical assumptions entirely, for example by employing density-based methods such as DBSCAN [205]. Advances within ML are making it increasingly possible and convenient to take advantage of methods such as t-SNE [205] and UMAP, where the original topology of the data can be conserved in a low-dimensional rendition.
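To make the density-based idea concrete, the following is a minimal, illustrative DBSCAN-style implementation on synthetic data (a real application would use an optimized library version). No distributional assumption is made: core points are those with at least `min_pts` neighbours within radius `eps`, clusters grow by density-reachability, and points in low-density regions are simply labelled as noise.

```python
import numpy as np

def dbscan(X, eps, min_pts):
    # minimal density-based clustering: core points have >= min_pts
    # neighbours within eps; clusters grow by density-reachability
    n = len(X)
    labels = np.full(n, -1)   # -1 marks noise / unvisited
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    neighbours = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbours[i]) < min_pts:
            continue                      # already assigned, or not a core point
        labels[i] = cluster
        stack = list(neighbours[i])
        while stack:                      # expand the cluster outward
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbours[j]) >= min_pts:
                    stack.extend(neighbours[j])
        cluster += 1
    return labels

rng = np.random.default_rng(0)
blob_a = rng.normal([0.0, 0.0], 0.1, (30, 2))   # two dense "water mass" groups
blob_b = rng.normal([5.0, 5.0], 0.1, (30, 2))
outlier = np.array([[10.0, 0.0]])               # isolated, low-density point
X = np.vstack([blob_a, blob_b, outlier])
labels = dbscan(X, eps=0.5, min_pts=5)
```

The two dense groups are recovered as separate clusters without specifying their number or shape in advance, and the isolated point is flagged as noise rather than forced into a cluster.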

Interpolation of missing data in oceanic fields is another application where ML techniques have been used, yielding products used in operational contexts. For example, Kriging is a popular technique that has been successfully applied to altimetry [141], as it can account for observations from multiple satellites with different spatio-temporal sampling. EOF-based techniques are also attracting increasing attention with the proliferation of data. For example, the DINEOF algorithm [6] leverages the availability of historical datasets to fill in spatial gaps within new observations. This is done via projection onto the space spanned by the dominant EOFs of the historical data. The use of advanced supervised learning, such as DL, for this problem in an oceanographic context is still in its infancy. Attempts exist in the literature, such as the use of NNs for regression to reconstruct pCO2 data [65], or deriving a DL equivalent of DINEOF for interpolating SST [18].
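The core of an EOF-based gap-filling method can be sketched in a few lines: iteratively replace the missing values with a truncated-EOF (SVD) reconstruction until the imputed values converge. This is a simplified illustration on synthetic low-rank data; the operational DINEOF algorithm [6] additionally cross-validates the number of retained EOFs.

```python
import numpy as np

def eof_fill(X, rank, n_iter=500):
    # iteratively impute missing entries with a truncated SVD reconstruction
    mask = np.isnan(X)
    filled = np.where(mask, np.nanmean(X), X)   # crude initial fill
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        recon = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        filled = np.where(mask, recon, X)       # keep observed values fixed
    return filled

rng = np.random.default_rng(0)
# synthetic rank-2 "SST" field: space (100) x time (50)
truth = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 50))
obs = truth.copy()
obs[rng.random(obs.shape) < 0.1] = np.nan       # knock out ~10% of entries

filled = eof_fill(obs, rank=2)
err = np.abs(filled - truth)[np.isnan(obs)]
```

Because the synthetic field is exactly rank 2, the missing entries are recovered almost perfectly; with real, noisy observations the truncation rank trades off fidelity against noise amplification.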

Figure 3: Cartoon of the role of data within oceanography. While eliminating prior assumptions within data analysis is not possible, or even desirable, ML applications can enhance the ability to perform pure data exploration. The ’top down’ approach (left) refers to a more traditional approach where the exploration of the data is firmly grounded in prior knowledge and assumptions. Using ML, the way data is used in oceanographic research and beyond can change by taking a ’bottom up’, data-exploration centered approach, allowing the possibility of serendipitous discovery.

3 Exchanges between observations and theory

Progress within observations, modeling, and theory go hand in hand, and ML offers a novel method for bridging the gaps between the branches of oceanography. When describing the ocean, theoretical descriptions of circulation tend to be oversimplified, but interpreting basic physics from numerical simulations or observations alone is prohibitively difficult. Progress in theoretical work has often come from the discovery or inference of regions where terms in an equation may be negligible, allowing theoretical developments to be focused with the hope of observational verification. Indeed, progress in identifying negligible terms in fluid dynamics could be said to underpin GFD as a whole [224]. For example, Sverdrup’s theory [212] of ocean regions where the wind stress curl is balanced by the Coriolis term inspired a search for a predicted ‘level of no motion’ within the ocean interior.

The conceptual and numerical models that underlie modern oceanography would be less valuable if not backed by observational evidence, and similarly, findings in data from both observations and numerical models can reshape theoretical models [94]. ML algorithms are becoming heavily used to determine patterns and structures in the increasing volumes of observational and modelled data [158, 128, 129, 194, 217, 207, 45, 118, 180, 31, 66]. For example, ML is poised to help the research community reframe the concept of ocean fronts in ways that are tailored to specific domains instead of ways that are tied to somewhat ad-hoc and overgeneralized property definitions [51]. Broadly speaking, this area of work largely utilizes unsupervised ML and is thus well-positioned to discover underlying structures and patterns in data that can help identify negligible terms or improve a conceptual model that was previously empirical. In this sense, ML methods are well-placed to help guide and reshape established theoretical treatments, for example by highlighting overlooked features. A historical analogy can be drawn to d’Alembert’s paradox from 1752 (or the hydrodynamic paradox), where the drag force is zero on a body moving with constant velocity relative to the fluid. Observations demonstrated that there should be a drag force, but the paradox remained unsolved until Prandtl’s 1904 discovery of a thin boundary layer that remains as a result of viscous forces. Prandtl’s discovery offered credibility to fluid mechanics as a science, and ML is ideally poised to make similar discoveries possible through its ability to objectively analyze the increasingly large and complicated data available.

With an exploration of a dataset that moves beyond preconceived notions comes the potential for making entirely new discoveries. It can be argued that much of the progress within physical oceanography has been rooted in generalizations of ideas put forward over 30 years ago [94, 169, 127]. This foundation can be tested using data to gain insight in a “top-down” manner (Fig. 3). ML presents a possible opportunity for serendipitous discovery outside of this framework, effectively using data as the foundation and achieving insight purely through its objective analysis in a “bottom up” fashion. This can also be achieved using conventional methods but is significantly facilitated by ML, as modern data in its often complicated, high dimensional, and voluminous form complicates objective analysis. ML, through its ability to let structures within data emerge, allows the structures to be systematically analyzed. Such structures can emerge as regions of coherent covariance (e.g. using clustering algorithms from unsupervised ML), even in the presence of highly non-linear and intricate covariance [205]. Such structures can then be investigated in their own right and may potentially form the basis of new theories. Such exploration is facilitated by using an ML approach in combination with interpretable/explainable (IAI/XAI) methods as appropriate. Unsupervised ML lends itself more readily to IAI, as in many of the works discussed above. Objective analysis that can be understood as IAI can also be applied to explore theoretical branches of oceanography, revealing novel structures [45, 207, 217]. Examples where ML and theoretical exploration have been used in synergy by allowing interpretability, explainability, or both within oceanography include [206, 243], and the concepts are discussed further in sections 6.2 and 6.

As an increasingly operational endeavour, physical oceanography faces pressures beyond advancing fundamental understanding, compounded by the increasing complexity associated with enhanced resolution and the complicated nature of data from both observations and numerical models. For advancing the fundamental understanding of ocean physics, ML is ideally placed to break this data down, letting salient features emerge that are comprehensible to the human brain.

3.0.1 ML and hierarchical statistical modeling

The concept of a model hierarchy is described by Held (2005) as a way to fill the “gap” between modelling and understanding [116] within the Earth system. A hierarchy consists of a set of models spanning a range of complexities. One can potentially gain insights by examining how the system changes when moving between levels of the hierarchy, i.e. when various sources of complexity are added or subtracted, such as new physical processes, smaller-scale features, or degrees of freedom in a statistical description. The hierarchical approach can help sharpen hypotheses about the oceanographic system and inspire new insights. While perhaps conceptually simple, the practical application of a model hierarchy is non-trivial, usually requiring expert judgement and creativity. ML may provide some guidance here, for example by drawing attention to latent structures in the data. In this review, we distinguish between statistical and numerical ML models used for this purpose. The models discussed in Sections 2 and 3 constitute largely statistical models, such as ones constructed using a k-means application, GANs, or otherwise. This section discusses the concept of hierarchical models in a statistical sense, and Section 4.1.1 explores the concept of numerical hierarchical models. A hierarchical statistical model can be described as a series of model descriptions of the same system from very low complexity (e.g. a simple linear regression) to arbitrarily high. In theory, any statistical model constructed with any data from the ocean could constitute a part of this hierarchy, but here we will restrict our discussion to models constructed from the same or very similar data. This concept of exploring a hierarchy of models, either statistical or otherwise, using data could also be expressed as searching for the “underlying manifold” or “attractor” within the data [148].
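A toy illustration of such a statistical hierarchy, with complexity running from a constant fit through linear regression to higher-order polynomials (synthetic data, generated here from a quadratic): each level is fit to the same data and scored with the BIC, which rewards likelihood and penalizes the extra degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 200)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0.0, 0.3, x.size)  # quadratic truth

def bic(degree, x, y):
    # Gaussian-likelihood BIC: n*log(RSS/n) + k*log(n), with k free parameters
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    n, k = len(y), degree + 1
    return n * np.log(rss / n) + k * np.log(n)

degrees = range(7)
scores = [bic(d, x, y) for d in degrees]
best = int(np.argmin(scores))   # complexity level preferred by the criterion
```

Moving up the hierarchy sharply improves the score until the generating complexity is reached, after which the penalty term dominates; as noted above for oceanographic data, real applications rarely show so clean a minimum.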

In oceanographic ML applications, there are tunable parameters that are often only weakly constrained. A particular example is the total number of classes K in unsupervised classification problems [158, 128, 129, 207, 205]. Although one can estimate the optimal value of K for the statistical model, for example by using metrics that reward increased likelihood and penalize overfitting [e.g. the Bayesian information criterion (BIC) or the Akaike information criterion (AIC)], in practice it is rare to find a clear value of K in oceanographic applications. Often, tests like BIC or AIC return either a range of possible values, or they only indicate a lower bound for K. This is perhaps because oceanographic data is highly correlated across many different spatial and temporal scales, making the task of separating the data into clear sub-populations a challenging one. That being said, the parameter K can also be interpreted as the complexity of the statistical model. A model with a smaller value of K will potentially be easier to interpret because it only captures the dominant sub-populations in the data distribution. In contrast, a model with a larger value of K will likely be harder to interpret because it captures more subtle features in the data distribution. For example, when applied to Southern Ocean temperature profile data, a simple two-class profile classification model will tend to separate the profiles into those north and south of the Antarctic Circumpolar Current, which is a well-understood approximate boundary between polar and subtropical waters. By contrast, more complex models capture more structure but are harder to interpret using our current conceptual understanding of ocean structure and dynamics [128]. In this way, a collection of statistical models with different values of K constitutes a model hierarchy, in which one builds understanding by observing how the representation of the system changes when sources of complexity are added or subtracted [116].
Note that for the example of k-means, while a range of values of K may be reasonable, this does not largely refer to merely adjusting the value of K and re-interpreting the result. This is because, for example, if one moves from K=2 to K=3 using k-means, there is no a priori reason to assume they would both give physically meaningful results. What is meant instead is similar to the type of hierarchical clustering that is able to identify different sub-groups and organize them into larger overarching groups according to how similar they are to one another. This is a distinct approach within ML that relies on the ability to measure a “distance” between data points. This rationale reinforces the view that ML can be used to build our conceptual understanding of physical systems, and does not need to be used simply as a “black box”. It is worth noting that the axiom being relied on here is that there exists an underlying system that the ML application can approximate using the available data. With incomplete and messy data, the tools available to assess the fit of a statistical model only provide an estimate of how wrong it is certain to be. To create a statistically rigorous hierarchy, not only does the overall co-variance structure/topology need to be approximated, but also the finer structures that would be found within these overarching structures. If this identification process is successful, then the structures can be grouped with accuracy as defined by statistical significance. This can pose a formidable challenge that ML in isolation cannot address; it requires guidance from domain experts. For example, within ocean ecology, [205] derived a hierarchical model by grouping identified clusters according to ecological similarity. In physical oceanography, [194] grouped some identified classes together into zones using established oceanographic knowledge, in a step from a more complex statistical model to a more simplified one that is easier to interpret. When performing such groupings, one has to pay attention to the balance of statistical rigour and domain knowledge. Discovering rigorous and useful hierarchical models should hypothetically be possible, as demonstrated by the self-similarity found in many natural systems including fluids, but limited and imperfect data greatly complicate the search, meaning that checking for statistical rigour is important.
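A minimal sketch of the agglomerative (hierarchical) clustering idea on synthetic two-group data: start with every point as its own cluster and repeatedly merge the pair of clusters separated by the smallest distance, stopping at the desired level of the hierarchy. Single-linkage distance is used here for simplicity; library implementations offer other linkage rules and return the full merge tree.

```python
import numpy as np

def agglomerative(X, n_clusters):
    # single-linkage agglomerative clustering: repeatedly merge the
    # two clusters whose closest members are nearest to each other
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > n_clusters:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = dist[np.ix_(clusters[a], clusters[b])].min()
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a].extend(clusters[b])   # merge the closest pair
        del clusters[b]
    labels = np.empty(len(X), dtype=int)
    for k, members in enumerate(clusters):
        labels[members] = k
    return labels

rng = np.random.default_rng(0)
group_a = rng.normal([0.0, 0.0], 0.3, (15, 2))   # two synthetic sub-populations
group_b = rng.normal([5.0, 5.0], 0.3, (15, 2))
X = np.vstack([group_a, group_b])
labels = agglomerative(X, n_clusters=2)
```

Because the merge sequence is nested, cutting it at different levels yields the sub-groups and overarching groups described above, rather than independent re-clusterings at each K.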

As a possible future direction, assessing models using IAI and XAI and known physical relationships will likely make finding hierarchical models that are meaningful much easier. Unsupervised ML is more intuitively interpretable than supervised ML and may prove more useful for identifying such hierarchical models. Moving away from using ML in a “black box” sense, with IAI and XAI or otherwise, may yield a synthesis of observations and theory, allowing the field to go beyond the limitations of either; theory may allow one to generalize beyond the limits of data, and data may reveal possible structural errors in theory.

4 From theory to numerical models

The observation of patterns in data is the precursor to a theory, which can lead to predictive models, provided theory can be converted to practical computation. In this section, we discuss how ML could change the way theory is represented within ocean modelling. To represent the ocean using numerical models is to help fill in missing information between observations. In addition, models act as virtual laboratories in which we can work to understand physical relationships (for example, how the separation of boundary currents such as the Gulf Stream depends on local topography or boundary conditions). The focus of this discussion will be on models that represent the three-dimensional ocean circulation, but most of these ideas can also be used in the context of modelling sea-ice, tides, waves, or biogeochemistry. We also discuss a recurring issue within ocean modeling: the presence of coastlines that complicate the application of methods that are convolutional or spectral.

4.1 Timescales and space scales

When building numerical models, the ocean is largely treated like a typical fluid that follows the Navier-Stokes equations, and the challenges faced therein are similar to those presented by general computational fluid dynamics. The filamentation of the flow results in scale interactions that make it necessary to represent all spatial scales within the model, while the model resolution needs to be truncated due to the finite nature of computational power. The dynamics at different scales can either be represented via the explicit, resolved representation within the model or via the parametrization of sub-grid-scales as a turbulent closure.

Much research has gone into the formulation of parametrizations to represent the sub-grid-scales. Such representations range from classical closures for turbulent fluids, using formulations such as Gent-McWilliams [97] that take the dynamics of sub-grid ocean eddies into account, to empirical closure schemes that are determined by comparing simulations at a target resolution to simulations at higher resolution [56, 196]. Lately, ML has also been used to learn the sub-grid-scale, either via the direct learning of the terms using NNs [32] or via the learning of the underlying equations [242]. Similar and promising DA applications are also emerging, discussed in section 5.2.
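The basic workflow of learning a sub-grid term from high-resolution data can be sketched on a synthetic 1-D field: coarse-grain the field, diagnose the sub-grid term as the difference between the coarse-grained quadratic flux and the flux of the coarse field (the analogue of an eddy flux), and regress it on resolved quantities. Ordinary least squares is used here purely as a stand-in for the NNs or equation-learning methods used in the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n_fine, factor = 256, 8
x = np.linspace(0.0, 2.0 * np.pi, n_fine, endpoint=False)
# synthetic high-resolution field: large-scale flow plus small-scale "eddies"
u = np.sin(x) + 0.3 * np.sin(16.0 * x) + 0.05 * rng.standard_normal(n_fine)

def coarsen(f, factor):
    # block-average onto the coarse grid
    return f.reshape(-1, factor).mean(axis=1)

u_bar = coarsen(u, factor)
# sub-grid term: coarse-grained quadratic flux minus flux of the coarse field
tau = coarsen(u * u, factor) - u_bar**2

# regress tau on resolved features (the coarse field and its gradient)
grad = np.gradient(u_bar)
features = np.column_stack([np.ones_like(u_bar), u_bar, grad, grad**2])
coef, *_ = np.linalg.lstsq(features, tau, rcond=None)
residual = tau - features @ coef
```

The diagnosed term is positive by construction (it is the within-block variance), mirroring the sign constraint on eddy kinetic energy; the learned closure is only as general as the flow regimes represented in the training data.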

Next to the representation of the sub-grid-scale, numerical ocean models are also prone to errors due to the necessary discretization of the differential equations on a numerical grid. A number of methods are used to discretize the equations [77], including finite difference, finite volume, and finite element methods. In comparison to the atmosphere, spectral discretization methods cannot easily be applied to the ocean due to the presence of coastlines, as creating a representation using global basis functions is not straightforward.

In the presence of perfect data and adequate computational power with which to train a DL application, it would be theoretically possible to learn the dynamics of the ocean with no knowledge of the equations of motion. This is because DL can learn the update of the physical fields based on time-series of observations or model data. This has been done successfully for certain atmospheric applications [72, 234, 186] and for an idealized ocean model [93]. However, DL representation of the ocean is more difficult than for the atmosphere, as there is much less reliable three-dimensional training data available for the ocean, and because the relevant time-scales of the ocean are much longer. This is because the ocean has important low-frequency variability, resulting in a need for longer training data sets. Furthermore, coastlines form lateral boundaries that may be difficult to incorporate into NN solutions such as convolutional networks, which typically require a certain stencil of local information to update the physical fields at a given gridpoint.

ML tools could also serve as a method to represent the ocean with fewer degrees of freedom than a full conventional numerical model. Such use cases include (1) as part of a coupled Earth system model used for short-term weather forecasts, or (2) in long climate simulations. For example, if a model is only trying to represent the surface fields that are most important for the coupling to the atmosphere, the model could focus on the use of the leading principal components (if these can be derived in the presence of coastlines), and learn the interactions between the different components using data from a time-series extracted from long model (or observational) trajectories. However, a first approach towards building low-order ML models using a barotropic model showed that results from DL may not necessarily improve on more classical approaches that combine regression techniques and stochastic forcing [5].
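The low-order idea can be sketched as follows: project synthetic “model output” onto its leading EOFs/principal components, then learn a one-step transition operator on the PC time series. A linear least-squares map is used here as a stand-in for the NN or regression-plus-stochastic-forcing closures discussed above, and the latent dynamics are constructed to be linear (two oscillatory modes) so that the learned operator can succeed.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_grid, k = 2000, 40, 4

# synthetic latent dynamics: two independent rotations (oscillatory modes)
R = np.zeros((k, k))
for i, theta in zip((0, 2), (0.10, 0.23)):
    R[i:i+2, i:i+2] = [[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]]
z = np.zeros((T, k))
z[0] = rng.standard_normal(k)
for t in range(1, T):
    z[t] = R @ z[t - 1]

# map the latent modes onto a "spatial grid" and add observational noise
modes = rng.standard_normal((k, n_grid))
field = z @ modes + 0.01 * rng.standard_normal((T, n_grid))

# leading principal components of the simulated field
anomaly = field - field.mean(axis=0)
U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)
pcs = anomaly @ Vt[:k].T            # project onto the leading k EOFs

# learn a one-step transition operator on the PC time series
G, *_ = np.linalg.lstsq(pcs[:-1], pcs[1:], rcond=None)
one_step_error = np.mean((pcs[:-1] @ G - pcs[1:]) ** 2)
```

The reduced model advances four numbers per step instead of forty, at the cost of discarding everything outside the retained EOF subspace; for genuinely nonlinear ocean dynamics the transition operator would need to be nonlinear, which is where the DL and stochastic approaches of [5] enter.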

4.1.1 Concepts of ML and hierarchical numerical modeling

This section discusses hierarchical modelling in a numerical sense, complementing Section 3.0.1, which discusses hierarchical modelling in a statistical sense. Within oceanography, observations and theory are more meaningful when viewed together. Observational scientists (see Section 2) make choices of what to sample based on some prior conceptions of relevance, and of course theory is ungrounded without data. In epistemology, this is often summarized in Duhem’s formulation, “theory is data-laden, and data is theory-laden” [74]. In talking about climate and weather modeling, Edwards made the corollary, “models are data-laden, and data is model-laden” [80]. For example, the concept of a reanalysis dataset comes from a model. The sequence from observations to theory to models and predictions shows this interplay. This is a key sequence where we expect ML to display its strengths, e.g. where IAI and XAI methods may yield a synthesis of observations and theory, allowing one to go beyond the limitations of either: theory allowing one to generalize beyond the limits of data, and data revealing possible structural errors in theory, as detailed in Section 3. Ideally, we would like to go beyond these and use ML to discover the underlying equations (e.g. [42]), and deliver a model hierarchy that can then be implemented numerically ([116], [10]). While simple in principle, in practice this concept is less straightforward to implement. An example of a form of equation discovery can be seen in Zanna and Bolton’s [242] reduction of resolved turbulent dynamics into a representation suitable for use in coarse-grained models. The coarse-grained models represent a different level of the hierarchy, if tiers are set by horizontal resolution. This ML model was arrived at by supplying an RVM with candidate equation terms.
Within the context of attempting to discover underlying equations, using IAI/XAI can be very powerful: for the Zanna and Bolton example, one can intuitively interpret the results because they are expressed in mathematical terms. Using XAI, it would also be possible to infer what gave the ML application its predictive skill, which could eliminate e.g. contamination from numerical issues that are specific to the model resolution. Methods constituting equation discovery are an exciting, and potentially powerful, way ML could impact numerical modelling, particularly if IAI/XAI can be applied to ensure that the ML application’s predictive skill is grounded in physics.

4.2 Computational challenges

Since the first ocean general circulation model [44, 43], available computational power has grown exponentially, following Moore’s law. The realization that the ocean is fundamentally turbulent and topographically influenced [239, 94] resulted in numerical model development focused on increasing model complexity and refining the model discretization. Numerical model performance is often measured in simulated years per day (SYPD). Computational challenges largely manifest as a balance between preserving the significant legacy present in current ocean modeling codes and harnessing the significant advances within the field of high performance computing, which is often tailored to ML. ML is a trillion-dollar industry that is based on high-performance computing power [53] and is therefore driving developments in modern supercomputing.

The growth of processing speed in supercomputers is no longer exponential, but improvements in the computational efficiency of ocean models are still possible through customisation of the computing hardware. ML may well have a place within a revision of ocean models to improve their computational efficiency. Even within Earth system models as a whole, a “digital revolution” has been called for [19], where harnessing efficiency in modern hardware is central. Computers can increasingly be customised as hardware is becoming more heterogeneous, meaning that different components for data movement and processing can be combined [20]. Examples of such heterogeneous hardware include so-called Graphical Processing Units (GPUs), Tensor Processing Units, Field-Programmable Gate Arrays, and Application-Specific Integrated Circuits, which are largely highly compatible with ML. To take advantage of this heterogeneous hardware and make current ocean models “portable”, a significant effort would be necessary [20]. Current ocean models use the Fortran programming language and are parallelised to run on many processors via interfaces such as MPI and OpenMP. This parallelisation approach is not compatible with hardware accelerators such as GPUs. Compatibility could be achieved via re-writing or via enhancement by programming interfaces such as OpenACC or CUDA. Some model groups are investigating a move to newer computing languages, such as Julia (for example the Oceananigans model as part of the CliMA project [185]). Languages like Julia can hide technical details in high-level descriptions of the model code, making it more portable. So-called domain-specific languages can also be used to facilitate portability [109]. Here, the main algorithmic motifs are formulated into library functions that can be ported to different systems with no need to change the model code written by the model developer.

ML is expected to play a role in issues associated with the purely computational approach to ocean modelling, beyond devising portability to different hardware accelerators such as GPUs. Hardware accelerators are best suited to problems of high operational intensity (floating-point operations per memory operation). The discretized differential equations governing fluid flow typically result in sparse operations arising from near-neighbour dependencies (“stencils”). Stencil codes remain notorious for their low operational intensity [14], resulting in poor computational performance, and despite substantial efforts in recent years there has been little progress [173, 11]. This problem is accentuated in ocean models, where long timescales often require O(1000) SYPD for the basic dynamics to emerge. The role of ML in emulating turbulent ocean dynamics is likely to be critical in achieving the level of performance required. This is because resolving key phenomena such as mesoscale eddies remains computationally out of reach, and current parameterizations such as that of Gent and McWilliams (1990) [97], discussed in Section 4.1, continue to exhibit deficiencies in simulating meridional eddy transport [92].

ML, and in particular DL, could play a significant role in improving the computational efficiency of ocean models due to its ability to work with low numerical precision. Many operations are memory-bandwidth bound, and as DL is based on dense linear algebra it is capable of working with very low numerical precision, such as IEEE half precision with 16 bits per variable [139]. The trend towards ML hardware that is optimised for dense linear algebra and low numerical precision may have an impact on future ocean modelling. The use of low numerical precision has been discussed for weather and climate models [73]. The NEMO model [108] was run in single precision with 32 bits per variable instead of the default double precision with 64 bits per variable [220], half precision with 16 bits per variable is being explored for weather and climate models [134], and hardware that is customised for ML has been tested to speed up expensive components of conventional models [113]. However, in particular for the long-term simulations needed for the ocean, care needs to be taken to make sure that rounding errors do not impact conservation laws. Certain specific aspects of ocean dynamics require a large dynamic range. For instance, sea level rise, which is a secular change measured in cm/century, must be simulated against a backdrop where surface waves have an amplitude of O(1 m) and a phase speed of O(100 m/s), at least 8 orders of magnitude larger over a typical ocean timestep. It is also worth noting that using lower numerical precision would impact subsequent analysis, as closing budgets would be complicated.
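The dynamic-range concern can be demonstrated directly: accumulating a small secular increment onto a large background state in half precision can lose the increment entirely, because it falls below the spacing of representable numbers at that magnitude (a toy demonstration with made-up numbers, not an ocean model):

```python
import numpy as np

def accumulate(base, increment, n_steps, dtype):
    # repeatedly add a small increment to a large running total,
    # rounding to the given precision after every step
    total = dtype(base)
    for _ in range(n_steps):
        total = dtype(total + dtype(increment))
    return float(total)

# background of 1000 units, secular trend of 0.1 units per step:
# at magnitude 1000, float16 can only resolve steps of 0.5,
# so each 0.1 increment is rounded away entirely
half = accumulate(1000.0, 0.1, 1000, np.float16)
double = accumulate(1000.0, 0.1, 1000, np.float64)
```

In double precision the total drifts to 1100 as expected, while in half precision it never moves, which is precisely the failure mode that would silently break a conservation budget in a long simulation.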

ML is being explored as a method to emulate computationally costly components of ocean models. This has been done successfully in a number of studies [188, 40] for physical parametrisation schemes of atmospheric models. For ocean modelling, NN emulators could for example speed up biogeochemical components [175], which often form a large cost-fraction of ocean models in climate predictions, or sea-ice models, which are often a computational bottleneck as they are difficult to parallelise. ML could also be useful for improving advection schemes and learning local corrections and limiters of fluxes between grid-cells [135]. Furthermore, it may also be possible to improve the efficiency of ocean models that use semi-implicit timestepping schemes. Here, ML could be used to precondition the solver for the large linear problem that needs to be solved at every timestep by estimating the result [3].
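The solver-acceleration idea can be sketched as follows: learn a cheap map from right-hand sides to solutions using past solves, and use its output as the initial guess for the iterative solver. Here a linear least-squares emulator and a conjugate-gradient solver are applied to a synthetic symmetric positive-definite system; an ocean model would use an NN and its actual elliptic operator, and the emulator would be approximate rather than near-exact as in this idealized linear case.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)   # synthetic SPD operator (stand-in for an implicit solve)

def conjugate_gradient(A, b, x0, tol=1e-8):
    # standard CG iteration, counting iterations to convergence
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    iters = 0
    while np.linalg.norm(r) > tol * np.linalg.norm(b):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
        iters += 1
    return x, iters

# "training data": right-hand sides and solutions from past timesteps
B = rng.standard_normal((200, n))
X = np.linalg.solve(A, B.T).T
W, *_ = np.linalg.lstsq(B, X, rcond=None)   # cheap emulator: b -> estimated x

b = rng.standard_normal(n)
x_cold, iters_cold = conjugate_gradient(A, b, np.zeros(n))
x_warm, iters_warm = conjugate_gradient(A, b, b @ W)   # ML-style warm start
```

Both runs converge to the same solution; the warm-started solve needs far fewer iterations because the learned estimate already lies close to the answer, while the iterative solver guarantees the final accuracy that the emulator alone cannot.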

The exponential growth of computing power has been accompanied by an exponential growth in data volume. This growth represents a big challenge for operational weather and climate predictions [12]. As data movement is very expensive and a bottleneck in performance, ocean models need to be “data-centric”, with the workflow of the model designed to reduce data movement to a minimum. For example, data is conventionally written to discs or tapes after a model simulation, to be retrieved by users afterwards for analysis. A data-centric workflow would instead process data on-the-fly, before it is stored. ML, and in particular unsupervised ML, would be essential in enabling domain scientists to extract the relevant information in such a workflow. However, such a workflow would also result in additional requirements in terms of staff training and the software and hardware infrastructure of weather and climate centres [71].
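
As a minimal illustration of on-the-fly processing, Welford's streaming algorithm computes summary statistics from a stream of values without ever storing the full record; the values below are a hypothetical stand-in for model output:

```python
class OnlineStats:
    """Streaming mean/variance (Welford's algorithm): a minimal sketch of a
    data-centric reduction that processes values as they are produced,
    rather than writing every field to disc for later analysis."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self._m2 = 0.0  # running sum of squared deviations

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self._m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        """Unbiased sample variance of everything seen so far."""
        return self._m2 / (self.n - 1) if self.n > 1 else 0.0

stats = OnlineStats()
for value in [1.0, 2.0, 3.0, 4.0]:  # stand-in for a stream of model output
    stats.update(value)
print(stats.mean, stats.variance)   # 2.5 and 5/3
```

The same pattern extends to more interesting reductions (histograms, extremes, cluster assignments from an unsupervised model) applied while the simulation runs.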

Also of note is the increasing difficulty of extracting scientifically interesting information from the vast amounts of data produced and stored by numerical models. The complexity and sheer size of these data hinder dissemination and analysis, severely hampering efforts to address research goals. This emerging class of problems can be illustrated by the Coupled Model Intercomparison Project (CMIP) ensemble, now in its sixth phase, which is expected to generate an estimated 40,000 TB of climate model data, a 20-fold increase in data volume from the previous phase [86, 84]. Many variables needed for analysis are effectively unavailable due to the difficulty of saving or sharing the data. ML has the capacity to efficiently analyze large datasets, as shown in Sections 2 and 3, but it has also been used to infer, for example, information about sub-surface currents [50, 155], eddy heat fluxes [99], and full 3-dimensional dynamics in CMIP6 [206]. ML in many forms has the potential to be highly valuable for researchers analyzing data that is increasingly large, potentially sparse, and partially unavailable for logistical reasons [85].

4.3 Enforcing physical priors in ML methods

When physical constraints are enforced within ML techniques, this is equivalent to incorporating physical understanding into the applications. In statistical language, we can describe this process as “enforcing physical priors”. ML techniques backed by massive datasets have achieved groundbreaking results in vision, speech, and natural language processing, but they have yet to be widely adopted in physical oceanography, or largely in the physical sciences in general. The ocean is governed by complex phenomena that oceanographers have studied for centuries, and taking advantage of this scientific heritage is one way of helping ML techniques reduce the search space of solutions, i.e. by guiding them using physical theories. This research direction is attracting increasing attention, as it helps constrain ML algorithms to be physically plausible and also facilitates the interpretation of results by domain experts. There is a broad spectrum of techniques to supplement ML with physical constraints [236], of which only the most directly relevant are discussed here.

The simplest way to enforce physical priors is through the loss function used to train the ML model. Concretely, this is done by adding an error term related to the physical constraint that needs to be respected, such as a conservation law. For example, if the output field u in a regression problem needs to be divergence-free, a term penalising ‖∇·u‖² is added to the total loss function to ensure that the divergence of u stays close to zero. This approach has its mathematical roots in the theory of Lagrange multipliers. It can also be seen as a form of regularization, making it more likely that the solutions found generalize well to unseen data. However, adding physical priors as terms in loss functions comes at a price: the different loss terms must be weighted against one another to express which are most important. This weighting problem can be solved using cross-validation techniques, but these can become prohibitive when the number of constraints is high.
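
A minimal sketch of such a penalty term, assuming a 2D velocity field on a regular grid and a hypothetical weighting `lam`, might look as follows (finite-difference divergence, pure Python for clarity):

```python
def divergence(u, v, dx=1.0):
    """Central-difference estimate of du/dx + dv/dy on the grid interior;
    u and v are 2D fields stored as lists of rows, indexed [y][x]."""
    ny, nx = len(u), len(u[0])
    return [[(u[j][i + 1] - u[j][i - 1]) / (2 * dx)
             + (v[j + 1][i] - v[j - 1][i]) / (2 * dx)
             for i in range(1, nx - 1)]
            for j in range(1, ny - 1)]

def divergence_penalty(u, v, lam=0.1):
    """Extra loss term lam * mean(div**2), to be added to the data-misfit
    loss so predicted fields are pushed toward being divergence-free."""
    rows = divergence(u, v)
    flat = [d for row in rows for d in row]
    return lam * sum(d * d for d in flat) / len(flat)

n = 5
u_free = [[0.0] * n for _ in range(n)]  # uniform field: divergence-free
v_free = [[0.0] * n for _ in range(n)]
u_div = [[float(i) for i in range(n)] for _ in range(n)]  # u = x, du/dx = 1
v_div = [[0.0] * n for _ in range(n)]

print(divergence_penalty(u_free, v_free))  # 0.0: no penalty
print(divergence_penalty(u_div, v_div))    # 0.1: lam times unit divergence
```

In a real training loop the total loss would be the data misfit plus this penalty, with `lam` set by cross-validation as discussed above.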

A second strategy that has gained much attention in recent years is enforcing the constraints directly in the mapping function used for learning. This strategy is best suited to NNs, given their flexibility and the rich design choices that enable them to be tailored to specific data. The NN architecture is designed with the physical priors in mind. For example, if we already know that the quantity we want to find is the product of two quantities, we can encode this inside the network by creating two sub-networks whose outputs are multiplied in the last layer [88, 37].
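
A toy sketch of this idea, with hypothetical linear sub-networks standing in for trained networks, could look like:

```python
def linear_layer(x, W, b):
    """A single fully connected linear layer: returns W @ x + b."""
    return [sum(w * xi for w, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def product_network(x, params_a, params_b):
    """Two sub-networks whose scalar outputs are multiplied in the final
    layer, hard-coding the prior that the target is a product of two
    quantities (each sub-network here is a single linear layer for brevity)."""
    a = linear_layer(x, *params_a)[0]
    b = linear_layer(x, *params_b)[0]
    return a * b

# Hypothetical parameters, chosen for illustration: sub-network A computes
# x[0] + 1 and sub-network B computes 2 * x[1].
params_a = ([[1.0, 0.0]], [1.0])
params_b = ([[0.0, 2.0]], [0.0])
print(product_network([3.0, 4.0], params_a, params_b))  # (3 + 1) * (2 * 4) = 32.0
```

In practice each sub-network would be a deep network trained end-to-end; the multiplicative structure itself is the physical prior being enforced.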

While enforcing physical priors has been a very active area of research in the atmospheric community (see Section 1.3), few papers investigating the potential of combining ML and physics can be found in the ocean science literature. We cite some examples here. The authors of [33] reconstructed subgrid eddy momentum forcing using ConvNets and found that a constraint on global momentum conservation is best enforced either by postprocessing the ConvNet’s output or by hardcoding a last layer in the ConvNet that removes the spatial mean of the data. [243] proposes an equation discovery algorithm, namely Relevance Vector Machines (RVM), for ocean eddy parameterizations. A few attempts have been made to forecast ocean variables using a mix of physical models and DL tools, notably in [26], where the authors model an advection-diffusion equation in a DL architecture used to forecast SST; in [82], which tackles the same problem by combining an autoencoder with ideas from Lyapunov analysis; and in [145], where a NN is embedded inside a one-layer quasi-geostrophic numerical model to reduce its bias towards a 3D ocean model.

Enforcing physical priors by solving differential equations with ML techniques is an active research direction that features the development of interesting tools for the ocean community, which remain under-exploited. Physics-Informed Neural Networks (PINNs) [184] are a notable example of a technique that leverages the power of NNs to solve differential equations, such as the incompressible Navier-Stokes equation [126], without a need for mesh generation, which could accelerate model development. Other recent techniques for learning ordinary differential equations, using either NNs [52] or a combination of NNs and physics-based components [183], are a promising line of research at the interface of NNs and differential equations, which to the best of the authors’ knowledge has not yet been applied to ocean modeling.

5 From models to predictions

A basic goal, and a test of the understanding, of a physical process is the ability to predict its behaviour. Weather predictions several days ahead are a major geoscientific success. Such forecasts have improved with the increasing availability of computational power and observational networks, as well as with better algorithms and process understanding [21]. However, predictions of the Earth system on longer timescales remain a major challenge. This is problematic, as predictions often form the basis of decision making. Understanding model error, and combining models with observations, is also at the core of supporting decision makers, as discussed in Section 6.

5.1 Model bias and model error

Bias and error in models are addressed through a systematic process of improvements in our understanding, but the needs of decision support can be immediate. Constraining simulations using observations is the process of data assimilation, covered below in Section 5.2. But where errors are recalcitrant, oceanographers and applied scientists in general use methods of “artificial” error reduction, driven by comparisons against data. A key early example related to the ocean’s role in climate is the use of “q-flux adjustments” or simply flux adjustments [154], where there was a persistent error in evaporative flux from the ocean surface. The adjustment to ameliorate this was a correction to restore energy balance to the coupled system by artificially adding a compensation term. This adjustment method fell into disfavor owing to its blatant “fudge factor” nature [203], although recent studies indicate that “flux adjusted” models continue to exhibit greater predictive skill [228].

When assessing a prediction from a model, the accuracy of the output can be assessed by comparing to a “truth” benchmark. Such a benchmark can, for example, come from observations or from a target model representation of the system. Observations, although mostly incomplete, constitute a best guess. This process can also identify “structural error”, also mentioned in Section 1.3, indicating that the model formulation itself is incorrect. Compared to observations, model outputs can show differences that cannot be attributed solely to differences in initial conditions, but instead reflect errors within the model itself, discussed in Section 5.2 below. Some of these errors can be explained by unresolved scales in the discretized version of numerical models, but model errors can also originate from incomplete physical knowledge. For example, within a sub-gridscale parameterisation the exact physics that need to be represented may be unclear, as discussed in Section 4. Incomplete physical knowledge also leads to uncertainties in the parameters used, for example in the coupling terms between model components. Within model error as a whole there may be a systematic component, which is referred to as model bias.

For post-processing of model output, statistical methods related to ML have been used to correct biases (for example [202, 140], or flux adjustments). Bias correction methods are used frequently in operational weather predictions, with DL playing an ever-increasing role [187, 13, 104]. Using downscaling, as described in Section 1.3, ML can also relate model output to local information, such as very high resolution local topography or available observations, to improve predictions after model simulations have finished. Some of the mapping procedures used for downscaling (called up-scaling within the ML community), such as GANs, even allow for uncertainty quantification [143]. Within climate models, the LRP XAI method has successfully been used to identify key model biases for certain prediction tasks [15], with potential for application to the ocean, although such applications of LRP are still in their infancy.
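
As a minimal, hypothetical illustration of statistical bias correction, a least-squares linear map from model output to observations can be fitted and applied in a few lines; the paired values below are invented for illustration:

```python
def fit_linear_correction(model_vals, obs_vals):
    """Least-squares fit of obs ≈ a * model + b, a minimal stand-in for
    the statistical post-processing used to correct systematic model bias."""
    n = len(model_vals)
    mean_m = sum(model_vals) / n
    mean_o = sum(obs_vals) / n
    cov = sum((m - mean_m) * (o - mean_o)
              for m, o in zip(model_vals, obs_vals))
    var = sum((m - mean_m) ** 2 for m in model_vals)
    a = cov / var
    return a, mean_o - a * mean_m

# Hypothetical hindcast pairs: the model output is systematically offset
# and scaled relative to the observations.
model = [1.0, 2.0, 3.0]
obs = [2.5, 4.5, 6.5]
a, b = fit_linear_correction(model, obs)
corrected = [a * m + b for m in model]
print(a, b, corrected)  # slope 2.0, offset 0.5; corrected values match obs
```

Operational bias correction replaces this linear map with more flexible regressors (including DL), but the structure is the same: learn a mapping from model output to a reference, then apply it to new forecasts.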

5.2 Ocean data assimilation

5.2.1 Data assimilation methods: A brief history and main assumptions

Data assimilation (DA) is the process of constraining a theoretical representation of a system, usually using a numerical or statistical model, using a collection of observations. The results of this process typically include optimized estimates of (1) the time-evolving state of the system (sometimes called the “trajectory”), (2) initial conditions, (3) boundary conditions, and (4) other intrinsic model parameters (e.g. mixing coefficients). The optimization process typically consists of correcting the values of the initial conditions, boundary conditions, and model parameters in order to minimize a selected model-data misfit metric. To use the language of the theory of differential equations, one may think of DA as a set of methods for rigorously identifying which solution, among the family of solutions to a system of differential equations, best satisfies the given constraints.

Although there is a long history of DA in numerical weather prediction stretching across much of the 20th century, oceanographic DA only began in the late 1980s. The first experiments were regional [192], followed a few years later by the ambitious World Ocean Circulation Experiment (WOCE, [240]), and a community was subsequently assembled under the Global Ocean Data Assimilation Experiment (GODAE, [22]). These first DA approaches used in weather and ocean prediction were directly derived using optimal interpolation [100] and were based on strong assumptions, namely that the evolution model is linear and perfect and that the data error distribution is unbiased and well-represented by a Gaussian. In time, DA algorithms evolved to relax some of these assumptions, extending the scope of DA applications to the ocean.

The developments within DA have led to two main sets of techniques: ensemble approaches, of which the ensemble Kalman filter (EnKF) is a standard example, and variational approaches such as four-dimensional variational assimilation (4DVar). Both classes of methods conceptually represent the abstract trajectory of the target system as a probability distribution across possible trajectories. The EnKF constructs an ensemble of forecast states such that the ensemble mean and the sample covariance are expected to be the best estimates. A core assumption is that the ensemble probability distribution can be well-represented by a Gaussian function [83]. The 4DVar method uses a linear model to calculate which perturbations to the initial conditions, boundary conditions, and parameters tend to increase the agreement between the time-evolving state of the model and the observational constraints [58].
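
The EnKF analysis step can be illustrated for a scalar, directly observed state; this is a deliberately minimal sketch (observation perturbations passed in explicitly, and set to zero here for clarity), not an operational implementation:

```python
def enkf_analysis(ensemble, perturbed_obs, obs_var):
    """Stochastic EnKF analysis step for a scalar, directly observed state:
    each member is nudged toward its (perturbed) observation with Kalman
    gain K = P / (P + R), where P is the sample variance of the forecast
    ensemble and R the observation-error variance."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    P = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    K = P / (P + obs_var)
    return [x + K * (y - x) for x, y in zip(ensemble, perturbed_obs)]

forecast = [0.0, 1.0, 2.0]  # forecast ensemble: mean 1.0, variance 1.0
obs = [3.0, 3.0, 3.0]       # observation 3.0; perturbations zeroed for clarity
analysis = enkf_analysis(forecast, obs, obs_var=1.0)
print(analysis)             # [1.5, 2.0, 2.5]: mean pulled halfway to the observation
```

With equal forecast and observation variances the gain is 0.5, so the analysis mean lands halfway between forecast mean and observation; in realistic systems the state and gain are high-dimensional, and localisation and inflation are needed.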

Each of these classes of DA methods is used, in its various flavours, for both global and regional studies [159, 91, 229, 174, 144, 197]. DA is used routinely in both operational forecast and reanalysis mode, and in the framework of several national and international projects. In no particular order, examples include the ECCO (Estimating the Circulation and Climate of the Ocean) project, ECMWF (the European Centre for Medium-Range Weather Forecasts), and the NOAA (National Oceanic and Atmospheric Administration) NCEP (National Centers for Environmental Prediction) Global Ocean Data Assimilation System (GODAS) in the USA.

In idealized comparisons between the two classes of methods, EnKF produces more accurate estimates for shorter assimilation windows, whereas 4DVar produces more accurate estimates when data constraints are sparse. For ocean applications, data is often sparse, making 4DVar attractive [130]. In practice, different DA approaches derived from optimal interpolation, 3DVar, the EnKF, or 4DVar are used [60]. The type depends on the application (e.g. short-term forecast or climate application), the available computing resources, the type of observations that are assimilated, and also likely the historical expertise in each group.

5.2.2 Model errors and ML within data assimilation

Historically, DA techniques have mainly focused on estimating the state of the system, but the estimation of model error in the DA process is increasingly important [47]. Several approaches used to handle model error apply DA frameworks that can be considered ML approaches [216, 55]. The estimation of model errors is particularly important if DA is being used to calculate forecasts over long timescales, i.e. from sub-seasonal to decadal scales. This is of particular importance for ocean forecasts, where timescales are longer than in the atmosphere; DA has been shown to be effective in this context [232].

5.2.3 Data assimilation and ML

Several studies have highlighted the connection between DA and ML [1, 30, 36, 95]. The connection is most direct with 4DVar, in which a function quantifying model-data disagreement (i.e. a “cost function”) is minimized using a gradient descent algorithm, with an adjoint model calculating the gradient. In this perspective, 4DVar is approximately equivalent to training a neural network for regression, because the adjoint model can be seen as equivalent to the gradient backpropagation process [122, 137].
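
This correspondence can be made concrete with a toy scalar model: the adjoint propagates the data misfit backwards through the model steps, exactly as backpropagation carries errors back through a network's layers, and gradient descent on the initial condition recovers the state that matches the observation. All names and magnitudes here are illustrative:

```python
def forward(x0, a, nsteps):
    """Toy linear model: step x -> a * x for nsteps steps."""
    x = x0
    for _ in range(nsteps):
        x = a * x
    return x

def adjoint_gradient(x0, a, nsteps, y):
    """Gradient of J(x0) = (forward(x0) - y)**2 via the adjoint: the misfit
    at the final time is propagated backwards through each model step."""
    adj = 2.0 * (forward(x0, a, nsteps) - y)  # dJ/dx at the final time
    for _ in range(nsteps):
        adj = a * adj                          # adjoint of the linear step
    return adj

def four_d_var(x0, a, nsteps, y, lr=0.1, iters=200):
    """Minimise the model-data misfit over the initial condition x0."""
    for _ in range(iters):
        x0 -= lr * adjoint_gradient(x0, a, nsteps, y)
    return x0

true_x0 = 2.0
y_obs = forward(true_x0, a=1.1, nsteps=3)  # synthetic observation at final time
result = four_d_var(0.0, 1.1, 3, y_obs)
print(result)                               # converges to ~2.0, the true x0
```

For a linear model the adjoint step is simply multiplication by the model coefficient (the transpose, in the multivariate case); in real 4DVar the adjoint of a full nonlinear ocean model plays this role.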

There are several ways ML can be used in combination with a DA framework. First, a data-driven model can be used to emulate a numerical model, partially or totally, to provide the forecast. The objective is then to correct the model error, or to decrease the computational cost [146]. Note that emulation could become instrumental, since DA methods increasingly rely on ensemble runs, which are costly [46]. As DA brings the model and observations close enough together to represent the same physical situation, DA can in principle be used to extend the learning of parametrizations to the learning of improved models directly from observations [38, 34], described further in Section 4. It is still unclear whether observations are too sparse for this approach to succeed within ocean modeling, in particular as the period with dense observations is relatively short compared to the long timescales that are important for ocean dynamics. Another benefit of using an ML emulator arises because most ML tools, such as NNs, are easy to differentiate. Given the structure of NNs (interconnected simple operators) and the libraries used to implement them, computing the gradient of the NN model is straightforward and efficient. This means the computation can be used to efficiently develop tangent linear and adjoint model code, which is required for DA methods such as 4DVar [114]. This is noteworthy because, traditionally, the development of tangent linear and adjoint models has required major efforts from the research community, either by manually coding an adjoint or by the semi-automatic process of algorithmic differentiation (e.g. [102]).

Second, ML can be instrumental in strongly coupled DA, which consists of correcting a coupled system (e.g. ocean-atmosphere) in a unified way. This allows, for example, atmospheric observations to constrain the ocean state and vice versa, which is not the case in uncoupled DA, where only ocean observations are used to constrain the ocean system. Strongly coupled DA is expected to be efficient, but is challenging due to the wide range of temporal and spatial scales involved [181]. Here, ML can be used to relax some strong assumptions of the DA algorithm (e.g. that errors follow a Gaussian distribution), or to isolate relevant scales in observational and model states: an ML process can learn to compute the DA correction in an optimal space. Some examples of this approach have been developed [7, 152, 105, 87], but so far none have been applied to realistic ocean DA setups.

Finally, ML can help deal with the mass of available observations. In Section 2 we discussed how ML can help derive new types of products from observations. These new products are good candidates for inclusion in a DA system. ML can also be used to provide more accurate and/or faster observation operators, for example to emulate satellite observations [34].

6 Discussion: towards a new synthesis of observations, theory, modeling, and prediction in ocean sciences using ML

6.1 The need for transparent ML

To increase confidence in the use of ML, stepping out of the “black box” is advisable. Towards this, it is very important that ML methods be transparent: a transparent ML application is one where the source of skill is known, or, put differently, where it is known why the ML came to its conclusion. Possibly the largest hurdles for ML adoption are a lack of trust and the difficulty of generalization. The two are linked: if generalization is not reached, trust is certainly not merited for ML applications within oceanography. Generalization refers to a model’s ability to properly adapt to new, previously unseen data. Within oceanography and beyond, the ideal generalization would come from the ML application learning the underlying physics. With a lack of good data coverage, the possibilities of underspecification [62] and shortcut learning [96] are important to keep in mind, where a model can seemingly perform well, for example in the current climate, but will fail in a future scenario because something physical was not learned. These issues are ubiquitous and not unique to oceanography or to Earth science, with a call for ‘physics-informed’ ML [189]. Accordingly, the field of ML has developed, and is developing, methods to address these issues, such as “few-shot learning” and “transfer learning”. If the uncertainties associated with an ML application could be reliably quantified and accounted for during training, this could also increase confidence in the model, but assessing the reliability may face challenges similar to underspecification. Recent work shows promising progress toward this by learning a probability distribution of outcomes that can be stochastically sampled [75, 106]. Other methods, such as regularisation, invariances, and dimensionality reduction, are also powerful tools to increase generalization skill. For climate applications, a key issue when training ML applications is that the system they are trained on is largely non-stationary. This complicates the problem of generalization even further, but ML methods have demonstrated that good generalization skill in a non-stationary context is possible [179]. Increasingly, the ML community is suggesting a focus on using IAI [195], driven among other things by the consistent racial and gender bias revealed in DL applications. With the ability to interpret the ML model itself, and intuitively discern whether it is meaningful, the danger of introducing such bias is likely reduced dramatically. Similarly, XAI methods, for example for NNs, that retrospectively explain the source of ML predictive skill can also help inspire confidence [160, 221, 78]. XAI methods such as layerwise relevance propagation (LRP [178, 9]) have been gaining traction within atmospheric [16, 15, 39, 221] and ocean applications, but making their application explicitly appropriate to oceanography, and indeed the physical sciences in general, may require targeted method development.

6.2 Decision support

There is a need for accurate and actionable information about the ocean for a wide range of decision making. As noted in Section 5.1, the need for actionable predictions and decision support can short-circuit the scientific process of error elimination, because the information may have users with an immediate need: for example, decisions on shorelines ranging from building seawalls and issuing housing permits to setting insurance premiums. ML may play a role in bridging the gap between what model-based predictions can provide and what users wish to know. The role of data-driven methods could be particularly important in filling the gaps where theory and models underspecify the system, potentially leaving considerable uncertainty, as noted in Section 5.2.

The reliable quantification of uncertainties is often essential to support decision making. However, uncertainty quantification is often difficult for conventional approaches used in ocean science, because model errors cannot be described by physical equations or physical reasoning in most applications, and errors are often noisy and non-linear. On the other hand, model error can often be diagnosed against a reference truth, such as observations or target model simulations. ML can therefore be useful for the quantification of uncertainties, in particular as datasets from different sources, on different reference grids and for different variables, can be fused and compared using ML techniques. For example, ML can be used to post-process ensemble simulations [202], and Bayesian ML techniques can learn the uncertainty quantification together with the ML tool. In addition, targeted loss function design could help target, and lessen, areas of uncertainty important for specific decisions. In an ML context, models can be calibrated or tuned toward a particular loss function [119], and for specific decision support those metrics could be exactly the ones required for the particular application. An open area of research remains in relating the results obtained from different calibrations (loss functions) of the same system trained on the same data.

ML can help map model data and observations to predict or detect events for which we cannot provide a useful physical representation of the interactions. This could, for example, be a mapping from observational data, such as a time series at a specific location or observations from a buoy, to large-scale model data, with the goal of making customised predictions of surface waves and local wind; such data could, for example, be used for a sailing competition. Tools of this kind based on ML could become essential for decision support, for example when used to predict sea levels [230]. ML-based mapping tools could also help inform where more observational data is needed, for example when deciding where to sample on a cruise or where to send autonomous platforms. To date, satellite images are largely used for this purpose, but added guidance from ML techniques could be very valuable, particularly if sub-surface observations are the target [155, 49, 206]. ML may eventually be used to support observational campaigns in near-real time by interactively connecting networks of non-autonomous and autonomous observing platforms (e.g. gliders) to decision planning systems. These systems can take environmental conditions, target observations, and task scheduling into account. The vision is a “cyberinfrastructure” that can maximize the spatiotemporal coverage of the observations without a specific need for human intervention. The potential of such observational planning and adjustment systems is being explored by international initiatives such as the Southern Ocean Observing System (SOOS). Similarly, for planning legislation, knowledge of what lies within a nation’s marine area, and how this may connect to the surrounding ocean, can be very valuable. Here, ML has been used to provide actionable information [205], as the ocean does not adhere to borders drawn by humans.

Beyond DL methods, the calibration of parameters is very important, as many parameters within atmosphere and ocean models cannot be validated within their physical uncertainty range and need to be tuned [222, 54]. Given this physical uncertainty, using ML, and DL in particular, will likely be very valuable, as noted in Section 4. If successful, such breakthroughs could help inform a wide range of decisions, including those based on climate models such as CMIP and beyond. This is particularly the case for integrations on seasonal and longer timescales, due to the longer timescales active within the ocean.

An important component of supporting decision makers is communication. Communicating effectively between the people making decisions and oceanographers can pose a problem: oceanographers need to be aware of what information is useful and how to provide it, while decision makers may have intimate knowledge of the problem at hand but not of what the available tools can address. While this may seem trivial, improving this line of communication is an important component of increasing the utility of oceanographic work.

6.3 Challenges and opportunities

In this review, we have highlighted some of the many challenges within observational, computational, and theoretical oceanography where ML offers an exciting opportunity to improve the speed and efficiency of conventional work, and also to explore completely new avenues. As a merger of two distinct fields, there is ample opportunity to incorporate powerful, established ML methods that are largely new to oceanography. While not without risk, the potential benefits of ML methods are creating increasing interest in the field. This review has presented some of the challenges and opportunities involved in leveraging ML techniques to improve the modelling, observing, fundamental understanding, and prediction of the ocean system.

ML applications fundamentally rely on the data available for learning, and here the ocean presents a unique challenge. The important timescales in the ocean range from seconds to millennia, with strong interactions between processes across those scales: for example, a wind gust can trigger a phytoplankton bloom. Observations are largely sparse, noisy, and unbalanced, and very few observational time series span more than a few decades. A general problem with models of the ocean, whether ML-derived or more conventional, is that the system is highly non-stationary. With climate change, the mean state and its variance are liable to change, and a model trained on today’s data may not be general enough to accurately represent an ocean in a warmer climate. Other components of the Earth system, such as land or atmospheric models, or GFD in general, face similar challenges, but they are exacerbated within oceanography by the lack of spatial and temporal observational coverage.

ML offers many avenues with which the challenges listed above could be tackled. For example, for instantaneous processes (such as radiative transfer) or small spatial scale problems (for example, eddy detection), a cross-validation approach with an associated independent test dataset could be fruitful; indeed, cross-validation is widely advisable. On longer timescales, methods incorporating physical constraints would likely offer better results. Hybrid approaches combining physics-driven models and ML models are becoming increasingly useful for aiding the development of ocean models and increasing their computational efficiency on HPC platforms. Such ‘Neural Earth System Models’ (NESYM [124]) can, for example, use ML for the parameterization of sub-gridscale processes. Pairings of ML and conventional methods also show great promise for improving signal-to-noise ratios during training while anchoring ML learning to a stronger physical foundation.


Both oceanography and ML are evolving quickly, and the computational tools available to implement ML techniques are becoming increasingly accessible. With ample enthusiasm for ML applications to oceanographic problems, it is important to keep in mind that ML as a field is largely not concerned with the physical sciences, and approaching ML applications with caution and care is necessary to ensure meaningful results. The importance of increasing trust in ML methods also highlights a need for collaboration between oceanographers and ML domain experts; promoting such collaboration can help develop methods tailored to the needs of oceanographers.

This review has outlined the recent advances, and some remaining challenges, associated with ML adoption within oceanography. As with any promising new set of methods, alongside the ample opportunity, ML adoption also comes with risk. However, exploring the full potential and charting the limits of ML within oceanography is crucial and deserves considerable attention from the research community.


MS and VB acknowledge funding from the Cooperative Institute for Modeling the Earth System, Princeton University, under Award NA18OAR4320123 from the National Oceanic and Atmospheric Administration, U.S. Department of Commerce. RL and VB acknowledge funding from the French Government’s Make Our Planet Great Again program managed by the Agence National de Recherche under the “Investissements d’avenir” award ANR-17-MPGA-0010.

DJ acknowledges funding from a UKRI Future Leaders Fellowship (reference MR/T020822/1).

PD gratefully acknowledges funding from the Royal Society for his University Research Fellowship as well as the ESiWACE, MAELSTROM and AI4Copernicus under Horizon 2020 and the European High-Performance Computing Joint Undertaking (JU; grant agreement No 823988, 955513 and 101016798). The JU received funding from the European High-Performance Computing Joint Undertaking (JU) under grant agreement No 955513. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and United Kingdom, Germany, Italy, Luxembourg, Switzerland, Norway.

JB acknowledges funding from the project SFE(#2700733) of the Norwegian Research Council. Many thanks to Laurent Bertino (NERSC) for the insightful discussion about data assimilation.

The authors also wish to thank Youngrak Cho for invaluable help with Fig. 1 and 3.


  • [1] Henry DI Abarbanel, Paul J Rozdeba, and Sasha Shirman. Machine learning: Deepest learning as statistical data assimilation problems. Neural computation, 30(8):2025–2055, 2018.
  • [2] Cleveland Abbe. The physical basis of long-range weather forecasts. Monthly Weather Review, 29(12):551–561, 1901.
  • [3] Jan Ackmann, Peter D. Dueben, Tim N. Palmer, and Piotr K. Smolarkiewicz. Machine-learned preconditioners for linear solvers in geophysical fluid flows, 2020.
  • [4] Rilwan Adewoyin, Peter Dueben, Peter Watson, Yulan He, and Ritabrata Dutta. Tru-net: A deep learning approach to high resolution prediction of rainfall, 2021.
  • [5] N. Agarwal, D. Kondrashov, P. Dueben, E. Ryzhov, and P. Berloff. A comparison of data-driven approaches to build low-dimensional ocean models. Submitted to JAMES, 2021.
  • [6] Aida Alvera-Azcárate, Alexander Barth, Gaëlle Parard, and Jean-Marie Beckers. Analysis of SMOS sea surface salinity data using DINEOF. Remote Sensing of Environment, 180:137–145, 2016.
  • [7] Maddalena Amendola, Rossella Arcucci, Laetitia Mottet, Cesar Quilodran Casas, Shiwei Fan, Christopher Pain, Paul Linden, and Yi-Ke Guo. Data assimilation in the latent space of a neural network, 2020.
  • [8] S. Aoki, Kaihe Yamazaki, Daisuke Hirano, K. Katsumata, K. Shimada, Y. Kitade, H. Sasaki, and H. Murase. Reversal of freshening trend of Antarctic Bottom Water in the Australian-Antarctic Basin during the 2010s. Scientific Reports, 10:14415, September 2020.
  • [9] Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLOS ONE, 10(7):1–46, 07 2015.
  • [10] V. Balaji. Climbing down Charney’s ladder: machine learning and the post-Dennard era of computational climate science. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 379(2194):20200085, April 2021.
  • [11] V. Balaji, E. Maisonnave, N. Zadeh, B. N. Lawrence, J. Biercamp, U. Fladrich, G. Aloisio, R. Benson, A. Caubel, J. Durachta, M.-A. Foujols, G. Lister, S. Mocavero, S. Underwood, and G. Wright. CPMIP: measurements of real computational performance of Earth system models in CMIP6. Geoscientific Model Development, 10(1):19–34, 2017.
  • [12] V. Balaji, K. E. Taylor, M. Juckes, B. N. Lawrence, P. J. Durack, M. Lautenschlager, C. Blanton, L. Cinquini, S. Denvil, M. Elkington, F. Guglielmo, E. Guilyardi, D. Hassell, S. Kharin, S. Kindermann, S. Nikonov, A. Radhakrishnan, M. Stockhause, T. Weigel, and D. Williams. Requirements for a global data infrastructure in support of CMIP6. Geoscientific Model Development, 11(9):3659–3680, 2018.
  • [13] Ágnes Baran, Sebastian Lerch, Mehrez El Ayari, and Sándor Baran. Machine learning for total cloud cover prediction. Neural Computing and Applications, 33(7):2605–2620, Jul 2020.
  • [14] Lorena A Barba and Rio Yokota. How will the fast multipole method fare in the exascale era. SIAM News, 46(6):1–3, 2013.
  • [15] Elizabeth A Barnes, Kirsten Mayer, Benjamin Toms, Zane Martin, and Emily Gordon. Identifying opportunities for skillful weather prediction with interpretable neural networks. arXiv preprint arXiv:2012.07830, 2020.
  • [16] Elizabeth A. Barnes, Benjamin Toms, James W. Hurrell, Imme Ebert-Uphoff, Chuck Anderson, and David Anderson. Indicator Patterns of Forced Change Learned by an Artificial Neural Network. Journal of Advances in Modeling Earth Systems, 12(9), 2020.
  • [17] T. P. Barnett and R. Preisendorfer. Origins and Levels of Monthly and Seasonal Forecast Skill for United States Surface Air Temperatures Determined by Canonical Correlation Analysis. Monthly Weather Review, 115(9):1825–1850, September 1987.
  • [18] Alexander Barth, Aida Alvera-Azcárate, Matjaz Licer, and Jean-Marie Beckers. DINCAE 1.0: a convolutional neural network with error estimates to reconstruct sea surface temperature satellite observations. Geoscientific Model Development, 13(3):1609–1622, 2020.
  • [19] P. Bauer, P.D. Dueben, T. Hoefler, T. Quintino, T.C. Schulthess, and N.P. Wedi. The digital revolution of earth-system science. Nat Comput Sci, 1:104 – 113, 2021.
  • [20] Peter Bauer, Tiago Quintino, Nils Wedi, Antonino Bonanni, Marcin Chrust, Willem Deconinck, Michail Diamantakis, Peter Düben, Stephen English, Johannes Flemming, Paddy Gillies, Ioan Hadade, James Hawkes, Mike Hawkins, Olivier Iffrig, Christian Kühnlein, Michael Lange, Peter Lean, Olivier Marsden, Andreas Müller, Sami Saarinen, Domokos Sarmany, Michael Sleigh, Simon Smart, Piotr Smolarkiewicz, Daniel Thiemert, Giovanni Tumolo, Christian Weihrauch, Cristiano Zanna, and Pedro Maciel. The ECMWF Scalability Programme: Progress and plans. ECMWF Technical Memorandum 857, February 2020.
  • [21] Peter Bauer, Alan Thorpe, and Gilbert Brunet. The quiet revolution of numerical weather prediction. Nature, 525(7567):47–55, 2015.
  • [22] Michael J. Bell, Michel Lefèbvre, Pierre-Yves Le Traon, Neville Smith, and Kirsten Wilmer-Becker. Godae: The global ocean data assimilation experiment. Oceanography, 22(3), September 2009.
  • [23] Rasmus E Benestad, Inger Hanssen-Bauer, and Deliang Chen. Empirical-Statistical Downscaling. World Scientific, 2008.
  • [24] Yoshua Bengio, Ian Goodfellow, and Aaron Courville. Deep Learning, volume 1. MIT Press, Cambridge, MA, USA, 2017.
  • [25] Tom Beucler, Michael Pritchard, Stephan Rasp, Jordan Ott, Pierre Baldi, and Pierre Gentine. Enforcing Analytic Constraints in Neural Networks Emulating Physical Systems. Physical Review Letters, 126(9):098302, March 2021. Publisher: American Physical Society.
  • [26] Emmanuel de Bezenac, Arthur Pajot, and Patrick Gallinari. Deep learning for physical processes: incorporating prior scientific knowledge. Journal of Statistical Mechanics: Theory and Experiment, 2019(12):124009, December 2019.
  • [27] Gérard Biau and Erwan Scornet. A random forest guided tour. Test, 25(2):197–227, 2016.
  • [28] Christopher M Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
  • [29] Vilhelm Bjerknes. Das Problem der Wettervorhersage, betrachtet vom Standpunkte der Mechanik und der Physik. Meteor. Z., 21:1–7, 1904.
  • [30] Marc Bocquet, Julien Brajard, Alberto Carrassi, and Laurent Bertino. Data assimilation as a learning tool to infer ordinary differential equation representations of dynamical models. Nonlinear Processes in Geophysics, 26(3):143–162, 2019.
  • [31] Lars Boehme and Isabella Rosso. Classifying oceanographic structures in the Amundsen Sea, Antarctica. Geophysical Research Letters, 48(5):e2020GL089412, 2021.
  • [32] Thomas Bolton and Laure Zanna. Applications of deep learning to ocean data inference and subgrid parameterization. Journal of Advances in Modeling Earth Systems, 11(1):376–399, 2019.
  • [34] Massimo Bonavita and Patrick Laloyaux. Machine learning for model error inference and correction. Journal of Advances in Modeling Earth Systems, page e2020MS002232, 2020.
  • [35] Sandrine Bony, Bjorn Stevens, Isaac H Held, John F Mitchell, Jean-Louis Dufresne, Kerry A Emanuel, Pierre Friedlingstein, Stephen Griffies, and Catherine Senior. Carbon dioxide and climate: perspectives on a scientific assessment. In Climate Science for Serving Society, pages 391–413. Springer, 2013.
  • [36] Julien Brajard, Alberto Carrassi, Marc Bocquet, and Laurent Bertino. Connections between data assimilation and machine learning to emulate a numerical model. In NCAR, editor, proceedings of the 9th International Workshop on Climate informatics. NCAR, 2019.
  • [37] Julien Brajard, Alberto Carrassi, Marc Bocquet, and Laurent Bertino. Combining data assimilation and machine learning to emulate a dynamical model from sparse and noisy observations: A case study with the Lorenz 96 model. Journal of Computational Science, 44:101171, 2020.
  • [38] Julien Brajard, Alberto Carrassi, Marc Bocquet, and Laurent Bertino. Combining data assimilation and machine learning to infer unresolved scale parametrisation. Phil. Trans. R. Soc. A, 379, 2021.
  • [39] Noah D. Brenowitz, Tom Beucler, Michael Pritchard, and Christopher S. Bretherton. Interpreting and Stabilizing Machine-Learning Parametrizations of Convection. Journal of the Atmospheric Sciences, 77(12):4357–4375, dec 2020.
  • [40] Noah D Brenowitz and Christopher S Bretherton. Prognostic validation of a neural network unified physics parameterization. Geophysical Research Letters, 45(12):6289–6298, 2018.
  • [41] Christopher S. Bretherton, Catherine Smith, and John M. Wallace. An Intercomparison of Methods for Finding Coupled Patterns in Climate Data. Journal of Climate, 5(6):541–560, June 1992.
  • [42] Steven L. Brunton, Joshua L. Proctor, and J. Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15):3932–3937, April 2016.
  • [43] Kirk Bryan. A numerical method for the study of the circulation of the world ocean. Journal of Computational Physics, 135(2):154–169, 1997.
  • [44] Kirk Bryan and Michael D Cox. A nonlinear model of an ocean driven by wind and differential heating: Part i. description of the three-dimensional velocity and density fields. Journal of Atmospheric Sciences, 25(6):945–967, 1968.
  • [45] Jared L. Callaham, James V. Koch, Bingni W. Brunton, J. Nathan Kutz, and Steven L. Brunton. Learning dominant physical processes with data-driven balance models. Nature Communications, 12(1), Feb 2021.
  • [46] Alberto Carrassi, Marc Bocquet, Laurent Bertino, and Geir Evensen. Data assimilation in the geosciences: An overview of methods, issues, and perspectives. Wiley Interdisciplinary Reviews: Climate Change, 9(5):e535, 2018.
  • [47] Alberto Carrassi and Stéphane Vannitsem. Accounting for model error in variational data assimilation: A deterministic formulation. Monthly Weather Review, 138(9):3369–3386, 2010.
  • [48] David Cartwright. On The Origins Of Knowledge Of The Sea Tides From Antiquity To The Thirteenth Century. Earth Sciences History, 20(2):105–126, November 2007.
  • [49] Christopher Chapman and Anastase Alexandre Charantonis. Reconstruction of Subsurface Velocities From Satellite Observations Using Iterative Self-Organizing Maps. IEEE Geoscience and Remote Sensing Letters, 14(5):617–620, May 2017.
  • [51] Christopher Chapman, Mary-Anne Lea, Amelie Meyer, Jean-Baptiste Sallée, and Mark Hindell. Defining Southern Ocean fronts and their influence on biological and physical processes in a changing climate. Nature Climate Change, 10:1–11, February 2020.
  • [52] Ricky TQ Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. arXiv preprint arXiv:1806.07366, 2018.
  • [53] Michael Chui, James Manyika, Mehdi Miremadi, Nicolaus Henke, Rita Chung, Pieter Nel, and Sankalp Malhotra. Notes from the AI frontier: Insights from hundreds of use cases. Discussion Paper, McKinsey Global Institute, 2018.
  • [54] Emmet Cleary, Alfredo Garbuno-Inigo, Shiwei Lan, Tapio Schneider, and Andrew M. Stuart. Calibrate, emulate, sample. Journal of Computational Physics, 424:109716, 2021.
  • [55] Tadeo J Cocucci, Manuel Pulido, Magdalena Lucini, and Pierre Tandeo. Model error covariance estimation in particle and ensemble Kalman filters using an online expectation–maximization algorithm. Quarterly Journal of the Royal Meteorological Society, 147(734):526–543, 2021.
  • [56] Fenwick C. Cooper and Laure Zanna. Optimisation of an idealised ocean model, stochastic parameterisation of sub-grid eddies. Ocean Modelling, 88:38–53, 2015.
  • [57] Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.
  • [58] Philippe Courtier, J-N Thépaut, and Anthony Hollingsworth. A strategy for operational implementation of 4d-var, using an incremental approach. Quarterly Journal of the Royal Meteorological Society, 120(519):1367–1387, 1994.
  • [59] Fleur Couvreux, Frédéric Hourdin, Danny Williamson, Romain Roehrig, Victoria Volodina, Najda Villefranque, Catherine Rio, Olivier Audouin, James Salter, Eric Bazile, Florent Brient, Florence Favot, Rachel Honnert, Marie-Pierre Lefebvre, Jean-Baptiste Madeleine, Quentin Rodier, and Wenzhe Xu. Process-based climate model development harnessing machine learning: I. A calibration tool for parameterization improvement. Earth and Space Science Open Archive, July 2020.
  • [60] James Cummings, Laurent Bertino, Pierre Brasseur, Ichiro Fukumori, Masafumi Kamachi, Matthew J Martin, Kristian Mogensen, Peter Oke, Charles Emmanuel Testut, Jacques Verron, et al. Ocean data assimilation systems for godae. Oceanography, 22(3):96–109, 2009.
  • [61] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303–314, 1989.
  • [62] Alexander D’Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D. Hoffman, Farhad Hormozdiari, Neil Houlsby, Shaobo Hou, Ghassen Jerfel, Alan Karthikesalingam, Mario Lucic, Yian Ma, Cory McLean, Diana Mincu, Akinori Mitani, Andrea Montanari, Zachary Nado, Vivek Natarajan, Christopher Nielson, Thomas F. Osborne, Rajiv Raman, Kim Ramasamy, Rory Sayres, Jessica Schrouff, Martin Seneviratne, Shannon Sequeira, Harini Suresh, Victor Veitch, Max Vladymyrov, Xuezhi Wang, Kellie Webster, Steve Yadlowsky, Taedong Yun, Xiaohua Zhai, and D. Sculley. Underspecification presents challenges for credibility in modern machine learning, 2020.
  • [63] Mike Davis. Late Victorian Holocausts. Verso, 2001.
  • [64] Margaret Deacon. Scientists and the Sea, 1650–1900: A Study of Marine Science. Routledge, April 2018.
  • [65] Anna Denvil-Sommer, Marion Gehlen, Mathieu Vrac, and Carlos Mejia. LSCE-FFNN-v1: a two-step neural network model for the reconstruction of surface ocean pCO2 over the global ocean. Geoscientific Model Development, 12(5):2091–2105, 2019.
  • [66] Damien Desbruyères, Léon Chafik, and Guillaume Maze. A shift in the ocean circulation has warmed the subpolar North Atlantic Ocean since 2016. Communications Earth & Environment, 2, February 2021.
  • [67] Chris Ding and Xiaofeng He. K-means clustering via principal component analysis. In Proceedings of the Twenty-First International Conference on Machine Learning, ICML ’04, page 29, New York, NY, USA, 2004. Association for Computing Machinery.
  • [68] Hui Ding, Matthew Newman, Michael A. Alexander, and Andrew T. Wittenberg. Diagnosing Secular Variations in Retrospective ENSO Seasonal Forecast Skill Using CMIP5 Model-Analogs. Geophysical Research Letters, 46(3):1721–1730, 2019.
  • [69] Keith W. Dixon, John R. Lanzante, Mary Jo Nath, Katharine Hayhoe, Anne Stoner, Aparna Radhakrishnan, V. Balaji, and Carlos F. Gaitán. Evaluating the stationarity assumption in statistically downscaled climate projections: is past performance an indicator of future results? Climatic Change, pages 1–14, 2016.
  • [70] H. M. Van Den Dool. Searching for analogues, how long must we wait? Tellus A, 46(3):314–324, 1994.
  • [71] Peter Düben, Umberto Modigliani, Alan Geer, Stephan Siemen, Florian Pappenberger, Peter Bauer, Andy Brown, Martin Palkovic, Baudouin Raoult, Nils Wedi, and Vasileios Baousis. Machine learning at ECMWF: A roadmap for the next 10 years. ECMWF Technical Memorandum, January 2021.
  • [72] P. D. Dueben and P. Bauer. Challenges and design choices for global weather and climate models based on machine learning. Geoscientific Model Development, 11(10):3999–4009, 2018.
  • [73] Peter D. Dueben and T. N. Palmer. Benchmark tests for numerical weather forecasts on inexact hardware. Monthly Weather Review, 142(10):3809 – 3829, 01 Oct. 2014.
  • [74] Pierre Maurice Marie Duhem. La Théorie Physique : Son Objet Et Sa Structure. Chevalier & Rivière, 1906.
  • [75] Oliver R. A. Dunbar, Alfredo Garbuno-Inigo, Tapio Schneider, and Andrew M. Stuart. Calibration and Uncertainty Quantification of Convective Parameters in an Idealized GCM. arXiv:2012.13262 [math, stat], December 2020.
  • [76] D. I. Duncan, P. Eriksson, S. Pfreundschuh, C. Klepp, and D. C. Jones. On the distinctiveness of observed oceanic raindrop distributions. Atmospheric Chemistry and Physics, 19(10):6969–6984, 2019.
  • [77] Dale Durran. Numerical Methods for Fluid Dynamics With Applications to Geophysics, volume 32. Springer-Verlag, 01 2010.
  • [78] Imme Ebert-Uphoff and Kyle Hilburn. Evaluation, tuning and interpretation of neural networks for working with images in meteorological applications. Bulletin of the American Meteorological Society, pages 1 – 47, 31 Aug. 2020.
  • [79] Carsten Eden and Dirk Olbers. Why western boundary currents are diffusive: A link between bottom pressure torque and bolus velocity. Ocean Modelling, 32(1-2):14–24, 2010.
  • [80] Paul Edwards. A vast machine: computer models, climate data, and the politics of global warming. The MIT Press, 2010.
  • [81] V.W. Ekman. On the influence of the Earth’s rotation on ocean currents. Arch. Math. Astron. Phys., 2(11), 1905.
  • [82] N Benjamin Erichson, Michael Muehlebach, and Michael W Mahoney. Physics-informed autoencoders for Lyapunov-stable fluid flow prediction. arXiv preprint arXiv:1905.10866, 2019.
  • [83] Geir Evensen. Sequential data assimilation with a nonlinear quasi-geostrophic model using monte carlo methods to forecast error statistics. Journal of Geophysical Research: Oceans, 99(C5):10143–10162, 1994.
  • [84] Veronika Eyring, Sandrine Bony, Gerald Meehl, C. Senior, B. Stevens, Ronald Stouffer, and K. Taylor. Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organisation. Geoscientific Model Development Discussions, 8:10539–10583, December 2015.
  • [85] Veronika Eyring, Peter Cox, Gregory Flato, Peter Gleckler, Gab Abramowitz, Peter Caldwell, William Collins, Bettina Gier, Alex Hall, Forrest Hoffman, George Hurtt, Alexandra Jahn, Chris Jones, Stephen Klein, John Krasting, Lester Kwiatkowski, Ruth Lorenz, Eric Maloney, Gerald Meehl, and Mark Williamson. Taking climate model evaluation to the next level. Nature Climate Change, 9:102–110, 02 2019.
  • [86] Veronika Eyring, Peter J Gleckler, Christoph Heinze, Ronald J Stouffer, Karl E Taylor, V Balaji, Eric Guilyardi, Sylvie Joussaume, Stephan Kindermann, Bryan N Lawrence, Gerald A Meehl, Mattia Righi, and Dean N Williams. Towards improved and more routine Earth system model evaluation in CMIP. Earth System Dynamics, 7(4):813–830, 2016.
  • [87] Ronan Fablet, Bertrand Chapron, Lucas Drumetz, Etienne Mémin, Olivier Pannekoucke, and François Rousseau. Learning variational data assimilation models and solvers. arXiv preprint arXiv:2007.12941, 2020.
  • [88] Ronan Fablet, Said Ouala, and Cedric Herzet. Bilinear residual neural network for the identification and forecasting of dynamical systems. arXiv preprint arXiv:1712.07003, 2017.
  • [89] Raffaele Ferrari, James C McWilliams, Vittorio M Canuto, and Mikhail Dubovikov. Parameterization of eddy fluxes near oceanic boundaries. Journal of Climate, 21(12):2770–2789, 2008.
  • [90] Eunice Foote. Circumstances affecting the heat of the sun’s rays: Art. XXXI. American Journal of Science and Arts, XXII/no. LXVI:357–359, 1856.
  • [91] G Forget, J M Campin, P Heimbach, C N Hill, R M Ponte, and C Wunsch. ECCO version 4: an integrated framework for non-linear inverse modeling and global ocean state estimation. Geoscientific Model Development, 8(10):3071–3104, 2015.
  • [92] B Fox-Kemper, S Bachman, B Pearson, and S Reckinger. Principles and advances in subgrid modelling for eddy-rich simulations. Clivar Exchanges, 19(2):42–46, 2014.
  • [93] Rachel Furner, Peter Haynes, Dave Munday, Brooks Paige, Daniel C. Jones, and Emily Shuckburgh. Sensitivity analysis of a data-driven model of ocean temperature. Geoscientific Model Development, 2021.
  • [94] Alberto Naveira Garabato. A perspective on the future of physical oceanography. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 370:5480–5511, December 2012.
  • [95] AJ Geer. Learning earth system models from observations: machine learning or data assimilation? Philosophical Transactions of the Royal Society A, 379(2194):20200089, 2021.
  • [96] Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A. Wichmann. Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11):665–673, Nov 2020.
  • [97] Peter R. Gent and James C. Mcwilliams. Isopycnal mixing in ocean circulation models. Journal of Physical Oceanography, 20(1):150 – 155, 01 Jan. 1990.
  • [98] P. Gentine, M. Pritchard, S. Rasp, G. Reinaudi, and G. Yacalis. Could machine learning break the convection parameterization deadlock? Geophysical Research Letters, 45(11):5742–5751, 2018.
  • [99] Tom George, Georgy Manucharyan, and Andrew Thompson. Deep learning to infer eddy heat fluxes from sea surface height patterns of mesoscale turbulence, November 2019.
  • [100] Michael Ghil and Paola Malanotte-Rizzoli. Data assimilation in meteorology and oceanography. Advances in Geophysics, 33:141–266, 1991.
  • [101] Subimal Ghosh and P. P. Mujumdar. Statistical downscaling of GCM simulations to streamflow using relevance vector machine. Advances in Water Resources, 31(1):132–146, January 2008.
  • [102] Ralf Giering and Thomas Kaminski. Recipes for adjoint code construction. ACM Transactions on Mathematical Software (TOMS), 24(4):437–474, 1998.
  • [103] Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. arXiv preprint arXiv:1406.2661, 2014.
  • [104] Peter Groenquist, Chengyuan Yao, Tal Ben-Nun, Nikoli Dryden, Peter Dueben, Shigang Li, and Torsten Hoefler. Deep learning for post-processing ensemble weather forecasts. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 379(2194):20200092, 2021.
  • [105] Ian Grooms. Analog ensemble data assimilation and a method for constructing analogs with variational autoencoders. Quarterly Journal of the Royal Meteorological Society, 147(734):139–149, 2021.
  • [106] Arthur Guillaumin and Laure Zanna. Stochastic Deep Learning parameterization of Ocean Momentum Forcing. Earth and Space Science Open Archive, August 2021.
  • [107] Sébastien Guimbard, Jérôme Gourrion, Marcos Portabella, Antonio Turiel, Carolina Gabarró, and Jordi Font. Smos semi-empirical ocean forward model adjustment. IEEE transactions on geoscience and remote sensing, 50(5):1676–1687, 2012.
  • [108] Gurvan Madec, Romain Bourdallé-Badie, Jérôme Chanut, Emanuela Clementi, Andrew Coward, Christian Ethé, Doroteaciro Iovino, Dan Lea, Claire Lévy, Tomas Lovato, Nicolas Martin, Sébastien Masson, Silvia Mocavero, Clément Rousset, Dave Storkey, Martin Vancoppenolle, Simon Müeller, George Nurser, Mike Bell, and Guillaume Samson. NEMO ocean engine, October 2019.
  • [109] T. Gysi, C. Osuna, O. Fuhrer, M. Bianco, and T. C. Schulthess. Stella: a domain-specific tool for structured grid methods in weather and climate models. In SC ’15: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–12, 2015.
  • [110] Thomas Haine. What did the Viking discoverers of America know of the North Atlantic environment? Weather, 63(3):60–65, 2008.
  • [111] Yoo-Geun Ham, Jeong-Hwan Kim, and Jing-Jia Luo. Deep learning for multi-year ENSO forecasts. Nature, 573(7775):568–572, 2019.
  • [112] K. Hanawa and L. Talley. Ocean Circulation and Climate, International Geophysics Series, pages 373–386. Academic Press, Cambridge, MA, USA, 2001.
  • [113] Sam Hatfield, Matthew Chantry, Peter Düben, and Tim Palmer. Accelerating high-resolution weather models with deep-learning hardware. In Proceedings of the Platform for Advanced Scientific Computing Conference, PASC ’19, New York, NY, USA, 2019. Association for Computing Machinery.
  • [114] Samuel Edward Hatfield, Matthew Chantry, Peter Dominik Dueben, Philippe Lopez, Alan Jon Geer, and Tim N Palmer. Building tangent-linear and adjoint models for data assimilation with neural networks. Earth and Space Science Open Archive, page 34, 2021.
  • [115] W Hazeleger and SS Drijfhout. Eddy subduction in a model of the subtropical gyre. Journal of Physical Oceanography, 30(4):677–695, 2000.
  • [116] Isaac Held. The gap between simulation and understanding in climate modeling. Bulletin of the American Meteorological Society, 86(11):1609–1614, September 2005.
  • [117] B. Helland-Hansen. Nogen hydrografiske metoder. Forhandlinger ved de 16. Skandinaviske Naturforskermøte, Kristiania, 39:357–359, 1916.
  • [118] Isabel A. Houghton and James D. Wilson. El Niño detection via unsupervised clustering of Argo temperature profiles. Journal of Geophysical Research: Oceans, 125(9):e2019JC015947, 2020.
  • [119] Frederic Hourdin, Thorsten Mauritsen, Andrew Gettelman, Jean-Christophe Golaz, Venkatramani Balaji, Qingyun Duan, Doris Folini, Duoying Ji, Daniel Klocke, Yun Qian, et al. The art and science of climate model tuning. Bulletin of the American Meteorological Society, 98(3):589–602, 2017.
  • [120] Frédéric Hourdin, Danny Williamson, Catherine Rio, Fleur Couvreux, Romain Roehrig, Najda Villefranque, Ionela Musat, Fatoumata Bint Diallo, Laurent Fairhead, and Victoria Volodina. Process-based climate model development harnessing machine learning: II. model calibration from single column to global, May 2020. Archive Location: world Publisher: Earth and Space Science Open Archive Section: Atmospheric Sciences.
  • [121] William W Hsieh. Machine Learning Methods in the Environmental Sciences: Neural Networks and Kernels. Cambridge University Press, 2009.
  • [122] William W Hsieh and Benyang Tang. Applying neural network models to prediction and data analysis in meteorology and oceanography. Bulletin of the American Meteorological Society, 79(9):1855–1870, 1998.
  • [123] Chris W Hughes and Beverly A De Cuevas. Why western boundary currents in realistic oceans are inviscid: A link between form stress and bottom pressure torques. Journal of Physical Oceanography, 31(10):2871–2885, 2001.
  • [124] Christopher Irrgang, Niklas Boers, Maike Sonnewald, Elizabeth A. Barnes, Christopher Kadow, Joanna Staneva, and Jan Saynisch-Wagner. Will artificial intelligence supersede Earth system and climate models?, 2021.
  • [125] Christopher Irrgang, Jan Saynisch, and Maik Thomas. Estimating global ocean heat content from tidal magnetic satellite observations. Scientific Reports, 9:7893, 2019.
  • [126] Xiaowei Jin, Shengze Cai, Hui Li, and George Em Karniadakis. NSFnets (Navier–Stokes flow nets): Physics-informed neural networks for the incompressible Navier–Stokes equations. Journal of Computational Physics, 426:109951, 2021.
  • [127] Gregory C Johnson and Harry L Bryden. On the size of the antarctic circumpolar current. Deep Sea Research Part A. Oceanographic Research Papers, 36(1):39–53, 1989.
  • [128] Daniel C. Jones, Harry J. Holt, Andrew J. S. Meijers, and Emily Shuckburgh. Unsupervised Clustering of Southern Ocean Argo Float Temperature Profiles. Journal of Geophysical Research: Oceans, 124(1):390–402, 2019.
  • [129] Daniel C. Jones and Takamitsu Ito. Gaussian mixture modeling describes the geography of the surface ocean carbon budget. In J. Brajard, A. Charantonis, C. Chen, and J. Runge, editors, Proceedings of the 9th International Workshop on Climate Informatics: CI 2019, pages 108–113. University Corporation for Atmospheric Research (UCAR), 2019.
  • [130] Eugenia Kalnay, Hong Li, Takemasa Miyoshi, Shu-Chih Yang, and Joaquim Ballabrera-Poy. 4-d-var or ensemble kalman filter? Tellus A: Dynamic Meteorology and Oceanography, 59(5):758–773, 2007.
  • [131] Maria T. Kavanaugh, Matthew J. Oliver, Francisco P. Chavez, Ricardo M. Letelier, Frank E. Muller-Karger, and Scott C. Doney. Seascapes as a new vernacular for pelagic ocean monitoring, management and conservation. ICES Journal of Marine Science, 73(7):1839–1850, July 2016.
  • [132] Jeffrey R Key and Axel J Schweiger. Tools for atmospheric radiative transfer: Streamer and fluxnet. Computers & Geosciences, 24(5):443–451, 1998.
  • [133] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
  • [134] M. Kloewer, P. D. Dueben, and T. N. Palmer. Number formats, error mitigation, and scope for 16-bit arithmetics in weather and climate modeling analyzed with a shallow water model. Journal of Advances in Modeling Earth Systems, 12(10):e2020MS002246, 2020.
  • [135] Dmitrii Kochkov, Jamie A. Smith, Ayya Alieva, Qing Wang, Michael P. Brenner, and Stephan Hoyer. Machine learning accelerated computational fluid dynamics, 2021.
  • [136] Teuvo Kohonen. Self-organized formation of topologically correct feature maps. Biological cybernetics, 43(1):59–69, 1982.
  • [137] Nikola Borislavov Kovachki and Andrew M Stuart. Ensemble Kalman Inversion: A Derivative-Free Technique For Machine Learning Tasks. Inverse Problems, 2019.
  • [138] Mark A Kramer. Nonlinear principal component analysis using autoassociative neural networks. AIChE journal, 37(2):233–243, 1991.
  • [139] T. Kurth, S. Treichler, J. Romero, M. Mudigonda, N. Luehr, E. Phillips, A. Mahesh, M. Matheson, J. Deslippe, M. Fatica, P. Prabhat, and M. Houston. Exascale deep learning for climate analytics. In SC18: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 649–660, 2018.
  • [140] P. Laloyaux, M. Bonavita, M. Chrust, and S. Gürol. Exploring the potential and limitations of weak-constraint 4d-var. Quarterly Journal of the Royal Meteorological Society, 146(733):4067–4082, 2020.
  • [141] PY Le Traon, F Nadal, and N Ducet. An improved mapping method of multisatellite altimeter data. Journal of atmospheric and oceanic technology, 15(2):522–534, 1998.
  • [142] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
  • [143] J. Leinonen, D. Nerini, and A. Berne. Stochastic super-resolution for downscaling time-evolving atmospheric fields with a generative adversarial network. IEEE Transactions on Geoscience and Remote Sensing, pages 1–13, 2020.
  • [144] Jean-Michel Lellouche, Eric Greiner, Olivier Le Galloudec, Charly Regnier, Mounir Benkiran, Charles-Emmanuel Testut, Romain Bourdalle-Badie, Marie Drevillon, Gilles Garric, and Yann Drillet. Mercator ocean global high-resolution monitoring and forecasting system. New Frontiers in Operational Oceanography, pages 563–592, 2018.
  • [145] Redouane Lguensat, Julien Le Sommer, Sammy Metref, Emmanuel Cosme, and Ronan Fablet. Learning generalized quasi-geostrophic models using deep neural numerical models. arXiv preprint arXiv:1911.08856, 2019.
  • [146] Redouane Lguensat, Pierre Tandeo, Pierre Ailliot, Manuel Pulido, and Ronan Fablet. The analog data assimilation. Monthly Weather Review, 145(10):4093–4107, 2017.
  • [147] Julia Ling, Reese Jones, and Jeremy Templeton. Machine learning strategies for systems with invariance properties. Journal of Computational Physics, 318:22–35, August 2016.
  • [148] Edward N. Lorenz. The slow manifold—what is it? Journal of Atmospheric Sciences, 49(24):2449 – 2451, 15 Dec. 1992.
  • [149] Edward N Lorenz. Empirical orthogonal functions and statistical weather prediction, 1956.
  • [150] Edward N Lorenz. Atmospheric predictability as revealed by naturally occurring analogues. Journal of Atmospheric Sciences, 26(4):636–646, 1969.
  • [151] Edward N. Lorenz. Atmospheric predictability as revealed by naturally occurring analogues. Journal of the Atmospheric Sciences, 26(4):636–646, Jul 1969.
  • [152] Julian Mack, Rossella Arcucci, Miguel Molina-Solana, and Yi-Ke Guo. Attention-based convolutional autoencoders for 3d-variational data assimilation. Computer Methods in Applied Mechanics and Engineering, 372:113291, 2020.
  • [153] S. Manabe and K. Bryan. Climate calculations with a combined ocean-atmosphere model. J. Atmos. Sci, 26(4):786–789, 1969.
  • [154] S. Manabe, R. J. Stouffer, M. J. Spelman, and K. Bryan. Transient Responses of a Coupled Ocean–Atmosphere Model to Gradual Changes of Atmospheric CO2. Part I. Annual Mean Response. Journal of Climate, 4(8):785–818, August 1991.
  • [155] Georgy E. Manucharyan, Lia Siegelman, and Patrice Klein. A deep learning approach to spatiotemporal sea surface height interpolation and estimation of deep currents in geostrophic ocean turbulence. Journal of Advances in Modeling Earth Systems, 13(1):e2019MS001965, 2021.
  • [156] Elodie Martinez, Thomas Gorgues, Matthieu Lengaigne, Clement Fontana, Raphaëlle Sauzède, Christophe Menkes, Julia Uitz, Emanuele Di Lorenzo, and Ronan Fablet. Reconstructing global chlorophyll-a variations using a non-linear statistical approach. Frontiers in Marine Science, 7:464, 2020.
  • [157] E. P. Maurer, H. G. Hidalgo, T. Das, M. D. Dettinger, and D. R. Cayan. The utility of daily large-scale climate data in the assessment of climate change impacts on daily streamflow in California. Hydrology and Earth System Sciences, 14(6):1125–1138, June 2010.
  • [158] Guillaume Maze, Herlé Mercier, Ronan Fablet, Pierre Tandeo, Manuel López Radcenco, Philippe Lenca, Charlène Feucher, and le goff Clement. Coherent heat patterns revealed by unsupervised classification of argo temperature profiles in the north atlantic ocean. Progress in Oceanography, 151, 01 2017.
  • [159] Matthew R Mazloff, Patrick Heimbach, and Carl Wunsch. An Eddy-Permitting Southern Ocean State Estimate. Journal of Physical Oceanography, 40(5):880–899, 2010.
  • [160] Amy McGovern, Ryan Lagerquist, David John Gagne, G. Eli Jergensen, Kimberly L. Elmore, Cameron R. Homeyer, and Travis Smith. Making the Black Box More Transparent: Understanding the Physical Implications of Machine Learning. Bulletin of the American Meteorological Society, 100(11):2175–2199, November 2019.
  • [161] Leland McInnes, John Healy, and James Melville. Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426, 2018.
  • [162] Geoffrey J McLachlan and Kaye E Basford. Mixture models: Inference and applications to clustering, volume 38. M. Dekker New York, 1988.
  • [163] A. Merz and G. Wüst. Die atlantische Vertikalzirkulation. 3. Beitrag. Zeitschr. D.G.F.E, Berlin, 1923.
  • [164] Adam Monahan, John Fyfe, Maarten Ambaum, David Stephenson, and Gerald North. Empirical orthogonal functions: The medium is the message. Journal of Climate, 22, 12 2009.
  • [165] Álvaro Montenegro, Richard T. Callaghan, and Scott M. Fitzpatrick. Using seafaring simulations and shortest-hop trajectories to model the prehistoric colonization of remote oceania. Proceedings of the National Academy of Sciences, 2016.
  • [166] Douglas C Montgomery, Elizabeth A Peck, and G Geoffrey Vining. Introduction to linear regression analysis. John Wiley & Sons, 2021.
  • [167] Walter H. Munk. On the wind-driven ocean circulation. Journal of Atmospheric Sciences, 7(2):80 – 93, 01 Apr. 1950.
  • [168] Walter H Munk. On the wind-driven ocean circulation. Journal of meteorology, 7(2):80–93, 1950.
  • [169] Walter Heinrich Munk and E Palmén. Note on the dynamics of the antarctic circumpolar current 1. Tellus, 3(1):53–55, 1951.
  • [170] Zied Ben Mustapha, Séverine Alvain, Cédric Jamet, Hubert Loisel, and David Dessailly. Automatic classification of water-leaving radiance anomalies from global seawifs imagery: application to the detection of phytoplankton groups in open ocean waters. Remote Sensing of Environment, 146:97–112, 2014.
  • [171] Jerome Namias. Recent seasonal interactions between north pacific waters and the overlying atmospheric circulation. Journal of Geophysical Research, 64(6):631–646, 1959.
  • [172] Frederik Nebeker. Calculating the weather: Meteorology in the 20th century. Elsevier, 1995.
  • [173] Philipp Neumann, Peter Düben, Panagiotis Adamidis, Peter Bauer, Matthias Brück, Luis Kornblueh, Daniel Klocke, Bjorn Stevens, Nils Wedi, and Joachim Biercamp. Assessing the scales in numerical weather and climate predictions: will exascale be the rescue? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 377(2142):20180148, April 2019.
  • [174] An T Nguyen, Helen Pillar, Victor Ocaña, Arash Bigdeli, Timothy A Smith, and Patrick Heimbach. The arctic subpolar gyre state estimate (aste): Description and assessment of a data-constrained, dynamically consistent ocean-sea ice estimate for 2002-2017. Earth and Space Science Open Archive, page 62, 2020.
  • [175] Peer Nowack, Peter Braesicke, Joanna Haigh, Nathan Luke Abraham, John Pyle, and Apostolos Voulgarakis. Using machine learning to build temperature-based ozone parameterizations for climate sensitivity simulations. Environmental Research Letters, 13, 2018.
  • [176] AM Obukhov. Statistically homogeneous fields on a sphere. Usp. Mat. Nauk, 2(2):196–198, 1947.
  • [177] Paul A O’Gorman and John G Dwyer. Using machine learning to parameterize moist convection: Potential for modeling of climate, climate change, and extreme events. Journal of Advances in Modeling Earth Systems, 10(10):2548–2563, 2018.
  • [178] Julian D Olden, Michael K Joy, and Russell G Death. An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data. Ecological Modelling, 178(3):389 – 397, 2004.
  • [179] Dhruvit Patel, Daniel Canaday, Michelle Girvan, Andrew Pomerance, and Edward Ott. Using machine learning to predict statistical properties of non-stationary dynamical processes: System climate,regime transitions, and the effect of stochasticity. Chaos: An Interdisciplinary Journal of Nonlinear Science, 31(3):033149, 2021.
  • [180] Etienne Pauthenet, Fabien Roquet, Gurvan Madec, Jean-Baptiste Sallée, and David Nerini. The thermohaline modes of the global ocean. Journal of Physical Oceanography, 49(10):2535 – 2552, 01 Oct. 2019.
  • [181] SG Penny, E Bach, K Bhargava, C-C Chang, C Da, L Sun, and T Yoshida. Strongly coupled data assimilation in multiscale media: Experiments using a quasi-geostrophic coupled model. Journal of Advances in Modeling Earth Systems, 11(6):1803–1829, 2019.
  • [182] Norman A Phillips. The general circulation of the atmosphere: A numerical experiment. Quarterly Journal of the Royal Meteorological Society, 82(352):123–164, 1956.
  • [183] Christopher Rackauckas, Yingbo Ma, Julius Martensen, Collin Warner, Kirill Zubov, Rohit Supekar, Dominic Skinner, Ali Ramadhan, and Alan Edelman. Universal Differential Equations for Scientific Machine Learning. arXiv:2001.04385 [cs, math, q-bio, stat], August 2020.
  • [184] Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics informed deep learning (part i): Data-driven solutions of nonlinear partial differential equations. arXiv preprint arXiv:1711.10561, 2017.
  • [185] Ali Ramadhan, Gregory LeClaire Wagner, Chris Hill, Jean-Michel Campin, Valentin Churavy, Tim Besard, Andre Souza, Alan Edelman, Raffaele Ferrari, and John Marshall. Oceananigans.jl: Fast and friendly geophysical fluid dynamics on gpus. Journal of Open Source Software, 5(53):2018, 2020.
  • [186] Stephan Rasp, Peter D. Dueben, Sebastian Scher, Jonathan A. Weyn, Soukayna Mouatadid, and Nils Thuerey. Weatherbench: A benchmark data set for data-driven weather forecasting. Journal of Advances in Modeling Earth Systems, 12(11):e2020MS002203, 2020.
  • [187] Stephan Rasp and Sebastian Lerch. Neural networks for postprocessing ensemble weather forecasts. Monthly Weather Review, 146(11):3885 – 3900, 01 Nov. 2018.
  • [188] Stephan Rasp, Michael S Pritchard, and Pierre Gentine. Deep learning to represent subgrid processes in climate models. Proceedings of the National Academy of Sciences, 115(39):9684–9689, 2018.
  • [189] Markus Reichstein, Gustau Camps-Valls, Bjorn Stevens, Martin Jung, Joachim Denzler, Nuno Carvalhais, and Prabhat. Deep learning and process understanding for data-driven Earth system science. Nature, 566(7743):195–204, February 2019.
  • [190] Lewis Fry Richardson. Weather prediction by numerical process. Cambridge university press, 2007.
  • [191] P.L. Richardson. On the history of meridional overturning circulation schematic diagrams. Progress in Oceanography, 76:466–486, 2008.
  • [192] Allan R Robinson, Michael A Spall, Leonard J Walstad, and Wayne G Leslie. Data assimilation and dynamical interpolation in gulfcast experiments. Dynamics of atmospheres and oceans, 13(3-4):301–316, 1989.
  • [193] Dean Roemmich, Gregory C. Johnson, Stephen Riser, Russ Davis, John Gilson, W. Brechner Owens, Silvia L. Garzoli, Claudia Schmid, and Mark Ignaszewski. The Argo Program: Observing the Global Ocean with Profiling Floats. Oceanography, 22(2):34–43, 2009.
  • [194] Isabella Rosso, Matthew R. Mazloff, Lynne D. Talley, Sarah G. Purkey, Natalie M. Freeman, and Guillaume Maze. Water mass and biogeochemical variability in the kerguelen sector of the southern ocean: A machine learning approach for a mixing hot spot. Journal of Geophysical Research: Oceans, 125(3):e2019JC015877, 2020.
  • [195] Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1:206–215, 05 2019.
  • [196] E.A. Ryzhov, D. Kondrashov, N. Agarwal, J.C. McWilliams, and P. Berloff. On data-driven induction of the low-frequency variability in a coarse-resolution ocean model. Ocean Modelling, 153:101664, 2020.
  • [197] Pavel Sakov, F Counillon, L Bertino, KA Lisæter, PR Oke, and A Korablev. Topaz4: an ocean-sea ice data assimilation system for the north atlantic and arctic. Ocean Science, 8(4):633–656, 2012.
  • [198] J.W. Sandström and B. Helland-Hansen. Über die Berechnung von Meeresströmungen. Report on Norwegian fishery and marine investigations, Bergen Grieg, vol. 2, no. 4 = vol. 2, part 2., 1903.
  • [199] Helmut Schiller and Roland Doerffer. Neural network for emulation of an inverse model operational derivation of case ii water properties from meris data. International journal of remote sensing, 20(9):1735–1746, 1999.
  • [200] Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural networks, 61:85–117, 2015.
  • [201] Tapio Schneider, João Teixeira, Christopher S Bretherton, Florent Brient, Kyle G Pressel, Christoph Schär, and A Pier Siebesma. Climate goals and computing the future of clouds. Nature Climate Change, 7(1):3–5, 2017.
  • [202] Nina Schuhen, Thordis L. Thorarinsdottir, and Tilmann Gneiting. Ensemble model output statistics for wind vectors. Monthly Weather Review, 140(10):3204 – 3219, 01 Oct. 2012.
  • [203] Simon Shackley, James Risbey, Peter Stone, and Brian Wynne. Adjusting to policy expectations in climate change modeling. Climatic Change, 43(2):413–454, 1999.
  • [204] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. nature, 550(7676):354–359, 2017.
  • [205] Maike Sonnewald, Stephanie Dutkiewicz, Christopher Hill, and Gael Forget. Elucidating ecological complexity: Unsupervised learning determines global marine eco-provinces. Science Advances, 6(22):1–12, 2020.
  • [206] Maike Sonnewald and Redouane Lguensat. Revealing the impact of global heating on north atlantic circulation using transparent machine learning. Earth and Space Science Open Archive, page 27, 2021.
  • [207] Maike Sonnewald, Carl Wunsch, and Patrick Heimbach. Unsupervised learning reveals geography of global ocean dynamical regions. Earth and Space Science, 6(5):784–794, 2019.
  • [208] Hugo Steinhaus. Sur la division des corps matériels en parties. Bull. Acad. Polon. Sci., 4:801–804, 1956.
  • [209] Henry Stommel. The westward intensification of wind-driven ocean currents. Eos, Transactions American Geophysical Union, 29(2):202–206, 1948.
  • [210] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.
  • [211] H. U. Sverdrup. Oceanography for Meteorologists. Daya Books, 1942.
  • [212] H. U. Sverdrup. Wind-driven currents in a baroclinic ocean; with application to the equatorial currents of the eastern pacific. Proceedings of the National Academy of Sciences, 33(11):318–326, 1947.
  • [213] P. Swapna, R. Krishnan, N. Sandeep, A. G. Prajeesh, D. C. Ayantika, S. Manmeet, and R. Vellore. Long-Term Climate Simulations Using the IITM Earth System Model (IITM-ESMv2) With Focus on the South Asian Monsoon. Journal of Advances in Modeling Earth Systems, 10(5):1127–1149, 2018.
  • [214] P Swapna, MK Roxy, K Aparna, K Kulkarni, AG Prajeesh, K Ashok, R Krishnan, S Moorthi, A Kumar, and BN Goswami. The iitm earth system model: Transformation of a seasonal prediction model to a long term climate model. Bulletin of the American Meteorological Society, 2014.
  • [215] Csaba Szepesvári. Algorithms for reinforcement learning. Synthesis lectures on artificial intelligence and machine learning, 4(1):1–103, 2010.
  • [216] Pierre Tandeo, Pierre Ailliot, Marc Bocquet, Alberto Carrassi, Takemasa Miyoshi, Manuel Pulido, and Yicun Zhen. A review of innovation-based methods to jointly estimate model and observation error covariance matrices in ensemble data assimilation. Monthly Weather Review, 148(10):3973 – 3994, 01 Oct. 2020.
  • [217] Jan-Erik Tesdal and Ryan P. Abernathey. Drivers of local ocean heat content variability in eccov4. Journal of Climate, 34(8):2941 – 2956, 01 Apr. 2021.
  • [218] S Thiria, C Mejia, F Badran, and M Crepon. A neural network approach for modeling nonlinear transfer functions: Application for wind retrieval from spaceborne scatterometer data. Journal of Geophysical Research: Oceans, 98(C12):22827–22841, 1993.
  • [219] Leif N Thomas, Amit Tandon, Amala Mahadevan, M Hecht, and H Hasumi. Ocean modeling in an eddying regime. In Geophysical Monograph Series, volume 177, pages 17–38. American Geophysical Union, 2008.
  • [220] O. Tintó Prims, M. C. Acosta, A. M. Moore, M. Castrillo, K. Serradell, A. Cortés, and F. J. Doblas-Reyes. How to use mixed precision in ocean models: exploring a potential reduction of numerical precision in nemo 4.0 and roms 3.6. Geoscientific Model Development, 12(7):3135–3148, 2019.
  • [221] Benjamin A. Toms, Elizabeth A. Barnes, and Imme Ebert-Uphoff. Physically Interpretable Neural Networks for the Geosciences: Applications to Earth System Variability. Journal of Advances in Modeling Earth Systems, 12(9), September 2020.
  • [222] L. Tuppi, P. Ollinaho, M. Ekblom, V. Shemyakin, and H. Järvinen. Necessary conditions for algorithmic tuning of weather prediction models using openifs as an example. Geoscientific Model Development, 13(11):5799–5812, 2020.
  • [223] John Tyndall. Note on the transmission of heat through gaseous bodies. Proceedings Royal Society of London, 10:37–39, 1859.
  • [224] Geoffrey Vallis. Geophysical fluid dynamics: Whence, whither and why? Proceedings. Mathematical, physical, and engineering sciences / the Royal Society, 472:20160140, 08 2016.
  • [225] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(11), 2008.
  • [226] Thomas Vandal, Evan Kodra, Jennifer Dy, Sangram Ganguly, Ramakrishna Nemani, and Auroop R. Ganguly. Quantifying uncertainty in discrete-continuous and skewed data with bayesian deep learning. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’18, page 2377–2386, New York, NY, USA, 2018. Association for Computing Machinery.
  • [227] Thomas Vandal, Evan Kodra, and Auroop R. Ganguly. Intercomparison of machine learning methods for statistical downscaling: the case of daily and extreme precipitation. Theoretical and Applied Climatology, 137(1):557–570, July 2019.
  • [228] GA Vecchi, T Delworth, R Gudgel, S Kapnick, A Rosati, AT Wittenberg, F Zeng, W Anderson, V Balaji, K Dixon, et al. On the seasonal forecasting of regional tropical cyclone activity. Journal of Climate, 27(21):7994–8016, 2014.
  • [229] A Verdy and M R Mazloff. A data assimilating model for estimating Southern Ocean biogeochemistry. JOURNAL OF GEOPHYSICAL RESEARCH-OCEANS, 122(9):6968–6988, 2017.
  • [230] L. Žust, A. Fettich, M. Kristan, and M. Ličer. Hidra 1.0: Deep-learning-based ensemble sea level forecasting in the northern adriatic. Geoscientific Model Development Discussions, 2020:1–25, 2020.
  • [231] Gilbert Walker. World weather. Quarterly Journal of the Royal Meteorological Society, 54(226):79–87, 1928.
  • [232] Rui Wang, Karthik Kashinath, Mustafa Mustafa, Adrian Albert, and Rose Yu. Towards Physics-informed Deep Learning for Turbulent Flow Prediction. arXiv:1911.08655 [physics, stat], June 2020.
  • [233] Peter Watson. Applying machine learning to improve simulations of a chaotic dynamical system using empirical error correction. Journal of Advances in Modeling Earth Systems, May 2019.
  • [234] Jonathan A. Weyn, Dale R. Durran, and Rich Caruana. Improving data-driven global weather prediction using deep convolutional neural networks on a cubed sphere. Journal of Advances in Modeling Earth Systems, 12(9):e2020MS002109, 2020.
  • [235] Christopher K Wikle, Ralph F Milliff, Radu Herbei, and William B Leeds. Modern statistical methods in oceanography: A hierarchical perspective. Statistical Science, pages 466–486, 2013.
  • [236] Jared Willard, Xiaowei Jia, Shaoming Xu, Michael Steinbach, and Vipin Kumar. Integrating Physics-Based Modeling with Machine Learning: A Survey. arXiv:2003.04919 [physics, stat], July 2020.
  • [237] Christopher KI Williams and Carl Edward Rasmussen. Gaussian processes for regression, 1996.
  • [238] A. W. Wood, L. R. Leung, V. Sridhar, and D. P. Lettenmaier. Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs. Climatic Change, 62(1):189–216, January 2004.
  • [239] Carl Wunsch. Ocean observations and the climate forecast problem. International Geophysics, 83:233–245, 12 2002.
  • [240] Carl Wunsch. Towards the world ocean circulation experiment and a bit of aftermath. Physical Oceanography: Developments Since 1950, 09 2005.
  • [241] Janni Yuval, Paul A. O’Gorman, and Chris N. Hill. Use of Neural Networks for Stable, Accurate and Physically Consistent Parameterization of Subgrid Atmospheric Processes With Good Performance at Reduced Precision. Geophysical Research Letters, 48(6):e2020GL091363, 2021.
  • [242] Laure Zanna and Thomas Bolton. Data-driven equation discovery of ocean mesoscale closures. Geophysical Research Letters, 47(17):e2020GL088376, 2020.
  • [243] Laure Zanna and Thomas Bolton. Data‐driven Equation Discovery of Ocean Mesoscale Closures. Geophysical Research Letters, pages 1–13, 2020.
  • [244] Stephen E. Zebiak and Mark A. Cane. A Model El Niño–Southern Oscillation. Monthly Weather Review, 115(10):2262–2278, October 1987.

Appendix A List of acronyms

Abbreviation Description
4DVar 4-dimensional variational assimilation
AE AutoEncoder
AI Artificial Intelligence
AIC Akaike information criterion
BIC Bayesian information criterion
ConvNet Convolutional Neural Network
DBSCAN Density-Based Spatial Clustering of Applications with Noise
DA Data Assimilation
DL Deep Learning
DNN Deep Neural Network
ECCO Estimating the Circulation and Climate of the Ocean
EnKF Ensemble Kalman filter
EOF Empirical Orthogonal Functions
GAN Generative Adversarial Network
GFD Geophysical Fluid Dynamics
GMM Gaussian Mixture Model
GODAE Global Ocean Data Assimilation Experiment
GODAS Global Ocean Data Assimilation System
GPR Gaussian Process Regression
GPU Graphics Processing Unit
HPC High Performance Computing
IAI Interpretable Artificial Intelligence
KNN K Nearest Neighbors
LR Linear Regression
MAE Mean Absolute Error
ML Machine Learning
MLP Multi-Layer Perceptron
MSE Mean Square Error
NESYM Neural Earth System Models
NN Neural Networks
PCA Principal Component Analysis
PINN Physics Informed Neural Networks
RF Random Forest
RL Reinforcement Learning
RNN Recurrent Neural Network
RVM Relevance Vector Machines
SGD Stochastic Gradient Descent
SOM Self Organizing Maps
SVM Support Vector Machines
SVR Support Vector Regression
t-SNE t-distributed Stochastic Neighbor Embedding
UMAP Uniform Manifold Approximation and Projection
VAE Variational Autoencoder
XAI Explainable Artificial Intelligence
WOCE World Ocean Circulation Experiment
WWII World War II