Covid_Control
Machine learning to predict the future number of COVID-19 daily cases (7-day moving average): a Long Short-Term Memory (LSTM) Predictor and a Reinforcement Learning (RL) Prescriptor, using the Oxford dataset.
Several models have been developed to predict how the COVID-19 pandemic spreads, and how it could be contained with non-pharmaceutical interventions (NPIs) such as social distancing restrictions and school and business closures. This paper demonstrates how evolutionary AI could be used to facilitate the next step, i.e. determining the most effective intervention strategies automatically. Through evolutionary surrogate-assisted prescription (ESP), it is possible to generate a large number of candidate strategies and evaluate them with predictive models. In principle, strategies can be customized for different countries and locales, balancing the need to contain the pandemic against the need to minimize economic impact. While still limited by available data, early experiments suggest that workplace and school restrictions are the most important and need to be designed carefully. As more data becomes available, the approach can be increasingly useful in dealing with COVID-19 as well as possible future pandemics.
The COVID-19 crisis is unprecedented in modern times, and caught the world largely unprepared. Since there is little experience and guidance, authorities have been responding in a variety of ways. Many different non-pharmaceutical interventions (NPIs) have been implemented at different stages of the pandemic and in different contexts. On the other hand, compared to past pandemics, for the first time almost real-time data is collected about these interventions, their economic impact, and the spread of the disease. These two factors create an excellent opportunity for computational modeling and machine learning.
Most of the modeling efforts so far have been based on traditional epidemiological methods, such as compartmental models [4]. Such models can be used to predict the spread of the disease, assuming a few parameters, such as the basic reproduction number R_0, can be estimated accurately. New ideas have also emerged, including using cellphone data to measure social distancing [66]. These models have been extended with NPIs by modifying the transmission rates: each NPI is assumed to reduce the transmission rate by a certain amount [60, 27, 15]. Such models have received a lot of attention: in this unprecedented situation, they are our only source of support for making informed decisions on how to reduce and contain the spread of the disease.

However, epidemiological models are far from perfect. Much about how the disease is transmitted, how prevalent it is in the population, how many people are immune, and how strong that immunity is, is unknown, and it is difficult to parameterize the models accurately. Similarly, the effects of NPIs are unpredictable: their effect varies with the cultural and economic environment and the stage of the pandemic, and above all, they interact in nonlinear ways. To overcome the uncertainty, data is crucial. Instead of estimating parameters, it is possible to fit the models to the data so that they predict already existing data more accurately. In the extreme, with enough data, it is possible to use machine learning simply to model the data. All the unknown epidemiological, cultural, and economic parameters and interactions are expressed in the time series of infections and NPIs. The fact that the epidemic spreads to different regions at a lag, and different geographies experience different stages of the spread at different times, and often react to it differently, gives us the opportunity to 'front-run' the epidemic with data-driven models. Machine learning can then be used to construct a model, such as a recurrent neural network, that accurately predicts the outcomes without having to understand precisely how they emerge.
The data-driven modeling approach is implemented and evaluated in this paper. An LSTM neural network model [24, 19] is trained with publicly available data on infections and NPIs [22] in a number of countries and applied to predicting how the pandemic will unfold in them in the future. The predictions are cascaded one day at a time and constrained to a meaningful range. Even with current limited data, the predictions are surprisingly accurate and well-behaved. This result suggests that the data-driven machine learning approach is potentially a powerful new tool for epidemiological modeling. This is the first main contribution of the paper.
A more significant contribution, however, is to demonstrate that machine learning can also be used to take the next step, i.e. to extend the models from prediction to prescription. That is, given that we can predict how the NPIs affect the pandemic, we can also automatically discover effective NPI strategies. The technology required for this step is different from standard machine learning. The goal is not to model and predict processes for which data already exists, but to create new solutions that may have never existed before. In other words, it requires extending AI from imitation to creativity.
This extension is indeed underway in AI through several approaches, such as reinforcement learning, Bayesian parameter optimization, gradient-based approaches, and evolutionary computation [46, 12, 39]. The approach taken in this paper is based on evolutionary surrogate-assisted prescription (ESP) [16], a technique that combines evolutionary search with surrogate modeling (Figure 1). In ESP, a predictive model is first formed through standard machine learning techniques, such as neural networks. Given actions taken in a given context (such as NPIs at a given stage of the pandemic), it predicts what the outcomes would be (such as infections, deaths, and economic cost). A prescriptive model, another neural network, is then formed to implement an optimal decision strategy, i.e. what actions should be taken in each context. Since optimal actions are not known, the Prescriptor cannot be trained with standard supervised learning. However, it can be evolved, i.e. discovered through population-based search. Because it is often impossible or prohibitively costly to evaluate each candidate strategy in the real world, the Predictor model is used as a surrogate. In this manner, millions of candidate strategies can be generated and tested in order to find ones that optimize the desired outcomes.
ESP has been used in several real-world design and decision optimization problems, including discovering growth recipes for agriculture [23], and designs for e-commerce websites [44]. It often discovers effective solutions that are overlooked by human designers. Recently it has also been applied to sequential decision-making tasks, and found to be more sample-efficient, reliable, and safe than standard reinforcement learning techniques [16]. Much of this performance is due to a surprising regularization effect that incompletely trained Predictors and Prescriptors bring about.
The ESP approach is applied in this paper to the problem of determining optimal NPIs for the COVID-19 pandemic. Using the data-driven LSTM model as the Predictor, a Prescriptor is evolved in a multi-objective setting to minimize the number of COVID-19 cases, as well as the number and stringency of NPIs (representing economic impact). In this process, evolution discovers a Pareto front of Prescriptors that represent different tradeoffs between these two objectives: some utilize many NPIs to bring down the number of cases, and others minimize the number of NPIs at the cost of more cases. Therefore, the AI system is not designed to replace human decision makers, but instead to empower them: humans choose which tradeoffs are best, and the AI makes suggestions on how they can be achieved. It therefore constitutes a step towards using AI not just to model the pandemic, but to help contain it—which has so far been missing from the literature [8].
The current implementation should be taken as a proof of concept, i.e. a demonstration of the potential power of the approach. The currently available data is still limited in quantity, accuracy, and detail. It is not yet possible to draw specific prescriptions reliably, e.g. that in a particular country a particular NPI can be safely lifted on a particular date. The results so far suggest that such prescriptions may become possible in the next few months, as the quality and quantity of the data improves. The experiments already point to two general conclusions. First, school and workplace closings turn out to be the two most important NPIs in the simulations: they have the largest and most reliable effects on the number of cases compared to e.g. restrictions on gatherings and travel. Second, partial or alternating NPIs may be effective. Prescriptors repeatedly turn certain NPIs on and off over time; for example, schools opening and closing on a weekly basis suggests that it may suffice to keep schools open fewer days per week. This is a creative and surprising solution, given the limited variability of NPIs available to the Prescriptors. Together these conclusions already suggest a possible focus for efforts on establishing and lifting NPIs, in order to achieve maximum effect at minimal cost.
The paper begins with a background of epidemiological modeling and the general ESP approach. The datasets used, the datadriven predictive model developed, and the evolutionary optimization of the Prescriptor will then be described. Illustrative results will be reviewed in several cases, drawing general conclusions about the potential power of the approach. Future work on utilizing epidemiological models, supporting interactive exploration, modeling uncertainty, and generating explainable prescriptions will be discussed.
An interactive demo, allowing users to explore the Pareto front of Prescriptors on multiple countries, is available at http://evolution.ml/esp/npi.
Different types of epidemiological models are briefly characterized, followed by a review of existing COVID19 modeling efforts, and the emerging opportunity for machine learning models.
Modern epidemic modeling started with the compartmental SIR model developed by Kermack and McKendrick at the beginning of the 20th century [34]. The SIR model assumes susceptible individuals (S) can get infected (I) and, after a certain infectious period, die or recover (R), becoming immune afterwards. The model then describes global transmission dynamics at a population scale as a system of differential equations in continuous time. Depending on disease characteristics, these compartments and the flow patterns between them can be further refined: for instance, in the case of HIV, mixing and contact depend on age groups [43]. The main limitation of such metapopulation models is that mixing is assumed to be random among individuals within each population subgroup, i.e. compartment.
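As a concrete illustration, the SIR dynamics can be integrated with a simple forward-Euler scheme. This is only a sketch: the parameter values below are arbitrary, chosen for demonstration rather than taken from any fitted model.

```python
def sir_step(S, I, R, beta, gamma, dt=1.0):
    """One forward-Euler step of the SIR compartmental model.

    beta is the transmission rate, gamma the recovery rate; both are
    illustrative values, not calibrated to any real epidemic.
    """
    N = S + I + R
    new_infections = beta * S * I / N * dt
    new_recoveries = gamma * I * dt
    return (S - new_infections,
            I + new_infections - new_recoveries,
            R + new_recoveries)

def simulate_sir(S0, I0, R0, beta, gamma, days):
    """Integrate the SIR model for a number of days; returns the trajectory."""
    S, I, R = S0, I0, R0
    history = [(S, I, R)]
    for _ in range(days):
        S, I, R = sir_step(S, I, R, beta, gamma)
        history.append((S, I, R))
    return history

# Hypothetical population of 10,000 with 10 initial infections.
history = simulate_sir(S0=9990.0, I0=10.0, R0=0.0,
                       beta=0.3, gamma=0.1, days=200)
```

With these parameters the basic reproduction number is beta/gamma = 3, so the epidemic burns through most of the population and then dies out, while the total population S + I + R stays constant.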
Newman [50] showed that contact patterns can be represented more accurately through a network topology, taking into account geography, demographics, and social factors, thus overcoming limitations of compartmental models. Keeling and Eames [33] further highlighted the stochastic nature of transmissions by demonstrating SIR epidemics on five types of social and spatial networks for a population of 10,000 individuals. Later studies focused on evolutionary and adaptive networks [20], aiming to model the dynamics of social links, such as frequency, intensity, locality, and duration of contacts, which strongly influence the long-term impacts of epidemics.
Indeed, it is now widely recognized that multiple factors at different levels influence how epidemics spread [62]. Models have become more detailed and sophisticated, relying on the extensive computational power now available to simulate them. In addition to compartmental and network models, agent-based simulations have emerged as a third simulation paradigm. Agent-based approaches describe the overall dynamics of infection as the result of events and activities involving single individuals [65], resulting in potentially detailed but computationally demanding processes. For a more detailed review of the literature on epidemiological models and their mathematical insights, see [3, 42].
A variety of epidemiological simulations are currently used to model the COVID19 pandemic. The focus is on simulating effects of NPIs in order to support decision making about response policies.
For instance, Stanford University [60] extended the SIR model to nine compartments, including susceptible, exposed, asymptomatic, presymptomatic, mild and severe symptomatic, hospitalized, deceased, and recovered populations, with a stochastic simulator for transitions between compartments, calibrated with MIDAS data [47]. Currently, the simulator supports up to three interventions at different times. In contrast, NPIs are implemented in a more granular fashion in the Bayesian inference model of Imperial College London [15]. Their model parameters are estimated empirically from ECDC data [13] for 11 countries. Deaths are then predicted as a function of infections, based on the distribution of average time between infections and the time-varying reproduction number R_t. The Institute for Health Metrics and Evaluation (IHME) at the University of Washington [27] combined compartmental models and mixed-effects nonlinear regression to predict cumulative and daily death rates as a function of NPIs. They also forecast health service needs using a microsimulation model. The IHME dashboard [64] compares estimated and confirmed daily infections, based on testing rates, for U.S. and European countries down to state and regional levels. Similarly, the University of Texas [66] developed a statistical model based on nonlinear Bayesian hierarchical regression with a negative-binomial model for daily variation in death rates. The novelty is to estimate social distancing using geolocation data from mobile phones, improving accuracy over the IHME model. So far, the forecasts rely on U.S. data and are not reliable beyond a few weeks.
More broadly, the Centers for Disease Control and Prevention (CDC) [4] provides details about the different COVID-19 prediction models and their specific NPI assumptions. In fact, the CDC works with partners to bring together forecasts of cumulative deaths over the following four weeks. Forecasting teams predict numbers of deaths using different types of data (e.g. WHO, COVID-19, demographic, and mobility data), methods, and estimates of the impacts of NPIs (e.g. social distancing and use of face coverings). In general, most COVID-19 forecasting approaches use curve-fitting and ensembles of mechanistic SIR models with age structures and different parameter assumptions. Social distancing and NPIs are usually not represented directly, but approximated as changes in transmission rates. The main advantage is that running the simulations requires only a few input parameters based on average data at population scale.
In contrast, since contact dynamics in agent-based and network approaches result from events and activities of single individuals and their locations, they can be more accurate in modeling social distancing and NPIs. However, their parameters need to be calibrated appropriately, which is difficult to do with available data. Mechanistic transmission models can help overcome data collection challenges when tracing a real network of individuals. Different tracing techniques, such as comprehensive diary-based studies (e.g. POLYMOD [48]) and recorded movements of individuals (e.g. transportation networks [26] and dollar bill tracking [2]), have been investigated in the past to sample real networks with limited resources and data availability. In the context of COVID-19, as mentioned above, a new opportunity is to use mobile phone data to support implementation of social distancing measures (e.g. [66]). Debates on the value and ethics of tracking people's movements to monitor COVID-19 are still ongoing [11]. Another challenge for current epidemiological modeling methods is that they require significant computing resources and sophisticated parallelization algorithms. As a step toward making them feasible, EpiFast [1] reduces the SIR epidemics simulation problem to a sequence of graph operations on distributed memory systems, and thereby significantly decreases the simulation cost. The COVID-19 pandemic has accelerated efforts to develop solutions to these challenges, and is likely to result in improved models in the future.
Any of the above models that include NPI effects and generate long-term predictions could be used as the Predictor with ESP, even several of them together as an ensemble. Taking advantage of them is indeed a compelling direction of future work (Section 8). However, this paper focuses on evaluating a new opportunity: building the model solely based on past data using machine learning. Given that data about the COVID-19 pandemic is generated, recorded, and made available more than for any epidemic before, such a new approach may be feasible for the first time.
There is a lot of promise in this data-driven approach. The epidemiological models require several assumptions about the population, culture, and environment, depend on several parameters that are difficult to set accurately, and cannot take into account many possible nonlinear and dynamic interactions between the NPIs, and in the population. In contrast, all such complexities are implicitly included in the data. The data-driven models are phenomenological, i.e. they do not explain how the given outcomes are produced, but given enough data, they can be accurate in predicting them. This is the first hypothesis tested in this paper; as shown in Section 5, it turns out that even with the limited data available at this point, data-driven models can be useful.
All the models discussed so far are predictive: based on knowledge of the populations and the epidemic, and the data so far about its progress in different populations and efforts to contain it, they estimate how the disease will progress in the future. By themselves, these models do not make recommendations, or prescriptions, of what NPIs would be most effective. It is possible to manually set up hypothetical future NPI strategies and use the models to evaluate how well they would work. However, only a few strategies can be tested in this manner, and the process is limited by the ability of human experts to think of promising strategies. Given past experience with surrogate modeling and population-based search, automated methods may be more effective in this process. A method for doing so, ESP, will be described next.
ESP is a continuous black-box optimization process for adaptive decision-making [16]. In ESP, a model of the problem domain is used as a surrogate for the problem itself. This model, called the Predictor (Pd), takes a decision as its input, and predicts the outcomes of that decision. A decision consists of a context (i.e. a problem) and actions to be taken in that context.
A second model, called the Prescriptor (Ps), is then created. It takes a context as its input, and outputs actions that would optimize outcomes in that context. In order to develop the Prescriptor, the Predictor is used as the surrogate, i.e. a less costly and risky alternative to the real world.
More formally, given a set of possible contexts ℂ and possible actions 𝔸, a decision policy D returns a set of actions A to be performed in each context C:

D(C) = A,    (1)

where C ∈ ℂ and A ∈ 𝔸. For each such (C, A) pair there is a set of outcomes O(C, A), and the Predictor is defined as

Pd(C, A) = O,    (2)

and the Prescriptor implements the decision policy as

Ps(C) = A,    (3)

such that Σ_{i,j} O_j(C_i, A_i) over all possible contexts i and outcome dimensions j is maximized (assuming outcomes improve as they increase). It thus approximates the optimal decision policy for the problem. Note that the optimal actions A are not known, and must therefore be found through search.
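As a toy sketch of these interfaces, the Predictor (mapping a context and actions to outcomes) and the Prescriptor (mapping a context to actions) can be written as two functions. The domain and all numbers below are hypothetical stand-ins, not the paper's actual models:

```python
# Toy illustration of the Predictor/Prescriptor interfaces.
# Context: current case growth rate; action: NPI stringency in [0, 1];
# outcomes: (predicted growth, stringency cost). All numbers are made up.

def predictor(context, action):
    """Pd: (context, action) -> outcomes. A stand-in for the learned model."""
    growth = context * (1.0 - 0.8 * action)   # stricter NPIs slow growth
    cost = action                             # stricter NPIs cost more
    return growth, cost

def prescriptor(context):
    """Ps: context -> action. A stand-in for the evolved policy."""
    return min(1.0, max(0.0, context - 1.0))  # tighten NPIs when growth > 1

context = 1.5
action = prescriptor(context)
growth, cost = predictor(context, action)
```

In ESP the prescriptor's parameters would be evolved against the predictor; here it is a fixed rule purely to show the data flow.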
In the case of the NPI optimization problem, context consists of information regarding a region. This might include data on the number of available ICU beds, population distribution, time since the first case of the disease, current COVID19 cases, and fatality rate. Actions in this case specify whether or not the different possible NPIs are implemented within that region. The outcomes for each decision measure the number of cases and fatalities within two weeks of the decision, and the cost of each NPI.
The ESP algorithm then operates as an outer loop that constructs the Predictor and Prescriptor models (Figure 2):
Train a Predictor based on historical training data;
Evolve Prescriptors with the Predictor as the surrogate;
Apply the best Prescriptor in the real world;
Collect the new data and add to the training set;
Repeat.
In the case of NPI optimization, there is currently no Step 3 since the system is not yet incorporated into decision making. However, any NPIs implemented in the real world, whether similar or dissimilar to ESP's prescriptions, will similarly result in new training data. As usual in evolutionary search, the process terminates when a satisfactory level of outcomes has been reached or no more progress can be made; alternatively, the system can iterate indefinitely, continuously adapting to changes in the real world (e.g., the advent of vaccines or antiviral drugs).
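The outer loop above can be sketched as follows; the stub functions passed in are placeholders for the real components (trained Predictor, evolved Prescriptors, real-world deployment):

```python
# Sketch of the ESP outer loop (Figure 2).

def esp_loop(history, n_iterations, train_predictor, evolve_prescriptors, deploy):
    for _ in range(n_iterations):
        predictor = train_predictor(history)   # 1. train Predictor on data
        best = evolve_prescriptors(predictor)  # 2. evolve Prescriptors against it
        new_data = deploy(best)                # 3. apply best Prescriptor
                                               #    (skipped for NPIs at present)
        history = history + new_data           # 4. add new data to training set
    return history                             # 5. repeat

# Stubs for illustration only: the "predictor" here is just the dataset size.
result = esp_loop(
    history=[1],
    n_iterations=3,
    train_predictor=lambda data: len(data),
    evolve_prescriptors=lambda predictor: predictor,
    deploy=lambda prescriptor: [prescriptor],
)
```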
The Predictor model is built by modeling a dataset. The choice of algorithm depends on the domain, i.e. how much data there is, whether it is continuous or discrete, structured or unstructured. Random forests, symbolic regression, and neural networks have been used successfully in this role in the past [16, 23]. In some cases, such as NPI optimization, an ensemble of data-driven and simulation models may be useful, in order to capture expected or fine-grained behavior that might not yet have been reflected in the data (Section 8).
The Prescriptor model is built using neuroevolution: neural networks because they can express complex nonlinear mappings naturally, and evolution because it is an efficient way to discover such mappings [61], and naturally suited to optimize multiple objectives [7, 10]. Because it is evolved with the Predictor, the Prescriptor is not restricted by a finite training dataset, or limited opportunities to evaluate in the real world. Instead, the Predictor serves as a fitness function, and it can be queried frequently and efficiently. In a multi-objective setting, ESP produces multiple Prescriptors, selected from the Pareto front of the multi-objective neuroevolution run. The Prescriptor is the novel aspect of ESP: it makes it possible to discover effective solutions that do not already exist, even those that might be overlooked by human decision makers.
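Selecting Prescriptors from the Pareto front amounts to filtering out dominated candidates. A minimal sketch for two objectives that are both minimized (the candidate values are hypothetical):

```python
def dominates(q, p):
    """q dominates p if q is no worse in both objectives and better in one
    (both objectives minimized)."""
    return q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])

def pareto_front(points):
    """Return the non-dominated subset of candidate outcome pairs."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (cases, NPI cost) outcomes for five candidate Prescriptors:
candidates = [(100, 9), (300, 4), (900, 1), (400, 5), (950, 2)]
front = pareto_front(candidates)
```

Each point that survives represents a different tradeoff between cases and NPI cost, from which a human decision maker can choose.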
In past work, ESP was found to be effective in standard reinforcement learning benchmarks [16], as well as in real-world applications of discovering recipes for growing plants in controlled environments [23] and in designing e-commerce websites [44]. The benchmarks were useful because ESP could be evaluated in them as an autonomous decision making system. However, in real-world applications, including NPI optimization, it is most naturally used as a system for augmenting human decision making. To support this role, it is possible to include a 'scratchpad' functionality, whereby the human decision maker can see the predicted outcomes of the ESP-prescribed actions, as well as modify the prescriptions and weigh the outcomes as part of their decision making. A scratchpad will be an integral part of the NPI interactive tool as well (Section 8).
Another helpful extension of ESP is to automatically estimate the confidence in the predicted outcomes. With neural networks, softmax of the output is often used as an estimation of confidence, but this measure is often inaccurate [17]. A better approach is to build a certainty estimation model to complement the Predictor through a Gaussian process of training data residuals using input and output kernels [57]. This approach will be described in more detail in Section 5.3.
In the NPI optimization task, ESP is built to prescribe the NPIs for the current day such that the number of cases and cost that would result in the next two weeks is optimized. The details of the Predictor and Prescriptor in this setup will be described next, after an overview of the data used to construct them.
Even though COVID19 is not the first global pandemic, it is the first that is recorded in significant detail, providing data that is made publicly available daily as the pandemic unfolds.
The earliest data that became available included the number of confirmed cases, the number of deaths, and the number of recovered patients, per country, region, and day. A well-known such source is Johns Hopkins University (JHU [29]), updated daily and quoted widely in the press. Several predictive models have been built based on this data, and there is even a Kaggle competition to predict daily confirmed cases using it [30]. The Kaggle site encourages companies and organizations to contribute other useful datasets. Other data is available about e.g. the population and medical system in each country, but it has not yet been used to inform the models.
Once there was enough data to model how the pandemic spreads, an important question started to emerge: what can we do to contain it? Pharmaceutical interventions such as treatments and vaccines will take time to develop, so the focus has been on implementing non-pharmaceutical interventions, i.e. NPIs. The goal is to 'flatten the curve,' i.e. limit the spread, gain time, and prevent hospitals from being overwhelmed until a vaccine can be developed [14, 15]. By augmenting the health data with data on NPIs, it would be possible to learn how the NPIs affect health outcomes. For instance, since the pandemic affected China and other parts of Asia before the rest of the world, it would be possible to learn from their examples.
However, compiling NPI data turned out to be more difficult, and took longer. Each country took different actions, at different levels, in different cities or regions. These decisions were reported in the press, but initially were not aggregated and normalized, making it hard to form a dataset. For instance, in the US such datasets started to come out only in April [31, 35, 63]. Based on data from the Covid Tracking Project [63], the University of Washington's Institute for Health Metrics and Evaluation (IHME) developed a dashboard that shows the NPI timeline [28]. Updated continuously, the dashboard shows a projection for daily deaths and estimated infections (estimated infections are higher than confirmed infections due to limited testing). The model includes a social distancing factor computed from the NPIs, held constant for the future. At a more global scale, Oxford University's Blavatnik School of Government provides a dataset of the number of cases, deaths, and NPIs for most countries on a daily basis [22, 52, 56]. It initially included six 'Closure and Containment' NPIs, but on April 29 was extended to eight NPIs with more granular intervention levels. A detailed explanation of the data is provided in a codebook [51].
Table 1: NPI names and intervention levels (level 0 is always the least stringent).

C1_School closing:  0 no measures;  1 recommend closing;  2 require closing (only some levels or categories);  3 require closing all levels
C2_Workplace closing:  0 no measures;  1 recommend closing (or work from home);  2 require closing for some sectors or categories of workers;  3 require closing all but essential workplaces
C3_Cancel public events:  0 no measures;  1 recommend cancelling;  2 require cancelling
C4_Restrictions on gatherings:  0 no restrictions;  1 restrictions on very large gatherings (over 1000 people);  2 gatherings of 101-1000 people;  3 gatherings of 11-100 people;  4 gatherings of 10 people or fewer
C5_Close public transport:  0 no measures;  1 recommend closing (or significantly reduce volume/routes);  2 require closing (or prohibit most citizens from using it)
C6_Stay at home requirements:  0 no measures;  1 recommend not leaving house;  2 require not leaving house, with exceptions for daily exercise, grocery shopping, and 'essential' trips;  3 require not leaving house, with minimal exceptions
C7_Restrictions on internal movement:  0 no measures;  1 recommend not to travel between regions/cities;  2 internal movement restrictions in place
C8_International travel controls:  0 no restrictions;  1 screening arrivals;  2 quarantine arrivals from some or all regions;  3 ban arrivals from some regions;  4 ban on all regions or total border closure
The Oxford dataset was used as the source in the current ESP study. The models were trained using the 'ConfirmedCases' data for the cases and the 'Closure and Containment' data for the NPIs. The other NPI categories in the dataset, i.e. the 'Economic response', 'Public Health', and 'Miscellaneous' measures, were not used because they have less direct impact on the spread of the epidemic. A summary of these NPIs is given in Table 1.
The number of cases was selected as the target for the predictions (instead of the number of deaths, which is generally believed to be more reliable) because case numbers are higher and the data is smoother overall. The model also utilizes a full 21-day case history, which it can use to uncover structural regularities in the case data. For instance, it discovers that many fewer cases are reported on the weekends in France and Spain. However, the data is still noisy for several reasons:
There are other differences in how cases are reported in each country;
Some countries, like the US, do not have a uniform manner of reporting the cases;
Cases were simply not detected for a while, and testing policy still differs widely from country to country;
Some countries, like China, US, and Italy, implemented NPIs at a state / regional level, and it is difficult to express them at the country level;
As usual with datasets, there are mistakes, missing days, doublecounted days, etc.
It is also important to note that there is roughly a two-week delay between the time a person is infected and the time the case is detected. A similar delay can therefore be expected between the time an NPI is put in place and its effect on the number of cases.
Despite these challenges, it is possible to use the data to train a useful model to predict future cases. This data-driven machine learning approach will be described next.
With the above data sources, machine learning techniques can be used to build a predictive model. Recent deep learning approaches to sequence processing, in particular recurrent neural networks, can be put to good use in this process. However, a method of cascading the predictions needs to be developed so that they can reach several steps into the future. Furthermore, methods are needed that keep the predictions within a sensible range even with limited data.
This section describes the step-by-step design of the learned predictor. For a given country, let x_n be the number of new cases on day n. The goal is to predict x_n in the future. First, consider the minimal epidemic model

x_n = R_n x_{n-1},    (4)

where the factor R_n is to be predicted. Focusing on such factors is fundamental to epidemiological models, and, when learning a predictive model from data, makes it possible to normalize prediction targets across countries and across time, thereby simplifying the learning task.
Training targets can be constructed directly from daily case data for each country. However, in many countries case reporting is noisy and unreliable, leading to unreasonably high noise in the daily R_n. This effect can be mitigated by instead forming smoothed targets based on a moving average y_n of new cases:

R_n = y_n / y_{n-1},  where  y_n = (1/K) Σ_{i=0}^{K-1} x_{n-i}.    (5)

In this paper, K = 7 for all models, i.e. prediction targets are smoothed over the preceding week.
To capture the effects of finite population size and immunity, an additional factor is included that scales predictions by the proportion of the population that could still become new cases:

R_n = y_n / ( ((P - Z_{n-1}) / P) y_{n-1} ),    (6)

where P is the population size, and Z_n is the total number of recorded cases by day n. Notice that, when evaluating a trained model, the predicted ŷ_n can be recovered from a predicted R̂_n by

ŷ_n = ((P - Z_{n-1}) / P) R̂_n y_{n-1}.    (7)

Note that this formulation assumes that recovered cases are fully immune: when Z_{n-1} = P, the number of new cases must go to 0. This assumption can be relaxed in the future by adding a factor to Equation 6 (either taken from the literature or learned) to represent people who were infected and are no longer immune.
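Constructing these smoothed, population-scaled targets from a raw daily case series can be sketched in pure Python. This is a toy sketch under the assumption that the target takes the form R_n = y_n / (((P - Z_{n-1}) / P) y_{n-1}); the case numbers below are made up:

```python
def moving_average(cases, k=7):
    """y_n: k-day moving average of new cases, defined from day k-1 onward."""
    return [sum(cases[n - k + 1:n + 1]) / k for n in range(k - 1, len(cases))]

def r_targets(cases, population, k=7):
    """Smoothed R_n targets: ratio of consecutive moving averages,
    rescaled by the fraction of the population not yet recorded as cases."""
    y = moving_average(cases, k)
    z = [sum(cases[:n + 1]) for n in range(len(cases))]  # cumulative cases Z_n
    targets = []
    for i in range(1, len(y)):
        n = i + k - 1                                    # day index of y[i]
        susceptible = (population - z[n - 1]) / population
        targets.append(y[i] / (susceptible * y[i - 1]))
    return targets

# Hypothetical flat case series: 10 new cases per day for 8 days.
targets = r_targets([10] * 8, population=1_000_000_000)
```

With a constant case series and a huge population, the target is essentially 1.0, i.e. no growth.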
The trainable function f implementing R̂_n can now be described. The prediction should be a function of (1) NPIs enacted over previous days, and (2) the underlying state of the pandemic distinct from the enacted NPIs. For the models in this paper, (1) is represented by the NPI restrictiveness values over the past T days for all available NPIs, denoted A_{n-T}^{n-1}, and (2) is represented autoregressively by the previous T values of R_n, denoted R_{n-T}^{n-1} (or, during forecasting, by the predicted R̂ values when the true R_n is unavailable). Formally,

R̂_n = f(A_{n-T}^{n-1}, R_{n-T}^{n-1}).    (8)

In contrast to epidemiological models that make predictions based on today's state only, this data-driven model predicts based on data from the preceding three weeks (T = 21).
To help the model generalize with a relatively small amount of training data, the model is made more tractable by decomposing $f$ with respect to its inputs:

$$f(c_n, a_n) = \big(1 - g(a_n)\big)\, h(c_n). \qquad (9)$$

Here, the factor $g(a_n) \in [0, 1]$ can be viewed as the effect of social distancing (i.e. NPIs), and $h(c_n) \ge 0$ as the endogenous growth rate of the disease.
To make effective use of the nonlinear and temporal aspects of the data, both $g$ and $h$ are implemented as LSTM models [24], each with a single LSTM layer of 32 units, followed by a dense layer with a single output. To satisfy their output bounds, the dense layers of $g$ and $h$ are followed by sigmoid and softplus activations, respectively.
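The factored combination of Equation 9 can be sketched as follows, with the two LSTM branches stood in by their final pre-activation scalars (the LSTM machinery itself is omitted here, so this is an illustration of the output bounds and the combination, not the paper's model code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softplus(z):
    return np.log1p(np.exp(z))

def combine(g_logit, h_logit):
    """Combine the two branch outputs as in Eq. 9: R_hat = (1 - g) * h.

    g_logit, h_logit stand in for the pre-activation outputs of the dense
    layers that follow the two 32-unit LSTM branches.
    """
    g = sigmoid(g_logit)     # NPI effect, bounded to [0, 1]
    h = softplus(h_logit)    # endogenous growth rate, bounded to >= 0
    return (1.0 - g) * h

# Stricter NPIs push g toward 1, driving the predicted growth factor to 0:
r_loose, r_strict = combine(-5.0, 0.5), combine(5.0, 0.5)
```

The sigmoid and softplus bounds guarantee that the predicted growth factor is always nonnegative and is reduced, never amplified, by the NPI branch.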
Importantly, the factorization of $f$ into $g$ and $h$ makes it possible to explicitly incorporate the constraint that increasing the stringency of NPIs cannot decrease their effectiveness. This idea is incorporated by constraining $g$ to be monotonic with respect to each NPI, i.e.

$$a \preceq a' \implies g(a) \le g(a'), \qquad (10)$$

where $\preceq$ denotes elementwise comparison of NPI stringencies. This constraint is enforced by requiring all trainable parameters of $g$ to be nonnegative, except for the single bias parameter in its dense layer. This nonnegativity is implemented by setting all trainable parameters to their absolute values after each parameter update.
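The weight-nonnegativity trick can be illustrated on a toy stand-in for the $g$ branch (a single dense layer with sigmoid output; the paper's $g$ is an LSTM, so shapes and names here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for g: one weight per NPI, plus a bias that may be negative.
W = rng.normal(size=(8,))
b = rng.normal()

def g(a, W, b):
    """Sigmoid dense layer mapping NPI stringencies to an effect in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(a @ W + b)))

# After each (simulated) parameter update, force the weights nonnegative:
W = np.abs(W)

# Monotonicity check: raising any NPI's stringency cannot lower g.
a = rng.uniform(0, 1, size=8)
a_stricter = a.copy()
a_stricter[3] += 0.5
assert g(a_stricter, W, b) >= g(a, W, b)
```

With all weights nonnegative and the sigmoid monotone, the Equation 10 constraint holds by construction, regardless of the training data.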
Note that although the model is trained only to predict one day into the future, it can make predictions arbitrarily far into the future given a schedule of NPIs, by autoregressively feeding the predicted $\hat{R}_n$ back into the model as input.
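The autoregressive rollout can be sketched as follows. The function names and the exact bookkeeping are assumptions based on Equations 7-8; any one-day predictor can be plugged in.

```python
import numpy as np

def forecast(predict_R, R_history, npi_schedule, y_prev, Z_prev, population):
    """Multi-day autoregressive forecast (a sketch).

    predict_R: model mapping (21-day R context, NPIs) -> predicted R
    R_history: the 21 most recent (true or predicted) R values
    npi_schedule: per-day NPI stringency vectors for the forecast horizon
    """
    context = list(R_history)
    y, Z = y_prev, Z_prev
    cases = []
    for npis in npi_schedule:
        R_hat = predict_R(context[-21:], npis)
        y_hat = R_hat * (1.0 - Z / population) * y    # recover cases (Eq. 7)
        cases.append(y_hat)
        context.append(R_hat)                          # feed prediction back in
        Z += y_hat                                     # update cumulative cases
        y = y_hat
    return np.array(cases)

# Dummy predictor with a constant R of 0.9: cases should decay toward zero.
decaying = forecast(lambda ctx, npis: 0.9, [1.0] * 21,
                    [None] * 30, y_prev=1000.0, Z_prev=5000.0, population=1e7)
```

The same loop, with the true NPI-LSTM in place of the dummy predictor, produces the 14-day and 180-day forecasts used in the experiments below.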
For the experiments in this paper, the model for $f$ was implemented in Keras [5]. The Keras diagram of the model is shown in Figure 3. The model is trained end-to-end to minimize mean absolute error (MAE) with respect to the targets $R_n$ using the Adam optimizer [37] with default parameters and batch size 32. MAE was used instead of mean squared error (MSE) because it is more robust to remaining structural noise in the training data. The last 14 days of data were withheld from the dataset for testing. For the remaining data, the targets were clipped to a fixed range to handle extreme outliers, and randomly split into 90% for training and 10% for validation during training. The model was trained until validation MAE did not improve for 20 epochs, at which point the weights yielding the best validation MAE were restored. Since the model and dataset are relatively small compared to common deep learning datasets, the model is relatively inexpensive to train: on a 2018 MacBook Pro laptop with six Intel i7 cores, it takes seconds to train (timings averaged over 10 independent training runs).

To validate the factored monotonic LSTM (NPI-LSTM) predictor design described above, it was compared to a suite of baseline machine learning regression models. These baselines included linear regression, random forest regression (RF), support vector regression (SVR) with an RBF kernel, and feedforward neural network regression (MLP). Each baseline was implemented with scikit-learn, using default parameters [54]. Each method was trained independently 10 times on the training dataset described in Section 5.1. The results on the test dataset (the last 14 days of the 20 countries with the most cases) were evaluated with respect to four complementary performance metrics. In particular, for the comparisons in this section, training data consisted of data up until May 6, 2020, and test data consisted of data from May 7 through May 20, 2020.

Suppose training data ends on day $n$. Let $\hat{R}^c_{n+d}$ and $\hat{y}^c_{n+d}$ be the model output and the corresponding predicted new cases (recovered via Equation 7) for the $c$th country at day $n+d$. The metrics were:
1-step MAE: This metric is simply the loss the models were explicitly trained to minimize, i.e. the single-step prediction error given the ground truth for the previous 21 days:
$$\text{1-step MAE} = \frac{1}{20 \cdot 14} \sum_{c=1}^{20} \sum_{d=1}^{14} \big| R^c_{n+d} - \hat{R}^c_{n+d} \big|. \qquad (11)$$
The remaining three metrics are based not on single-step prediction, but on the complete 14-day forecast for each country:
Raw Case MAE: This is the most intuitive metric, included as an interpretable reference point. It is simply the MAE w.r.t. new cases over the 14 test days, summed over all 20 test countries:

$$\text{Raw Case MAE} = \sum_{c=1}^{20} \frac{1}{14} \sum_{d=1}^{14} \big| y^c_{n+d} - \hat{y}^c_{n+d} \big|. \qquad (12)$$
Normalized Case MAE: This metric normalizes the case MAE of each country by the number of true cases in the 14-day window, so that errors are in a similar range across countries. Such normalization is important for aggregating results over countries that have different population sizes, or are in different stages of the pandemic:

$$\text{Norm. Case MAE} = \frac{1}{20} \sum_{c=1}^{20} \frac{\sum_{d=1}^{14} \big| y^c_{n+d} - \hat{y}^c_{n+d} \big|}{\sum_{d=1}^{14} y^c_{n+d}}. \qquad (13)$$
Mean Rank: This metric ranks the methods in terms of case error for each country, and then averages over countries. It indicates how often a method will be preferred over others on a country-by-country basis:

$$\text{Mean Rank} = \frac{1}{20} \sum_{c=1}^{20} \operatorname{rank}\!\Big( \sum_{d=1}^{14} \big| y^c_{n+d} - \hat{y}^c_{n+d} \big| \Big), \qquad (14)$$

where $\operatorname{rank}(\cdot)$ returns the rank of the error across all five methods, i.e. the method with the lowest error receives rank 0, the next-best method receives rank 1, and so on.
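The three forecast metrics can be computed with a few lines of NumPy. This is a sketch; the aggregation details are assumptions based on the descriptions above.

```python
import numpy as np

def evaluate(y_true, y_pred_by_method):
    """Compute Raw Case MAE, Normalized Case MAE, and Mean Rank.

    y_true: array of shape (countries, days) of true new cases.
    y_pred_by_method: dict of method name -> forecasts of the same shape.
    """
    abs_err = {m: np.abs(y_true - p) for m, p in y_pred_by_method.items()}
    # Raw Case MAE: per-country daily MAE, summed over countries.
    raw = {m: e.mean(axis=1).sum() for m, e in abs_err.items()}
    # Normalized Case MAE: per-country error over per-country true cases.
    norm = {m: (e.sum(axis=1) / y_true.sum(axis=1)).mean()
            for m, e in abs_err.items()}
    # Mean Rank: per-country rank (0 = best) of each method's case error.
    names = list(y_pred_by_method)
    per_country = np.array([abs_err[m].sum(axis=1) for m in names])
    ranks = per_country.argsort(axis=0).argsort(axis=0)
    mean_rank = {m: ranks[i].mean() for i, m in enumerate(names)}
    return raw, norm, mean_rank
```

For instance, a method that predicts every country exactly gets a Mean Rank of 0 and both MAEs of 0, while a method that is off by one case everywhere is ranked strictly worse.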
Of these four metrics, Normalized Case MAE gives the most complete picture of how well a method is doing, since it combines detailed case information of Raw Case MAE with fairness across countries similar to Mean Rank. The results are shown in Table 2.
Method  Norm. Case MAE  Raw Case MAE  Mean Rank  1-step MAE 

MLP  2.47  1089126  3.19  0.769 
RF  0.95  221308  1.98  0.512 
SVR  0.71  280731  1.76  0.520 
Linear  0.64  176070  1.63  0.902 
NPI-LSTM  0.42  154194  1.46  0.510 
Values are means over 10 trials. NPI-LSTM outperforms the baselines on all metrics. Interestingly, although RF and SVR do quite well in terms of the loss on which they were trained (1-step MAE), the simple linear model outperforms them substantially on the metrics that require forecasting beyond a single day, showing the difficulty that off-the-shelf nonlinear methods have in handling such forecasting. In contrast, with the extensions developed specifically for the epidemic-modeling case, NPI-LSTM outperforms all the baselines.
To verify that the predictions are meaningful and accurate, four example scenarios, i.e. four different countries at different stages of the pandemic, are plotted in Figure 4 (active cases at each day are approximated as the sum of new cases over the prior 14 days). Day 0 represents the point in time when 10 total cases were diagnosed; in each case, stringent NPIs were enacted soon after. The predictor was trained on data up until April 17, 2020, and the predictions started on April 18, with 21 days of data before the start day given to the predictor as initial input. Assuming the NPIs in effect on the start day remain unchanged, the model then predicts the number of cases 180 days into the future. Importantly, during the first 14 days its predictions can be compared to the actual number of cases. For comparison, another prediction plot is generated from the same start date assuming no NPIs from that date on. As can be seen from the figure, (1) the predictions match the actual progress well, (2) assuming the current stringent NPIs continue, the cases will eventually go to 0, and (3) with no NPIs, there is a large increase of cases, followed by an eventual decrease as the population becomes immune. The predictions thus follow meaningful trajectories.
The main conclusion from these experiments is that the data-driven approach works surprisingly well, even with limited data. As will be discussed in Section 8, it should be possible to improve it in the future, when more and better data becomes available. It might also be possible to combine the strengths of the different approaches to prediction and epidemiological modeling. In any case, the current predictive model already makes it possible to build Prescriptors for ESP, as will be discussed in Section 6. How confidence in the model can be estimated will be described next.
An important aspect of any decision system is to estimate confidence in its outcomes. In prescribing NPIs, this means estimating uncertainty in the Predictor, i.e. deriving confidence intervals on the predicted number of future cases. In simulation models such as those reviewed in Section
2, variation is usually created by running the models multiple times with slightly different initial conditions or parameter values, and measuring the resulting variance in the predictions. With neural network predictors, it is possible to measure uncertainty more directly by combining a Bayesian model with the predictor
[49, 18, 36]. Such extended models tend to be less accurate than pure predictive models, and also harder to set up and train [17, 38]. A recent alternative is to train a separate model to estimate uncertainty in point-prediction models [57]. In this approach, called RIO, a Gaussian Process is fit to the residual errors of the original model on the training set. The I/O kernel of RIO utilizes both the input and output of the original model so that information can be used where it is most reliable. In several benchmarks, RIO has been shown to construct reliable confidence intervals. Surprisingly, it can then also be used to improve the point predictions of the original model, by correcting them towards the estimated mean. RIO can be directly applied to any machine learning model without modifications or retraining. It therefore forms a good basis for estimating uncertainty in the COVID-19 Predictor as well.
In order to extend RIO to time-series predictions, the hidden states of the two LSTM models (before the lambda layer in Figure 3
) are concatenated and fed into the input kernel of RIO. The original predictions of the predictor are used by the output kernel. RIO is then trained to fit the residuals of the original predictions. During deployment, the trained RIO model provides a Gaussian distribution for the calibrated predictions. The details of this process are presented in Algorithm
1. To validate this process empirically, RIO was trained and tested on COVID-19 data from 21 selected countries. The data was preprocessed in four steps: (1) Within the 30 most affected countries in terms of total confirmed cases, select the countries on which the original predictor gives an MAE of less than 0.07. (2) Remove the outlier days with an $R_n$ larger than 2.0. (3) Remove the data of the earliest 10 days (after the first 21 days), so that the training data are closer to recent situations. (4) For each country, randomly select 14 days as the testing data. The hyperparameters in these steps were found empirically to be appropriate. All remaining days were then used as the training data. Table 3 shows the results of applying RIO in this manner to the original predictor. The conclusion is that RIO constructs reasonable confidence intervals at several confidence levels, and slightly improves the prediction accuracy. It can therefore be expected to work well in estimating confidence in the NPI prescription outcomes as well.
Dataset  original MAE  MAE with RIO  95% CI Coverage  90% CI Coverage  68% CI Coverage 

Training dataset  0.0379  0.0342  0.932  0.909  0.761 
Testing dataset  0.0471  0.0467  0.884  0.847  0.687 
CI coverage means the percentage of testing outcomes that are within the estimated confidence intervals (CIs).
However, RIO first needs to be extended to model uncertainty in time series. Because NPI-LSTM forecasts are highly nonlinear and autoregressive, analytic methods are intractable. Instead, given that the predictor model with RIO returns both a mean and a standard deviation for each one-day prediction, the standard deviation $d$ days into the future can be estimated via Monte Carlo rollouts. Specifically, for each step in each rollout, instead of predicting $\hat{R}_n$ and feeding it back into the model to predict the next step, a value is sampled from the Gaussian distribution returned by RIO, and this sample is fed back into the model. Thus, after $d$ steps, a sample is generated from the $d$-day forecast distribution. Standard deviations are then computed empirically over many such samples (100 in the experiments in this paper) for all forecasted days.

Thus, RIO makes it possible to estimate uncertainty in the predictions, which in turn helps the decision maker interpret and trust the results, i.e. how reliable the outcomes are for the recommendations that the Prescriptors generate. The method for discovering good Prescriptors will be described next.
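The Monte Carlo rollout procedure can be sketched as follows. Here `predict_with_rio` stands in for the RIO-wrapped predictor and returns a (mean, std) pair for the next growth factor; its name and signature are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_forecast_std(predict_with_rio, R_context, horizon, n_rollouts=100):
    """Estimate per-day forecast standard deviations by Monte Carlo rollouts."""
    samples = np.empty((n_rollouts, horizon))
    for r in range(n_rollouts):
        context = list(R_context)
        for d in range(horizon):
            mu, sigma = predict_with_rio(context[-21:])
            R_sample = rng.normal(mu, sigma)   # sample instead of using the mean
            context.append(R_sample)           # feed the sample back in
            samples[r, d] = R_sample
    return samples.std(axis=0)                 # empirical std for each day

# Dummy RIO-wrapped predictor: mean follows the last value, fixed noise.
stds = mc_forecast_std(lambda ctx: (ctx[-1], 0.05), [1.0] * 21, horizon=14)
```

Because sampled deviations are fed back into the model, uncertainty compounds over the horizon, which is exactly the widening-band behavior visible in the forecast figures.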
Whereas many different models could be used as a Predictor, the Prescriptor is the heart of the ESP approach, and needs to be constructed using modern search techniques. This section describes the process of evolving neural networks for this task. A number of example strategies are presented from the Pareto front, representing tradeoffs between objectives, as well as examples for countries at different stages of the pandemic, and counterfactual examples comparing possible vs. actual outcomes. General conclusions are drawn on which NPIs matter the most, and how they could be implemented most effectively.
Any of the existing neuroevolution methods [61] could be used to construct the Prescriptor, as long as the method evolves the entire network, including all of its weight parameters: neural architecture search cannot easily be used since there are no targets (i.e. known optimal NPIs) with which to train the network with gradient descent. The most straightforward approach, evolving a vector of weights for a fixed topology, is therefore used, and found to be sufficient in this case. The Prescriptor model (Figure 5) is a neural network with one input layer of size 21, corresponding to case information (as defined in Equation 6
) for the prior 21 days. This input is the same as the context_input of the Predictor. The input layer is followed by a fully connected hidden layer of size 32 with the tanh activation function, and eight outputs (of size one) with the sigmoid activation function. The outputs represent the eight possible NPIs, which are then input to the Predictor. Each output is further scaled to represent the corresponding NPI stringency levels: three for 'Cancel public events', 'Close public transport', and 'Restrictions on internal movement'; four for 'School closing', 'Workplace closing', and 'Stay at home'; five for 'Restrictions on gatherings' and 'International travel controls'.
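The Prescriptor forward pass and the per-NPI output scaling can be sketched as follows. The weights would be evolved rather than trained, and the rounding of each sigmoid output to a discrete stringency level is an assumption about how the scaling is done.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stringency levels per NPI, in the order listed in the text.
MAX_LEVEL = {"Cancel public events": 3, "Close public transport": 3,
             "Restrictions on internal movement": 3, "School closing": 4,
             "Workplace closing": 4, "Stay at home": 4,
             "Restrictions on gatherings": 5,
             "International travel controls": 5}

# Prescriptor weights: 21 -> 32 (tanh) -> 8 (sigmoid). In ESP these
# parameters are evolved, not trained by gradient descent.
W1, b1 = rng.normal(size=(21, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 8)), np.zeros(8)

def prescribe(case_context):
    """Map 21 days of case information to one stringency level per NPI."""
    hidden = np.tanh(case_context @ W1 + b1)
    raw = 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))    # each output in (0, 1)
    levels = [int(round(r * m)) for r, m in zip(raw, MAX_LEVEL.values())]
    return dict(zip(MAX_LEVEL, levels))

npis = prescribe(rng.uniform(0, 1, size=21))
```

The returned dictionary is exactly the kind of NPI schedule entry that the Predictor consumes during evaluation.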
The initial population uses orthogonal initialization of weights in each layer with a mean of 0 and a standard deviation of 1 [58]. The population size is 250, and the top 6% of the population is carried over as elites. Parents are selected by tournament selection from the top 20% of candidates using the NSGA-II algorithm [9]
. Recombination is performed by uniform crossover at the weightlevel, and there is a 20% probability of multiplying each weight by a mutation factor, where mutation factors are drawn from
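The recombination and mutation operators described above can be sketched on flattened weight vectors. The mutation-factor distribution used here (a Gaussian around 1) is an assumption, since the text omits the exact distribution.

```python
import numpy as np

rng = np.random.default_rng(3)

def uniform_crossover(parent_a, parent_b):
    """Child takes each weight from either parent with equal probability."""
    mask = rng.random(parent_a.shape) < 0.5
    return np.where(mask, parent_a, parent_b)

def mutate(genome, p=0.2, scale=0.1):
    """With probability p per weight, multiply it by a mutation factor
    (here drawn from N(1, scale); the exact distribution is an assumption)."""
    child = genome.copy()
    hit = rng.random(child.shape) < p
    child[hit] *= rng.normal(1.0, scale, size=hit.sum())
    return child

# One reproduction step on flattened Prescriptor weight vectors:
a, b = rng.normal(size=100), rng.normal(size=100)
child = mutate(uniform_crossover(a, b))
```

Multiplicative mutation preserves the sign and rough scale of each weight, which keeps offspring close to their parents in behavior.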
.
Prescriptor candidates are evaluated according to two objectives: (1) the expected number of cases over the next 180 days according to the prescribed NPIs, and (2) the total stringency of the prescribed NPIs, serving as a proxy for their economic cost. Both objectives have to be minimized. The evaluation is done on the 20 countries with the most deaths in the historical data: United States, United Kingdom, Italy, France, Spain, Brazil, Belgium, Germany, Iran, Canada, Netherlands, Mexico, China, Turkey, Sweden, India, Ecuador, Russia, Peru, Switzerland.
On the evaluation start date, each Prescriptor is fed with the last 21 days of case information. Its outputs are used as the NPIs at the evaluation start date, and combined with the NPIs for the previous 20 days. These 21 days of case information and NPIs are given to the Predictor as input, and it outputs the predicted case information for the next day. This output is used as the most recent input for the next day, and the process continues for the next 180 days. At the end of the process, the average number of predicted new cases over the 180-day period is used as the value of the first objective. Similarly, the average of the daily stringencies of the prescribed NPIs over the 180-day period is used as the value of the second objective.
After each candidate is evaluated in this manner, the next generation of candidates is generated. Evolution is run for 110 generations, or approximately 72 hours, on a single CPU host. During the course of evolution, candidates are discovered that are increasingly fit along the two objectives. In the end, the collection of candidates that represent the best possible tradeoffs between objectives (the Pareto front, i.e. the set of candidates that are better than all other candidates in at least one objective) is the final result of the experiment (Figure 6). From this collection, it is up to the human decision maker to pick the tradeoff that achieves a desirable balance between cases and cost. Put another way, given a desired balance, the ESP system will find the best strategy to achieve it (i.e. with the lowest cost and the lowest number of cases).
To illustrate these different tradeoffs, Figure 7 shows the NPI Prescriptions and the resulting forecasts for four different Prescriptors from the Pareto front for one country (Italy). The Prescriptor that minimizes cases prescribes the most stringent NPIs across the board, and as a result, the number of cases is minimized effectively. The Prescriptor that minimizes NPI stringency lifts all NPIs right away, and the number of cases explodes as a result. The third Prescriptor was chosen from the middle of the Pareto front, and it represents one particular way to balance the two objectives. It lifts most of the NPIs, allows some public events, and keeps the schools and workplaces closed. As a result, the number of cases is still minimized, albeit slightly more slowly than in the most stringent case. Lifting more of the NPIs, in particular workplace restrictions, would cause the number of cases to start climbing. In this manner, the decision maker may explore the Pareto front, finding a point that achieves the most desirable balance of cases and cost for the current stage of the pandemic.
The shadowed areas in Figure 7 represent the uncertainty of the prediction, i.e. one standard deviation over the 100 Monte Carlo rollouts under uncertainty estimated through RIO. The area is often asymmetric because there is more variance in how the pandemic can spread than in how it can be contained. While uncertainty is narrow with stringent Prescriptors (Figure 7), with less stringent ones it often increases significantly with time. The increase can be especially dramatic with Prescriptors with minimal NPIs, such as those in Figure 7, where the uncertainty actually exceeds the display area. The reason is that there has not yet been much training data in this regime, i.e. the stage where countries are lifting most NPIs after the peak of the pandemic has passed. The model's suggestions at this point should be taken as indicative only; in the future these confidence estimates are likely to become narrower, and more reliable conclusions can be drawn then. However, this result can already be interpreted to suggest that such minimal-NPI prescriptions are fragile, and make the country vulnerable to subsequent waves of the pandemic (see also Figure 8).
To illustrate this process, Figure 8 shows possible choices for three different countries at different stages of the pandemic. For Brazil, where the pandemic is still spreading rapidly, a relatively stringent Prescriptor 4 allows some freedom of movement without increasing the cases much compared to full lockdown. For the US, where the number of cases has been relatively flat, a less stringent Prescriptor 7 may be chosen, limiting restrictions to schools, workplaces, and public events. However, if NPIs are lifted too much, opening up the workplaces completely, high numbers of cases are likely to return. For Iran, where there is a danger of a second wave, Prescriptor 6 provides more stringent NPIs to prevent cases from increasing, still limiting the restrictions to schools, workplaces, and public events.
Interestingly, across several countries at different stages of the pandemic, a consistent pattern emerges: in order to keep the number of cases flat, other NPIs can be lifted gradually, but workplace and school restrictions need to be in effect much longer. Indeed these are the two activities where people spend a lot of time with other people indoors, where it is possible to be exposed to significant amounts of the virus [32, 53, 41]. In other activities, such as gatherings and travel, they may come into contact with many people briefly and often outdoors, mitigating the risk. Therefore, the main conclusion that can already be drawn from these preliminary prescription experiments is that it is not the casual contacts but the extended contacts that matter. Consequently, when planning for lifting NPIs, attention should be paid in particular to how workplaces and schools can be opened safely.
Another interesting conclusion can be drawn from Figure 8(c): Alternating between weeks of opening workplaces and partially closing them may be an effective way to lessen the impact on the economy while reducing cases. This solution is interesting because it shows how evolution can be creative and find surprising and unusual solutions that are nevertheless effective. There is of course much literature documenting similar surprising discoveries in computational evolution [23, 44, 39], but it is encouraging to see that they are possible also in the NPI optimization domain. While on/off alternation of school and workplace closings may sound unwieldy, it is a real possibility [6]. Note also that it is the only creative solution available to the Prescriptor: there are no options in its output for e.g. alternating remote and inperson work, extending school to wider hours, improving ventilation, moving classes outside, or other ways of possibly reducing exposure. How to best implement such distancing at school and workplace is left for human decision makers at this point; the model, however, makes a suggestion that coming up with such solutions may make it possible to lift the NPIs gradually, and thereby avoid secondary waves of cases.
Thus, in the early stages, the ESP approach may suggest how to “flatten the curve”, i.e. what NPIs should be implemented in order to slow down the spread of the disease. At later stages, it may recommend how the NPIs can be lifted and the economy restarted safely. A third role for the approach is to go back in time and evaluate counterfactuals, i.e. how well NPI strategies other than those actually implemented could have worked. In this manner, it may be possible to draw conclusions not only about the accuracy and limitations of the modeling approach, but also lessons for future waves of the current pandemic, for new regions where it is still spreading (like Africa), as well as for future pandemics.
For instance, in the UK on March 16th, the NPIs actually in effect were the mild 'recommend work from home' and 'recommend cancel public events'. With only these NPIs, the predicted number of cases would have been quite high (Figure 9). A lockdown was implemented on March 24th, and the actual case numbers were significantly smaller. However, it is remarkable that Prescriptor 8 would have required closing schools already on March 16th, and the predicted cases would have been much fewer even without a more extensive lockdown. Thus, the model suggests that early intervention is crucial, and indeed other models have been used to draw similar conclusions [55]. What is interesting is that ESP suggests it may be possible to control the pandemic with less than full lockdowns if action is taken early enough. Of course, the fully trained model was not available at that point; however, these lessons may still be useful for countries, such as those in Africa, that are still in the early stages, as well as for future pandemics.
Some of the limitations of the data-driven approach also become evident in retrospective studies. For instance Italy, where the pandemic took hold before most of the rest of the world, was supposed to be in a lockdown on March 16th (which had started already on February 23rd). Yet, the model predicts that under such a lockdown (suggested e.g. by Prescriptor 0 for that date), the number of cases should have been considerably smaller than it actually was (Figure 9). Part of the reason may be that the population did not adhere stringently to the NPIs at that point; after all, the full scale of the pandemic was not yet evident. The population in Italy may also have been older and more susceptible. The data used to train the model comes from 20 different countries at a later stage of the pandemic's spread, and these populations may have followed social distancing more carefully; therefore, the model's predictions of the effectiveness of lockdowns are too optimistic for Italy. To improve, local factors like culture, economy, population demographics, density, and mobility may need to be taken into account in the models. Also, this result suggests that the implementation of NPIs needs to be sensitive to such factors in order to be effective.
Retrospective studies also show that the epidemic needs to be well underway for the predictions to be reliable. For instance, Italy only had zero to three cases per day until February 22, when the count jumped to 17 in a single day. Trying to predict the pandemic on, e.g., March 1st does not lead to accurate results. The spread of the virus at that stage depends on individual events, like a church gathering, an executive business meeting in a hotel, or even a soccer match between Liverpool and Atletico Madrid [21], that are hard to predict.
Overall, however, the data-driven ESP approach works surprisingly well even with the current limited data, and can be a useful tool in understanding and dealing with the pandemic. An interactive demo, available on the web, makes it possible to explore prescriptions and outcomes of the ESP model like those reviewed in this section; it will be described next.
To help understand the mechanisms and possibilities of ESP models, an interactive demo of the current state of the approach to NPI optimization is available at http://evolution.ml/esp/npi (Figure 10). This demo will change as the models improve and new functionality is added. The examples presented in this paper can be replicated with the appropriate snapshot of the demo: http://evolution.ml/demos/npidashboard/?forecast_folder=20200526_000000 for Figures 7, 8 and 10, and http://evolution.ml/demos/npidashboard/?forecast_folder=20200317_000001 for Figure 9. At the time of this writing, the following interactions are possible:
The user can select a country by clicking on the map, and a Prescriptor from the Pareto front by clicking on the slider between Cases and NPIs. At the very left, the Prescriptors prefer to minimize cases and therefore usually recommend establishing nearly all possible NPIs. At the very right, the Prescriptors prefer to minimize NPIs and therefore usually recommend lifting nearly all of them—usually resulting in an explosion of cases. The most interesting Prescriptors are therefore somewhere in the middle left of this range. Some of them are able to keep the cases flat while lifting most of the NPIs, as was discussed in Section 6.
The cases are plotted over time in the middle of the page. The bars indicate past history, used to initialize the Predictor, and future predictions are shown as a line plot with confidence bounds towards the right. The prescribed NPIs are shown over time in the chart at the bottom of the page, with darker colors indicating more stringent versions of each NPI. For the Prescriptors that balance the cases and the number of NPIs, it is often possible to see an alternating pattern of stringency over time.
With the demo it is possible to explore the options for different countries at different stages of the pandemic. However, it is important to keep in mind that the demo is only a demonstration of the potential of the ESP approach: With current limited data it is not yet possible to make reliable recommendations in a particular case and especially with less stringent NPIs. In the aggregate, it is possible to draw general conclusions, as was done above. With more and better data and further development, the demo may eventually develop into a tool that can be used to augment human decision making in the pandemic.
Given the encouraging results in this paper, the most compelling direction of future work consists of updating the model with new data as it becomes available. As NPIs are gradually lifted in many countries, the volume of data will increase, but data will also be more relevant for making decisions in the future. COVID-19 testing will hopefully improve as well, so that the outcome measures will be more reliable. The models can be extended to predicting and minimizing deaths as well as cases. Such a multitask learning environment should make predictions in each task more accurate [40]. Instead of the current eight NPIs with a few stringency levels, data on more fine-grained and detailed NPIs may become available, as well as data on more fine-grained locations, such as US counties. In other words, data will improve in volume, relevance, accuracy, and detail, all of which will help make the predictors more precise, and thereby improve prescriptions.
Technically the most compelling direction is to take advantage of multiple prediction models, in particular the more traditional compartmental or network models reviewed in Section 2. General assumptions about the spread of the disease are built in to these models, and they can thus serve as a stable reference when data is otherwise lacking in a particular case. On the other hand, it is sometimes hard to estimate the parameters that these models require, and data-driven models can be more accurate in specific cases. A particularly good approach might be to form an ensemble from these models (as is often done in machine learning; [67, 45]), and thereby combine their predictions systematically to maximize accuracy.
Another way to make the system more accurate and useful is to improve the outcome measures. Currently the cost of the NPIs is proxied based on how many of them are implemented and at what stringency level. Economic impact is difficult to measure, and the current approach, however approximate, already works relatively well. However, it may be possible to develop more accurate measures based on a variety of economic indicators, such as unemployment, consumer spending, and GNP. They need to be developed for each country separately, given different social and economic structures. With such measures, ESP would be free to find surprising solutions that, while stringent, may not have as high an economic impact.
The retrospective example of Italy in Figure 9 suggests that it may be difficult to transfer conclusions from one country to another, and to make accurate recommendations early on in the epidemic. An important future analysis will be to analyze systematically how much data and what kind of data is necessary. For instance, if the model had been developed based on the data from China, would it have worked for Iran and Italy? Or after China, Iran, and Italy, would it have worked for the US and the rest of Europe? That is, how much transfer is there between countries and how many scenarios does the model need to see before it becomes reliable? The lessons can be useful for the rest of the COVID19 pandemic, as well as for future pandemics.
An important aspect of any decision system is to make it trustworthy, i.e. estimate confidence in its decisions and predictions, allow users to utilize their expert knowledge and explore alternatives, and explain the decision recommendations. The first step was already taken in this study by applying the RIO uncertainty estimation method (Section 5.3) to the predictions. This approach may be improved in the future by grouping the countries according to original predictor performance, then training a dedicated RIO model for each group. In this way, each RIO model focuses on learning the predictive uncertainty of countries with similar patterns, so that the estimated confidence intervals become more reliable. As a further step, the estimated uncertainty can be used by the Prescriptor to make safer decisions.
Second, a prescription “scratchpad” can be included, allowing the user not only to see the prescription details (as shown in Section 7), but also to modify them by hand. In this manner, before any prescriptions are deployed, the user can bring in expert knowledge that may not be available to ESP. For instance, some NPIs in some countries may not be feasible or enforceable at a given time. The interface makes it possible to explore alternatives and see the resulting outcome predictions immediately. In this manner, the user may find more refined prescriptions than those proposed by ESP, or become convinced that none are likely to exist.
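The scratchpad interaction reduces to a simple what-if loop: apply hand edits to a prescription and re-run the Predictor on the result. The sketch below is a minimal illustration of that loop; the function name, the dictionary representation of a prescription, and the toy predictor are all assumptions, not the paper's interface.

```python
def explore_prescription(predictor, context, prescription, edits):
    """Scratchpad-style what-if: apply expert edits to an ESP
    prescription, then immediately re-predict the outcome."""
    modified = dict(prescription)
    modified.update(edits)  # expert overrides, e.g. relax an unenforceable NPI
    return modified, predictor(context, modified)

# Toy predictor (illustrative only): predicted daily cases fall as
# total stringency rises.
predictor = lambda ctx, npi: ctx["baseline_cases"] / (1 + sum(npi.values()))

plan = {"school_closing": 3, "workplace_closing": 2}
edited, cases = explore_prescription(
    predictor, {"baseline_cases": 600},
    plan, {"workplace_closing": 0})  # e.g. closure not enforceable here
```

The user sees at once that removing the workplace closure raises the predicted case count, and can iterate until the trade-off is acceptable.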
Third, the prescriptions are currently generated by an evolved neural network, which may perform well in the task but does not explain how or why it arrived at a given prescription. In the future, it may be possible to evolve explicit rule sets for this task, either in addition to or instead of neural networks [59, 25]. Rule sets are readable, specifying which feature values in the context lead to which prescriptions. They can therefore be used to generate explanations of the learned decision strategies, and thereby make it easier for human decision makers to understand and trust them.
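A rule set of the kind described above maps context features directly to prescriptions, with each rule carrying its own human-readable justification. The sketch below shows the representation, not an evolved result: the thresholds, NPI names, and rule text are made up for illustration, and an actual system would evolve the rules rather than write them by hand.

```python
def apply_rules(rules, context):
    """Readable decision strategy: the first rule whose condition
    matches the context determines the prescription, and its attached
    reason serves as the explanation."""
    for condition, prescription, reason in rules:
        if condition(context):
            return prescription, reason
    return {}, "no rule fired"

# Illustrative rules; thresholds are invented, not learned values.
rules = [
    (lambda c: c["cases_per_100k"] > 50,
     {"workplace_closing": 3, "school_closing": 3},
     "high incidence -> close workplaces and schools"),
    (lambda c: c["cases_per_100k"] > 10,
     {"workplace_closing": 1, "school_closing": 2},
     "moderate incidence -> partial restrictions"),
]

plan, why = apply_rules(rules, {"cases_per_100k": 20})
```

Because each fired rule names the triggering feature values and the resulting NPIs, a decision maker can audit exactly why a prescription was made.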
While the current ESP model of NPI optimization is a promising demonstration, if more and better data becomes available in the next few months, it may be possible to use the tool during the current COVID-19 pandemic to make useful recommendations. The general approach can also be developed further in the long term, eventually allowing decision makers to minimize the impact of future pandemics.
Recent advances in AI have made it possible not only to predict what would happen, but also to prescribe what should be done. At the same time, widely available data has made it possible to build data-driven models that are surprisingly accurate in their predictions. This paper puts these two advances together to derive recommendations for NPIs in the current COVID-19 crisis. While preliminary, the model already leads to insights into which NPIs are most important to get right, as well as how they might be implemented most effectively. With further data and development, the approach may become a useful tool for policy makers, helping them to minimize the impact of the current as well as future pandemics.
Mitigating economic impacts of the COVID-19 pandemic and preserving U.S. strategic competitiveness in artificial intelligence. White Paper Series on Pandemic Response and Preparedness, Technical Report 2, National Security Commission on Artificial Intelligence. Cited by: §1.
A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6, pp. 181–197. Cited by: §6.
Flavor-cyber-agriculture: optimization of plant metabolites in an open-source control environment through surrogate modeling. PLOS ONE. https://doi.org/10.1371/journal.pone.0213918. Cited by: §1, §3, §6.
Designing neural networks through evolutionary algorithms. Nature Machine Intelligence 1, pp. 24–35. Cited by: §3, §6.