Effective response to disease outbreaks depends on reliable estimates of their status. This process of identifying new outbreaks and monitoring ongoing ones — disease surveillance — is a critical tool for policy makers and public health professionals.
The traditional practice of disease surveillance is based upon gathering information from in-person patient visits. Clinicians make a diagnosis and report that diagnosis to the local health department. These health departments aggregate the reports to produce local assessments and also pass information further up the government hierarchy to the national health ministry, which produces national assessments. Our previous work describes this process with a mathematical model.
This approach is accepted as sufficient for decision-making [20, 29] but is expensive, and results lag real time by anywhere from a week to several months [2, 18]. Novel surveillance systems that use disease-related internet activity traces such as social media posts, web page views, and search queries are attractive because they would be faster and cheaper [28, 30]. One can conjecture that an increase in influenza-related web searches is due to an increase in flu observations by the public, which in turn corresponds to an increase in real influenza activity. These systems use statistical techniques to estimate a mapping from past activity to past traditional surveillance data, then apply that map to current activity to predict current (but not yet available) traditional surveillance data, a process known as nowcasting.
One specific concern with this approach is that these models can learn coincidental, rather than informative, relationships. For example, Bodnar and Salathé found a correlation between zombie-related Twitter messages and influenza. More quantitatively, Ginsberg built a flu model using search queries selected from 50 million candidates by a purely data-driven process that considered correlation with influenza-like illness in nine regions of the United States. Of the 45 queries selected by the algorithm for inclusion in the model, 6 (13%) were only weakly related to influenza (3 categorized as “antibiotic medication”, 2 “symptoms of an unrelated disease”, 1 “related disease”). Of the 55 next-highest-scoring candidates, 8 (15%) were weakly related (3, 2, and 3 respectively) and 19 (35%) were “unrelated to influenza”, e.g. “high school basketball”. That is, even using a high-quality, very computationally expensive approach that leveraged demonstrated historical correlation in nine separate geographic settings, one-third of the top 100 features were weakly related or not related to the disease in question. Fig 1 illustrates this problem for flu using contagiousness-related web searches.
Such features, with a dubious real link to the quantity of interest, pose a risk that the model may perform well during training but then provide erroneous estimates later when coincidental relationships fail, especially if they do so suddenly. We have previously proposed a metric called deceptiveness to quantify this risk. This metric quantifies the fraction of an estimate that depends on noise (coincidental relationships between input and output data) rather than signal (informative relationships) and is a real number between 0 and 1 inclusive. We hypothesize that disease nowcasting models that leverage the deceptiveness of input features have better accuracy than those that do not.
This is an important question because disease forecasting is improved by better nowcasting. For example, Brooks’s top-performing entry to the CDC’s flu forecasting challenge was improved by nowcasting. Lu tested autoregressive nowcasts using several internet data sources and found that they improved 1-week-ahead forecasts. Kandula measured the benefit of nowcasting to their flu forecasting model at 8–35%. Finally, our own work shows that a Bayesian seasonal flu forecasting model using ordinary differential equations benefits from filling a one-week reporting delay with internet-based nowcasts.
The present work tests this hypothesis using five seasons of influenza-like illness (ILI) in the United States (2011–2016). We selected U.S. influenza because high-quality reference data are easily available and because it is a pathogen of great interest to the public health community. Although flu is often considered a mild infection, it can be quite dangerous for some populations, including older adults, children, and people with underlying health conditions. Typical U.S. flu seasons kill ten to fifty thousand people annually.
Our experiment is a simulation study followed by validation using real data. This lets us test our approach using fully-known deceptiveness as well as a more realistic setting with estimated deceptiveness. We trained linear estimation models to nowcast ILI using an extension of ridge regression called generalized ridge or gridge regression that lets us apply individual weights to each feature, thus expressing a prior belief on their value: higher value for lower deceptiveness. We used three classes of input features: (1) synthetic features constructed by adding plausible noise patterns to ILI, (2) Google search volume on query strings related to influenza, and (3) Google search volume on inferred topics related to influenza.
We found that accurate deceptiveness knowledge did indeed reduce prediction error, and in the case of the automatically-generated query string features, as much or more than topic features that require human curation. We also found that semantic distance as derived from the Wikipedia article category tree served as a useful proxy for deceptiveness.
The remainder of this paper is organized as follows. We next describe our data sources, regression approach, and experiment structure. After that, we describe our results and close with their implications and suggestions for future work.
Our study period was five consecutive flu seasons, 2011–2012 through 2015–2016, using weekly data. The first week in the study was the week starting Sunday, July 3, 2011, and the last week started Sunday, June 26, 2016, for a total of 261 consecutive weeks. We considered each season to start on the first Sunday in July, and the previous season to end on the day before (Saturday).
We used gridge regression to fit input features to U.S. ILI over a subset of the first three seasons, then used the fitted coefficients to estimate ILI in the fourth and fifth seasons. (We used this training schedule, rather than training a new model for each week as one might do operationally, in order to provide a more challenging setting to better differentiate the models.) We assessed accuracy by comparing the estimates to ILI using three metrics: $R^2$ (the square of Pearson correlation), root mean squared error (RMSE), and hit rate.
The experiment is a full factorial design with four factors, yielding a total of 225 models:
Class of input features (3 levels): synthetic features, search query string volume, and search topic volume.
Training period (3): one, two, or three consecutive seasons.
Noise added to deceptiveness (5): ranging from perfect knowledge of deceptiveness to none at all.
Model type (5): ridge regression and four levels of gridge regression.
This procedure is implemented in a Python program.
The remainder of this section describes our data sources, regression algorithm, experimental factors, and assessment metrics in detail.
2.1 Data sources
We used four types of data in this experiment:
Reference data. U.S. national ILI from the Centers for Disease Control and Prevention (CDC). This is a weekly estimate of influenza incidence.
Synthetic features. Weekly time series computed by adding specific types of systematic and random noise to ILI. These simulated features have known deceptiveness.
Flu-related concepts. We used the crowdsourced Wikipedia category tree to enumerate a set of concepts and estimate the semantic relatedness of each to influenza.
Real features. Two types of weekly time series: Google search query strings and Google search topics. These features are based on the flu-related concepts above and use estimated flu relatedness as a proxy for deceptiveness.
This section explains the format and quirks of the data, how we obtained them, and how readers can also obtain them.
2.1.1 Reference data: U.S. influenza-like illness (ILI) from CDC
Influenza-like illness (ILI) is a syndromic metric that estimates influenza incidence, i.e., the number of new flu infections. It is the fraction of patients presenting to the health care system who have symptoms consistent with flu and no alternate explanation. The basic process is that certain clinics called sentinel providers report the total number of patients seen during each interval along with the number diagnosed with ILI. Local, provincial, and national governments then collate these data to produce corresponding ILI reports.
U.S. ILI values tend to range between 1–2% during the summer and 7–8% during a severe flu season [5, 7]. While an imperfect measure (for example, it is subject to reporting and behavior biases if some groups, like children, are more commonly seen for ILI), it is considered sufficiently accurate for decision-making purposes by the public health community [20, 29].
In this study, we used weekly U.S. national ILI downloaded from the Centers for Disease Control and Prevention (CDC)’s FluView website on December 21, 2016, six months after the end of the study period. This delay is enough for reporting backfill to settle sufficiently. Fig 1 illustrates these data, and they are available as an Excel spreadsheet in S1 Dataset (ILI.xls).
2.1.2 Synthetic features: Computed by us
These simulated features are intended to model a plausible way in which internet activity traces with varying usefulness might arise. Their purpose is to provide an experimental setting with features that are sufficiently realistic and have known deceptiveness.
Each synthetic feature $x$ is a random linear combination of ILI $y$, Gaussian random noise $r$, and systematic noise $s$ (all vectors with 261 elements, one for each week):

$x = w_{\mathrm{ILI}}\,y + w_{\mathrm{rand}}\,r + s$

A feature’s deceptiveness is simply the weight of its systematic noise: $d = w_{\mathrm{sys}} = 1 - w_{\mathrm{ILI}} - w_{\mathrm{rand}}$. The noise $r$ is a random multivariate normal vector with standard deviation 1. Systematic noise $s$ is a random linear combination of seven basis functions $b_1, \dots, b_7$:

$s = \sum_{j=1}^{7} w_j\,b_j$
These basis functions, illustrated in Fig 2, simulate sources of systematic noise for internet activity traces. They fall into three classes:
Oprah effect: 3 types. These simulate pulses of short-lived traffic driven by media interest. For example, U.S. Google searches for measles were 10 times higher in early 2015 than at any other time during the past five years, but measles incidence peaked in 2014.
The three specific bases are: annual, a pulse every year shortly after the flu season peak; fore, pulses during both the training and test seasons (second, fourth, and fifth); and late, pulses only during the test seasons (fourth and fifth). The last creates features with novel divergence after training is complete, producing deceptive features that cannot be detected by correlation with reference data.
Drift: 2 types. These simulate steadily changing public interest. For example, as the case definition of autism was modified, the number of individuals diagnosed with autism increased.
The two bases are: steady, a slow change over the entire study period of five seasons, and late, a transition from one steady state to another over the fourth season. The latter again models novel divergence.
Cycle: 2 types. These simulate phenomena that have an annual ebb and flow correlating with the flu season. An example is the U.S. basketball season noted above.
The two bases are: annual, cycles continuing for all five seasons, and ending, cycles that end after the training seasons. The latter again models novel divergence.
In order to build one feature, we need to sample the nine elements of the weight vector $w = (w_{\mathrm{ILI}}, w_{\mathrm{rand}}, w_1, \dots, w_7)$, which sums to 1. This three-step procedure is as follows. First, $w_{\mathrm{ILI}}$, $w_{\mathrm{rand}}$, and $w_{\mathrm{sys}}$ are sampled from a Dirichlet distribution:

$(w_{\mathrm{ILI}}, w_{\mathrm{rand}}, w_{\mathrm{sys}}) \sim \mathrm{Dirichlet}(\alpha)$

Next, the relative weight of the three types of systematic noise (Oprah effect, drift, and cycle) is sampled:

$(w_{\mathrm{oprah}}, w_{\mathrm{drift}}, w_{\mathrm{cycle}}) \sim w_{\mathrm{sys}} \cdot \mathrm{Dirichlet}(\alpha')$

Finally, all the weight for each type is randomly assigned with equal probability to a single basis function of that type.
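The three-step sampling procedure above can be sketched as follows. The Dirichlet concentration parameters `ALPHA` and `ALPHA_SYS` are hypothetical placeholders, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# The seven systematic-noise basis functions, grouped by type.
BASES = {"oprah": ["annual", "fore", "late"],
         "drift": ["steady", "late"],
         "cycle": ["annual", "ending"]}

ALPHA = (2.0, 1.0, 1.0)      # hypothetical: splits (w_ili, w_rand, w_sys)
ALPHA_SYS = (1.0, 1.0, 1.0)  # hypothetical: splits w_sys among the three types

def sample_weights():
    """Sample one synthetic feature's nine-element weight vector; sums to 1."""
    # Step 1: split total weight among ILI, random noise, and systematic noise.
    w_ili, w_rand, w_sys = rng.dirichlet(ALPHA)
    # Step 2: split the systematic weight among the three noise types.
    type_weights = w_sys * rng.dirichlet(ALPHA_SYS)
    # Step 3: assign each type's weight entirely to one of its bases, uniformly.
    w = {"ili": w_ili, "rand": w_rand}
    for (kind, bases), tw in zip(BASES.items(), type_weights):
        chosen = rng.choice(bases)
        for b in bases:
            w[(kind, b)] = tw if b == chosen else 0.0
    return w

w = sample_weights()
deceptiveness = sum(v for k, v in w.items() if isinstance(k, tuple))  # = w_sys
```

The feature itself would then be the weighted sum of ILI, a standard-normal noise vector, and the chosen basis functions.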
2.1.3 Flu-related concepts: Wikipedia
In order to build the flu nowcasting model based on web search queries described in the next section, we first needed a set of influenza-related concepts. We used the Wikipedia inter-article link graph to generate a set of candidate concepts and the Wikipedia article category hierarchy to estimate the semantic relatedness of each concept to influenza, which we use as a proxy for deceptiveness. An important advantage of this approach is that it is automated and easily generalizable to other diseases.
Wikipedia is a popular web-based encyclopedia whose article content and metadata are crowdsourced. We used two types of metadata from a dataset collected March 24, 2016 for our previous work using the Wikipedia API.
First, Wikipedia articles contain many hyperlinks to other articles within the encyclopedia. This work used the article “Influenza” and the 572 others it links to, including clearly related articles such as “Infectious disease” and apparently unrelated ones such as “George W. Bush”, the U.S. president immediately prior to the 2009 H1N1 pandemic.
Second, Wikipedia articles are leaves in a category hierarchy. Both articles and categories have one or more parent categories. For example, one path from “Influenza” to the top of the tree is: Healthcare-associated infections, Infectious diseases, Diseases and disorders, Health, Main topic classifications, Articles, and finally Contents. This tree can be used to estimate semantic relatedness between two articles. The number of levels one must climb the tree before finding a common category is a metric called category distance; the distance between an article and itself is 1. For example, the immediate categories of “Infection” include Infectious diseases. Thus, the distance between these two articles is 2, because we had to ascend two levels from “Influenza” before discovering the common category Infectious diseases.
We used category distance between each of the 573 articles and “Influenza” to estimate the semantic relatedness to influenza. The minimum category distance was 1 and the maximum 7.
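One plausible reading of this metric can be sketched as follows: climb level by level from “Influenza” until reaching a category that is also an ancestor of the other article. The `parents` dict is a toy stand-in for the real Wikipedia category tree, containing only the fragment from the example above.

```python
def ancestors(node, parents):
    """All categories reachable by climbing upward from an article or category."""
    seen, frontier = set(), {node}
    while frontier:
        frontier = {p for n in frontier for p in parents.get(n, ())} - seen
        seen |= frontier
    return seen

def category_distance(target, article, parents, max_levels=20):
    """Levels climbed from `target` before reaching a category that is also an
    ancestor of `article`; None if no common category within max_levels."""
    other = ancestors(article, parents)
    level = {target}
    for k in range(1, max_levels + 1):
        level = {p for n in level for p in parents.get(n, ())}
        if level & other:
            return k
    return None

# Toy fragment of the category tree from the example in the text.
parents = {
    "Influenza": ["Healthcare-associated infections"],
    "Healthcare-associated infections": ["Infectious diseases"],
    "Infection": ["Infectious diseases"],
}
```

With this fragment, the distance from “Influenza” to itself is 1 and to “Infection” is 2, matching the worked example.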
The basic intuition for this approach is that Wikipedia category distance is a reasonable proxy for how related a concept is to influenza, and this relatedness is in turn a reasonable proxy for deceptiveness. For example, consider a distance-1 feature and a distance-7 feature that are both highly correlated with ILI. Standard linear regression will give equal weight to both features. However, we conjecture that the distance-7 feature’s correlation is more likely to be spurious than the distance-1’s; i.e., we posit that the distance-7 feature is more deceptive. Thus, we give the distance-1 feature more weight in the regression, as described below.
Because category distance is a discrete variable between 1 and 7, while deceptiveness is continuous, we convert category distance into a deceptiveness estimate using a monotone map that assigns smaller deceptiveness to smaller distances, offset by a small constant $\epsilon$. The purpose of $\epsilon$ is to ensure that features with the minimum category distance of 1 still receive regularization from the linear regression, as described below. In our initial data exploration, the value of $\epsilon$ had little effect, so we fixed it at a small constant; as a consequence, the minimum deceptiveness estimate is greater than zero. We emphasize that category distance is already a noisy proxy for deceptiveness, so even with zero noise added, the deceptiveness estimate remains approximate.
It is important to realize that because Wikipedia is continually edited, metadata such as links and categories change over time. Generally, mature topic areas such as influenza and infectious disease are more stable than, for example, current events. The present study assumes that the dataset we used is sufficiently correct despite its age; i.e., freshly collected links and categories might be somewhat different but not necessarily more correct.
The articles used and their category distances are in S2 Dataset (en+Influenza.xlsx).
2.1.4 Real features: Google searches
Typically, each feature for internet-based disease surveillance estimates public interest in a specific concept. This study uses Google search volume as a measure of public interest. By mapping our Wikipedia-derived concepts to Google search queries, we obtained a set of queries with estimated deceptiveness. Then, search volume over time for each of these queries, as well as their deceptiveness, are the inputs to our algorithms.
We tested two types of Google searches. Search query strings are the raw strings typed into the Google search box. We designed an automated procedure to generate query strings from Wikipedia article titles. Search topics are concepts assigned to searches by Google using proprietary, unspecified algorithms. We built a map from Wikipedia article titles to topics manually.
Our procedure to map articles to query strings is:
Decode the percent notation in the title portion of the article’s URL.
Change underscores to spaces.
Approximate non-ASCII characters with ASCII using the Python package unidecode.
Change upper case letters to lower case. (This serves a simplifying rather than necessary purpose, as the Google Trends API is case-insensitive.)
Remove all characters other than alphanumeric, slash, single quote, space, and hyphen.
Remove stop phrases we felt were unlikely to be typed into a search box. Matches for the following regular expressions were removed (note leading and trailing spaces):
“^influenza .? virus subtype ”
“^list of ”
This produces a query string for all 573 articles. The map is 1-to-1: each article maps to exactly one query string, and each query string maps to exactly one article. The process is entirely automated once the list of stop phrases is developed.
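The mapping can be sketched as follows. The study used the third-party unidecode package for the ASCII approximation step; the stdlib NFKD fallback here is a rough stand-in, not the same transformation.

```python
import re
import unicodedata
from urllib.parse import unquote

# Stop phrases, anchored at the start of the string (note trailing spaces).
STOP_PHRASES = [r"^influenza .? virus subtype ", r"^list of "]

def article_to_query(url_title):
    """Map the title portion of a Wikipedia article URL to a search query string."""
    s = unquote(url_title)                   # 1. decode percent notation
    s = s.replace("_", " ")                  # 2. underscores to spaces
    s = (unicodedata.normalize("NFKD", s)    # 3. rough ASCII approximation
           .encode("ascii", "ignore").decode())
    s = s.lower()                            # 4. lower case
    s = re.sub(r"[^a-z0-9/' -]", "", s)      # 5. keep alnum, slash, quote, space, hyphen
    for pat in STOP_PHRASES:                 # 6. remove stop phrases
        s = re.sub(pat, "", s)
    return s
```

For example, `article_to_query("List_of_epidemics")` yields `"epidemics"`, and `article_to_query("Influenza_A_virus_subtype_H1N1")` yields `"h1n1"`.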
Google search topics are a somewhat more amorphous concept. Searches are assigned to topics by Google’s proprietary machine learning algorithms, which are not publicly available. A given topic is identified by its name or a hexadecimal code. For example, the query string “apple” might be assigned to “Apple (fruit)” or “Apple (technology company)” based on the content of the full search session or other factors.
To manually build a mapping between Wikipedia articles and Google search topics, we entered the article title and some variations into the search box on the Google Trends website and then selected the most reasonable topic named in the site’s auto-complete box. The topic code was in the URL. If the appropriate topic was unclear, we discussed it among the team. Not all articles had a matching topic; we identified 363 topics for the 573 articles (63%). Among these 363 articles, the map is 1-to-1.
We downloaded search volume for both query strings and topics from the Google Health Trends API on July 31, 2017. This gave us a weekly time series for the United States for each search query string and topic described above. These data appear to be a random sample, changing slightly from download to download.
Each element of the time series is the probability that the reported search session contains the string or topic, multiplied by 10 million. Searches without enough volume to exceed an unspecified privacy threshold are set to zero, i.e., we cannot distinguish between few searches and no searches. For this reason, we removed from analysis searches with more than 2/3 of the 261 weeks having a value of zero. This left 457 of the 573 query strings (80%) and 349 of the 363 topics (96%) usable. Fig 5 shows two of these time series.
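The sparsity filter is a one-liner; a minimal sketch, assuming each series is a plain list of 261 weekly values:

```python
def is_usable(series):
    """Keep a search time series only if at most 2/3 of its weeks are zero:
    below the privacy threshold, few searches are indistinguishable from none."""
    zeros = sum(1 for v in series if v == 0)
    return 3 * zeros <= 2 * len(series)  # integer arithmetic avoids float edge cases
```

With 261 weeks, the cutoff falls at exactly 174 zero-valued weeks.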
Our map and category distances are in S2 Dataset (en+Influenza.xlsx). Google’s terms of service prohibit redistribution of the search volume data. However, others can request the data using the same procedure we used. We used the script ght_get in the experiment source code for downloading. This script depends on a patched source code file originally provided by Google that has an unclear license; therefore, we cannot redistribute this code. Access to the Google source code is granted with the data, and we do provide the patch.
2.2 Gridge regression
Linear regression is a popular approach for mapping features (inputs) to observations (output). This section describes the algorithm and the extensions we used in our experiment to incorporate deceptiveness information.
The model for linear regression is

$y = X\beta + \varepsilon$

where $y$ is an $n \times 1$ observation vector, $X$ is an $n \times p$ feature matrix whose column $x_i$ is the standardized feature vector (i.e., mean centered and standard deviation scaled) corresponding to feature $i$, $\beta$ is a $p \times 1$ coefficient vector, and $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$, where $\mathcal{N}(\mu, \Sigma)$ is a multivariate normal distribution with mean $\mu$ and covariance matrix $\Sigma$, $\sigma^2$ is a scalar, and $I$ is an identity matrix. Standardizing the features of $X$ is a convention that places all features on the same scale.

The goal of linear regression is to find the estimate of $\beta$ that minimizes the sum of squared residual errors. The ordinary least squares (OLS) estimator solves the following:

$\hat{\beta}_{\mathrm{OLS}} = \arg\min_{\beta} \sum_{t=1}^{n} \Bigl( y_t - \sum_{i=1}^{p} X_{ti}\,\beta_i \Bigr)^2$

or in matrix form:

$\hat{\beta}_{\mathrm{OLS}} = (X^\top X)^{-1} X^\top y$

$\hat{\beta}_{\mathrm{OLS}}$ is the unbiased, minimum-variance estimator of $\beta$, assuming $\varepsilon$ is normally distributed [31, ch. 1–2]. A prediction corresponding to a new feature vector $x$ is $\hat{y} = x^\top \hat{\beta}$.
While $\hat{\beta}_{\mathrm{OLS}}$ is unbiased, it is often possible to construct an estimator with smaller expected prediction error — i.e., with predictions on average closer to the true value — by introducing some amount of bias through regularization, which is the process of introducing additional information beyond the data. Regularization can also make regression work in situations with more features than observations, like ours.
One popular regularization method is called ridge regression, which extends OLS by encouraging the coefficients to be small. This minimizes:

$\hat{\beta}_{\mathrm{ridge}} = \arg\min_{\beta} \Bigl\{ \sum_{t=1}^{n} \bigl( y_t - x_t^\top \beta \bigr)^2 + \lambda \sum_{i=1}^{p} \beta_i^2 \Bigr\}$

The additional parameter $\lambda \geq 0$ controls the strength of regularization. When $\lambda = 0$, this is equivalent to OLS. As $\lambda$ increases, the coefficient vector is constrained toward zero more strongly. Ridge regression applies the same degree of regularization to each feature, as $\lambda$ is common to all features.
A second extension, called generalized ridge regression or gridge regression, adds a feature-specific modifier $\gamma_i$ to the regularization:

$\hat{\beta}_{\mathrm{gridge}} = \arg\min_{\beta} \Bigl\{ \sum_{t=1}^{n} \bigl( y_t - x_t^\top \beta \bigr)^2 + \lambda \sum_{i=1}^{p} \gamma_i\,\beta_i^2 \Bigr\}$

$\gamma_i$ adjusts the regularization penalty individually for each feature (ridge regression is the special case where $\gamma_i = 1$ for all $i$). Gridge retains closed-form solvability:

$\hat{\beta}_{\mathrm{gridge}} = (X^\top X + \lambda \Gamma)^{-1} X^\top y$

where $\Gamma$ is a diagonal matrix with $\gamma_i$ on the diagonal and zero on the off-diagonals.

Gridge regression allows us to incorporate feature-specific deceptiveness information by making $\gamma_i$ a function of feature $i$’s deceptiveness: the more deceptive feature $i$, the larger $\gamma_i$.
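The closed-form gridge estimator can be sketched in a few lines of NumPy; this is a minimal illustration, not the study's implementation.

```python
import numpy as np

def fit_gridge(X, y, lam, gamma):
    """Generalized ridge: beta-hat = (X'X + lam * Gamma)^(-1) X'y,
    where Gamma = diag(gamma) holds the per-feature penalty modifiers."""
    Gamma = np.diag(gamma)
    return np.linalg.solve(X.T @ X + lam * Gamma, X.T @ y)

def fit_ridge(X, y, lam):
    """Plain ridge is the special case gamma_i = 1 for every feature."""
    return fit_gridge(X, y, lam, np.ones(X.shape[1]))
```

Raising `gamma[i]` shrinks coefficient `i` harder, so a feature judged more deceptive contributes less to the nowcast.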
2.3 Experiment factors
Our experiment had 225 conditions. This section describes its factors: input feature class (3 levels), training period (3), deceptiveness noise added (5), and regression type (5).
2.3.1 Input feature class
We tested three classes of input features:
Synthetic. Randomly generated transformations of ILI, as described above in §2.1.2.
Search query string. Volume of Google searches entered directly by users, as described above in §2.1.4.
Search topic. Volume of Google search topics inferred by Google’s proprietary algorithms, as described above in §2.1.4.
Each feature comprises a time series of weekly data, with frequency and alignment matching our ILI data.
2.3.2 Training period
We tested three different training periods: 1st through 3rd seasons inclusive (three seasons), 2nd and 3rd (two seasons), and 3rd only (one season). Because the 4th season contains transitions in the synthetic features, we did not use it for training even when testing on the 5th season.
2.3.3 Deceptiveness noise added
The primary goal of our study is to evaluate how much knowledge of feature deceptiveness helps disease incidence models. In the real world, this knowledge will be imperfect. Thus, one of our experiment factors is to vary the quality of feature deceptiveness knowledge.
Our basic approach is to add varying amounts of noise to the best available estimate $d_i$ of each feature’s deceptiveness. Recall that for synthetic features, $d_i$ is known exactly, while for the search-based features, $d_i$ is an estimate based on the Wikipedia category distance.
To compute the noise-added deceptiveness $\tilde{d}_i$ for feature $i$, at noise level $\eta$, we simply select a random other feature $j$ and mix in its deceptiveness: $\tilde{d}_i = (1 - \eta)\,d_i + \eta\,d_j$. There are five levels of this factor:
Zero noise: $\eta = 0$, i.e., the model gets the best available estimate of $d_i$.
Low noise: a small $\eta$.
Medium noise: an intermediate $\eta$.
High noise: a large $\eta$.
Total noise: $\eta = 1$, i.e., the model gets no correct information at all about $d_i$.
Models do not know what condition they are in; they get only $\tilde{d}_i$, not $d_i$.
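The mixing step can be sketched as follows; the noise level `eta` in [0, 1] stands for whichever factor level is in effect, and the specific intermediate values are not restated here.

```python
import random

def add_noise(d, eta, seed=0):
    """Return noise-added deceptiveness: each feature's value is mixed with
    that of a randomly chosen *other* feature at noise level eta in [0, 1]."""
    rng = random.Random(seed)
    d_tilde = []
    for i in range(len(d)):
        j = rng.choice([k for k in range(len(d)) if k != i])  # other feature
        d_tilde.append((1 - eta) * d[i] + eta * d[j])
    return d_tilde
```

At `eta=0` each feature keeps its own estimate; at `eta=1` it receives some other feature's estimate instead.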
2.3.4 Regression type
We tested five types of gridge regression:
Ridge regression: $\gamma_i = 1$, i.e., ignore deceptiveness information.
Threshold gridge regression: keep features with category distance at or below a cutoff and discard the rest, as in prior work. This is implemented as a threshold on deceptiveness $d_i$, which is applicable to both search and synthetic features (the latter have no category distance).
Linear gridge: $\gamma_i = d_i$.
Quadratic gridge: $\gamma_i = d_i^2$.
Quartic gridge: $\gamma_i = d_i^4$.
These levels are in rough ascending order of the importance they place on deceptiveness. (We additionally tested, but do not report, a few straw-man models to help identify bugs in our code.)
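The five model types can be sketched as a single function from deceptiveness to the penalty modifier. The power-of-deceptiveness forms follow the quadratic/quartic naming; the threshold cutoff `theta` is a free placeholder, since we do not restate the study's value.

```python
import math

def penalty_modifier(d, model, theta=0.5):
    """gamma_i as a function of feature i's deceptiveness d in (0, 1].
    `theta` is a placeholder threshold, not the study's actual cutoff."""
    if model == "ridge":
        return 1.0                                 # ignore deceptiveness entirely
    if model == "threshold":
        return 1.0 if d <= theta else math.inf     # infinite penalty == discard
    if model == "linear":
        return d
    if model == "quadratic":
        return d ** 2
    if model == "quartic":
        return d ** 4
    raise ValueError(f"unknown model: {model}")
```

Because $d \in (0, 1]$, higher powers regularize low-deceptiveness features relatively less, increasing the contrast between trusted and distrusted features.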
All models used a single regularization strength $\lambda$, obtained by 10-fold cross-validation. For each model, we tested 41 values of $\lambda$ evenly log-spaced over a wide range; each fold fitted a model on the 9 folds left in and then evaluated its RMSE on the one fold left out. The $\lambda$ with the lowest mean RMSE across the 10 folds (plus a bias of up to 0.02 to encourage $\lambda$s in the middle of the range) was selected as the best $\lambda$ for that model. We then used the mean of these best $\lambda$s for our experiment.
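The λ selection can be sketched as follows, omitting the small mid-range bias term; the log-spacing bounds are arbitrary placeholders, not the study's actual range.

```python
import numpy as np

def choose_lambda(X, y, gamma, lambdas, k=10, seed=0):
    """Pick the lambda whose mean held-out RMSE across k folds is lowest."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    mean_rmse = []
    for lam in lambdas:
        errs = []
        for f in folds:
            train = np.setdiff1d(np.arange(len(y)), f)
            # Closed-form gridge fit on the 9 folds left in.
            beta = np.linalg.solve(
                X[train].T @ X[train] + lam * np.diag(gamma),
                X[train].T @ y[train])
            errs.append(np.sqrt(np.mean((X[f] @ beta - y[f]) ** 2)))
        mean_rmse.append(np.mean(errs))
    return lambdas[int(np.argmin(mean_rmse))]

# e.g. 41 candidates, log-spaced over a placeholder range:
lambdas = np.logspace(-3, 3, 41)
```

Operationally one would run this once per model and average the winning λs, as described above.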
2.4 Assessment of models
To evaluate a model, we apply its coefficients learned during the training period to input features during the 52 weeks of the fourth and fifth seasons respectively, yielding estimated ILI $\hat{y}$. We then compare $\hat{y}$ to reference ILI $y$ for each of the two test seasons. For each model and metric, this yields two scalars.
We report three metrics:
$R^2$, the square of the Pearson correlation $r$. Most previous work reports this unitless metric.
Root mean squared error (RMSE), defined as:

$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{t=1}^{n} (\hat{y}_t - y_t)^2}$

RMSE has interpretable units of ILI.
Hit rate is a measure of how well a prediction captures the direction of change. It is defined as the fraction of weeks when the direction of the prediction $\hat{y}$ (increase or decrease) matches the direction of the reference data $y$:

$\text{hit rate} = \frac{1}{n-1} \sum_{t=2}^{n} \mathbf{1}\bigl[ \operatorname{sign}(\hat{y}_t - \hat{y}_{t-1}) = \operatorname{sign}(y_t - y_{t-1}) \bigr]$

Because it captures the trend (is the flu going up or down?), it directly answers a simple, relevant, and practical public health question.
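The three metrics can be sketched as:

```python
import numpy as np

def r2(yhat, y):
    """Square of the Pearson correlation (unitless)."""
    return np.corrcoef(yhat, y)[0, 1] ** 2

def rmse(yhat, y):
    """Root mean squared error, in units of ILI."""
    yhat, y = np.asarray(yhat), np.asarray(y)
    return float(np.sqrt(np.mean((yhat - y) ** 2)))

def hit_rate(yhat, y):
    """Fraction of weeks where the predicted direction of change
    (increase or decrease) matches the reference data."""
    return float(np.mean(np.sign(np.diff(yhat)) == np.sign(np.diff(y))))
```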
Output of our regression models is illustrated in Fig 6, which shows 15 selected conditions. These conditions are close to what would be done in practice: use all available training information and add no noise. The differences between gridge regression types are subtle, but they are real, and close examination shows that the stronger gridge models that place higher importance on deceptiveness information are closer to the ILI reference data. The remainder of this section analyzes these differences across all the conditions.
Fig 7 illustrates the effect on error (RMSE) of adding noise to deceptiveness information; $R^2$ is similar. Generally, the gridge algorithms have lower error than plain ridge in lower-added-noise conditions and higher error in higher-added-noise conditions. That is, conditions with better knowledge of deceptiveness outperform the baseline, and performance declines as deceptiveness knowledge worsens, which is the expected trend. This supports our hypotheses that (a) incorporating knowledge of feature deceptiveness can improve estimates of disease incidence based on internet data and (b) semantic distance, as expressed in the Wikipedia article category tree, is an effective proxy for deceptiveness. (Hit rate shows limited benefit for gridge, as we discuss below.)
Fig 8 summarizes the improvement of the four gridge algorithms over plain ridge for all three metrics in the zero- and low-noise conditions. These results suggest that adding low-noise knowledge of deceptiveness to ridge regression improves error but not hit rate. It also appears that the benefits of gridge level off between quadratic (deceptiveness squared) and quartic (deceptiveness raised to the fourth power): while quartic sometimes beats quadratic gridge, it frequently is worse than plain ridge.
We speculate that the lack of observed benefit of gridge on hit rate is due to one or both of two reasons. First, plain ridge may be sufficiently good on this metric that it has already reached diminishing returns; recall that in Fig 6 all five algorithms captured the overall trend of ILI well. Second, ILI is noisy, with lots of ups and downs from week to week regardless of the medium-term trend. This randomness may limit the ability of hit rate to assess performance on a weekly time scale without overfitting. That is, we believe that gridge’s failure to improve over plain ridge on hit rate is unlikely to represent a concerning flaw in the algorithm.
Our previous work introduced deceptiveness, which quantifies the risk that a disease estimation model’s error will increase in the future because it uses features that are coincidentally, rather than informatively, correlated with the quantity of interest. This work tests the hypothesis that incorporating deceptiveness knowledge into a disease nowcasting algorithm reduces error; to our knowledge, it is the first work to quantitatively assess this question. To do so, we used simulated features with known deceptiveness as well as two types of real web search features with deceptiveness estimated using semi- and fully-automated algorithms.
Our experiment yielded three main findings:
Deceptiveness information does help our linear regression nowcasting algorithms, and it helps more when it is more accurate.
A readily available, crowdsourced semantic relatedness measure, Wikipedia category distance, is a useful proxy for deceptiveness.
Deceptiveness information helps automatically generated features perform as well as or better than similar, semi-automated features that require human curation.
The effects we measured are stronger for the synthetic features than the real ones. We speculate that this is for two reasons. First, the web search feature types are skewed towards low deceptiveness, because they are based on Wikipedia articles directly linked from “Influenza”, while the synthetic features lack this skew. Second, the synthetic features can have zero-noise deceptiveness information, while the real features cannot, because they use Wikipedia category distance as a less-accurate proxy. If verified, the second would further support the hypothesis that more accurate deceptiveness information improves nowcasts.
The third finding is interesting because it is relevant to a long-standing tension regarding how much human intervention is required for accurate measurements of the real world using internet data: more automated algorithms are much cheaper, but they risk oversimplifying the complexity of human experience. For example, our query strings were automatically generated from Wikipedia article titles, which are written for technical accuracy rather than salience for search queries entered by laypeople. To select features for disease estimation, one could use a fully-automated approach (e.g., our query strings), a semi-automated approach (e.g., our topics, which required a manual mapping step), or a fully-manual approach (e.g., by expert elicitation of search queries or topics, which we did not test).
One might expect a trade-off here: more automatic is cheaper, but more manual is more accurate. However, this was not the case in our results. The third box plot group in Fig 8 compares the gridge models using query string features to a baseline of plain ridge on topic features. Query strings perform favorably regardless of whether the baseline is plain ridge on query strings or topics, and the improvement is sometimes greater than that of gridge using topic features. This suggests that there is not really a trade-off, and fully automatic features might be among the most accurate.
All experiments are imperfect. Due to its finite scope, this work has many limitations. We believe that the most important ones are:
Wikipedia is changing continuously. While we believe that these changes would not have a material effect on our results, we have not tested this.
Wikipedia has non-semantic categories, such as the roughly 20,000 articles in category Good article, which our algorithm would assign distance 1 from one another. We have not yet encountered any other relevant non-semantic categories, and “Influenza” is not a Good article, so we believe this limitation does not affect the present results. However, any future work extending our algorithms should exclude these categories from the category distance computation.
The mapping from Wikipedia articles to Google query strings and topics has not been optimized. While we have presented mapping algorithms that are both reasonable by inspection and supported by our current and prior results, we have not compared these algorithms to alternatives.
Linear regression algorithm metaparameters were not fully evaluated. For example, the metaparameter in §2.1.3 was thoughtfully but arbitrarily assigned rather than experimentally optimized.
Other methods of feature generation may be better. This experiment was not designed to evaluate the full range of feature generation algorithms. In particular, direct elicitation of features such as query strings and topics should be evaluated.
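Regarding the metaparameter limitation, a standard remedy is k-fold cross-validation over a grid of candidate values. The following generic sketch (hypothetical data and grid, not our actual pipeline) illustrates the procedure for a plain ridge penalty:

```python
import numpy as np

def cv_mse(X, y, lam, k=5):
    """Mean squared error of ridge with penalty lam under k-fold CV."""
    idx = np.arange(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        b = np.linalg.solve(X[tr].T @ X[tr] + lam * np.eye(X.shape[1]),
                            X[tr].T @ y[tr])
        errs.append(np.mean((y[fold] - X[fold] @ b) ** 2))
    return float(np.mean(errs))

# Hypothetical data and penalty grid for illustration only.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))
y = X @ np.array([1.0, -1.0, 0.5]) + rng.normal(scale=0.2, size=60)
grid = [0.01, 0.1, 1.0, 10.0]
best = min(grid, key=lambda lam: cv_mse(X, y, lam))
```

The same loop applies to any metaparameter with a plausible candidate range, at the cost of one model fit per fold per candidate.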
4.3 Future work
This is an initial feasibility study using a fairly basic nowcasting model. At this point, the notion of deceptiveness for internet-based disease estimation is promising, but continued and broader positive results are needed to be confident in this hypothesis. In addition to addressing the limitations above, we have two groups of recommendations for future work.
First, multiple opportunities to improve nowcasting performance should be investigated. Additional deceptiveness-aware fitting algorithms such as the generalized lasso and generalized elastic net should be tested. Category distance also has room for improvement. For example, it can be made finer-grained by measuring path length through the Wikipedia category tree rather than counting only the number of levels ascended: the distance between “Influenza” and “Infection” would become 3, taking into account that Infectious diseases is a direct category of the latter. Finally, alternate deceptiveness estimates need testing, for example category distance based on the medical literature. Beyond better standalone nowcasting, utility also needs to be demonstrated when deceptiveness-aware nowcasts augment best-in-class forecasting models, such as those performing well in the CDC’s flu forecasting challenge.
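The proposed path-length refinement amounts to a shortest-path computation over the combined article and category graph. A minimal sketch using breadth-first search, on a toy graph fragment whose node names and links are illustrative only:

```python
from collections import deque

def category_distance(graph, src, dst):
    """Shortest path length (edge count) between two nodes via BFS.

    Unlike counting only levels ascended, this counts every edge
    traversed, giving a finer-grained distance.
    """
    seen = {src: 0}
    q = deque([src])
    while q:
        node = q.popleft()
        if node == dst:
            return seen[node]
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen[nbr] = seen[node] + 1
                q.append(nbr)
    return None  # dst unreachable from src

# Toy fragment: article "Influenza" sits in category "Influenza", whose
# parent is "Infectious diseases", which directly contains "Infection".
graph = {
    "Influenza": ["Influenza (category)"],
    "Influenza (category)": ["Influenza", "Infectious diseases"],
    "Infectious diseases": ["Influenza (category)", "Infection"],
    "Infection": ["Infectious diseases"],
}
```

On this fragment the Influenza-to-Infection distance is 3, matching the worked example above.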
Second, we are optimistic that our algorithms will generalize well to different diseases and locations. This is because our best feature-generation algorithm is fully automated, making it straightforward to generalize by simply offering new input. For example, to generate features for dengue fever in Colombia, one could: start with the article “Dengue fever” in Spanish Wikipedia; write a set of Spanish stop phrases; use the Wikipedia API to collect links from that article and walk the category tree; pull appropriate search volume data from Google or elsewhere; and then proceed as described above. Future studies should evaluate generalizability to a variety of disease and location contexts.
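The first steps of this recipe can be sketched against the standard MediaWiki API. The article title and stop phrase below are illustrative assumptions, and actually fetching and paginating the API response is omitted:

```python
from urllib.parse import urlencode

def links_query_url(title, lang="es"):
    """Build a MediaWiki API request for the outbound links of an article.

    Uses the standard action=query&prop=links endpoint; fetching and
    continuation handling are left to the caller.
    """
    params = {
        "action": "query",
        "prop": "links",
        "titles": title,
        "pllimit": "max",
        "format": "json",
    }
    return f"https://{lang}.wikipedia.org/w/api.php?" + urlencode(params)

def strip_stop_phrases(title, stop_phrases):
    """Remove stop phrases from an article title before using it as a
    search query string; the phrase list is language-specific."""
    for p in stop_phrases:
        title = title.replace(p, "")
    return " ".join(title.split())

# Hypothetical inputs for a Spanish-language dengue pipeline.
url = links_query_url("Dengue")
query = strip_stop_phrases("Dengue (enfermedad)", ["(enfermedad)"])
```

The remaining steps — walking the category tree and pulling search volume — reuse the same API plus a search volume source, as described above.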
We present a study testing the value of deceptiveness information for nowcasting disease incidence using simulated and real internet features and generalized ridge regression. We found that incorporating such information does in fact help nowcasting; to our knowledge, ours is the first quantitative evaluation of this question.
Based on these results, we hypothesize that other internet-based disease estimation methods may also benefit from including feature deceptiveness estimates. We look forward to further research yielding deeper insight into the deceptiveness question.
Experiment concept: RP ARD DO. Methods: RP ARD DO (experiment), MB FO (mapping Wikipedia articles to Google searches), DO (statistical approach). Data curation, investigation, and software: RP ARD. Visualization: RP. Literature review: RP ARD. Writing: RP ARD DO. Review and editing: RP ARD MB FO DO. Funding acquisition and project administration: RP.
We thank the Wikipedia editing community for building the link and category networks used to compute semantic distance and Google, Inc. for providing us with search volume data. We also appreciate the helpful feedback provided by anonymous reviewers.
-  Phoebe Ayers, Charles Matthews, and Ben Yates. How Wikipedia works: And how you can be a part of it. September 2008.
-  Chi Y Bahk, David A Scales, Sumiko R Mekaru, John S Brownstein, and Clark C Freifeld. Comparing timeliness, content, and disease severity of formal and informal source outbreak reporting. BMC Infectious Diseases, 15(1), December 2015. doi:10.1186/s12879-015-0885-0.
-  Todd Bodnar and Marcel Salathé. Validating models for disease detection using Twitter. In WWW, 2013. doi:10.1145/2487788.2488027.
-  Logan C. Brooks, David C. Farrow, Sangwon Hyun, Ryan J. Tibshirani, and Roni Rosenfeld. Nonmechanistic forecasts of seasonal influenza with iterative one-week-ahead distributions. PLOS Computational Biology, 14(6), June 2018. doi:10.1371/journal.pcbi.1006134.
-  Centers for Disease Control and Prevention. Overview of influenza surveillance in the United States. Fact sheet, October 2016. URL: https://www.cdc.gov/flu/pdf/weekly/overview-update.pdf.
-  Centers for Disease Control and Prevention (CDC). Measles data and statistics. Slide deck, February 2018. URL: https://www.cdc.gov/measles/downloads/MeaslesDataAndStatsSlideSet.pdf.
-  Centers for Disease Control and Prevention (CDC). Percentage of visits for influenza-like-illness reported by ILINet, 2017–2018 season. Data table, 2018. URL: https://www.cdc.gov/flu/weekly/weeklyarchives2017-2018/data/senAllregt08.html.
-  Centers for Disease Control and Prevention (CDC). FluView, 2017. URL: http://gis.cdc.gov/grasp/fluview/fluportaldashboard.html.
-  Epidemic Prediction Initiative. FluSight 2017–2018, 2018. URL: https://predict.phiresearchlab.org/post/59973fe26f7559750d84a843.
-  Jeremy Ginsberg, Matthew H. Mohebbi, Rajan S. Patel, Lynnette Brammer, Mark S. Smolinski, and Larry Brilliant. Detecting influenza epidemics using search engine query data. Nature, 457(7232), November 2008. doi:10.1038/nature07634.
-  Google Inc. Google Trends, 2017. URL: https://trends.google.com/trends/.
-  Google Inc. Compare Trends search terms - Trends help. User documentation, 2018. URL: https://support.google.com/trends/answer/4359550.
-  Google Inc. Health Trends - Research interest request, 2018. URL: https://docs.google.com/forms/d/e/1FAIpQLSdZbYbCeULxWAFHsMRgKQ6Q1aFvOwLauVF8kuk5W_HOTrSq2A/viewform.
-  William J. Hemmerle. An explicit solution for generalized ridge regression. Technometrics, 17(3), 1975.
-  Alison Presmanes Hill, Katharine Zuckerman, and Eric Fombonne. Epidemiology of autism spectrum disorders. In Maria de los Angeles Robinson-Agramonte, editor, Translational Approaches to Autism Spectrum Disorder. 2015. URL: http://link.springer.com/chapter/10.1007/978-3-319-16321-5_2, doi:10.1007/978-3-319-16321-5_2.
-  Arthur E. Hoerl and Robert W. Kennard. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12(1), February 1970. doi:10.1080/00401706.1970.10488634.
-  Dorothy M. Horstmann. Importance of disease surveillance. Preventive Medicine, 3(4), December 1974. doi:10.1016/0091-7435(74)90003-6.
-  Ruth Ann Jajosky and Samuel L Groseclose. Evaluation of reporting timeliness of public health surveillance systems for infectious diseases. BMC Public Health, 4(1), July 2004. doi:10.1186/1471-2458-4-29.
-  Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. An introduction to statistical learning, volume 103. 2013. URL: http://link.springer.com/10.1007/978-1-4614-7138-7, doi:10.1007/978-1-4614-7138-7.
-  Heather A Johnson, Michael M Wagner, William R Hogan, Wendy Chapman, Robert T Olszewski, John Dowling, and Gary Barnas. Analysis of Web access logs for surveillance of influenza. Studies in Health Technology and Informatics, 107(2), 2004. URL: http://www.ncbi.nlm.nih.gov/pubmed/15361003.
-  Sasikiran Kandula, Teresa Yamana, Sen Pei, Wan Yang, Haruka Morita, and Jeffrey Shaman. Evaluation of mechanistic and statistical methods in forecasting influenza-like illness. Journal of The Royal Society Interface, 15(144), July 2018. doi:10.1098/rsif.2018.0174.
-  Elizabeth C. Lee, Cécile Viboud, Lone Simonsen, Farid Khan, and Shweta Bansal. Detecting signals of seasonal influenza severity through age dynamics. BMC Infectious Diseases, 15(1), December 2015. doi:10.1186/s12879-015-1318-9.
-  Fred Sun Lu, Suqin Hou, Kristin Baltrusaitis, Manan Shah, Jure Leskovec, Rok Sosic, Jared Hawkins, John Brownstein, Giuseppe Conidi, Julia Gunn, Josh Gray, Anna Zink, and Mauricio Santillana. Accurate influenza monitoring and forecasting using novel internet data streams: A case study in the Boston metropolis. JMIR Public Health and Surveillance, 4(1), January 2018. doi:10.2196/publichealth.8950.
-  Luke Mondor, John S. Brownstein, Emily Chan, Lawrence C. Madoff, Marjorie P. Pollack, David L. Buckeridge, and Timothy F. Brewer. Timeliness of nongovernmental versus governmental global outbreak communications. Emerging Infectious Diseases, 18(7), July 2012. doi:10.3201/eid1807.120249.
-  Dave Osthus, Ashlynn R. Daughton, and Reid Priedhorsky. Even a good influenza forecasting model can benefit from internet-based nowcasts, but those benefits are limited. Under review at PLOS Comp Bio: PCOMPBIOL-D-18-00800, 2018.
-  Reid Priedhorsky, Dave Osthus, Ashlynn R. Daughton, Kelly R. Moran, and Aron Culotta. Deceptiveness of internet data for disease surveillance. arXiv:1711.06241 [cs, math, q-bio, stat], July 2018. URL: http://arxiv.org/abs/1711.06241, arXiv:1711.06241.
-  Reid Priedhorsky, Dave Osthus, Ashlynn R. Daughton, Kelly R. Moran, Nicholas Generous, Geoffrey Fairchild, Alina Deshpande, and Sara Y. Del Valle. Measuring global disease with Wikipedia: Success, failure, and a research agenda (Supplemental data), 2016. URL: https://figshare.com/articles/Measuring_global_disease_with_Wikipedia_Success_failure_and_a_research_agenda_Supplemental_data_/4025916, doi:10.6084/m9.figshare.4025916.v1.
-  Reid Priedhorsky, David A. Osthus, Ashlynn R. Daughton, Kelly R. Moran, Nicholas Generous, Geoffrey Fairchild, Alina Deshpande, and Sara Y. Del Valle. Measuring global disease with Wikipedia: Success, failure, and a research agenda. In Computer Supported Cooperative Work (CSCW), 2017. doi:10.1145/2998181.2998183.
-  Melissa A. Rolfes, Ivo M. Foppa, Shikha Garg, Brendan Flannery, Lynnette Brammer, James A. Singleton, Erin Burns, Daniel Jernigan, Sonja J. Olsen, Joseph Bresee, and Carrie Reed. Annual estimates of the burden of seasonal influenza in the United States: A tool for strengthening influenza surveillance and preparedness. Influenza and Other Respiratory Viruses, 12(1), January 2018. doi:10.1111/irv.12486.
-  Mauricio Santillana, André T. Nguyen, Mark Dredze, Michael J. Paul, Elaine O. Nsoesie, and John S. Brownstein. Combining search, social media, and traditional data sources to improve influenza surveillance. PLOS Computational Biology, 11(10), October 2015. doi:10.1371/journal.pcbi.1004513.
-  Henry Scheffé. The analysis of variance. 1st edition, 1959.
-  Artem Sokolov, Daniel E. Carlin, Evan O. Paull, Robert Baertsch, and Joshua M. Stuart. Pathway-based genomics prediction using generalized elastic net. PLOS Computational Biology, 12(3), March 2016. doi:10.1371/journal.pcbi.1004790.
-  Tomaz Solc. Unidecode, 2018. URL: https://pypi.org/project/Unidecode/.
-  Galen Stocking and Katerina Eva Matsa. Using Google Trends data for research? Here are 6 questions to ask, April 2017. URL: https://medium.com/@pewresearch/using-google-trends-data-for-research-here-are-6-questions-to-ask-a7097f5fb526.
-  Ryan J. Tibshirani and Jonathan Taylor. The solution path of the generalized lasso. The Annals of Statistics, 39(3), June 2011. doi:10.1214/11-AOS878.
-  Wikipedia editors. Percent-encoding, April 2018. URL: https://en.wikipedia.org/w/index.php?title=Percent-encoding&oldid=836661697.