A little knowledge is a dangerous thing: excess confidence explains negative attitudes towards science

Scientific knowledge has long been accepted as the main driver of development, allowing for longer, healthier, and more comfortable lives. Still, public support for scientific research is wavering, with large numbers of people uninterested in or even hostile towards science. This has serious social consequences, from the anti-vaccination community to the recent "post-truth" movement. Such lack of trust and appreciation for science was first attributed to a lack of knowledge, leading to the "Deficit Model". As an increase in scientific information did not necessarily lead to greater appreciation, this model was largely rejected, giving rise to "Public Engagement Models". These try to offer more nuanced, two-way communication pipelines between experts and the general public, strongly respecting non-expert knowledge, possibly even leading to an undervaluing of science. We therefore still lack an encompassing theory that can explain public understanding of science and allow for more targeted and informed approaches. Here, we use a large dataset from the Science and Technology Eurobarometer surveys, covering 34 countries between 1989 and 2005, and find evidence that a combination of confidence and knowledge is a good predictor of attitudes towards science. This is contrary to current views, which place knowledge as secondary, and in line with findings in behavioral psychology, particularly the Dunning-Kruger effect, as negative attitudes peak at intermediate levels of knowledge, where confidence is largest. We propose a new model, based on the superposition of the Deficit and Dunning-Kruger models, and discuss how it can inform science communication.


I Introduction

Scientific research has been strongly supported by societies through agencies that channel public funds towards research grants and fellowships, under the assumption that science drives the “Knowledge-based Society”. This investment is dependent on public support Miller (2004) and, from the 1960s onward, a number of surveys were fielded to gauge both “hard knowledge” and the public’s attitude towards science and scientific discoveries Bauer et al. (2007); Bauer (2008). The surprising finding that some of the public was not only unknowledgeable but also disengaged, or even actively hostile, led to the establishment of a “Deficit Model”. In simple terms, this model claimed that public skepticism towards science was due to a lack of understanding Wynne (1991) and that the more one knows about science, the more positive one’s attitude towards science is (“to know it is to love it”) Durant et al. (1989); Bauer et al. (2007). Its corollary was that experts and educators should engage with the ignorant public to improve their knowledge, directly leading to an improvement in support.

In the 1980s this model, which can be crudely represented by the plot in Fig. 3A, started to face severe criticism for several reasons, and by the early 2000s it had mostly been discredited House of Lords (2000); Miller (2001); Wynne (2001); Nisbet et al. (2002); Jasanoff (2003); Sturgis and Allum (2004); Bauer et al. (2007). First, the conception of a unidirectional communication between scientific experts and the community implied a disregard for the lay public’s views, and was replaced with a two-way stream of dialogue, debate, and discussion, leading to “Public-engagement” or “Interactive” models Miller (2001). Second, the definitions of both knowledge and attitude became more fluid: knowledge is no longer seen as simple textbook information that can be uniquely tested and assigned to a single variable Wynne (1992), and the notion of a single positive or negative “attitude” towards scientific subjects has been replaced with the possibility of nuanced “attitudes”, which can vary widely depending on the subject, the question at hand, context, time Wynne (1991); Martin and Tait (1992); Evans and Durant (1995); Pardo and Calvo (2002), and even political identity Hamilton (2010); McCright (2010); Drummond and Fischhoff (2017). Third, there is growing evidence that offering information on controversial issues, or on issues where people hold strong prior beliefs, does not change people’s minds and can even backfire Gelder (2005); Gilovich et al. (2012), by polarizing opinions Hart and Nisbet (2011) or by eroding trust in the scientific method itself Munro (2010).

Thus, while this relationship between knowledge(s) and attitude(s) has guided most of the discussion around science communication and public understanding of science in the past decade Allum et al. (2008), it is now clear that knowledge alone cannot fully predict attitudes Fischhoff and Scheufele (2014). However, when some knowledge and attitude variables can be identified, close re-examinations of survey data have confirmed that knowledge plays a central role in the determination of attitudes: this role is much more complex than the linear relation purported by the “Deficit Model”, but it is real, and in general there is a positive association between higher knowledge and an overall positive attitude Hayes and Tariq (2000); Pardo and Calvo (2002); Sturgis and Allum (2004); Bauer et al. (2007); Allum et al. (2008); Entradas (2015).

Interestingly, this correlation disappears when the subject is controversial and the respondent tends to be knowledgeable Allum et al. (2008). Offering “too easy” science texts might lead to overconfidence and to underrating the need for experts Scharrer et al. (2017), and merely searching for information online on one subject leads people to overestimate their knowledge of an unrelated subject Fisher et al. (2015). Dunning and Kruger have shown that confidence grows faster than knowledge Kruger and Dunning (1999), and this effect might be relevant in the anti-vaccination movement, with surveyed “anti-vaxxers” overestimating their knowledge of autism, and overconfidence being largest in the lowest knowledge bins Motta et al. (2018). Together, this suggests that confidence might play an important, if overlooked, role in modulating the relationship between knowledge and attitudes towards science.

In this work, we take advantage of 5 rounds of the Science and Technology Eurobarometer questionnaires, a dataset including 34 countries between 1989 and 2005, and ask whether confidence modulates public understanding of science. By analyzing the relation between knowledge (k), attitudes (att) and a new confidence variable (c), we find that there is a consistent and strong non-linear correlation between attitudes and knowledge, and that this relation can be explained by varying levels of confidence. We propose a new testable model and discuss how it can guide future research and interventions.

II Materials and Methods

Computations were performed using R 3.4.4, Microsoft Excel 16 and Wolfram Mathematica 10.

II.1 Dataset

The Science and Technology Eurobarometer campaigns from 1989 to 2005 surveyed a total of 34 countries, including EU members, candidates at the time, and other European Economic Area (EEA) countries, totalling 84,469 individual interviews Bauer et al. (2012). Unlike previous and subsequent campaigns, this set asked questions that tried to gauge both knowledge and attitudes in a consistent way. However, there were differences both in the questions asked and in the possible answers, and the main dataset results from a harmonization effort that took the November 1992 (EB 38.1) round as a base and identified items with similar wordings in the remaining four rounds (see Table S1). For simplicity, this harmonized dataset is referred to as the Eurobarometer dataset throughout the text.

II.2 Attitude variables

In each Eurobarometer round, a number of questions regarding possible attitudes towards science were asked. For each item, the interviewee is asked to declare agreement or disagreement with a given statement. As stated above, the November 1992 (EB 38.1) round was chosen as a basis and similar variables (with almost identical wording) were identified in the remaining four rounds. Thus, the Eurobarometer dataset contains the intersection of the questions asked in each round: the 10 attitude variables listed in Table S2, which are found in all rounds except, in some cases, 1989.

The possible answers to the attitude questions are also not consistent: 1) the “don’t know” option was always present, but a neutral option such as “neither agree nor disagree” was only offered in 1989, 1992 and 2005; 2) the available options on the Likert scale were sometimes five and sometimes two, as shown in Table S3.

As these differences may have an impact on the respondents’ behaviour Pardo and Calvo (2002), we tested their impact in three different ways: 1) by treating all the categories in the Likert scale either separately or by fusing them into fewer options (merging “strongly agree” with “agree to some extent”, and “disagree to some extent” with “strongly disagree”); 2) by either including or disregarding the “neither agree nor disagree” answers; and 3) by either aggregating the “neither agree nor disagree” with the “don’t know” answers, or treating them separately. These alternatives yield a total of six different approaches to the data, illustrated in the sketch below. We performed many of the calculations that follow in all six ways in order to establish that the choice of approach does not significantly affect the results.
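As a rough illustration, the following sketch implements these recoding choices in R (the language used for our computations); the response labels and the function itself are hypothetical stand-ins, not the actual Eurobarometer variable codes.

```r
# Minimal sketch of the six recoding schemes: 2 Likert groupings x 3 neutral
# treatments; labels are hypothetical, not the Eurobarometer codes
recode_attitude <- function(resp, collapse_likert = TRUE,
                            neutral = c("merge_dk", "keep", "drop")) {
  neutral <- match.arg(neutral)
  out <- as.character(resp)
  if (collapse_likert) {
    out[out %in% c("strongly agree", "agree to some extent")] <- "agree"
    out[out %in% c("strongly disagree", "disagree to some extent")] <- "disagree"
  }
  if (neutral == "merge_dk") {
    out[out %in% c("neither agree nor disagree", "don't know")] <- "neutral"
  } else if (neutral == "drop") {
    out[out == "neither agree nor disagree"] <- NA
  }
  out
}

# Example: collapse the 5-point scale and pool neutrals with "don't know"
recode_attitude(c("strongly agree", "don't know"), neutral = "merge_dk")
```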

To obtain a single measurement, or a smaller set of measurements, for attitude(s) towards science, we computed their Spearman correlation matrix and performed a Principal Components Analysis (PCA) (see Figure S3A). We found that the answers are mostly uncorrelated and that there is no single component explaining a large percentage of the variation. We describe these findings in greater detail in the main text and thus treat all attitude variables independently.
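A minimal sketch of these two checks in R, assuming a data frame `att` with one numerically coded attitude item per column (a hypothetical layout, filled here with random data):

```r
# Hypothetical stand-in for the harmonized attitude items: 100 respondents,
# 10 items coded on a 5-point scale
att <- as.data.frame(matrix(sample(1:5, 1000, replace = TRUE), ncol = 10))

# Spearman correlation matrix over the available answers (cf. Figure S1)
rho <- cor(att, method = "spearman", use = "pairwise.complete.obs")

# Principal components on the standardized items (cf. Figure S3A)
pca <- prcomp(att, center = TRUE, scale. = TRUE)
summary(pca)  # proportion of variance explained by each component
```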

It is important to note that, regardless of the polarity of the questions, “Agree” and “Strongly Agree” answers are typically more prevalent than disagreement answers, a common effect known as “acquiescence bias” Evans and Durant (1995); Meisenberg and Williams (2008). Therefore, in the results we focus particularly on the “Agree” answers, which tend to show a stronger effect.

II.3 Knowledge Variables

The Eurobarometer dataset includes 13 “true or false” questions, listed in Table S4, designed to assess knowledge of science-related subjects, with a “don’t know” option always available.

Similarly to the attitude question set, we tested independence by calculating the Spearman correlation matrix and by performing a PCA (see Figure S3B). We created a single knowledge variable, k, computed as the ratio of correct answers to the number of questions each individual was asked. Thus, a “don’t know” is considered equivalent to an incorrect answer as far as the measurement of knowledge is concerned.
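A minimal sketch of this computation, assuming a respondents-by-items matrix with the hypothetical coding "correct", "incorrect" and "dk", and NA for questions that were not asked:

```r
# Hypothetical stand-in: 10 respondents by 13 knowledge items
answers <- matrix(sample(c("correct", "incorrect", "dk"), 130, replace = TRUE),
                  nrow = 10)

# k = correct answers / questions asked; "don't know" counts as incorrect
k <- rowSums(answers == "correct", na.rm = TRUE) / rowSums(!is.na(answers))
```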

II.4 Confidence Measurement

The neutral and “don’t know” answers can offer a possible measure of confidence. We use the aggregate of the “neither agree nor disagree” and “don’t know” answers to the attitude questions, which we call “neutral” answers, together with the “don’t know” answers to the knowledge questions, as a measure of confidence. As before, this classification does not offer a direct measurement of confidence, but serves as a general indicator when compared to the other variables.
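Under the same hypothetical coding, the neutral-answer indicator for the attitude items can be sketched as:

```r
# Hypothetical stand-in: 10 respondents by 10 attitude items
att_answers <- matrix(sample(c("agree", "disagree", "neither", "dk"), 100,
                             replace = TRUE), nrow = 10)

# Fraction of neutral answers per respondent: "neither agree nor disagree"
# pooled with "don't know"
neutral_frac <- rowMeans(att_answers == "neither" | att_answers == "dk")
```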

II.5 Mathematical Model

The Deficit Model (DM) can be represented as a linear relation between attitudes, att, and knowledge, k, of the form

$$\mathrm{att}_{\mathrm{DM}}(k) = \beta_0 + \beta_1 k, \qquad \beta_1 > 0, \tag{1}$$

with higher knowledge leading to a more positive attitude. However, from Fig. 1A, we can observe a quadratic relation between confidence, c, and knowledge. This relation (which has also been reported for the Dunning-Kruger effect, D-K) can be derived directly from the curve and written as

$$c(k) = \gamma_0 + \gamma_1 k + \gamma_2 k^2, \tag{2}$$

with the values of $\gamma_0$, $\gamma_1$ and $\gamma_2$ obtained by fitting this curve to the data in Fig. 1A. The proposed model is obtained by multiplying these two relations, with the Deficit Model inverted for negative attitudes,

$$\mathrm{att}_{\mathrm{neg}}(k) = (\beta_0 - \beta_1 k)\, c(k) = (\beta_0 - \beta_1 k)(\gamma_0 + \gamma_1 k + \gamma_2 k^2), \tag{3}$$

leading to an inverted-U shaped curve. Taking the confidence curve as an experimental result, better fits to the curves in each attitude item can be obtained by adjusting the $\beta_0$ and $\beta_1$ parameters in our representation of the Deficit Model.
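The model can be sketched in R as follows; the parameter values below are illustrative placeholders chosen only to display the inverted-U shape, not the fitted values.

```r
# Eq. (1): Deficit Model, linear in k
deficit <- function(k, beta0, beta1) beta0 + beta1 * k

# Eq. (2): D-K confidence curve, quadratic in k
confid <- function(k, gamma0, gamma1, gamma2) gamma0 + gamma1 * k + gamma2 * k^2

# Eq. (3): inverted Deficit Model multiplied by the confidence curve
att_neg <- function(k, beta0, beta1, gamma0, gamma1, gamma2) {
  (beta0 - beta1 * k) * confid(k, gamma0, gamma1, gamma2)
}

# Illustrative (non-fitted) parameters produce the inverted-U curve
curve(att_neg(x, beta0 = 1, beta1 = 1, gamma0 = 0.1, gamma1 = 2, gamma2 = -2),
      from = 0, to = 1, xlab = "k", ylab = "negative attitude")
```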

III Results

Attitudes towards science

Public attitudes towards science depend on several factors and it is not clear how much of a role knowledge plays. By using a large-scale database we tested: 1) whether it is possible to define “attitude(s)” towards science, 2) whether these vary with knowledge, and 3) what modulates such variation.

We thus started by asking whether it is possible to identify a single attitude, or a small set of attitudes, towards science. We extended the work of Pardo and Calvo (2002) and included all Eurobarometers and countries, offering not only more data and statistical power, but also the possibility of comparing the results longitudinally.

First, we compared all attitude variables and found that they are weakly correlated, with only two groups of variables showing relatively higher correlations: one that might be associated with an optimistic attitude and another with overall distrust, as shown in Figure S1.

Second, we performed a PCA and found, as Pardo and Calvo (2002) before us, that this system does not justify the grouping of attitudinal questions, as can be seen in Figure S3A. Indeed, the first and most significant principal component accounts for only a small fraction of the variance, even the first 5 components together represent only a minority of it, and the last and least significant of the 10 components still holds an appreciable share of the variance.

Third, a series of attempts at factor analysis did not identify any set of factors modelling the behaviour of the attitude variables.

Thus, we found no mathematical justification for the construction of an attitude scale or of a small set of scales. In fact, these attempts indicate that there is a high level of independence between the variables. Accordingly, all attitude variables are treated separately in the rest of this work.

III.1 Attitudes and Knowledge

In the surveys, respondents were asked to state whether 13 science-related statements were true or false. We started by testing independence and found that, similarly to the attitude questions, the knowledge answers are poorly correlated. However, this can be explained in great part by the fact that the questions have different difficulty levels, with some questions displaying a much higher number of correct answers. Also, contrary to the attitude questions, the PCA reveals that the first component explains a significantly larger part of the variance, with all of its coefficients having the same sign, indicating that answering one question correctly increases the likelihood of giving the right answer to other questions, as depicted in Figure S3B. In fact, the distribution of correct answers is approximately Normal, as expected (see horizontal axis distributions in Fig. 1).

Therefore, as for the purposes of this project we were less interested in measuring individual knowledge than in finding relations between this measure and the identified attitudes, we created a single variable, k, corresponding to the fraction of correct answers, from k = 0 (no correct answers) to k = 1 (all questions answered correctly).

When we plotted the different attitudes by knowledge, we found that they also vary differently. Table 1 shows the slopes and fit of the linear regressions for the proportion of “agreement” answers for all attitude questions. We find that while some have strong dependencies on knowledge (higher absolute slopes), either positive or negative, others are virtually independent (lower absolute slopes). Fig. 2A and D show examples of the attitude questions that fall within each of these two groups (full results in Fig. S6).
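The Table 1 computation can be sketched as follows: the fraction of “agree” answers is computed per k bin and regressed linearly on the bin mean of k. The inputs here are simulated, and the choice of 14 bins (one per possible value of k over 13 questions) is our assumption.

```r
agree_slope <- function(k, agrees, n_bins = 14) {
  bin  <- cut(k, breaks = seq(0, 1, length.out = n_bins + 1),
              include.lowest = TRUE)
  frac <- tapply(agrees, bin, mean, na.rm = TRUE)  # agreement per bin
  mid  <- tapply(k, bin, mean, na.rm = TRUE)       # mean k per bin
  coef(lm(frac ~ mid))[["mid"]]                    # the reported "Agree" slope
}

# Simulated example with a positive dependence of agreement on k
set.seed(1)
k <- runif(5000)
agrees <- rbinom(5000, 1, 0.3 + 0.4 * k)
agree_slope(k, agrees)
```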

Our analysis does not identify any obvious pattern: both controversial and less controversial issues (from the possibility of harming animals in research to whether science makes our lives more interesting) are basically independent of k, while strong dependencies appear in issues of faith and comfort.

“Agree” slope by attitude item: att_comfort, att_natural_resources, att_faith, att_environ, att_research_animal, att_res_dangerous, att_interest, att_daily_life, att_fast, att_oppor.
Table 1: Slopes of “agreement” linear regressions of the attitude variables plotted against knowledge, as measured by the k variable.

These results seem to support the current view that not only is there no single variable that describes a set of “attitudes” towards science, but also no simple relationship between such attitudes and knowledge. However, both our analysis and past ones have focused only on respondents who state either agreement or disagreement with the questions. It has long been known that many people offer answers to survey questions even when they are unknowledgeable about the subject, and even when the subjects at hand are fictitious Bishop et al. (1986).

Therefore, we decided to study the impact of the “don’t know” and “neither agree nor disagree” answers in this context.

III.2 Knowledge and Confidence

Figure 1: Density histogram of the distribution of respondents according to the fraction of correct answers and the fraction of “don’t know” answers (panel A) or incorrect answers (panel B). The dotted and dashed lines are the linear and quadratic regressions, respectively. Bars on the axes show the distributions for each variable, all on the same scale. These charts show how the fraction of “don’t know” answers decreases more rapidly than knowledge increases, evidence of overconfidence. If each respondent answered only the questions to which they knew the answer, the curve in Panel A would follow the thin diagonal line and there would be no incorrect answers, i.e. a flat line at zero in Panel B. Instead, we see the lowest knowledge bins very close to this “ideal confidence” line, with the highest levels of overconfidence in the intermediate knowledge bins, coinciding with the highest proportions of incorrect answers.

We started by analyzing the impact of the “don’t know” answers to the knowledge questions, knowing that the fraction of correct answers follows an approximately Normal distribution (Fig. 1). The interesting question is whether there is variation in the ratio of wrong to “don’t know” answers, as we propose that this variation might offer a measure of confidence.

A perfectly rational individual would match their confidence on a specific subject to their knowledge of that subject. Therefore, a perfect match between how much one knows (k) and how much one thinks one knows (confidence) would lead to a complete absence of wrong answers, with respondents either answering correctly or selecting the “don’t know” option. In this case, as the fraction of correct answers increased from 0 to 1, the fraction of “don’t know” answers would decrease symmetrically, creating a perfect diagonal. This line would intersect both axes at 1 (solid black line in Fig. 1A).

If the incorrect answers did not depend on either knowledge or confidence (for example, if wrong answers were caused by randomly distributed errors), they would vary linearly with k and we would observe the ideal line shifted down by an amount equal to the average fraction of incorrect answers, intersecting the axes at lower values. However, if the incorrect answers are modulated by confidence, with individuals overestimating their knowledge, we should observe non-linear (non-diagonal) relationships. If the number of wrong answers grows faster than the number of “correct” and “don’t know” answers, this will appear as a deviation from the diagonal towards a concave curve, and can be interpreted as confidence growing faster than knowledge.

To study how confidence varies with k, we analyzed how the fraction of “don’t know” answers varies across the different k bins. This can be represented by the linear fit of the fraction of “don’t know” answers as a function of the fraction of correct answers per bin (dotted black line in Fig. 1A). We may then use the deviation from the ideal diagonal as a measure of the respondents’ overconfidence.

As can be seen in Fig. 1A, the quadratic fit is indeed concave, suggesting that confidence tends to grow much faster than k.
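The comparison between the linear and quadratic fits of Fig. 1A can be sketched as follows, with simulated per-respondent fractions standing in for the survey data:

```r
# Simulated data in which the "don't know" fraction (dk) falls below the
# ideal diagonal 1 - k at intermediate k, mimicking overconfidence
set.seed(2)
k  <- runif(5000)
dk <- pmax(0, 1 - k - 0.6 * k * (1 - k) + rnorm(5000, sd = 0.05))

lin  <- lm(dk ~ k)           # dotted line in Fig. 1A
quad <- lm(dk ~ k + I(k^2))  # dashed line in Fig. 1A
coef(quad)  # a non-zero quadratic term signals the confidence gap
```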

An equivalent way of looking at these results is to plot the incorrect answers as a function of correct answers, which we tentatively identify with overconfidence and knowledge, respectively. The results in Fig. 1B clearly show that the probability of wrong answers is maximal at intermediate levels of knowledge, and not at the lowest, as would be expected if overconfidence were evenly distributed.

As we are looking at the sum of all possibilities within the same k bin, over more than 80,000 questionnaires, this curve can appear either when individuals behave very similarly or when we have different populations, some displaying very low and others very high wrong to “don’t know” ratios.

Therefore, we repeated this analysis for each of the 34 countries individually, and confirmed that confidence grows faster than knowledge in all surveyed countries (Figs. S4 and S5). We also find small but consistent differences between them: with few exceptions, respondents from the most developed, and generally more educated, countries (Norway, Switzerland, Denmark, Netherlands, West Germany) show the highest confidence gap, with the ratio of wrong to right answers in the low k bins being over 50% (as gauged by the intercept of the linear fit on the y-axis).

This is suggestive of an effect similar to what has been observed by Dunning and Kruger in the USA Kruger and Dunning (1999), leading us to investigate what effects this observed overconfidence may have on the attitude items.

III.3 Attitudes and Confidence

Figure 2: Relative frequencies of agreement, disagreement and neutral stance for each knowledge category towards the statements “For me, in my daily life, it is not important to know about science” (upper row) and “Science & Technology are making our lives healthier, easier and more comfortable” (lower row), shown here as examples of two distinct behaviours of attitude variables. The upper row shows an example of asymmetric behaviour of agreement and disagreement, with the distinct “inverted U” curve appearing in the negative attitude. The lower row shows an item with a mostly flat disagreement curve and a monotonically increasing agreement curve. Shaded areas highlight the four consecutive knowledge bins with highest agreement in each attitude item.

As described in the Methods, the different Eurobarometer surveys followed different policies, with some including the neutral “neither agree nor disagree” option, and others only allowing the “don’t know” option. As others before us Pardo and Calvo (2002), we found that the sum of these two tends to be constant (a person who would respond “neither agree nor disagree” to a given item is likely to choose “don’t know” if the first option is not available). Thus, we used the sum of these two variables, calling them “neutral answers”, and compared their usage across all attitude variables. Respondents offer either “agree” or “disagree” answers in the large majority of instances, with neutral choices accounting for only a minority of the total answers. As it is possible that this variation stems from individual choices, we looked at the correlation between people who tend to answer “don’t know” to the k questions and people who tend to offer neutral answers to the attitude questions. We controlled these relationships between attitudes and knowledge for education level and observed that the behaviour remains substantially the same.

As seen in Fig. 1A, the proportion of “don’t know” answers decreases more rapidly than the proportion of correct answers increases, with the highest fractions of incorrect answers encountered in the mid k range and not in the lower k categories (Fig. 1B).

Similarly, we would expect the individuals in the lower k bins (who answered “don’t know” proportionately more often in the k questions) to also offer more neutral answers to the attitude questions. This is indeed what we observe: the neutral answers decline sharply across the lower k bins in every single attitude item, with only small variations, and remain very close to zero in the mid to high k bins, as exemplified in Fig. 2B and E.

We had observed that attitudes vary inconsistently with knowledge, some having strong and others very little dependence on k. This was done by calculating the frequency of “agree” versus “disagree” answers and disregarding the “don’t know” and “neither agree nor disagree” (neutral) options. When we re-analyze this dependence including the neutral answers, we find not only different behaviours across different attitudes, but very asymmetrical effects between agreement and disagreement positions (Fig. 2). In fact, all previously linear relationships (Fig. 2A and D) now become quadratic, often displaying either “inverted U” shaped curves (Fig. 2C) or asymptotic behaviour (Fig. 2F), especially in the agreement answers, as discussed in the Methods.

Interestingly, by including the neutral answers in the analysis, this non-linear behaviour now appears in all attitude items, with the most negative attitudes appearing at intermediate levels of k, which also correspond to the highest confidence to k ratios. Shaded areas in Figs. 2 and S6 show where the four consecutive k bins with highest agreement lie, allowing for a clear distinction between “inverted-U” and asymptotic curves. Therefore, attitudes are neither independent of knowledge, as current theories hold, nor are they most negative in the lower knowledge bins, as the Deficit Model would predict.
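The shaded areas can be computed with a simple sliding window; the sketch below finds the four consecutive bins with the highest total agreement (the input values are hypothetical):

```r
# Return the indices of the `width` consecutive bins with highest agreement
peak_window <- function(agree_by_bin, width = 4) {
  sums  <- sapply(seq_len(length(agree_by_bin) - width + 1),
                  function(i) sum(agree_by_bin[i:(i + width - 1)]))
  start <- which.max(sums)
  start:(start + width - 1)
}

# An "inverted-U" agreement profile peaks at the intermediate bins (2 to 5)
peak_window(c(0.2, 0.5, 0.7, 0.8, 0.6, 0.4, 0.3, 0.2))
```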

Many of the attitudes that can be identified as negative seem to be modulated by a combination of knowledge and confidence, as represented in Fig. 3. Therefore, we developed a simple mathematical model that combines the linear relationship predicted by the Deficit Model with the quadratic relation observed in Fig. 1A, which confirms the Dunning-Kruger effect. This new model, which simply multiplies both relations (as described in the Methods), leads to an inverted-U shaped curve, observed in many of the negative attitude items, as shown in Fig. 3. Importantly, the attitude items differ in their dependencies on knowledge, even before accounting for neutral answers and the effect of confidence, and this can easily be accommodated by changes in the fitting parameters.

Figure 3: Proposed model of the observed behaviour of negative attitudes towards science. The Deficit Model is shown on the left as a simple linear relationship between knowledge and (positive) attitudes, whereas the Dunning-Kruger effect model in the center is derived directly from the curve in Fig. 1A. The resulting inverted-U curve model on the right is the product of the negative-attitude Deficit Model with the Dunning-Kruger confidence curve.

IV Conclusions

Our work builds on the long-standing and ongoing discussion of what best predicts public attitudes towards science. By creating a dataset from several rounds of the Science and Technology Eurobarometers and analyzing the ratio of correct to incorrect answers to the knowledge questions, we found that this ratio does not vary linearly, with the majority of incorrect answers appearing at intermediate levels of knowledge. Similarly, the number of neutral answers to the attitude items drops very fast, approaching its minimum at intermediate knowledge levels. Arguing that this variation in the number of neutral answers, both for the knowledge and the attitude questions, can be used as a proxy for confidence, we found: 1) that confidence grows much faster than knowledge, in line with previous works that identify the Dunning-Kruger effect as relevant in anti-science movements Motta et al. (2018); Fernbach et al. (2019); 2) that the least positive attitudes are found in these high-confidence / average-knowledge groups, creating an inverted U-curve; and 3) that public attitudes towards science can be explained by a non-linear combination of knowledge (following the Deficit Model) and confidence (following the Dunning-Kruger effect), leading us to propose a new theoretical model (Fig. 3C).

Interestingly, and contrary to the cited works Motta et al. (2018); Fernbach et al. (2019), the least positive attitudes are not found at the lowest k bins, and several non-mutually-exclusive possibilities can explain this difference. First, the anti-vaccine, GMO and climate change issues are highly controversial, with populations polarized for or against them, while this is not the case for most of the attitudes tested in the Eurobarometer dataset. The respondents in Motta et al. (2018); Fernbach et al. (2019) hold strong opinions and are likely to believe themselves very well informed, while the respondents in this dataset are least confident in the low k bins. This is in line with the predictions of the Dunning-Kruger effect, as confidence peaks in the middle and not at low k. Second, these are also issues for which large amounts of false information circulate online. Therefore, strong advocates against GMOs, climate change and vaccines are likely to believe themselves to be right; they might know of the scientific consensus and choose not to offer it as the correct answer. Again, this is unlikely to be the case for those surveyed in these Eurobarometers. Third, there is a significant time gap between the different surveys. The last round of the Eurobarometer took place in 2005 and, although we do not see longitudinal differences, this dataset was built mostly before the wide expansion of the internet and of online social networks. It is easy to argue that this misinformation and polarization might be made worse by these recent technologies, with the creation of echo chambers and information bubbles. These may limit the quantity, quality, and diversity of information accessible to the non-expert public, effectively creating large groups of misinformed citizens. And the politicization of science, together with an increase in political polarization Iyengar and Massey (2018), might deepen this divide even further.

It is also important to note that, to our knowledge, the D-K effect had not been consistently shown outside the USA, and the most developed and educated countries seem to display larger confidence-to-k gaps. Therefore, it is possible that, if this Eurobarometer were to be repeated, we would observe an even larger gap between confidence and k across countries, as citizens become more connected and confident, and possibly an even stronger polarization in the answers to the attitude items. Thus, we argue that, despite its problems, a new round of this or a very similar survey is in order.

Taken together, our results have clear implications for current science communication strategies. Our model predicts that receptiveness to science will be strongest at the lowest and highest knowledge bins, where the confidence-to-knowledge ratios are also lowest. Offering information that is incomplete, partial, or oversimplified, as science communicators often do, might indeed backfire, as it may give the public a false sense of knowledge, leading to overconfidence and less support.

In fact, if the lowest support for science comes from the overconfident, these might also be the people most resistant to new information, especially information that contradicts their certainty, creating a negative reinforcement loop. This resistance to change has been shown in several behavioral psychology studies and attributed to cognitive biases, such as confirmation bias. Importantly, these intermediate-k, high-confidence bins correspond to the majority of the individuals surveyed. This effect was not important in our analysis, as all bins were normalized by frequency, but it is fundamental at the population level, as these bins are likely to correspond to a large share of European demographics.

If negative attitudes can indeed be explained by a combination of limited knowledge and excess confidence, then developing science communication strategies that balance sharing accurate and precise information with large doses of humility, on both the scientists’ and the lay public’s side, is likely to be a fundamental, albeit very difficult, task. A multidisciplinary approach, building on cognitive and behavioral psychology, social media studies, and complex systems analysis, should receive new focus, so that we can move away from a post-truth world and avoid the dangers of “a little knowledge”.

Acknowledgements.

The authors would like to thank Caetano Souto-Mayor, Michael West and João Nolasco for initial analysis of the dataset, members of the Data Science and Policy group for valuable discussions, and Tiago Paixão, Marta Entradas and Joana Lobo Antunes for critical reading of the manuscript. JGS was partially supported by Welcome DFRH WIIA 60 2011, co-funded by the FCT and the Marie Curie Actions.

References

  • Durant et al. (1989) J. R. Durant, G. A. Evans,  and G. P. Thomas, Nature 340, 11 (1989).
  • Bauer et al. (2007) M. W. Bauer, N. Allum,  and S. Miller, Public Understand. Sci. 16, 79 (2007).
  • Miller (2001) S. Miller, Public Understand. Sci. 10, 115 (2001).
  • Bauer et al. (2012) M. W. Bauer, S. R,  and K. P, Public understanding of science in Europe 1989-2005. A Eurobarometer trend file., Tech. Rep. (2012).
  • Miller (2004) J. D. Miller, Public Understand. Sci. 13, 273 (2004).
  • Bauer (2008) M. W. Bauer, in Handbook of public communication of science and technology (Routledge, 2008) pp. 111–130.
  • Wynne (1991) B. Wynne, Science, Technology, & Human Values 16, 111 (1991).
  • House of Lords (2000) House of Lords, Science and Society, Tech. Rep. (2000).
  • Wynne (2001) B. Wynne, Science as Culture 10, 445 (2001).
  • Nisbet et al. (2002) M. C. Nisbet, D. A. Scheufele, J. Shanahan, P. Moy, D. Brossard,  and B. V. Lewenstein, Communication Research 29, 584 (2002).
  • Jasanoff (2003) S. Jasanoff, Soc Stud Sci 33, 389 (2003).
  • Sturgis and Allum (2004) P. Sturgis and N. Allum, Public Understand. Sci. 13, 55 (2004).
  • Wynne (1992) B. Wynne, Public Understand. Sci. 1, 37 (1992).
  • Martin and Tait (1992) S. Martin and J. Tait, in Biotechnology in Public, edited by J. R. Durant (1992).
  • Evans and Durant (1995) G. Evans and J. Durant, Public Understand. Sci. 4, 57 (1995).
  • Pardo and Calvo (2002) R. Pardo and F. Calvo, Public Understand. Sci. 11, 155 (2002).
  • Hamilton (2010) L. C. Hamilton, Climatic Change 104, 231 (2010).
  • McCright (2010) A. M. McCright, Climatic Change 104, 243 (2010).
  • Drummond and Fischhoff (2017) C. Drummond and B. Fischhoff, Proc Natl Acad Sci USA 114, 9587 (2017).
  • Gelder (2005) T. v. Gelder, College Teaching 53, 41 (2005).
  • Gilovich et al. (2012) T. Gilovich, D. Griffin,  and D. Kahneman, eds., Heuristics and Biases: The Psychology of Intuitive Judgment, 1st ed. (Cambridge University Press, 2012).
  • Hart and Nisbet (2011) P. S. Hart and E. C. Nisbet, Communication Research 39, 701 (2011).
  • Munro (2010) G. D. Munro, Journal of Applied Social Psychology 40, 579 (2010).
  • Allum et al. (2008) N. Allum, P. Sturgis, D. Tabourazi,  and I. Brunton-Smith, Public Understand. Sci. 17, 35 (2008).
  • Fischhoff and Scheufele (2014) B. Fischhoff and D. A. Scheufele, Proc Natl Acad Sci USA 111, 13583 (2014).
  • Hayes and Tariq (2000) B. C. Hayes and V. N. Tariq, Public Understand. Sci. 9, 433 (2000).
  • Entradas (2015) M. Entradas, portuguese journal of social science 14, 71 (2015).
  • Scharrer et al. (2017) L. Scharrer, Y. Rupieper, M. Stadtler,  and R. Bromme, Public Understand. Sci. 26, 1003 (2017).
  • Fisher et al. (2015) M. Fisher, M. K. Goddu,  and F. C. Keil, Journal of Experimental Psychology: General 144, 674 (2015).
  • Kruger and Dunning (1999) J. Kruger and D. Dunning, Journal of Personality and Social Psychology 77, 1121 (1999).
  • Motta et al. (2018) M. Motta, T. Callaghan,  and S. Sylvester, Social Science & Medicine 211, 274 (2018).
  • Meisenberg and Williams (2008) G. Meisenberg and A. Williams, Personality and Individual Differences 44, 1539 (2008).
  • Bishop et al. (1986) G. F. Bishop, A. J. Tuchfarber,  and R. W. Oldendick, Public Opinion Quarterly 50, 240 (1986).
  • Fernbach et al. (2019) P. M. Fernbach, N. Light, S. E. Scott, Y. Inbar,  and P. Rozin, Nat Hum Behav 11, 193 (2019).
  • Iyengar and Massey (2018) S. Iyengar and D. S. Massey, Proc Natl Acad Sci USA 13, 201805868 (2018).

Appendix A Supplementary Data

Country | EB 31 (Mar-Apr 1989) | EB 38.1 (Nov 1992) | EB 55.2 (May-Jun 2001) | Candidate EB 2002.3 (Oct-Nov 2002) | EB 63.1 (Jan-Feb 2005)
1 France -
2 Belgium -
3 Netherlands -
4 West Germany -
5 Italy -
6 Luxembourg -
7 Denmark -
8 Ireland -
9 Great Britain -
10 Northern Ireland -
11 Greece -
12 Spain -
13 Portugal -
14 East Germany - -
15 Finland - -
16 Sweden - - -
17 Austria - - -
18 Cyprus - - -
19 Czech Republic - - -
20 Estonia - - -
21 Hungary - - -
22 Latvia - - -
23 Lithuania - - -
24 Malta - - -
25 Poland - - -
26 Slovakia - - -
27 Slovenia - - -
28 Bulgaria - - -
29 Romania - - -
30 Turkey - - -
31 Iceland - - - -
32 Croatia - - - -
33 Switzerland - - - -
34 Norway - - - -
Total 13 14 17 13 34

Table S1: List of Eurobarometer rounds used to compile the harmonized dataset from Ref. Bauer et al. (2012), used in this paper, with the countries surveyed in each Science and Technology Eurobarometer round; a dash marks rounds in which a country was not surveyed. EB 38.1 was used as the reference for the identification of similar variables when constructing the harmonized dataset.
Long Code | Statement
att_comfort “Science & Technology are making our lives healthier, easier and more comfortable.”
*att_natural_resources “Thanks to scientific and technological advances, the earth’s natural resources will be inexhaustible.”
att_faith “We depend too much on science and not enough on faith”
*att_environ “Scientific and technological research cannot play an important role in protecting the environment and repairing it.”
*att_research_animal “Scientists should be allowed to do research that causes pain and injury to animals like dogs and chimpanzees if it can produce information about human health problems.”
*att_res_dangerous “Because of their knowledge, scientific researchers have a power that makes them dangerous.”
*att_interest “The application of science and new technology will make work more interesting.”
*att_daily_life “For me, in my daily life, it is not important to know about science.”
att_fast “Science makes our way of life change too fast.”
*att_oppor “Thanks to science and technology, there will be more opportunities for the future generations.”
Table S2: Set of 10 attitude variables in the Eurobarometer dataset. For each statement, respondents were asked to state their agreement or disagreement. Starred items (*) do not have data for 1989.
Available answer | EB 31 (Mar-Apr 1989) | EB 38.1 (Nov 1992) | EB 55.2 (May-Jun 2001) | Candidate EB 2002.3 (Oct-Nov 2002) | EB 63.1 (Jan-Feb 2005)
Strongly agree | ✓ | ✓ | - | - | ✓
Agree to some extent | ✓ | ✓ | ✓ | ✓ | ✓
Neither agree nor disagree | ✓ | ✓ | - | - | ✓
Disagree to some extent | ✓ | ✓ | ✓ | ✓ | ✓
Strongly disagree | ✓ | ✓ | - | - | ✓
Don’t know | ✓ | ✓ | ✓ | ✓ | ✓

Table S3: Available answers for attitude items in each Eurobarometer campaign contained in the dataset; a check mark (✓) indicates the option was offered, a dash that it was not.
Figure S1: Spearman correlation matrix of attitude variables, showing their weak correlations and ordered to show the also weak clusters.
Figure S2: Spearman correlation matrix of knowledge variables, showing their fairly weak correlations and ordered to show the also weak clusters.
Code | Question | Answers
k_earth “The centre of the Earth is very hot.” *“True” or “False”
k_oxygen “The oxygen we breathe comes from plants.” *“True” or “False”
k_milk “Radioactive milk can be made safe by boiling it.” “True” or *“False”
k_electron “Electrons are smaller than atoms.” *“True” or “False”
k_continents “The continents on which we live have been moving their location for millions of years and will continue to move in the future.” *“True” or “False”
k_gene “It is the father’s gene which decides whether the baby is a boy or a girl.” *“True” or “False”
k_dinosaurs “The earliest humans lived at the same time as the dinosaurs.” “True” or *“False”
k_antibiotics “Antibiotics kill viruses as well as bacteria.” “True” or *“False”
k_lasers “Lasers work by focusing sound waves.” “True” or *“False”
k_radioactivity “All radioactivity is man-made.” “True” or *“False”
k_human “Human beings, as we know them today, developed from earlier species of animals.” *“True” or “False”
k_sun “Does the earth go around the sun or does the sun go around the earth?” “The sun goes around the earth” or *“The earth goes around the sun”
k_time “How long does it take for the earth to go around the sun?” *“Year” or “Month”
Table S4: Set of 13 knowledge variables in the Eurobarometer dataset, with question statement and possible answers; A “don’t know” option was also available in each question. The correct answer is starred (*).
Figure S3: Proportion of variance for each principal component resulting from the PCA ran on the knowledge and attitude variables. (A) Attitude variables PCA, with full line for binning of answers into positive, negative and neutral, other binning methods as superimposed dotted lines. There is a slow and steady decline in the proportion of variance throughout, with the first few principal components failing to provide a large enough proportion of the total variance to be useful. (B) Knowledge variables PCA, with full line considering the aggregation of incorrect and “don’t know” answers and dotted line keeping them distinct. The first principal component accounts for a significantly larger part of the total variance and its coefficients all have the same sign.
Figure S4: Fits of the distribution of respondents according to the fraction of correct answers and fraction of “don’t know” answers by country. The dotted and dashed lines are the linear and quadratic regressions, respectively. Compare with Fig. 1A.
Figure S5: Fits of the distribution of respondents according to the fraction of correct answers and fraction of wrong answers by country. The dotted and dashed lines are the linear and quadratic regressions, respectively. Compare with Fig. 1B.
Figure S6: Relative frequencies of agreement, disagreement and neutral stance for each knowledge category towards the remaining attitude items analyzed, with and without the inclusion of neutral answers. Shaded areas highlight the four consecutive knowledge bins with highest agreement in each attitude item. Curve fit equations on Tables S5 and S6.
Table S5: Linear and quadratic fit equations for agreement and disagreement curves as a function of knowledge for each attitude item when neutral answers are not considered.
Table S6: Linear and quadratic fit equations for agreement and disagreement curves as a function of knowledge for each attitude item when neutral answers are considered.