Today, children are spending more time online than with other media sources, such as television or offline video games [11, 8]. Among the many kinds of devices now connected to the Internet, mobile devices (such as tablet computers or smartphones) have become the primary means by which children go online. In the UK, 35% of children aged five to seven and 52% of children aged eight to eleven have been reported to own their own tablets (see Figure 1), while in the US, tablet ownership among children in this age group grew five-fold between 2011 and 2013. Children under five are also using smartphones and tablets more often, as the category of apps designed for younger children continues to expand rapidly [11, 3].
However, the awareness of online privacy among children aged between 6 and 10 has seldom been studied in the existing literature. This is probably due to the expectation that children of this age cannot comprehend the concept of privacy, and that their online safety and privacy should therefore be safeguarded by trusted adults. However, our previous reports [16, 15] have shown that although parents largely take a protective approach, restricting and monitoring what their children can or cannot access on their devices, parents do not always fully comprehend all the risks. Furthermore, a more recent study led by the LSE has shown that parents struggle to find help in mediating their children's use of digital devices, and largely rely on themselves to figure things out.
Therefore, the goal of the KOALA project is, through a series of studies directly interacting with children and their families, to achieve a deeper understanding of the following questions:
How well can children aged 6-10 understand online privacy risks related to their use of tablet computers?
What barriers do their parents face in mediating children's use of technologies and in recognising risks?
Between June and August 2018, we conducted 12 focus group interviews with 29 children, aged 6-10, from UK schools. We found that children in our study had a good understanding of risks related to inappropriate content, the approach of strangers, and oversharing of personal information online. However, they struggled to fully understand and describe risks related to online game/video promotions and personal data tracking. Moreover, children's risk coping strategies depended on their understanding of the risks and their previous experiences: effective risk strategies were applied only if children recognised certain risks or when they felt something untoward. These findings demonstrate the importance of learning about potential risks through a multitude of channels, such as school, parents, friends, siblings, and personal experiences [13, 5].
We chose the focus group method to elicit children’s responses to a collection of hypothetical scenarios that reflect different types of explicit and implicit threats to children’s online personal data privacy.
Each focus group study contained four parts, including a warm-up and introduction session, a sharing of favourite apps, a walk-through of three hypothetical risk scenarios, and finally an open-ended session about issues not so far discussed.
Children were encouraged to share personal experiences related to the scenarios, by responding to questions such as whether this had happened to them and what they did. This enabled us to hear children's descriptions of experiences not covered in the scenarios.
The hypothetical scenarios used in the study have been turned into story cards that can be reused and shared in classrooms or at home. They can be found in the Appendix at the end of the report, accompanied by descriptions of the privacy risks related to each card. Parents and teachers can also download the story cards from our project website and use them to inspire conversations with children about online privacy issues.
3 Key Findings
Through careful analysis of 20 hours of interview recordings, we reached some positive findings with respect to children's level of awareness. At the same time, however, we identified critical gaps in children's online privacy knowledge. Children may sometimes be able to describe certain risks with the right language without having fully comprehended those risks. These are critical insights that should draw further attention from families and schools when applying risk mediation with children. The terms used by the participating children, and their ability to apply these terms to different types of risk context, have provided valuable input to our future design of technologies to facilitate children's risk coping abilities.
3.1 Children care about online privacy risks
Our results reinforced previous findings that, like the teenagers and younger children studied in earlier work, our participants valued the positive experiences of being online and keeping their personal space online. They enjoyed going online so that they could keep in touch with friends, learn new things or just have fun.
At the same time, children felt ‘annoyed’, ‘surprised’ or ‘angry’ when they felt coerced or not in control. As a result, children cared about, and were sensitive to, who might access their sensitive information (e.g. real names, age, location etc), and applied a range of techniques to safeguard this space, such as by verifying identities through face-to-face interactions or avoiding using real names as usernames.
3.2 Children still need help with certain privacy risks
Children demonstrated a strong consciousness of their online identity and the importance of avoiding sharing their real identity or over-sharing their personal information. In these cases, children applied various effective strategies to protect their sensitive personal information, such as 'making up a new name' or asking a friend to verify an online invitation in person. However, in contrast to explicit privacy risks, children struggled to associate 'online promotions' with losing control of personal information.
Several children correctly discussed their interpretation of how YouTubers may try to 'persuade' them to watch their videos in order to gain 'money' or 'more subscribers'. However, none of these children expressed resistance to these video promotions; they treated them as simply how the Internet works.
Only a few children (in 3 of the 12 focus groups) recognised that new videos might be related to personalised recommendations. However, they struggled to understand who was making these recommendations, and were therefore less sure about the consequences for their privacy. As a result, children usually applied a play-and-see strategy to assess the content or apps they came across, without any more complex reasoning.
3.3 Children’s ability to describe risks depends on their actual understanding of risks
Our study was partially inspired by Vygotsky's Zone of Proximal Development (ZPD), and we therefore focused on understanding children's current knowledge by observing the language they used to describe risks. We observed that some terms were used repeatedly by children across different situations: children were able to describe risks accurately when they could recognise the actual risks. However, when they had only a vague understanding of the risks, they struggled to describe things consistently or to provide a good explanation of what they meant.
When children made only partial sense of certain types of risks, their use of terms could be inconsistent. For example, although some children were able to describe the scenario of new videos being presented to Bertie using phrases like 'people trying to make money' or 'them trying to make you watch more', they struggled to explain who these 'people' were and how information might be transmitted to these '[YouTube] channel people' in this context.
Another example is the term 'hacking', which was used by children across different focus groups, but with very different meanings. For example, when trying to explain why a new video was shown following a previous video, children used 'hacking' to mean 'someone stole my data', 'take your account', or 'steal from house'. The same groups of children also used 'hacking' to make sense of other scenarios, including in-app pop-ups and data tracking.
3.4 Children struggle to comprehend the implications of risks
Children could miss risks due to a lack of knowledge, or because their past experiences had not led to any direct consequences related to the risks. Children who had interacted with certain technologies before without experiencing any implications tended to be more (over-)confident with those technologies.
Children from 7 focus groups treated online video promotions as part of an ‘autoplay’ function of the platform, without questioning how the new content might be presented to them. 12/29 children demonstrated trust in the content provided by their familiar YouTubers. As a result, children reported having been exposed to unexpected content and online baiting. These same children reported that they often saw upsetting content online (e.g. ‘sometimes in autoplay, it comes up with these really freaky ones like pictures of dead people’).
We also observed that, when children did not fully comprehend the risks, their own or their peers' personal experiences had a strong influence on their decision making, even though they did not always understand what might pose a threat to them.
We thank all the schools, families and children who have contributed their time and knowledge to our study. Without their support, this study would not have been successful!
The report is based on a full academic publication, 'I make up a silly name': Understanding Children's Perception of Privacy Risks Online, produced as part of the KOALA project (https://sites.google.com/view/koala-project-ox/): Kids Online Anonymity & Lifelong Autonomy, funded by an EPSRC Impact Acceleration Account Award under grant number EP/R511742/1.
Jun Zhao: firstname.lastname@example.org
Nigel Shadbolt: email@example.com
Appendix A Three Hypothetical Risk Scenarios
Our scenarios were carefully designed to contrast explicit versus implicit data collection in a familiar versus unfamiliar technology context. Previous research has shown that children under 11 particularly struggle to understand risks posed by technologies or to comprehend the context of being online [14, 6]. These studies explored how children responded to explicit data requests such as in-app pop-ups; our study also looks into children's awareness and perception of implicit data access through third-party tracking, which leads to personalised online promotions.
Story 1 — Implicit video promotions are widely found in applications such as social video sharing platforms, and can be based on personal viewing history or a viewer's interests. Online promotions on such platforms are a significant means by which children discover games or video channels, material that is not always appropriate for their age or developmental needs. Story 1 therefore aimed to assess how aware children are of the video promotion behaviours of online platforms, some of which could be based on children's online activities, including the videos they have watched or the games they enjoy playing.
Story 2 — In-app pop-ups can explicitly prompt children for personal information (such as names, age or voices) before they can continue with the game. Story 2 aimed to assess children’s awareness of explicit stranger danger and in-app game promotions, which can be personalised based on their online data.
Story 3 — The large number of applications ('apps') that can be downloaded for free are a major way by which children interact with these devices. Currently these 'free' apps are largely supported by the monetisation of users' personal information [4, 7]. A large amount of personal information and online behaviour may be collected from children's apps and shared with third-party online marketing and advertising entities. This scenario is designed to examine how children perceive and feel about these risks.
-  2013. Zero to Eight: Children’s Media Use in America 2013. Technical Report. Common Sense Media. https://www.commonsensemedia.org/research/zero-to-eight-childrens-media-use-in-america-2013
-  2019. Must Have Apps for Kids Under Five. https://www.lifewire.com/great-apps-for-kids-5-and-under-4152990. (2019). Accessed: 2018-12-15.
-  Alessandro Acquisti, Curtis R Taylor, and Liad Wagman. 2016. The economics of privacy. Journal of Economic Literature 52, 2 (2016).
-  Yasmeen Hashish, Andrea Bunt, and James E Young. 2014. Involving children in content control: a collaborative and education-oriented content filtering approach. In Proceedings of the 32nd annual ACM conference on Human factors in computing systems. ACM, 1797–1806.
-  Priya Kumar, Shalmali Milind Naik, Utkarsha Ramesh Devkar, Marshini Chetty, Tamara L. Clegg, and Jessica Vitak. 2018. No Telling Passcodes Out Because They're Private?: Understanding Children's Mental Models of Privacy and Security Online. In Proceedings of ACM Human-Computer Interaction (CSCW '18 Online First). ACM.
-  Michael E Kummer and Patrick Schulte. 2016. When private information settles the bill: Money and privacy in Google’s market for smartphone applications. ZEW-Centre for European Economic Research Discussion Paper 16-031 (2016).
-  Sonia Livingstone, Julia Davidson, Joanne Bryce, Saqba Batool, Ciaran Haughton, and Anulekha Nandi. 2017. Children’s online activities, risks and safety: a literature review by the UKCCIS evidence group. Technical Report. UKCCIS evidence group.
-  Sonia Livingstone and Kjartan Olafsson. 2018. When do parents think their child is ready to use the internet independently? Technical Report. LSE.
-  May O Lwin, Andrea JS Stanaland, and Anthony D Miyazaki. 2008. Protecting children’s privacy online: How parental mediation strategies affect website safeguard effectiveness. Journal of Retailing 84, 2 (2008), 205–217.
-  Ofcom. 2017. Children and parents: media use and attitudes report. (2017). https://www.ofcom.org.uk/__data/assets/pdf_file/0020/108182/children-parents-media-use-attitudes-2017.pdf
-  Irwin Reyes, Primal Wijesekera, Abbas Razaghpanah, Joel Reardon, Narseo Vallina-Rodriguez, Serge Egelman, and Christian Kreibich. 2017. "Is Our Children's Apps Learning?" Automatically Detecting COPPA Violations. In Workshop on Technology and Consumer Protection (ConPro 2017), in conjunction with the 38th IEEE Symposium on Security and Privacy.
-  Leah Zhang-Kennedy, Yomna Abdelaziz, and Sonia Chiasson. 2017. Cyberheroes: The design and evaluation of an interactive ebook to educate children about online privacy. International Journal of Child-Computer Interaction (2017).
-  Leah Zhang-Kennedy, Christine Mekhail, Yomna Abdelaziz, and Sonia Chiasson. 2016. From Nosy Little Brothers to Stranger-Danger: Children and Parents’ Perception of Mobile Threats. In Proceedings of the The 15th International Conference on Interaction Design and Children. ACM, 388–399.
-  Jun Zhao. 2018. Are Children Well-Supported by Their Parents Concerning Online Privacy Risks, and Who Supports the Parents? CoRR abs/1809.10944 (2018). http://arxiv.org/abs/1809.10944
-  Jun Zhao, Ulrik Lyngs, and Nigel Shadbolt. 2018. What privacy concerns do parents have about children’s mobile apps, and how can they stay SHARP? CoRR abs/1809.10841 (2018). http://arxiv.org/abs/1809.10841