Is your chatbot GDPR compliant? Open issues in agent design

05/26/2020
by Rahime Belen Saglam, et al.
University of Kent

Conversational agents open the world to new opportunities for human interaction and ubiquitous engagement. As their conversational abilities and knowledge have improved, these agents have gained access to an increasing variety of personally identifiable information and intimate details about their user base. This access raises crucial questions in light of regulations as robust as the General Data Protection Regulation (GDPR). This paper explores some of these questions, with the aim of defining relevant open issues in conversational agent design. We hope that this work can provoke further research into building agents that are effective at user interaction, but also respectful of regulations and user privacy.

1. Introduction

Conversational agents are used in various contexts, including the home, finance, health care and tourism. In each context, personal information is increasingly collected and processed in order to provide more effective and personalised services. Personalisation allows a chatbot to be aware of situations and to dynamically adapt its interaction to better suit user needs (neururer2018perceptions; chaves2019should). Although deeper disclosures of personal information can increase the effective use of these agents, an intriguing, and thus far unaddressed, tension arises between the need for sharing such information (in one-off situations, or for personalised interactions) and privacy (a primary goal of regulations such as the GDPR). These points call attention to the question: is it possible to develop genuinely useful, GDPR-compliant (and thus privacy-aware) agents?

2. Principles, lawful bases and rights

The EU’s GDPR is one of the most robust data protection regulations the world has ever seen. It defines principles and lawful bases for processing personal information, and also specifies rights for individuals (gdpr). Below, we introduce a relevant subset of these, which are then further explored in Section 3 to highlight key open issues in conversational agent practice and research.

Principles:

  • Transparency: requires data controllers to be clear, open and honest about how they process personal data.

  • Data minimisation: requires data controllers to ensure that the personal data processed is adequate, relevant and limited to what is necessary in relation to the processing purpose.

  • Purpose limitation: requires personal data to be collected for specified, explicit and legitimate purposes and not further processed in a manner incompatible with those purposes.

  • Storage limitation: dictates that data controllers must delete personal data when it is no longer needed.

Lawful basis for processing:

  • Consent: requires data controllers to obtain explicit consent from the data subject for the processing of any personal data; this can be withdrawn at any time.

  • Legitimate interest: one of the cases where data controllers do not need to obtain consent; it applies when they have a legitimate need and can show that the processing is necessary to achieve it.

  • Special category data: requires controllers to apply a higher level of protection to special categories of personal data (e.g., racial or ethnic origin, health data, political opinions).

Individual rights:

  • Right to be informed: allows individuals to know what is being done with their information, and thus links to transparency.

  • Right of access: allows data subjects to ask for a copy of their personal data, the purposes of processing their data, the categories of the data being processed, and the third parties or categories of third parties that will receive their data.

  • Right to rectification: requires data controllers to rectify or erase inaccurate or out-of-date information.

  • Right to erasure: also known as the right to be forgotten, mandates that controllers delete data in certain cases, for example when there is no longer a lawful basis for processing or the data subject withdraws consent.

3. Open issues in agent design

Even though the GDPR and its implications have been widely covered in contexts such as cloud computing, the internet of things and blockchain technologies, surprisingly little emphasis has been placed on potential design and implementation issues in the chatbot context. Even in works that do touch on GDPR compliance (peras2018chatbot; skjuve2018chatbots), the discussion is severely limited. Below, we explore key conflicts and open issues in conversational agent design.

3.1. How to build honest and open chatbots?

Firstly, a lack of algorithmic transparency is a major barrier to GDPR compliance in chatbots. Efforts towards making users more aware of how their personal information is processed exist, but they are rather constrained in scope (neururer2018perceptions; lai2018banking). This limitation also becomes a challenge for data subjects attempting to exercise their right to be informed. Transparency is of the utmost importance for companies in the finance and health sectors, which provide personalised chatbots that heavily process sensitive or personally identifiable information. The open issue, therefore, is how transparency is best achieved in chatbot design, and how users should be kept informed about how their data is used.
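
To make this concrete, the following minimal Python sketch shows one possible transparency-by-design mechanism: every data item the agent can ask for carries purpose metadata, so a “why do you ask?” query can be answered at the exact point of collection rather than in a distant privacy policy. The slot names, purposes and function names are illustrative assumptions of ours, not part of any existing system.

```python
# A minimal sketch of transparency-by-design: each slot the agent may request
# carries purpose metadata that can be surfaced on demand during the dialogue.
# All slot names and purpose strings below are illustrative.
PURPOSES = {
    "date_of_birth": {
        "purpose": "verifying your identity with your bank",
        "retention": "deleted after verification",
        "shared_with": "your bank only",
    },
    "stress_level": {
        "purpose": "tailoring wellbeing suggestions to you",
        "retention": "kept for 30 days",
        "shared_with": "no one",
    },
}

def explain(slot: str) -> str:
    """Answer a 'why do you ask?' query at the point of collection."""
    meta = PURPOSES.get(slot)
    if meta is None:
        return "I don't collect that information."
    return (f"I ask for this for one purpose: {meta['purpose']}. "
            f"It is {meta['retention']} and shared with {meta['shared_with']}.")

print(explain("stress_level"))
```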

Another relevant right, the right of access, introduces key open issues, since it is not clear how agents should or could provide access to the personal information they hold. Meeting this requirement depends directly on how accurately the agent processes conversations. Unlike traditional applications that use relational databases, an agent has to extract personal information from a dialogue. The risks are two-fold: the agent might fail to extract the personal information at all, or it might extract it inaccurately owing to errors in processing the text or voice. These risks may undermine trust between the agent and the user, and also make it very complicated for users to exercise their right to rectification. Providing the entire conversation in response to a request to access personal data may be an option, but it is far from user friendly.
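
As a concrete illustration of the difficulty, the sketch below shows one way a subject-access report could be compiled from per-turn annotations rather than from raw transcripts. All names (Turn, access_report) are hypothetical, and the regex extraction is a deliberately naive stand-in for the NLP pipeline whose failure modes we describe above.

```python
# A minimal sketch of supporting the right of access: turns are scanned for
# personal-data items at ingestion time, and an access report is compiled
# from those annotations instead of dumping the whole conversation.
import re
from dataclasses import dataclass, field

# Naive placeholder patterns; a real agent would use NER/slot-filling models.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),
}

@dataclass
class Turn:
    turn_id: int
    speaker: str              # "user" or "agent"
    text: str
    extracted_pii: dict = field(default_factory=dict)

def annotate(turn: Turn) -> Turn:
    """Record which personal-data items a turn appears to contain."""
    for category, pattern in PII_PATTERNS.items():
        hits = pattern.findall(turn.text)
        if hits:
            turn.extracted_pii[category] = hits
    return turn

def access_report(turns: list) -> dict:
    """Compile what was collected and where, for a subject-access request."""
    report = {}
    for t in turns:
        for category, values in t.extracted_pii.items():
            report.setdefault(category, []).extend(
                {"value": v, "turn": t.turn_id} for v in values
            )
    return report

dialog = [annotate(Turn(1, "user", "Reach me at jane@example.com, please"))]
print(access_report(dialog))  # {'email': [{'value': 'jane@example.com', 'turn': 1}]}
```

Note that the report is only as good as the extraction step: anything the models miss is invisible to the data subject, which is exactly the trust and rectification problem raised above.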

3.2. How to design consent practices?

Managing consent in chatbot applications gives rise to further significant questions. One possible approach to gathering consent is to assume that the act of using a chatbot itself constitutes consent. However, this will not meet the GDPR’s standard of an unambiguous indication by clear affirmative action (see Article 4). It is possible to require users to ‘sign’ a contract to obtain consent, or to gather it at the beginning of the conversation. With the latter option, it is difficult to judge the potentially negative impact on the user-agent experience: such formal and unusual treatment of language may fit some use cases well, such as finance applications, but it may undermine the acceptance of, say, a therapist chatbot. The agent’s accuracy in processing users’ responses is another challenge. For example, instead of giving the simple answer, “yes, I consent” or “I do not”, a user may say, “ok, I consent but you cannot process my secrets about my family, especially my husband!”. Given these difficulties, the approach adopted by traditional applications (mobile apps, websites, etc.), where individuals are asked to actively opt in, might be an option for chatbots as well; this needs to be explored.
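
One way to operationalise the opt-in option is sketched below, under our own assumption that only clear affirmative replies count as consent, and that everything else, including the conditional reply quoted above, is routed back for clarification rather than silently coerced into a “yes”. The function and record names are illustrative.

```python
# A minimal sketch of an explicit opt-in consent step at the start of a
# conversation. Only clear affirmatives are accepted; conditional or
# free-form answers are classified as unclear and trigger a follow-up.
from datetime import datetime, timezone

AFFIRMATIVE = {"yes", "i consent", "yes, i consent", "i agree"}
NEGATIVE = {"no", "i do not", "i do not consent", "i don't consent"}

def interpret_consent(reply: str) -> str:
    """Classify a consent reply as 'granted', 'refused' or 'unclear'."""
    normalised = reply.strip().lower().rstrip(".! ")
    if normalised in AFFIRMATIVE:
        return "granted"
    if normalised in NEGATIVE:
        return "refused"
    return "unclear"      # conditional or free-form replies need a follow-up

def record_consent(user_id: str, reply: str):
    """Store an auditable consent record only on a clear affirmative."""
    if interpret_consent(reply) != "granted":
        return None       # re-ask, rephrase, or stop processing
    return {
        "user_id": user_id,
        "granted_at": datetime.now(timezone.utc).isoformat(),
        "scope": "conversation-personalisation",  # illustrative purpose tag
    }

print(interpret_consent("Yes, I consent."))                          # granted
print(interpret_consent("ok, I consent but not about my husband!"))  # unclear
```

A real deployment would also need to record purpose and scope per GDPR Article 7 and to support withdrawal mid-conversation, which this sketch does not attempt.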

3.3. Personalised chatbots vs. the right to be forgotten and the storage limitation principle

Compliance is also challenged by difficulties in guaranteeing the right to erasure. Even though it may be technically straightforward to delete a user’s previous conversations, doing so makes personalisation impossible and undermines the effective use of agents. For instance, Amazon Alexa allows deletion of voice recordings but also warns users about potential problems: “Voice recordings are used to improve the accuracy of your interactions with Alexa. Deleting voice recordings associated with your account may degrade your experience” (amaz). A better solution could be to delete, after a dialogue, only the personal data that users consider too sensitive. However, questions then arise about how such a deletion should be requested from a chatbot and whether it should cover both sides of the conversation. For example, should agents support requests like “forget everything I told you after arguing with my boss”, “forget my ethnic origin” or “don’t store conversations we have about my mental health”? Anonymisation techniques could be applied, as they are in several other domains, to comply with the GDPR; however, the more fluid nature of conversational data is likely to pose additional challenges in assuring anonymisation.
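
To illustrate what such scoped erasure might look like, the sketch below assumes (a strong assumption, given the extraction accuracy issues in Section 3.1) that turns have already been tagged with topic labels at ingestion time; a “forget” request then maps to a topic, and matching turns are removed on both sides of the dialogue. All names are hypothetical.

```python
# A minimal sketch of scoped erasure: turns carry topic tags assigned at
# ingestion time (the hard NLP problem is assumed solved here), and a
# "forget X" request deletes every tagged turn, including the agent replies
# that echo the user's disclosure.
from dataclasses import dataclass, field

@dataclass
class Turn:
    speaker: str                 # "user" or "agent"
    text: str
    topics: set = field(default_factory=set)

def erase_topic(history: list, topic: str) -> list:
    """Drop every turn tagged with the topic, on both sides of the dialogue."""
    return [t for t in history if topic not in t.topics]

history = [
    Turn("user", "I had a fight with my boss today", {"work"}),
    Turn("agent", "That sounds stressful. Work conflicts are hard.", {"work"}),
    Turn("user", "Anyway, what's the weather tomorrow?", set()),
]
history = erase_topic(history, "work")  # "forget everything about my boss"
print([t.text for t in history])        # only the weather turn remains
```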

The storage limitation principle could also be a concern for conversational agents. For a mobile application, it is reasonable to argue that a user’s personal information becomes useless when they deregister from the application, and there is no reason to keep it. In AI-based, intelligent applications like chatbots, however, one possible counterargument is that processing is necessary for the legitimate interests of the data controller (e.g., a chatbot developer): the controller needs this information to train the agent, which could be argued as a lawful basis for keeping the data. However, this approach may not be in line with the purpose limitation principle, which prevents personal data from being processed further for a new purpose that is incompatible with the original one. Anonymisation, by removing a person’s identifiers after a period of time, may be the optimal solution for compliance. However, the exact meaning of ‘erasure’ is ambiguous under this solution: it is possible to argue that erasure requires outright deletion of entire conversations, given the possibility of identifying a data subject from other personal information they shared in the messages. How, therefore, should this work?
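
The following sketch shows the identifier-removal compromise in code: a retention job that, after an illustrative 90-day period of our own choosing, replaces obvious direct identifiers with placeholders while leaving the remaining text available for training. As argued above, this may still fall short of ‘erasure’, because the residual text can re-identify the subject; the patterns here are deliberately naive.

```python
# A minimal sketch of retention-driven pseudonymisation: turns older than
# the retention period have direct identifiers replaced with placeholders.
# Indirect identifiers in the surrounding text are untouched, which is the
# residual re-identification risk discussed in the text.
import re
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)   # illustrative policy, not a legal threshold

IDENTIFIER_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"), "<PHONE>"),
]

def pseudonymise(text: str) -> str:
    """Replace obvious direct identifiers; indirect identifiers remain."""
    for pattern, placeholder in IDENTIFIER_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def apply_retention(stored: list) -> list:
    """Strip direct identifiers from turns older than the retention period."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    for record in stored:
        if record["timestamp"] < cutoff:
            record["text"] = pseudonymise(record["text"])
    return stored

old = datetime.now(timezone.utc) - timedelta(days=120)
print(apply_retention([{"timestamp": old, "text": "Mail me at a@b.com"}]))
```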

3.4. How to handle unneeded personal information?

The interactive and conversational nature of chatbots also poses a challenge for the data minimisation principle. A chatbot may end up processing several items of sensitive personal information even when it neither expects nor asks for them. For instance, a user may, intentionally or unintentionally, disclose their ethnic origin while answering an agent’s questions about their stress level; or, in a finance setting, they may disclose an account number and PIN to get an account balance (see (cnetamazon; guardiangoogle)). In theory, an agent may be expected to avoid storing such information, yet these inputs will inevitably be processed in order to generate an appropriate response. It is hard to find the right strategy for a chatbot to give reasonable replies while fully respecting the user’s privacy at the same time, especially where special categories of personal data (e.g., ethnicity or sexual orientation) are processed. It may be technically possible to avoid asking sensitive questions; however, the answer to an innocuous question may still expose sensitive information. How, therefore, should agents be designed to cater for such eventualities?
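
A redact-before-store pattern is one candidate strategy. The sketch below, with a deliberately naive, hypothetical PIN detector standing in for the much harder problem of recognising special-category disclosures, uses the raw utterance transiently to generate a reply but persists only a redacted copy.

```python
# A minimal sketch of redact-before-store data minimisation: the raw input
# is used only transiently to produce a reply; the stored log keeps a
# redacted copy. The PIN-like pattern is a naive placeholder for classifiers
# that would detect special-category data such as ethnicity or health.
import re

PIN_LIKE = re.compile(r"\b\d{4,6}\b")

def generate_reply(text: str) -> str:
    """Stand-in for the agent's actual response generation."""
    return "Your balance is available after verification."

def handle_utterance(text: str, log: list) -> str:
    reply = generate_reply(text)                  # raw input used transiently
    log.append(PIN_LIKE.sub("<REDACTED>", text))  # only minimised copy persists
    return reply

log = []
print(handle_utterance("My PIN is 4821, what is my balance?", log))
print(log)   # ['My PIN is <REDACTED>, what is my balance?']
```

Even this simple pattern exposes the open question in the text: the reply itself may depend on the sensitive value, so minimising storage does not by itself minimise processing.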

In summary, while personal user information is needed to design and develop effective chatbots, it is also important to consider the principles, lawful bases and rights under regulations such as the GDPR. This is clearly an area in need of more research as we, as a society, attempt to balance the advantages of agents against the need for privacy and the relevant data protection laws and regulations.

References