Conversational agents are used in various contexts, including the home, finance, health care and tourism. In each context, personal information is increasingly collected and processed in order to provide more effective and personalized services. Personalization allows a chatbot to be aware of situations and to dynamically adapt its interaction to better suit user needs [1, 2]. Although deeper disclosures of personal information can increase the effective use of these agents, an intriguing, and thus far unaddressed, tension arises between the need for sharing such information (in one-off situations, or for personalized interactions) and privacy (a primary goal of regulations such as the GDPR). These points call attention to the question: is it possible to develop ultimately useful, GDPR-compliant (and thus privacy-aware) agents?
2. Principles, lawful bases and rights
The EU’s GDPR is one of the most robust regulations for data protection that the world has ever seen. It defines principles and the lawful bases for processing personal information, and also specifies rights for individuals [3]. Below, we introduce a relevant subset of these, which are then further explored in Section 3 to highlight key open issues in conversational agent practice and research.
Transparency: requires data controllers to be clear, open and honest about how they process personal data.
Data minimization: requires data controllers to ensure that the personal data they process are adequate, relevant and limited to what is necessary in relation to the processing purpose.
Purpose limitation: requires personal data to be collected for specified, explicit and legitimate purposes and not further processed in a manner incompatible with those purposes.
Storage limitation: dictates that data controllers must delete personal data when it is no longer needed.
Lawful basis for processing:
Consent: requires data controllers to obtain explicit consent from the data subject for the processing of any personal data; this can be withdrawn at any time.
Legitimate interest: one of the cases where data controllers do not need to obtain consent is when they have a legitimate need and can show that the processing is necessary to achieve it.
Special category data: requires controllers to apply a higher level of protection for special categories of personal data (racial or ethnic origin, health data, political opinions etc.).
Right to be informed: allows individuals to know what is being done with their information, and thus links to transparency.
Right of access: allows data subjects to ask for a copy of their personal data, the purposes of processing their data, the categories of the data being processed, and the third parties or categories of third parties that will receive their data.
Right to rectification: requires data controllers to rectify or erase inaccurate or out of date information.
Right to erasure: also known as the right to be forgotten, mandates that controllers delete data in certain cases if there is no longer a lawful basis for processing or if the data subject withdraws consent.
3. Open issues in agent design
Even though the GDPR and its implications have been widely covered in contexts such as cloud computing, the internet of things and blockchain technologies, surprisingly little emphasis has been placed on potential design and implementation issues in the chatbot context. Even in work that touches on GDPR compliance [4, 5], discussions are severely limited. Below, we explore key conflicts and open issues in conversational agent design.
3.1. How to build honest and open chatbots?
Firstly, a lack of algorithmic transparency is a major barrier to GDPR compliance in chatbots. Efforts towards making users more aware of how their personal information is processed exist but are rather constrained in scope [1, 6]. This limitation also becomes a challenge for data subjects attempting to exercise their right to be informed. Transparency is of the utmost importance for companies in the finance and health sectors, which provide personalised chatbots that heavily process sensitive or personally identifiable information. The open issue, therefore, is how transparency is best achieved in chatbot design, and how users should be kept informed about how their data are used.
Another relevant right, the right of access, introduces key open issues, since it is not clear how agents should or could provide access to the personal information they hold. Meeting this requirement is directly related to the accuracy of the agent in processing conversations. Unlike traditional applications that use relational databases, an agent has to extract personal information from a dialog. The risks are two-fold: the agent might fail to extract an item of personal information, or it might extract it inaccurately due to a failure in processing the text or voice. These risks may undermine the trust between the agent and the user, and also make it very complicated for users to exercise their right to rectification. Providing the entire conversation upon a subject access request may be an option, but it is far from user friendly.
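One possible middle ground between returning raw transcripts and a structured subject-access record can be sketched as follows. This is only an illustration under strong assumptions: the regular expressions, category names and dialog structure are invented for the example, and a production agent would rely on a trained named-entity recogniser rather than hand-written patterns.

```python
import re

# Illustrative patterns for two categories of personal data (assumed for
# this sketch; real systems would use a trained NER model instead).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d ()-]{7,}\d"),
}

def extract_personal_data(dialog):
    """Scan user turns and index each extracted item by category,
    keeping a pointer to the source turn so the data subject can
    verify (or rectify) the item in its original context."""
    record = []
    for turn_id, (speaker, text) in enumerate(dialog):
        if speaker != "user":
            continue
        for category, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                record.append({"category": category,
                               "value": match.group(),
                               "turn": turn_id})
    return record

dialog = [
    ("agent", "How can I reach you?"),
    ("user", "Mail me at jane@example.com or call +44 1234 567890."),
]
access_record = extract_personal_data(dialog)
```

Keeping the turn index alongside each extracted value lets a user inspect the item in context, which partially mitigates the extraction-accuracy risk noted above, but any item the extractor misses is still invisible to the access request.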
3.2. How to design consent practices?
How to manage consent in chatbot applications gives rise to further significant questions. One possible approach is to assume that the very act of using a chatbot constitutes consent. However, this would not meet the GDPR’s standard of an unambiguous indication given by a clear affirmative action (see Article 4). It is possible to require users to ‘sign’ a contract to obtain consent, or to gather it at the beginning of the conversation. In the latter option, it is difficult to judge the potentially negative impact on the user-agent experience. Such formal and unusual treatment of language may fit some use cases, such as finance applications, well; however, it may undermine the acceptance of a therapist chatbot, for instance. The agent’s accuracy in processing users’ responses could be another challenge: instead of giving a simple answer such as “yes, I consent” or “I do not”, a user might say “ok, I consent but you cannot process my secrets about my family especially my husband!”. Given these difficulties, the approach adopted by traditional applications (mobile apps, websites etc.), where individuals are asked to actively opt in, might be an option for chatbots as well. This needs to be explored.
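The difficulty of processing free-text consent replies can be made concrete with a small sketch. The keyword lists and the three-way outcome (granted / denied / clarify) are assumptions for illustration; a real agent would need a far more robust NLU component and should re-ask whenever the reply is not a clear, unconditional affirmative.

```python
import re

def classify_consent_reply(reply):
    """Roughly classify a free-text consent reply. Any qualified or
    ambiguous answer falls through to 'clarify', prompting the agent
    to ask again rather than record consent."""
    words = re.findall(r"[a-z']+", reply.lower())
    if {"no", "not", "refuse", "don't"} & set(words):
        return "denied"
    affirmative = "consent" in words or words[:1] in (["yes"], ["ok"], ["sure"])
    qualified = any(q in words for q in ("but", "except", "unless", "only"))
    if affirmative and qualified:
        # Conditional consent is not an unambiguous indication by clear
        # affirmative action (GDPR Article 4), so ask the user to clarify.
        return "clarify"
    return "granted" if affirmative else "clarify"
```

Under this scheme, the qualified reply quoted above would be routed to a clarification turn rather than being recorded as blanket consent, which illustrates why keyword matching alone cannot settle the open question.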
3.3. Personalised chatbots vs. the right to be forgotten and the storage limitation principle
Compliance is also challenged by difficulties in guaranteeing the right to erasure. Even though it could be technically straightforward to delete a user’s previous conversations, this would make personalisation impossible and undermine the effective use of agents. For instance, Amazon Alexa allows deletion of voice recordings but also warns its users about potential problems: “Voice recordings are used to improve the accuracy of your interactions with Alexa. Deleting voice recordings associated with your account may degrade your experience” [7]. A better solution could be deletion, after a dialog, of personal data that users consider too sensitive. However, questions then arise about how to request such a deletion from a chatbot and whether deletion should cover both sides of the conversation. For example, should agents support requests like “forget everything I told you after arguing with my boss”, “forget my ethnic origin” or “don’t store conversations we have about my mental health”? Anonymisation techniques could be applied, as done in several domains, to comply with GDPR; however, there are likely to be additional challenges in assuring anonymisation given the more fluid nature of conversational data.
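One conceivable shape for such selective erasure is to tag stored turns by topic and delete whole tagged spans on request. This is a sketch under assumptions: the per-turn tags are presumed to come from some upstream topic classifier (itself a hard problem), and the decision to erase both parties' turns is one possible answer to the two-way-conversation question, not a settled one.

```python
def erase_category(conversation, category):
    """Drop every stored turn, from either party, tagged with the
    given category. Erasing the agent's replies as well reflects the
    view that a response can restate what the user disclosed."""
    return [turn for turn in conversation if category not in turn["tags"]]

conversation = [
    {"speaker": "user",
     "text": "I've been feeling anxious lately.",
     "tags": {"mental_health"}},
    {"speaker": "agent",
     "text": "I'm sorry to hear that. Do you want to talk about it?",
     "tags": {"mental_health"}},
    {"speaker": "user",
     "text": "Actually, what's the weather tomorrow?",
     "tags": set()},
]
remaining = erase_category(conversation, "mental_health")
```

A request like "don't store conversations we have about my mental health" would then reduce to erasing (and subsequently never persisting) one tag, but requests scoped by time or event ("after arguing with my boss") would need richer metadata than this sketch carries.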
The storage limitation principle could also be of concern for conversational agents. For a mobile application, it is reasonable to argue that a user’s personal information becomes useless when they deregister, and there is no reason to keep it. However, in AI-based, intelligent applications like chatbots, one possible counterargument is that processing is necessary for the legitimate interests of the data controller (e.g., a chatbot developer): the controller needs this information to train the agent, which could be argued to be a lawful basis for keeping the data. However, this approach may not be in line with the purpose limitation principle, which prevents personal data from being processed further for a new purpose that is not compatible with the original one. Anonymisation, by removing a person’s identifiers after a period of time, may be the optimal solution for compliance. However, the exact meaning of ‘erasure’ is ambiguous under this solution: it is possible to argue that erasure requires outright deletion of entire conversations, given the possibility of identifying a data subject from other personal information shared in the messages. How should this work, therefore?
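The retention-then-anonymise idea can be sketched as a periodic job over stored turns. The 90-day window, record fields and the choice of which fields count as identifiers are all assumptions for the example; and, as argued above, stripping account identifiers does not guarantee the remaining text is anonymous.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # assumed retention window

def anonymise_expired(records, now):
    """After the retention window, keep the text for model training
    but drop the direct identifiers linking it to the data subject.
    Whether this satisfies 'erasure' when the text itself may identify
    the person is exactly the ambiguity discussed above."""
    for rec in records:
        if now - rec["collected_at"] > RETENTION:
            rec["user_id"] = None
            rec["user_name"] = None
    return records

records = [
    {"user_id": "u1", "user_name": "Jane", "text": "My rent went up.",
     "collected_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"user_id": "u2", "user_name": "Raj", "text": "Check my balance.",
     "collected_at": datetime(2024, 5, 1, tzinfo=timezone.utc)},
]
result = anonymise_expired(records, now=datetime(2024, 5, 15, tzinfo=timezone.utc))
```

The design choice here is to decouple the training corpus from the identity link, which serves legitimate-interest training while narrowing the purpose-limitation conflict; it does not resolve it.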
3.4. How to handle unneeded personal information?
The interactive and conversational nature of chatbots also challenges the data minimisation principle. A chatbot may end up processing several items of sensitive personal information even if it does not expect (nor has it asked for) them. For instance, a user may, intentionally or unintentionally, disclose their ethnic origin while answering an agent’s questions about their stress level. Or, in a finance case, they may disclose their account number and PIN to get an account balance (see [8, 9]). In theory, the agent might be expected to avoid storing this information, yet those inputs will inevitably be processed to generate an appropriate response. It is hard to find the correct strategy for a chatbot such that it can still give reasonable replies while fully respecting the user’s privacy, especially where special categories of personal data (e.g., ethnicity or sexual orientation) are processed. It may be technically possible to avoid asking sensitive questions; however, the answer to an innocuous question may still expose sensitive information. How, therefore, should agents be designed to cater for such eventualities?
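One partial strategy, processing unsolicited secrets transiently but redacting them before persistence, can be sketched as follows. The detectors are hypothetical and far too crude for real use (genuine deployments would combine validated formats, e.g. checksummed account numbers, with a classifier for special category data); the point is only to separate what the agent may use to answer from what it stores.

```python
import re

# Hypothetical detectors for unsolicited secrets in a finance dialog.
REDACTIONS = [
    (re.compile(r"\b\d{8,12}\b"), "[ACCOUNT]"),               # account-like
    (re.compile(r"(?i)\bPIN\s*(?:is|:)?\s*\d{4,6}\b"), "PIN [REDACTED]"),
]

def minimise_before_storage(text):
    """The raw utterance is used only transiently to produce a reply;
    this pass strips number-like secrets before the turn is persisted,
    so the stored transcript holds less than the user actually said."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

stored = minimise_before_storage(
    "My account is 12345678 and my PIN is 4321, what's my balance?")
```

This handles structured secrets like PINs reasonably well, but it illustrates the residual problem named above: free-text disclosures of ethnicity or health have no reliable surface pattern to redact.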
In summary, while there is a need for personal user information to design and develop chatbots, it is also important to consider principles, lawful bases and rights under regulations such as the GDPR. This is clearly an area in need of more research as we, as a society, attempt to balance the advantages of agents with the need for privacy and the respective data protection laws and regulations.
- (1) M. Neururer, S. Schlögl, L. Brinkschulte, and A. Groth, “Perceptions on authenticity in chat bots,” Multimodal Technologies and Interaction, vol. 2, no. 3, p. 60, 2018.
- (2) A. Chaves and M. Gerosa, “How should my chatbot interact? A survey on human-chatbot interaction design,” arXiv preprint arXiv:1904.02743, 2019.
- (3) European Parliament, “Regulation (EU) (2016) 2016/679 of the European Parliament and of the Council of 27 April on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation),” Official Journal of the European Union 59(L 119), 2016. [Online]. Available: https://eur-lex.europa.eu/eli/reg/2016/679/oj
- (4) D. Peras, “Chatbot evaluation metrics,” Economic and Social Development: Book of Proceedings, pp. 89–97, 2018.
- (5) M. Skjuve and P. B. Brandtzæg, “Chatbots as a new user interface for providing health information to young people,” Youth and news in a digital media environment–Nordic-Baltic perspectives, 2018.
- (6) S.-T. Lai, F.-Y. Leu, and J.-W. Lin, “A banking chatbot security control procedure for protecting user data security and privacy,” in International Conference on Broadband and Wireless Computing, Communication and Applications. Springer, 2018, pp. 561–571.
- (7) Amazon, “Review your alexa voice history,” n.d. [Online]. Available: https://www.amazon.co.uk/gp/help/customer/display.html?nodeId=GHXNJNLTRWCTBBGW
- (8) CNET, “Amazon Echo banking: Get Alexa to check your balance, make payments and more,” 2019. [Online]. Available: https://www.cnet.com/how-to/amazon-echo-banking-get-alexa-to-check-your-balance-make-payments-and-more/
- (9) The Guardian, “Hey Google, lend me a tenner? NatWest trials voice banking,” 2019. [Online]. Available: https://www.theguardian.com/business/2019/aug/12/hey-google-lend-me-a-tenner-natwest-trials-voice-banking