Achieving Ethical Algorithmic Behaviour in the Internet-of-Things: a Review

10/22/2019 ∙ by Seng W. Loke, et al.

The Internet-of-Things is emerging as a vast inter-connected space of devices and things surrounding people, many of which are increasingly capable of autonomous action, from automatically sending data to cloud servers for analysis, changing the behaviour of smart objects, to changing the physical environment. A wide range of ethical concerns has arisen in their usage and development in recent years. Such concerns are exacerbated by the increasing autonomy given to connected things. This paper reviews, via examples, the landscape of ethical issues, and some recent approaches to address these issues, concerning connected things behaving autonomously, as part of the Internet-of-Things. We consider ethical issues in relation to device operations and accompanying algorithms. Examples of concerns include unsecured consumer devices, data collection with health related Internet-of-Things, hackable vehicles and behaviour of autonomous vehicles in dilemma situations, accountability with Internet-of-Things systems, algorithmic bias, uncontrolled cooperation among things, and automation affecting user choice and control. Current ideas towards addressing a range of ethical concerns are reviewed and compared, including programming ethical behaviour, whitebox algorithms, blackbox validation, algorithmic social contracts, enveloping IoT systems, and guidelines and code of ethics for IoT developers - a suggestion from the analysis is that a multi-pronged approach could be useful, based on the context of operation and deployment.


1 Introduction

The Internet-of-Things (or IoT, for short) involves devices or things connected to the Internet or with networking capability. This includes Internet devices such as smartphones, smartwatches, smart TVs, smart appliances, smart cars, smart drones, as well as everyday objects with Bluetooth, 3G/4G, and WiFi capabilities. Specialised IoT protocols such as NB-IoT, Sigfox and LoRaWAN provide new connectivity options for the IoT (see https://www.rs-online.com/designspark/eleven-internet-of-things-iot-protocols-you-need-to-know-about).

Apart from industrial IoT systems, what is beginning to emerge is the notion of everyday objects with

  • Internet or network connectivity (e.g., WiFi or Bluetooth enabled),

  • sensors (e.g., think of the sensors in the smartphone, but also in a fork to detect its movement and how fast people eat [41, 44]; see https://www.hapi.com/product/hapifork),

  • computational ability (e.g., with embedded AI [105] and cooperation protocols), and

  • actuators, or the ability to affect the physical world, including taking action autonomously.

The above highlights only some aspects of the IoT - an extensive discussion on the definition of the Internet of Things is in [60].

There are also new home appliances like Amazon Alexa (https://developer.amazon.com/alexa) and Google Home (https://madeby.google.com/home/), which have emerged with Internet connectivity as central to their functioning, and often they can be used to control other devices in the home. When things are not only Internet-connected but also addressable via Web links or URLs (Uniform Resource Locators), and communicate via Web protocols (e.g., using the Hypertext Transfer Protocol (HTTP)), the so-called Web of Things (https://www.w3.org/WoT/) emerges.

With increasing autonomy (fuelled by developments in Artificial Intelligence (AI)) and connectivity (fuelled by developments in wireless networking), there are a number of implications:

  • greater cooperation among IoT devices can now happen - devices that were previously not connected could now not only communicate (provided time and resource constraints allow) but carry out cooperative behaviours. In fact, the work in [93] envisions universal machine-to-machine collaboration across manufacturers and industries by the year 2025, though this can be restricted due to proprietary data; having a cooperation layer above the networking layer is an important development - the social IoT has been widely discussed [95, 73, 54, 30];

  • network effects emerge: the value of a network is dependent on the size of the network; the greater the size of the network, the greater the value of joining or connecting to the network, so that device manufacturers could tend to favour cooperative IoT (e.g., see The Economics of the Internet of Things, https://www.technologyreview.com/s/527361/the-economics-of-the-internet-of-things/); a device that can cooperate with more devices could have greater value, compared to ones that cooperate with only a few - such cooperation among devices can be triggered by users directly or indirectly (if decided by a device autonomously), with consequent impact on communication latency and delay;

  • devices which are connected to the Internet are controllable via the Internet, which means they are also vulnerable to (remote) hacking, in the same way that a computer being on the Internet can be hacked;

  • sensors on such IoT devices gather significant amounts of data and, being Internet-enabled, such data are typically uploaded to a server (or to a Cloud computing server somewhere); potentially, such data can cause issues for people who are privacy-conscious (e.g., data from an Internet-connected light bulb could indicate when someone is home and not home); a topic often linked to the IoT is data analytics, due to the need to process and analyze data from such sensing devices;

  • IoT devices might be deployed over a long time (e.g., embedded in a building or be part of urban street lighting) so that they need to be upgraded (or their software upgraded) over the Internet as improvements are made, errors are found and fixed, and as security vulnerabilities are discovered and patched;

  • non-tech savvy users might find working with Internet-connected devices challenging (e.g., set-up and maintenance, or being unaware of the security or privacy effects of devices), and users might feel a loss of control; and

  • computation on such devices suggests greater autonomy and more complex decision-making is possible (and devices with spare capacity can also be used to supplement other devices); in fact, autonomous behaviour in smart things is not new: smart things detecting sensor-based context and responding autonomously (using approaches ranging from simple Event-Condition-Action rules to sophisticated reasoning in agent-based approaches) have been explored extensively in context-aware computing [56, 20] - a minimal sketch of the Event-Condition-Action style follows this list.
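To make the Event-Condition-Action style concrete, below is a minimal sketch in Python; the event names, context fields and thresholds are hypothetical, and a real smart thing would react to events from actual sensors rather than to a hard-coded example.

```python
# Minimal Event-Condition-Action (ECA) sketch for a smart thing.
# Event names, context fields and thresholds are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    event: str                          # event type that triggers the rule
    condition: Callable[[Dict], bool]   # predicate over the sensed context
    action: Callable[[Dict], None]      # effect on the (simulated) environment

def dim_lights(ctx: Dict) -> None:
    print(f"Dimming lights: ambient={ctx['ambient_lux']} lux, room empty")

rules: List[Rule] = [
    Rule(event="motion_cleared",
         condition=lambda ctx: ctx["ambient_lux"] > 200 and ctx["occupants"] == 0,
         action=dim_lights),
]

def on_event(event: str, context: Dict) -> None:
    """Fire every rule whose event matches and whose condition holds."""
    for rule in rules:
        if rule.event == event and rule.condition(context):
            rule.action(context)

# Example: a motion sensor reports that the room is now empty.
on_event("motion_cleared", {"ambient_lux": 320, "occupants": 0})
```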

From the above, one can see that the IoT offers tremendous opportunity, but also raises a range of ethical concerns. Prominent computer scientists have noted the need for ethical policies to guide IoT governance, in the areas of privacy rights, accountability for autonomous systems, and promoting the ethical use of technologies [13].

This paper aims to review the landscape of ethical concerns and issues that have arisen and which could, in future, arise with the Internet-of-Things, focusing on device operations and accompanying algorithms, especially as algorithms are increasingly embedded in, and run on, IoT devices enabling devices to take action with increasing autonomy; the paper also reviews current ideas for addressing these concerns. Based on the review, it is suggested that a multi-pronged approach can be useful for achieving ethical algorithmic behaviour in connected things.

1.1 Scope and Context

There has been much recent thinking on how Artificial Intelligence (AI) technologies can be integrated with the IoT, from applying AI algorithms to learn from IoT data, to multiagent views of IoT, to connected robots [105, 88] (see also https://emerj.com/ai-sector-overviews/artificial-intelligence-plus-the-internet-of-things-iot-3-examples-worth-learning-from/). As AI capabilities become embedded into IoT devices, the devices gain greater autonomy and decision-making capabilities, automating a wider range of tasks, so that some things can be described as “robotic”. For example, we can imagine a bookshelf that one can talk to and which can serve us, relocating and reorganising books at our command, or a library where storage and retrieval of (physical) books is automated, or a standing lamp that follows and tracks the user as the user moves around on the sofa or in the room - a question is whether such a bookshelf, library or standing lamp can be considered a “robot”. Autonomous connected vehicles [55], with Internet-enabled networking and vehicle-to-vehicle connectivity, have also captured the world’s imagination and have enjoyed tremendous development in recent years. The discussion in this paper, hence, includes robots, AI as used with IoT, and autonomous vehicles, under a broad definition of IoT. The link between IoT and robotics has also been made in [86, 77], yielding the notion of the Internet of Robotic Things.

Ethics in AI has been extensively discussed elsewhere (e.g., [107] and ethical AI principles, https://futureoflife.org/ai-principles/), and indeed, the integration of AI technologies into the IoT, as mentioned above, calls for ethical AI principles to be considered in the development of such IoT systems. Hence, this paper reviews work not only on ethical issues in IoT but also includes a review of work on ethics in AI and robotics in the context of IoT.

While this paper reviews technical approaches mainly from computing, the issues are often interdisciplinary, at the intersection of computing technology, philosophy (namely, ethics), law and governance (in so far as policies are involved), as well as diverse application domains (e.g., health, transport, business, defence, law, and others) where IoT has been applied. Moreover, while security and data privacy are key concerns in relation to things behaving ethically, the concerns about ethical behaviour go beyond security and privacy. The field has also been growing continually in recent years, as ethical issues for IoT are highlighted in the mass media and in a growing body of research (examples are highlighted in the following sections); hence, the paper does not seek to cover all recent work exhaustively, but provides a broad snapshot of and introduction to the area, while highlighting potential approaches to the issues.

The seminal review on the ethics of computing [91] lays out five aspects of each paper reviewed: the ethical theory that aids interpreting the issue, the methodology applied, the technology context, and the contributions and recommendations. Different from [91], this paper focuses on the ethical issues in IoT work, but these aspects have informed the reading of work in this area at the junction of the IoT and ethics. This paper touches on a range of ethical issues noted in that review, namely agency, autonomy, consent, health, privacy, professionalism, and trust. For example, we discuss issues of user choice and consent in IoT devices, the autonomy of things in their function, health IoT issues, the security, trust and privacy of IoT devices, and codes of ethics for IoT developers. We do not discuss ethical issues in relation to inclusion and the digital divide, but retain a technical focus in this paper.

The survey on foundational ethical issues in IoT [5] focused on informed consent, privacy, security, physical safety, and trust, and noted, importantly, that these are inter-related in IoT. This paper also discusses a range of these issues, but we additionally consider examples and solutions (many originally from outside typical IoT research areas) for achieving ethical IoT systems.

1.2 Organization

The rest of the paper is organised as follows. To introduce readers to ethical issues in IoT, the next section first discusses, via examples, ethical concerns with IoT. Then, the following section examines ideas which have been proposed to address these concerns, and notes the need for a multi-pronged approach. The final section concludes with future work.

2 Ethical Concerns and Issues

This section reviews ethical concerns and issues with IoT devices and systems, via examples in multiple application domains, including the need for consumer IoT devices to employ adequate security measures, ethical data handling by health-related IoT systems, right behaviour of autonomous vehicles in normal usage and dilemma situations, usage concerns with connected robots and ethical robot behaviour, algorithmic bias that could be embedded into IoT systems, right behaviour when IoT devices cooperate, and user choice restrictions or loss of control with automated IoT systems. Below, the unit of analysis is either an individual IoT device or a collection (or system) of such IoT devices (the size of which depends on the application).

2.1 Unsecured Consumer IoT Devices

The security and data privacy issues in IoT are well surveyed and have been discussed extensively, e.g., in [81, 34, 84, 58, 9, 80, 3, 96, 24, 43]. The contents of the surveys are not repeated here but some examples of issues with unsecured IoT devices are highlighted below.

Some IoT devices may have been shipped without encryption (lower-powered devices may not be capable of encrypted communications). A study by HP (https://community.softwaregrp.com/t5/Protect-Your-Assets/HP-Study-Reveals-70-Percent-of-Internet-of-Things-Devices/ba-p/220516#.WiY9QmLZXDv) suggested that 70 percent of IoT devices use unencrypted communications. However, it must be noted that cheaper does not necessarily mean less secure, as cost depends on a range of factors beyond security capability.

A Samsung TV was said to listen in on living room conversations as it picks up commands via voice recognition. The company has since clarified that it does not record conversations arbitrarily (http://abcnews.go.com/Technology/samsung-clarifies-privacy-policy-smart-tv-hear/story?id=28861189). However, it does raise a concern about whether devices in the same category, i.e., voice-activated or conversational devices, record conversations.

In an experiment at Israel’s Weizmann Institute of Science (https://www.timesofisrael.com/israeli-hackers-show-light-bulbs-can-take-down-the-internet/), researchers managed to fly a drone to within 100 metres of a building and remotely infect light bulbs in the building by exploiting a weakness in the ZigBee Light Link protocol used for connecting to the bulbs. The infected bulbs were then remotely controlled via the drone and made to flash ‘SOS’.

A report on the Wi-Fi enabled Barbie doll (https://www.theguardian.com/technology/2015/nov/26/hackers-can-hijack-wi-fi-hello-barbie-to-spy-on-your-children) noted that it can be hacked and turned into a surveillance device. This was followed by an FBI advisory note on IoT toys (https://www.ic3.gov/media/2017/170717.aspx) about the possible risk of private information disclosure. An 11-year-old managed to hack into a Teddy Bear via Bluetooth (https://securityintelligence.com/news/with-teddy-bear-bluetooth-hack-11-year-old-proves-iot-security-is-no-childs-play/). And the ubiquitous IoT cameras have certainly not been free from hacking (http://www.zdnet.com/article/175000-iot-cameras-can-be-remotely-hacked-thanks-to-flaw-says-security-researcher/).

There are many other examples of IoT devices getting hacked (https://www.wired.com/2015/12/2015-the-year-the-internet-of-things-got-hacked/). As research shows (https://arxiv.org/pdf/1705.06805.pdf), someone can still infer people’s private in-home activities by monitoring and analysing network traffic rates and packet headers, even when devices use encrypted communications.
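To illustrate why encryption alone does not prevent such inference, the following toy sketch (fabricated device name, traffic volumes and threshold) labels when a device is in use purely from per-minute upload volumes, the kind of metadata a network observer can see even for encrypted traffic.

```python
# Toy illustration: inferring in-home activity from traffic *rates* only,
# without ever seeing packet contents. Device name, volumes and threshold are made up.

# Per-minute bytes uploaded by a (hypothetical) sleep monitor over ten minutes.
sleep_monitor_bytes_per_min = [120, 110, 4800, 5100, 5050, 4900, 130, 125, 118, 122]

ACTIVE_THRESHOLD = 1000  # bytes/min above which the device is clearly streaming

def infer_activity(rates, threshold=ACTIVE_THRESHOLD):
    """Label each minute as 'in use' or 'idle' from traffic volume alone."""
    return ["in use" if r > threshold else "idle" for r in rates]

print(infer_activity(sleep_monitor_bytes_per_min))
# A burst of 'in use' minutes on a sleep monitor reveals when someone went to bed,
# even though every packet was encrypted.
```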

The above are only a few of many examples, and they have implications for developers of IoT devices, who must incorporate security features, for policy-makers, for cyber security experts, as well as for users, who need to be aware of potential risks.

Recent surveys also highlighted privacy and managerial issues with the IoT [103]. From the Australian privacy policy perspective [17], after a review of the four issues of (1) IoT-based surveillance, (2) data generation and use, (3) inadequate authentication and (4) information security risks, the conclusion is that the Australian Privacy Principles are inadequate for protecting the privacy of individuals whose data are collected using IoT devices, and that, given the global reach of IoT devices, privacy protection legislation is required across international borders. Weber [102] calls for new legal approaches to data privacy in the IoT context, from the European perspective, based on improved transparency and data minimization principles. The recent European regulation, the General Data Protection Regulation (GDPR, https://eugdpr.org/), is a law aimed at providing people with greater control of their data, and has implications and challenges for IoT systems, with requirements such as privacy-by-design, the right to be forgotten or data erasure, the need for clarity in requesting consent, and data portability, where users have the right to receive their own data, as discussed in [100]. Companies are already coming on board with tools to support GDPR requirements (for example, see Microsoft’s tools at https://www.microsoft.com/en-au/trust-center/privacy/gdpr-overview and Google’s at https://cloud.google.com/security/compliance/gdpr/).

A recent workshop on privacy and security policies for IoT at Princeton University (https://citp.princeton.edu/event/conference-internet-of-things/) has raised a range of issues, suggesting a still ongoing area of research at the intersection of IT, ethics, governance and law. Cyber-physical systems security is discussed extensively elsewhere [42].

Security also impacts usability: additional measures taken to improve security, for example requiring users to reset passwords before being allowed to use a device, multi-factor authentication schemes, and security-related configuration during set-up and use, could all reduce usability; the work in [25] highlights the need to consider the usability impact of IoT security features at design time.

2.2 Ethical Issues with Health Related IoT

IoT medical devices are playing an increasingly critical role in human life, but as far back as 2008, implantable defibrillators have been known to be ‘hackable’ [39], allowing communications to them to be intercepted.

Apart from the security of IoT devices, in [63], a range of ethical issues with the use of IoT in health were surveyed, including:

  • personal privacy: this relates not just to privacy of collected data, but the notion that a person has the freedom not to be observed or to have his/her own personal space; the use of smart space monitoring (e.g., a smart home or in public spaces such as aged care facilities) of its inhabitants raises the concern of continual observation of people, even if it is for their own good - being able to monitor individuals or groups can be substantially beneficial but presents issues of privacy and access;

  • informational privacy: this relates to one’s ability to control one’s own health data - it is not uncommon for organizations to ask consumers for private data with the promise that the data will not be misused - in fact, privacy laws could prohibit use of the data beyond its intended context - the issues are myriad (e.g., see [18]), including how one can access data collected by an IoT device but now possibly owned by the company, how much an insurance company could demand of user health data (https://www.iothub.com.au/news/intel-brings-iot-to-health-insurance-411714), how one can share data in a controlled manner, how one can prove the veracity of personal health data, and how users can understand the privacy-utility tradeoffs when using an IoT device;

  • risk of non-professional care: the notion of self health-monitoring and self-care as facilitated by health IoT devices can provide a false optimism, limiting a patient’s condition to a narrow range of device-measurable conditions; confidence in non-professional carers armed with IoT devices might be misplaced.

The above issues relate mainly to health IoT devices, but the data privacy issues apply to other Internet-connected devices in general [72] (see also http://arno.uvt.nl/show.cgi?fid=144871). Approaches to data privacy range from privacy-by-design and recent blockchain-based data management and control (e.g., [4, 31, 37, 78, 106]) to regulatory frameworks that aim to provide users greater control over data, as reviewed earlier, e.g., in [100, 24, 84, 43]. There are also issues related to how health data should or should not be used - for example, what companies are allowed to use the health data for (e.g., whether an individual could be denied insurance based on health data, or denied access to treatment based on lifestyle monitoring).

In relation to IoT in sports to help sports training and fitness tracking, incorporated in an artificial personal trainer, there are numerous technical challenges [32], including generating and adapting plans for the person, measuring the person’s readiness, personal data management, as well as validation and verification of fitness data. One could think of issues and liability arising from errors in measurement, an athlete being endangered or even injured over time by erroneous advice due to incorrect measurements, following the advice of an AI trainer, or such devices being hacked. In any case, there are already several wearable personal trainers on the market (e.g., https://welcome.moov.cc/ and https://vitrainer.com/pages/vi-sense-audio-trainer), which come with appropriate precautions, disclaimers and privacy policies for users (see https://welcome.moov.cc/terms/ and https://vitrainer.com/pages/terms-and-conditions).

2.3 Hackable Vehicles and the Moral Dilemma for Autonomous Vehicles (AVs)

Cars with computers are not unhackable; an example is the Jeep which was hacked while on the road (https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/) and made to be remotely controllable. With many vehicles having Internet connectivity, their hackability is now public knowledge (for example, see the online book on car hacking at http://opengarages.org/handbook/). Similar security issues arise as in other IoT systems: encrypting communications with the vehicle, securing the vehicle’s systems, and managing data collected by the vehicle; security issues for autonomous vehicles are discussed elsewhere [50, 65]. Given the wide range of data collected about the vehicle, from telemetry data to location data, as well as logs of user interaction with the vehicle computer, privacy management of vehicular data is an issue [67].

Recent developments in autonomous vehicles have provided tremendous promise for reducing road injuries and deaths due to human error, as well as the potential to start a ‘new’ industry, with many countries around the world working on autonomous vehicle projects (https://avsincities.bloomberg.org/global-atlas), with subsequent impact on the design of cities (see http://www.nlc.org/AVPolicy and https://www.wired.com/2016/10/heres-self-driving-cars-will-transform-city/). However, as autonomous vehicles function in a socio-technical environment, there could be decisions they need to make which involve moral reasoning, as discussed in [15] (see also the TED talk by Professor Iyad Rahwan from MIT, https://rahwan.me/).

Essentially, the moral dilemma of autonomous vehicles is similar to the trolley problem in philosophy (see https://fee.org/articles/the-trolley-problem-and-self-driving-cars/): suppose an autonomous vehicle is about to hit someone in its way and the only way to avoid this is to swerve to the right or left, but it will kill some pedestrians in doing so; or should it opt to protect the occupants of the vehicle in preference to external individuals? Either way, someone will be killed: what should the autonomous vehicle do? Such dilemma situations can occur in other smart-things scenarios; e.g., consider this original example: in a fire situation, a smart door can decide to open to let someone through but, at the same time, would allow smoke to pass in and possibly harm others; or a smart thing can choose to transmit messages about a lost person with dementia frequently, allowing finer-grained tracking for a short time but risking the battery running out sooner (and so losing the person, if not found in time), or to transmit less frequently, allowing longer operating time but only coarse-grained location data. While there may be no clear-cut answer to the question, it is important to note the ethical issue raised; potential approaches to the problem are discussed later. While AVs will help many people, there are issues about what the AVs will do in situations where trade-offs are required. A utilitarian approach might be to choose the decision which potentially kills fewer people. A virtue ethics approach would not approve of that way of reasoning. A deontological or virtuous approach might decide ‘not to kill’ whatever the situation, in which case the situation cannot be helped. One could also argue that such situations are unlikely to arise, but there is a small possibility that they could, and perhaps in many different ways. Imagine an AV in a busy urban area receiving an instruction to speed up because its passenger has just had a heart attack, but speeding up puts pedestrians and other road users at greater risk: should the AV speed up? One could also note that sensors in the vehicle could detect that the passenger has had a heart attack and report this to traffic management to have a path cleared, so that speeding up may not be an issue; connectivity, hence, can help the situation rather than increase risk, while the ethics in the decision-making remains challenging.

Ethical guidelines regulating the use and development of AVs are being developed; Germany was the first country to provide ethical guidelines for autonomous vehicles, via the Ethics Commission on Connected and Automated Driving (see the report at https://www.bmvi.de/SharedDocs/EN/Documents/G/ethic-commission-report.pdf?__blob=publicationFile). The guidelines include an admission that autonomous driving cannot be completely safe: “… at the level of what is technologically possible today, and given the realities of heterogeneous and non-interlinked road traffic, it will not be possible to prevent accidents completely. This makes it essential that decisions be taken when programming the software of highly and fully automated driving systems.” As noted in [53], for “future autonomous cars, crash-avoidance features alone won’t be enough”; when a crash is inevitable, there needs to be a crash-optimization strategy, but that strategy should not aim only to minimise damage, since if that were the case, the vehicle might decide to crash into a cyclist wearing a helmet rather than a cyclist not wearing one, hence targeting people who chose to be safer; there is no clear resolution of this ethical issue as yet.

There are also issues concerning who will take responsibility when something bad happens in an autonomous vehicle - whether it would be the passengers, the manufacturer or middlemen. The issue is complex in mixed scenarios where human drivers and autonomous vehicles meet in an incident, the fault lies with the human driver, but the autonomous vehicle was unable to react to the human error.

But assuming autonomous vehicles succeed in reducing road deaths and accidents, would it then be ethical to allow human drivers? The work in [89] goes as far as to suggest: “…making it illegal to manufacture vehicles that allow for manual driving beyond a certain date and/or making it illegal, while on a public road, to manually drive a vehicle that has the capacity for autonomous operations.” Appropriate policies for autonomous vehicles continue to be an open issue [11]. Further approaches to ethical automated vehicles will be discussed in Section 3.

2.4 Roboethics

Roboethics [51, 52, 98] is concerned with positive and negative aspects of robotics technology in society, and explores issues concerning the ethical design, development and use of robots. While there are tremendous opportunities in robotics, their widespread use also raises ethical concerns, and as the line between robots and autonomous IoT becomes blurred, the issues of ethics with robots are inherited by IoT.

2.4.1 Robots’ Rights

There are some schools of thought that have begun to ask the question of whether robots (if capable of moral reasoning) should have rights [38], and what level of autonomous decision-making would require robots to have rights, similar to how animals might have rights. Indeed, the level of autonomy and sentience required of such machines before rights becomes an issue might still be far off. In fact, roboethics has largely been concerned with ethics that developers and users of the technology need to consider. Below, we explore examples of ethical issues in robotics for surgery, personal assistance, and war.

2.4.2 Robotic Surgery

As we have seen, robots are capable of surgical operations, typically under the direction and control of a surgeon. In 2000, the U.S. Food and Drug Administration (FDA) approved the use of the Da Vinci robotic surgical system for a surgeon to perform laparoscopic gall bladder and reflux disease surgery. Robotic surgery devices continue to be developed (https://spectrum.ieee.org/robotics/medical-robots/would-you-trust-a-robot-surgeon-to-operate-on-you), and some make decisions autonomously during surgeries, e.g., automatically positioning a frame for the surgeon’s tools, deciding where to cut bones, and delivering radiation for tumours. If the costs of robotic surgery could be reduced, complex surgery could perhaps be made available to more people in third-world and developing countries. As robots get better and can provide surgical help at lower cost, what is problematic is then not their use but denying people their use.

But an issue emerges when something goes wrong and the question of accountability and liability arises regarding the patient’s injury. While one might not consider surgical robots to be IoT devices, the issue of IoT devices making decisions that could result directly in injury or harm, even if they were intentionally made to help humans, raises similar concerns.

2.4.3 Social and Assistive Robots and Smart Things

Social robots might play the role of avatars (remotely representing someone), social partners (accompanying someone at home), or cyborg extensions (being linked to the human body in some way). A robot capable of social interaction might be expected to express and perceive emotions, converse with users, imitate users, establish social connection with users via gesture, gaze or some form of natural interaction modality, as well as perhaps present a distinctive personality. While they can be useful, some concerns include:

  • Social robots or IoT devices may be able to form bonds with humans, e.g., with an elderly person or a child. A range of questions arises, such as whether such robots should be providing emotional support in place of humans, if they can be designed to do so. Another question is what psychological and physical risks arise from humans forming such bonds with such devices or robots: when a user is emotionally attached to a thing, a concern is what would happen if the thing is damaged or no longer supported by the manufacturer, or if such things can be hacked to deceive the user. This question can also be considered for smart things which have learnt and adapted to a person’s behaviour and are not easily replaced.

  • Such social robots or IoT devices can be designed to have the authority to provide reminders, therapy or rehabilitation to users. Ethical issues can arise when harm or injury is caused through interaction with such robots; for example, a death caused by medication taken at the wrong time because a malfunctioning robot issued the reminder at the wrong time. A similar concern carries over to a smart pill bottle (an IoT device) intended to track when a person has and has not taken medication, with an associated reminder system. There is also a question of harm being caused inadvertently, e.g., when an elderly person trips over a robot that approached too suddenly, or a robot makes decisions on behalf of its owner without the owner’s full consent or before the owner could intervene.

It must be noted, though, that the concerns above relate to behavioural aspects of the devices, not so much to the connectivity that the devices might have.

Ethical guidelines regarding their development and use are required, including training of users and care-givers, affordability of such devices, and prevention of malpractice or misuse.

Ethical principles for socially assistive robots are outlined at https://robotics.usc.edu/publications/media/uploads/pubs/689.pdf, including

“The principles of beneficence and non-maleficence state that caregivers should act in the best interests of the patient and should do nothing rather than take any action that may harm a patient.”

A similar guideline informs socially assistive smart things, not only robots. How smart things with intelligent and responsive behaviours, and robots, could be programmed to provably satisfy those principles, and how they could learn human values and be flexible enough to act in a context-aware manner, remain open research issues. Issues specifically due to the fact that these devices might be connected are similar to those for other IoT devices, e.g., sensitive private data possibly shared beyond safe boundaries, and vulnerability to remote hacking, perhaps made worse by their close interaction with and proximity to users.

2.4.4 Robots in War

Robots can be used to disarm explosive devices, to carry out 24/7 monitoring or risky spying missions, and to engage in battles in order to save lives. But there are already controversies surrounding the use of automated drones (even if remotely human-piloted) in war [28]. While human casualties can be reduced, the notion of humans being out of the loop in firing decisions is somewhat controversial. AI also might not have adequate discriminatory powers for its computer vision technology to differentiate civilians from soldiers. While robots can reduce human lives lost at war, there is also the issue that they could lower barriers to entry and even ‘encourage’ war, or be misused by ‘tyrants’. There have been calls for a ban on autonomous weapons, in an open letter signed by 116 founders of robotics and AI companies from 26 countries (https://futureoflife.org/2017/08/20/killer-robots-worlds-top-ai-robotics-companies-urge-united-nations-ban-lethal-autonomous-weapons/), and in the Campaign to Stop Killer Robots (http://www.stopkillerrobots.org/). Algorithmic behaviour can be employed in remotely controlled robots to help human operators, but remote controlled and autonomous robotic weapons, if allowed, will need to be designed for responsibility, i.e., to allow human oversight and control (https://www.oxfordmartin.ox.ac.uk/downloads/briefings/Robo-Wars.pdf). Robot-on-robot warfare might still be legal and ethical.

2.5 Algorithmic Bias and IoT

We explore the notion of bias in algorithms in this section. The following types of concerns with algorithms were noted in [62]: inconclusive evidence, when algorithms make decisions based on approximations, machine learning techniques, or statistical inference; inscrutable evidence, where the workings of the algorithm are not known; and misguided evidence, when the inputs to the algorithm are inappropriate. Some automated systems have behaviour which can be opaque and unregulated, and could amplify biases [69], inequality [29], and racism [68].

Note that while such biases are not specific to IoT systems, and there are IoT systems which do not interact with humans directly, such issues are relevant, as there are also IoT devices with Internet applications that employ face recognition algorithms and voice recognition algorithms (e.g., Google Home and Amazon Echo) and that aim to present users with summaries of recent news and product recommendations.

Algorithmic bias can arise in autonomous systems [21], and could arise in IoT devices as they become increasingly autonomous. An IoT device that behaves using a machine learning algorithm, if trained on biased data, could exhibit biased behaviour. With the increasingly data-driven nature of IoT devices, a number of possible opportunities for discrimination can arise, as noted in [97]; examples given include an IoT gaming console and a neighbourhood advisor that advises avoiding certain areas. Such algorithmic bias can also be present in machine learning algorithms used for autonomous vehicles, where large volumes of data over time frames of minutes to days are analysed.

Even without using machine learning, it is not too difficult to think of devices that can exhibit biased behaviour: consider a sensor that is biased in the information it captures, intentionally or unintentionally, or a robot that greets certain types of people. Such a robot might be programmed to randomly choose whom it greets, but it may happen to appear to greet only certain individuals, and so be perceived as biased.

2.5.1 Racist Algorithms

While the algorithms or their developers might not be intentionally racist, as machine learning thrives on the data it is trained on, bias can be introduced, even unintentionally. Hence, an algorithm may not be built to be intentionally racist, but the failure of a face recognition algorithm on those with darker skin colour (for example, see https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms/discussion?curator=TechREDEF) could raise concerns and cause a category of people to feel discriminated against. A device that has been trained to work in a certain context might not work as expected in a different context, a type of transfer bias; a simple example is a smart door trained to open based on recognizing fair-skinned persons, which might not open for dark-skinned persons.
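One simple check a developer or auditor could run for this kind of disparity is to compare error rates across user groups; the sketch below uses fabricated records to compare the false-negative rate of a hypothetical "door should open" classifier for two groups.

```python
# Sketch: a simple disparity check -- compare a model's false-negative rate
# across user groups. All records below are fabricated for illustration.

records = [
    # (group, true_label, predicted_label)   1 = "door should open for this person"
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

def false_negative_rate(rows):
    positives = [r for r in rows if r[1] == 1]        # cases where the door should open
    misses = [r for r in positives if r[2] == 0]      # ...but the classifier said no
    return len(misses) / len(positives) if positives else 0.0

for group in ("group_a", "group_b"):
    rows = [r for r in records if r[0] == group]
    print(group, "false-negative rate =", round(false_negative_rate(rows), 2))
# group_a 0.33 vs group_b 0.67 -- a gap like this would warrant examining the
# training data and the operating context before deployment.
```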

How IoT devices interface with humans could be biased by design, even if not intentionally so, but simply due to inadequate consideration.

As reported in the Technology Review article on bias in natural language processing systems (https://www.technologyreview.com/s/608619/ai-programs-are-learning-to-exclude-some-african-american-voices/), because machine learning is used to learn how to recognise speech, there are issues for minority population groups due to a lack of training examples for the machine learning algorithms: “If there aren’t enough examples of a particular accent or vernacular, then these systems may simply fail to understand you.” The original intention and motive of developers could be considered when judging algorithmic bias, and care is needed to determine if and when bias does arise, even if not originally intended, especially with machine learning on data.

2.5.2 Other Algorithmic Bias

We have looked at how algorithms might appear to be racially biased in their inferences, but there could be other biases in general: for example, in politics, where an algorithm favours a given political party more than others, or in business, where a particular brand of goods is favoured over others. And suppose an algorithm used to recommend news articles or products does so in a systematically biased manner; it could then influence your voting or buying behaviour. An algorithm that provides possibly biased recommendations or news is an issue that has put Facebook in the news, when it was said to be “deliberately suppressing conservative news from surfacing in its Trending Topics” (https://www.wired.com/2016/05/course-facebook-biased-thats-tech-works-today/). To provide greater transparency, Facebook also began to describe how it recommends and filters news in the Trending Topics section (http://fortune.com/2016/05/12/facebook-and-the-news/), perhaps to be more open to the public. Other concerns are about how Twitter provides algorithmically filtered news feeds to users (http://fortune.com/2016/02/08/twitter-algorithm/).

But what if the agenda is a “good” one, e.g., algorithms being informed by a utilitarian mandate? This raises the ethical question of whether software should be programmed to always benefit as many people as possible, even at the cost of a few; consider a hypothetical “smart” water rationing system in homes, where water is conserved for all at the sacrifice of some urgent uses. Also, taking a broader sustainability view, IoT systems can help cities move towards smarter, more energy-efficient homes, smarter waste management and smarter energy grids, helping to achieve sustainable development goals (for example, see https://deepblue.lib.umich.edu/bitstream/handle/2027.42/136581/Zhang_TheApplicationOfTheInternetOfThingsToEnhanceUrbanSustainability.pdf?sequence=1&isAllowed=y and http://www3.weforum.org/docs/IoTGuidelinesforSustainability.pdf). Another example is IoT-based infrastructure monitoring helping to reduce urban flooding (https://www.weforum.org/agenda/2018/01/effect-technology-sustainability-sdgs-internet-things-iot/). However, how, in general, automated IoT systems should balance priorities within an overall sustainability agenda, without bias towards or against any community groups, can be a consideration from the system design phase.

Moreover, there could be an issue with human values and bias being essentially incorporated into algorithms or into the design of IoT devices. So-called value-laden algorithms are defined by [46] as follows: “An algorithm comprises an essential value-judgment if and only if, everything else being equal, software designers who accept different value-judgments would have a rational reason to design the algorithm differently (or choose different algorithms for solving the same problem).” An example discussed is that of medical image algorithms. It is noted that “medical image algorithms should be designed such that they are more likely to produce false positive rather than false negative results.” However, the increased number of false positives might lead to too many unnecessary operations, and could cause alarm for the many who are then suspected of, or diagnosed with, diseases that they do not have, because the algorithm conservatively highlights what may not be there. In other words, to be conservative and avoid missing a diagnosis, a developer could have made the algorithm more pessimistic so that nothing is overlooked. Or consider an Internet camera to detect intruders which gives too many false positives, in trying to be “overly protective”.
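The value judgment described here can be made concrete by looking at how a decision threshold trades false negatives against false positives; the scores and labels below are fabricated purely for illustration.

```python
# Toy illustration of the value judgment hidden in a detection threshold.
# Scores and labels are fabricated; 1 = condition present, 0 = absent.

scores = [0.05, 0.20, 0.35, 0.40, 0.55, 0.60, 0.70, 0.85, 0.90, 0.95]
labels = [0,    0,    0,    1,    0,    1,    1,    0,    1,    1]

def confusion(threshold):
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

for t in (0.3, 0.5, 0.8):
    fp, fn = confusion(t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
# Lowering the threshold (the "conservative" design) trades missed diagnoses (FN)
# for unnecessary follow-ups (FP) -- a value-laden choice made by the designer.
```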

To be fair, algorithmic bias can arise from the developers’ own values, from the data used to train algorithms, or simply from cases not considered during design, and perhaps not from malicious or intentionally biased agendas. However, an issue is how to distinguish between intentional (and malicious) and unintentional algorithmic bias.

2.6 Issues with Cooperative IoT

When IoT devices cooperate, a number of issues arise. For example, with autonomous vehicles, it is not only vehicle-to-vehicle cooperation that matters: an autonomous vehicle could share roads with pedestrians, cyclists, and other human-driven vehicles, and would need to reason about social situations, perform social signalling with people via messaging or physical signs, and work within rules and norms for the road, which could prove to be a difficult problem (see https://spectrum.ieee.org/transportation/self-driving/the-big-problem-with-selfdriving-cars-is-people and http://urban-online.org/en/human-factors-in-traffic/index.html).

Protection from false messages, and from groups of vehicles that cooperate maliciously, are also concerns looking forward. How will a vehicle know if a message to make way is authentic? What if vehicles take turns to dominate parking spaces, or gang up to misinform non-gang vehicles about where available parking might be? Or what if vehicles of a given car manufacturer collude to disadvantage other brands of cars?

A similar issue arises with other IoT devices, which must discern the truthfulness of messages they receive and which, when cooperating and exchanging data, would need to follow policies for data exchange. Denial-of-service attacks, where a device receives too many spurious messages, must be guarded against, and IoT devices should not spam other devices. The issues of trust with a large number of inter-connected devices have been explored, with a proposed trust model, in [54].
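As a minimal illustration of one such guard, the sketch below implements a per-sender sliding-window rate limiter that a device might apply before acting on (or relaying) cooperation messages; the limits are illustrative, and a real deployment would combine this with message authentication.

```python
# Minimal sketch: a per-sender rate limiter so a device does not act on
# (or relay) floods of spurious cooperation messages. Limits are illustrative.
import time
from collections import defaultdict, deque

class MessageGuard:
    def __init__(self, max_msgs=5, window_s=10.0):
        self.max_msgs = max_msgs            # messages allowed per sender...
        self.window_s = window_s            # ...within this sliding window (seconds)
        self.history = defaultdict(deque)   # sender id -> recent timestamps

    def accept(self, sender_id, now=None):
        now = time.time() if now is None else now
        q = self.history[sender_id]
        while q and now - q[0] > self.window_s:
            q.popleft()                     # drop timestamps outside the window
        if len(q) >= self.max_msgs:
            return False                    # sender exceeded its budget: drop message
        q.append(now)
        return True

guard = MessageGuard(max_msgs=3, window_s=10.0)
for i in range(5):
    print(guard.accept("vehicle-42", now=float(i)))
# True, True, True, False, False -- the fourth and fifth messages are dropped.
```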

With cooperation, considerations of what data should be shared and how data are shared among cooperative IoT devices will be important. For example, if a group of vehicles shares routing information in order to cooperate on routing to reduce congestion, as in [23], there is a need to ensure that such information is not stored or used beyond its original purpose.

2.7 User Choice and Freedom

Apart from transparency of operations, it is also important that systems allow adequate user choice: freedom of action is an important property of systems that are respectful of the autonomy of users, or that at least ensure a user’s direction is based on the user’s own “active assessment of factual information” [59] (see also http://www.ethics.org.au/on-ethics/blog/october-2016/ethics-explainer-autonomy). For example, a device can be programmed to collect and manage data automatically (e.g., once a photo is taken by a device, it can be automatically shared with a number of parties and stored), but people would like to be informed about what data is collected and how data is used. Informing might not be adequate: a system could automatically inform a user that all photos on a smartphone will be copied to the cloud and categorised in a default manner on the phone, but the user might want control over which categories to use and which photos should be copied to the cloud.

Another example is a smartphone that is programmed to only show the user certain WiFi networks, restricting user choice, or a smartphone that filters out certain recommendations from applications - which can happen without the user’s knowledge. In general, people would like to maintain choices and freedoms in the presence of automation - this is also discussed in the context of automated vehicles later.

In relation to location-based services, or more generally, context-aware mobile services, control and trust are concerns [1]. Someone might willingly give away location or contextual information in order to use particular services (an outcome of a privacy-utility trade-off), assuming s/he trusts the service; the user still retains the choice of opting in, or not, and opting out anytime during the use of the service. Tracking a child for safety can be viewed as somewhat intruding on his/her privacy, but might be insisted on by the parent. As mentioned in [1], in general, a wide range of considerations is required to judge if such context-aware services are ethical or not ethical, including rules and norms, benefits and harms, concerns of people, governing bodies, and cultural values.

3 Towards a Multi-Pronged Approach

How one can build IoT devices that will behave ethically is still a current area of research. This section reviews a range of ideas which have been proposed and applied to ameliorate the situation, including how to program ethical behaviour in devices, algorithmic transparency for accountability, algorithmic social contracts and crowdsourcing ethics solutions, enveloping IoT systems, and devising code of ethics for developers. Then, it is argued that, as each idea has its own merits and usefulness towards addressing ethical concerns, a multi-pronged approach can be useful.

3.1 Programming Ethical Behaviour

We review a range of techniques which have been explored for programming ethical behaviour: rule-based programming and learning, game-theoretic calculations, ethical settings and ethical-by-design.

3.1.1 Rule-Based Ethics

If we want robots or an algorithm to be ethical and transparent, perhaps one way is to encode the required behaviour explicitly in rules or to create algorithms to allow devices to calculate the ethical actions. Rules have the advantage that they can be human understandable, and they can represent a range of ethical behaviours required of the machine. Foundational ideas of ethics such as consequentialism and deontology often underpin the approaches taken. The general idea is that a device whose behaviour abides by these rules is then considered ethical.

The work in [7] describes a vision of robots and an example of coding rules of behaviour for robots. EthEL [6] is an example of a robot that provides reminders about medication. There are issues of when to notify the (human) carer/overseer when the patient does not take medication. It would be good for the patient to be respected and to have a degree of autonomy to delay or not take medication, but an issue arises when the medication, if not taken, leads to a life-threatening situation: the issue is when the robot should persist in reminding and inform the overseer, and when it should not, respecting the autonomy of the patient.

A machine learning algorithm based on inductive logic was used to learn a general ethical rule about what to do based on particular training cases given to the algorithm: “a health care robot should challenge a patient’s decision— violating the patient’s autonomy—whenever doing otherwise would fail to prevent harm or severely violate the duty of promoting patient welfare.” In 2008, this was believed to be the first robot governed by an ethical principle, where the principle was learned.
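A learned or hand-written principle of this kind can be rendered as an explicit, human-readable rule; the sketch below, with a hypothetical harm scale, encodes the gist of the principle quoted above for a medication-reminder device.

```python
# Minimal sketch of the kind of rule a medication-reminder robot might apply:
# challenge the patient's refusal (and notify the overseer) only when doing
# otherwise would fail to prevent serious harm. The harm scale is hypothetical.

def decide(harm_if_skipped, patient_refused):
    """
    harm_if_skipped: 0 = negligible, 1 = moderate, 2 = life-threatening
    Returns the robot's chosen action.
    """
    if not patient_refused:
        return "remind"
    if harm_if_skipped >= 2:
        # Preventing serious harm outweighs respecting the refusal.
        return "persist and notify overseer"
    # Respect the patient's autonomy for low-stakes medication.
    return "accept refusal"

print(decide(harm_if_skipped=0, patient_refused=True))   # accept refusal
print(decide(harm_if_skipped=2, patient_refused=True))   # persist and notify overseer
```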

The work in [2] proposes a framework for building ethical agents and the use of norms to characterise interactions among autonomous agents, with three types of norms, namely commitments, authorizations and prohibitions. Such norms can be used by agents needing to behave ethically. Such multiagent modelling maps well to decentralized IoT systems allowing the placing of decentralised intelligence and algorithmic behaviour within the IoT [88].

Ethical questions can be a lot more complex, in general - it would be hard to encode in rules every conceivable situation where a robot should persist with its reminders, even when the patient rejects it. It remains an open research area as to what extent such rules can be coded up by a programmer, or learnt (via some machine learning algorithm), for machines in a diverse range of situations in other applications.

Another example is the work of [35] which outlines programming ethical behaviour for autonomous vehicles by mapping ethical considerations into costs and constraints used in automated control algorithms. Deontological rules are viewed as constraints in an optimal control problem of minimising costs, for example, in the case of deciding actions to minimise damages in an incident.
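The flavour of this formulation can be sketched with toy numbers: deontological rules become hard constraints that rule candidate manoeuvres out, and the remaining consequentialist considerations become a cost to minimise. The manoeuvre names, costs and forbidden-action flags below are invented.

```python
# Toy sketch: deontological rules as hard constraints, other ethical
# considerations as costs, when choosing among candidate manoeuvres.

candidates = [
    # (name, expected_damage_cost, violates_hard_rule e.g. "never mount the sidewalk")
    ("brake_straight", 8.0, False),
    ("swerve_left",    3.0, True),   # cheapest, but would endanger a bystander on the sidewalk
    ("swerve_right",   5.0, False),
]

def choose(manoeuvres):
    # Hard (deontological) constraints first: discard forbidden actions outright.
    feasible = [m for m in manoeuvres if not m[2]]
    # Among what remains, minimise expected damage (a consequentialist cost).
    return min(feasible, key=lambda m: m[1])

print(choose(candidates))  # ('swerve_right', 5.0, False)
```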

From the above examples, the general overarching rule is that saving human life takes priority over conforming to traffic laws and over following a person’s (perhaps under-informed) decision. However, it is generally difficult to ensure that a vehicle would abide by these rules, and generally difficult for automated vehicles to assess situations accurately enough to know which rule applies. Also, the software would need to be tested to show that it follows such principles, or testing would have to be done by a certification authority, though this would require tremendous resources.

In [83], a formal model of safe and scalable self-driving cars is presented, in which a set of behavioural rules is specified that cars could follow to help ensure safety. A rule-based approach could work for specific IoT applications where the rules are identifiable and can be easily encoded.

However, in general, a difficulty is how one could comprehensively determine what the rules are for specific applications, apart from relying on expert opinion. This raises the question of who decides what is ethical and what is not, and whether users could trust the developers who engineered the IoT systems on what constitutes ethical behaviour. Apart from experts encoding rules, an alternative approach proposed by MIT researchers is to crowdsource human perspectives on moral decisions, as experimented with in the Moral Machine for autonomous vehicles (http://moralmachine.mit.edu/), with interesting results, including cross-cultural ethical variation [10].

System architectures for building machines capable of moral reasoning remain a research area [101, 8] (see also https://www.nature.com/news/machine-ethics-the-robot-s-dilemma-1.17881#auth-1). Recent work has proposed rule-based models to encode ethical behaviour that can be computationally verified [22]; in contrast to verification approaches, an internal simulation of actions by a machine in order to predict consequences is proposed in [99].

3.1.2 Game-Theoretic Calculation of Ethics

Game-theoretic approaches have also been proposed for autonomous vehicles to calculate ethical decisions, e.g., using Rawlsian principles in contrast to utilitarian approaches [48]. The idea is to determine the best outcome given the behaviour of the parties involved. A difficulty is deciding whether a Rawlsian or a utilitarian calculation should be employed, or even other schemes: the Rawlsian approach aims to maximise utility for the worst case (a maximin approach), while the utilitarian approach aims to maximise total utility. It is also difficult to assign appropriate numerical values to the utilities of actions (e.g., why would hitting a pedestrian have a value of -1 while injuring a pedestrian is given a value of -0.5?).
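The difference between the two calculations can be made concrete with a toy payoff table; the actions and utility values below are invented, and, as noted in the text, choosing such numbers is itself part of the difficulty.

```python
# Toy comparison of a utilitarian rule (maximise total utility) against a
# Rawlsian maximin rule (maximise the worst-off party's utility).
# Actions and utility numbers are invented for illustration only.

# For each candidate action: utility to each affected party (negative = harm).
outcomes = {
    "swerve": {"passenger": -0.1, "pedestrian": -0.1, "cyclist": -0.9},
    "brake":  {"passenger": -0.4, "pedestrian": -0.4, "cyclist": -0.4},
}

def utilitarian(actions):
    # Pick the action with the greatest total utility across all parties.
    return max(actions, key=lambda a: sum(actions[a].values()))

def rawlsian_maximin(actions):
    # Pick the action whose worst-off party fares best.
    return max(actions, key=lambda a: min(actions[a].values()))

print(utilitarian(outcomes))       # 'swerve' (total -1.1 beats -1.2)
print(rawlsian_maximin(outcomes))  # 'brake'  (worst-off -0.4 beats -0.9)
```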

3.1.3 Ethics Settings

Another category of work focuses on getting user input in ‘programming’ the ethical behaviour of devices, in particular for autonomous cars. The notion of ethics settings or the “ethical knob” was proposed by [19], to allow passengers of autonomous vehicles to make choices about ethical dilemmas, rather than have the reasoning hard-coded by manufacturers. For a vehicle needing to prioritise between the safety of its passengers and that of pedestrians in road situations, there are three modes, namely altruistic, egoistic and impartial, corresponding to preferring the safety of the pedestrian, the safety of the passenger, and the safety of both, and the passenger can choose a setting somewhere in between the three. The idea of ethics settings is advocated in [36], which also addresses the question of what settings people should use: each person choosing the selfish ethics setting might make society worse off overall, while everyone choosing the setting that minimises harm, even if altruistic, would make society better off, if this could be mandated.
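A minimal sketch of such a knob, assuming risk estimates for each manoeuvre are available (the numbers below are illustrative): a single parameter interpolates between egoistic, impartial and altruistic weightings of passenger and pedestrian risk.

```python
# Sketch of an "ethical knob": a passenger-chosen weight between protecting
# the passenger and protecting pedestrians. Risk numbers are illustrative.

# knob = 0.0 -> fully egoistic (only passenger risk counts)
# knob = 0.5 -> impartial
# knob = 1.0 -> fully altruistic (only pedestrian risk counts)

def weighted_risk(passenger_risk, pedestrian_risk, knob):
    return (1 - knob) * passenger_risk + knob * pedestrian_risk

def choose_manoeuvre(manoeuvres, knob):
    """manoeuvres: name -> (passenger_risk, pedestrian_risk); lower risk is better."""
    return min(manoeuvres, key=lambda m: weighted_risk(*manoeuvres[m], knob))

manoeuvres = {"stay_in_lane": (0.7, 0.1), "swerve": (0.2, 0.6)}
print(choose_manoeuvre(manoeuvres, knob=0.0))  # 'swerve'       (egoistic setting)
print(choose_manoeuvre(manoeuvres, knob=1.0))  # 'stay_in_lane' (altruistic setting)
```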

3.1.4 Ethical by Design

In [12], the approach is to allow designers of IoT systems to configure device policies via a set of available policy templates, which reduces the complexity of the software engineering of IoT systems where multiple policies are relevant, e.g., a policy on the storage of data, a policy on how data can be shared, or a policy on ethical actions. A set of policies can be chosen by the user, or by the developer (in view of the user), that is tailored to the user’s capabilities and context. A framework for dynamic IoT policy management has been given in [85].
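The template idea can be sketched as follows, with hypothetical template names and policy fields: the designer (or user) starts from a vetted template and applies limited overrides, rather than assembling every policy decision from scratch.

```python
# Sketch: declarative policy templates a designer (or user) selects from,
# in the spirit of template-based configuration. Names and fields are hypothetical.

POLICY_TEMPLATES = {
    "strict_local": {
        "store_raw_data": "on_device_only",
        "share_with_third_parties": False,
        "retention_days": 7,
        "require_explicit_consent": True,
    },
    "cloud_analytics": {
        "store_raw_data": "encrypted_cloud",
        "share_with_third_parties": False,
        "retention_days": 90,
        "require_explicit_consent": True,
    },
}

def configure_device(template_name, overrides=None):
    """Start from a vetted template and apply (limited) user overrides."""
    policy = dict(POLICY_TEMPLATES[template_name])
    policy.update(overrides or {})
    return policy

print(configure_device("strict_local", {"retention_days": 3}))
```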

While this review does not focus on the challenges of IoT data privacy specifically, the review in [79] noted that addressing the IoT data privacy challenge involves designing and building data access control and sharing mechanisms into IoT devices, e.g., building authentication and key establishment mechanisms into IoT devices, computing on the edge to address privacy concerns, mechanisms to mask shared personal data, tools to support user access rules for data handling, and tools for digital forgetting and data summarization. In summary, one can reduce user privacy leakage and the risk of IoT devices mishandling data via a combination of these mechanisms.

Where such mechanisms exist, and as more of them are developed, then according to [12], ethically designed IoT products (including devices and applications) are those “designed and deployed to empower users in controlling and protecting their personal data and any other information.” The idea of the “ethical knob” also seeks to put more control into the hands of users, beyond data handling. Hence, programming in ethical behaviour is not only about programming IoT devices that take action based on ethical considerations, but also about providing users with appropriate control over device behaviour, even where the device has delegated authority to act autonomously.

3.2 Enveloping IoT Systems

The concept of “enveloping” was introduced in [33] in regard to providing boundaries within which today’s AI systems can work effectively. A distinction is made between the complexity of a task, relating to how much computational resource it requires, and the difficulty of the task, relating to the physical manipulation skills it requires, e.g., the gross or fine motor skills (robotic or human) needed to wash dishes by hand, paint with a brush, tie shoe-laces, type, use a tool, run up stairs, play an instrument, or help a person with a disability walk or get up. Examples of envelopes for devices, taken from the paper, include, for industrial robotics, “the three-dimensional space that defines the boundaries within which a robot can work successfully is defined as the robot’s envelope”, the waterproof box of a dishwasher, and Amazon’s robotic shelves and warehouses for its warehouse robots. It is noted that “driverless cars will become a commodity the day we can successfully envelop the environment around them.” A computer chess program can be very successful within the constraints of the rules of chess. Indeed, the idea of dedicated lanes or areas for automated vehicles can be viewed as a type of envelope for such vehicles. Hence, enveloping is a powerful idea for successful AI systems.

While it might not always be possible to envelop IoT systems, consider a generalised view of enveloping that is not just physical but cyber-physical, comprising the situation spaces (physical boundaries and cyber boundaries) in which a device functions. Such enveloping can help address ethical issues in several ways: it reduces the complexity of the environment in which IoT devices or robots operate, reducing the chance of unintended situations; it allows comprehensive rules to be devised for a more constrained operating environment; it helps manage human expectations (e.g., humans generally get out of the way of trains, trams and vehicles on the road); and it enables a clear definition of the context of operation - for example, algorithmic bias is not unexpected if the context of an algorithm’s development (such as the training dataset used) is known, and the Internet environment or “cyber-envelope” in which the device operates, including where data is stored and shared, is explicitly co-defined by IoT device manufacturers and users. As another example, a pill-taking reminder system works within its known envelope, so that unexpected behaviours can be anticipated when it operates beyond that envelope. However, enveloping can prove restrictive in the IoT, and successful enveloping as a way to deal with ethical IoT issues is still to be proven.
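
A minimal sketch of such a generalised envelope is given below: a device checks both a physical boundary (where it may act) and a cyber boundary (where its data may go) before taking an action. The class, field names and endpoints are hypothetical illustrations of the idea, not a design from [33].

```python
from dataclasses import dataclass

@dataclass
class Envelope:
    """A simple cyber-physical envelope: where a device may act and where its data may go.
    Field names are illustrative; a real envelope would be richer (time, users, situations)."""
    min_x: float
    max_x: float
    min_y: float
    max_y: float
    allowed_endpoints: frozenset

    def permits_move(self, x: float, y: float) -> bool:
        # Physical boundary check.
        return self.min_x <= x <= self.max_x and self.min_y <= y <= self.max_y

    def permits_upload(self, endpoint: str) -> bool:
        # Cyber boundary check: only pre-agreed data destinations are allowed.
        return endpoint in self.allowed_endpoints

warehouse_robot = Envelope(0.0, 50.0, 0.0, 30.0,
                           frozenset({"https://vendor.example/telemetry"}))

print(warehouse_robot.permits_move(12.0, 8.5))                        # True: inside the envelope
print(warehouse_robot.permits_move(80.0, 8.5))                        # False: refuse or escalate
print(warehouse_robot.permits_upload("https://unknown.example/api"))  # False: not a permitted destination
```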

3.3 Whitebox Algorithms

As noted earlier, algorithms might be used to make decisions that affect people in significant ways, from criminal cases and whether someone should be released from prison, to whether someone is diagnosed with a particular disease. Also, certain groups of people may feel unfairly treated if an algorithm does not work as well for them as it does for others, on account of their skin colour or accent.

How can one deal with algorithmic bias? Two areas of research to address this problem are noted: algorithmic transparency and detecting algorithmic bias.

3.3.1 Transparency

There are at least two aspects of transparency for IoT devices - the data traffic going into and out of such devices, and the inner workings of such devices.

For example, the TLS-RaR approach [104] allows device owners (or consumer watchdogs and researchers) to audit the traffic of their IoT devices. Affordable in-home devices called Auditors can be added to the local network to observe network data for IoT devices using Transport Layer Security (TLS) connections. However, some devices might use steganography to hide data in traffic, or users might still miss some data sent out by a malicious device.
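
As a rough illustration of what an in-home auditor might report (this is only a summarisation sketch, not the TLS-RaR mechanism itself; the devices, hosts and byte counts are hypothetical):

```python
from collections import defaultdict

# Hypothetical flow records an in-home auditor might collect:
# (device, destination_host, bytes_out).
flows = [
    ("thermostat", "api.vendor.example", 1200),
    ("thermostat", "api.vendor.example", 900),
    ("doorbell",   "cdn.vendor.example", 48000),
    ("doorbell",   "unknown-host.example", 5500),
]

# Destinations the owner expects each device to talk to.
expected = {"thermostat": {"api.vendor.example"},
            "doorbell":   {"cdn.vendor.example"}}

totals = defaultdict(int)
for device, host, n in flows:
    totals[(device, host)] += n
    if host not in expected.get(device, set()):
        print(f"ALERT: {device} sent {n} bytes to unexpected host {host}")

for (device, host), n in sorted(totals.items()):
    print(f"{device} -> {host}: {n} bytes")
```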

Apart from monitoring the traffic of IoT devices, many argue that algorithms that make important decisions should be a “white box” rather than a “black box”, so that people can scrutinise and understand how the algorithms make decisions, and judge the algorithms that judge us. This is also a current emphasis of the explainable Artificial Intelligence (AI) idea.505050https://en.wikipedia.org/wiki/Explainable_Artificial_Intelligence This can become an increasingly important feature for IoT devices that take action autonomously - users need to know why, for example, heating has recently been reduced in a smart home, having perhaps forgotten that a target energy expenditure was set earlier in the month.
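
One lightweight way to support this is for the controller to record a human-readable reason alongside every autonomous action, so the user can later ask “why?”. The sketch below is a hypothetical illustration; the device names, rule and budget figures are invented.

```python
import datetime

action_log = []

def act(device, action, reason):
    """Perform (or schedule) an action and record a human-readable explanation."""
    action_log.append({
        "time": datetime.datetime.now().isoformat(timespec="seconds"),
        "device": device,
        "action": action,
        "reason": reason,
    })

# Hypothetical rule: stay within the monthly energy budget the user set earlier.
monthly_budget_kwh, used_kwh = 300, 287
if used_kwh > 0.9 * monthly_budget_kwh:
    act("heating", "reduce_setpoint_by_1C",
        f"projected to exceed the {monthly_budget_kwh} kWh budget set on the 1st "
        f"({used_kwh} kWh already used)")

def explain(device):
    return [e for e in action_log if e["device"] == device]

print(explain("heating"))  # the user can ask why the heating was reduced
```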

For widely used systems and devices where many people could be affected, transparency enables social accountability. IoT devices in public spaces, deployed by a town council, should work according to public expectations. IoT public street lights that systematically light up only the road segments in front of particular shops, and not others, can be seen to be biased - or, at the very least, to be in error - and should be subject to scrutiny.

Consider IoT devices whose purpose is to provide information to people, or devices that filter and provide information to other devices; transparency in such devices enables people to understand (at least in part) why certain information is shown to them, or to understand the devices’ behaviour. For example, Facebook has been rather open about how its newsfeed algorithm works.515151https://blog.bufferapp.com/facebook-news-feed-algorithm By being open about how the algorithm works, Facebook provides, to an extent, a form of social accountability.

Another way an algorithm could “expose” its workings is to output logs of its behaviour over time. For example, in the case of autonomous vehicles, if an accident happens between a human-driven car and an autonomous vehicle, one should be able to inspect logs to trace back what happened and decide whether the company or the human driver should be held accountable. This is similar to flight recorders in commercial airplanes. As another example of auditing, the Ditio [61] system audits sensor activities, logging activities that can later be inspected by an auditor and checked for compliance with policies. An example is given where the camera on a Nexus 5 smartphone is audited to ensure that it has not been used for recording during a meeting.
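
A simple way to make such logs more trustworthy is to hash-chain the entries, so that later tampering with any recorded action becomes detectable. The sketch below is a simplified illustration of that idea, not the Ditio design from [61].

```python
import hashlib, json, time

class AuditLog:
    """Append-only action log with hash chaining so later tampering is detectable."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64

    def record(self, event: dict):
        entry = {"ts": time.time(), "event": event, "prev": self._prev_hash}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "event", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record({"device": "camera", "action": "recording_started"})
log.record({"device": "vehicle", "action": "emergency_brake", "speed_kmh": 42})
print(log.verify())  # True; altering any recorded field would make this False
```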

However, there are concerns with logging and whitebox views of algorithms. For example, intellectual property might be a concern when the workings of an algorithm are made transparent, or when data used to train a machine learning algorithm is exposed, so care must be taken in how algorithms are made transparent. Another issue is that, with neural network learning algorithms, the rules learnt for classification and decision-making are often not explicitly represented (they are simply encoded in the parameters of the network). Also, what type of data or behaviour should be logged, and how such logs can be managed, remain open and application-dependent issues.

The whitebox algorithm approach can be employed to expose algorithmic bias when present, or to allow human judgement of algorithmic decisions, but the workings of a complex algorithm are not easily legible or understandable in every situation.

Algorithms and systems may need to be transparent by design - a software engineering challenge. The paper on “Algorithmic Accountability” by the World Wide Web Foundation525252http://webfoundation.org/docs/2017/07/Algorithms_Report_WF.pdf calls for explainability and auditability of software systems, in particular those based on machine learning, to encourage algorithmic accountability and fairness. The Association for Computing Machinery (ACM), a major computing association based in the USA, calls for algorithmic transparency and accountability in a statement.535353https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf

Getting algorithms or systems to explain their own actions and audit their own execution has become an active research area, as suggested by the workshop on Data and Algorithmic Transparency.545454http://datworkshop.org/#tab_program

Also, the use of open source software has been argued to be an approach to achieving transparency, e.g., of AI algorithms.555555https://www.linuxjournal.com/content/what-does-ethical-ai-mean-open-source However, commercial interests might hinder free and open source software.

In summary, for transparency and accountability, as noted in [87], IoT systems can, from a technical point of view, provide control - allowing users to control what happens - and auditing - enabling what has happened, or why something is happening, to be recorded. IoT systems also need to allow users to understand (and perhaps control) what data they collect and what they do with that data, to allow users to understand (and perhaps configure) their motivations,565656http://iot.stanford.edu/retreat15/sitp15-transparency-no.pdf and to see (and perhaps change), in a non-technical way, how they work.

3.3.2 Detecting Algorithmic Bias

People might stumble upon such bias when using some devices, but it can often be much more subtle (e.g., in a news feed, we may not notice what we are not shown). Researchers have looked at how to detect algorithmic bias using systematic testing based on statistical methods, such as the technique called transparent model distillation [94], which we do not examine in depth here.
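
In the spirit of transparent model distillation, one can fit an interpretable surrogate to the black-box model’s outputs and a similar interpretable model to the true outcomes, then compare how each relies on a protected attribute. The sketch below uses synthetic data and standard scikit-learn models; it is an illustrative probe, not the exact method of [94].

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Synthetic data: two legitimate features plus a protected attribute (column 2).
X = rng.normal(size=(2000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

blackbox = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Surrogate trained to mimic the black box vs. a similar model trained on true labels.
mimic   = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, blackbox.predict(X))
outcome = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

print(export_text(mimic,   feature_names=["f0", "f1", "protected"]))
print(export_text(outcome, feature_names=["f0", "f1", "protected"]))
# If the mimic tree splits heavily on 'protected' while the outcome tree does not,
# that is a signal worth investigating for algorithmic bias.
```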

3.4 Blackbox Validation of Algorithmic Behaviour

There could be situations where whiteboxing algorithms is not possible due to commercial reasons, and generating explanations from certain (e.g., deep learning) algorithms is still a challenge. It is well articulated in [74] that “the study of machine behaviour will often require experimental intervention to study human–machine interactions in real-world settings”.

Software testing is well studied and practised. Experimental evaluation of algorithmic behaviour to verify certain properties or capabilities might be employed, though testing device behaviour in all circumstances and environments can be challenging, especially given the complexity of a device, its connections to other devices, and the flow-on consequences of its actions in the physical world. A notion of a Turing test has been proposed for autonomous vehicles.575757For example, see https://news.itu.int/a-driving-license-for-autonomous-vehicles-towards-a-turing-test-for-ai-on-our-roads/

Where the range of possible situations and interactions with the environment is complex, simulation-based testing and validation can be an economical complement to real-world testing, as noted in [47]. Also, software updates are expected for IoT devices, so validation might need to be redone, changes localised to particular modules, and the impact of changes on other modules assessed - the work in [27, 26] notes that testing of autonomous vehicles and autonomous systems requires what the authors call cognitive testing.
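
A minimal, regression-style sketch of this idea is shown below: a controller under test is run through a suite of simulated scenarios and each result is checked against a safety expectation, with the suite re-run after every software update. The controller, scenarios and thresholds are hypothetical placeholders.

```python
def controller(distance_m, speed_mps):
    """Toy braking policy under test: brake hard if time-to-collision is short."""
    ttc = distance_m / max(speed_mps, 0.1)
    return "brake" if ttc < 2.0 else "cruise"

SCENARIOS = [
    {"name": "pedestrian_near", "distance_m": 5.0,  "speed_mps": 10.0, "must_brake": True},
    {"name": "clear_road",      "distance_m": 80.0, "speed_mps": 15.0, "must_brake": False},
    {"name": "slow_approach",   "distance_m": 6.0,  "speed_mps": 2.0,  "must_brake": False},
]

def validate(controller, scenarios):
    failures = []
    for s in scenarios:
        action = controller(s["distance_m"], s["speed_mps"])
        if s["must_brake"] and action != "brake":
            failures.append(s["name"])
        if not s["must_brake"] and action == "brake":
            failures.append(s["name"])
    return failures

print(validate(controller, SCENARIOS))  # [] means every scenario met its safety expectation
```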

The design of the human-device interface is also a consideration: if users are to exercise choice and freedom, they need to understand the functions of the device and how to interact with it. The interface should not be so complex that users lose comprehension, yet it should make adequate choices and options available - a challenging task for a complex device. For automated vehicle human-machine interfaces (HMIs), for example, heuristic evaluation is one approach [66], where a set of criteria is used to judge the HMI.

Validation requires criteria to validate against - a safety standards approach for fully autonomous vehicles has been proposed in [45]. Similar standards of algorithmic behaviour might be devised for other types of IoT devices, e.g., for delivery robots on walkways, or robots in aged care homes.

3.5 Algorithmic Social Contracts

Going beyond the simple white box concept for algorithms, the work in [75] proposed a conceptual framework for regulating AI and algorithmic systems.

The idea is to create social contracts between stakeholders that can hold an algorithmic system (and its developers) accountable and that allow voting and negotiation on trade-offs (e.g., should a feature that increases pedestrian safety in autonomous vehicles but decreases passenger safety be implemented? Should a feature of a system that decreases data privacy but increases public safety be deployed?). The aim is ‘to build institutions and tools that put the society in-the-loop of algorithmic systems, and allows us to program, debug, and monitor the algorithmic social contract between humans and governance algorithms.’

What is proposed is for tools to be developed that can take technical aspects of algorithms and present them to the general public, so that the public can be engaged in influencing the effect and behaviour of the algorithms - effectively crowdsourcing ethics, an approach used elsewhere [49]. The general approach of combining machine-learned representations and human perspectives has also been called lensing.585858https://www.media.mit.edu/videos/2017-05-18-karthik-dinakar/

Tim O’Reilly’s chapter “Open Data and Algorithmic Regulation” in the book Beyond Transparency: Open Data and the Future of Civic Innovation proposes the idea of algorithmic regulation,595959http://beyondtransparency.org/chapters/part-5/open-data-and-algorithmic-regulation/ where algorithmic regulation is successful when there are: “(1) a deep understanding of the desired outcome, (2) real-time measurement to determine if that outcome is being achieved, (3) algorithms (i.e. a set of rules) that make adjustments based on new data, and (4) periodic, deeper analysis of whether the algorithms themselves are correct and performing as expected.” The actual processes to achieve this remain an unresolved socio-technical challenge, and are an area of research in themselves.
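
These four conditions map naturally onto a feedback loop: a target outcome, real-time measurement, rule-based adjustment, and periodic deeper review. The sketch below illustrates that loop for a hypothetical smart junction; the measurement function, thresholds and values are all invented for illustration.

```python
import random

TARGET_MAX_WAIT_MIN = 4.0           # (1) desired outcome: average wait at a smart junction

def measure():                       # (2) real-time measurement (stubbed with random data)
    return random.uniform(2.0, 7.0)

green_seconds = 30
history = []

for step in range(1, 101):
    wait = measure()
    history.append(wait)
    if wait > TARGET_MAX_WAIT_MIN:   # (3) rules that adjust behaviour based on new data
        green_seconds = min(green_seconds + 2, 90)
    else:
        green_seconds = max(green_seconds - 1, 10)
    if step % 25 == 0:               # (4) periodic deeper analysis of whether the rules still work
        avg = sum(history[-25:]) / 25
        print(f"review at step {step}: avg wait {avg:.1f} min, green {green_seconds}s")
```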

3.6 Code of Ethics and Guidelines for IoT Developers

Rather than building ethical behaviour into machines, ethical guidelines are also useful for the developers of the technology. There are codes of ethics for robotics engineers,606060https://web.wpi.edu/Pubs/E-project/Available/E-project-030410-172744/unrestricted/A_Code_of_Ethics_for_Robotics_Engineers.pdf and, more recently, the Asilomar Principles for AI research, developed in conjunction with the 2017 Asilomar conference and relating to ethics in AI R&D.616161https://futureoflife.org/ai-principles/ The principles cover safety, transparency, privacy, incorporating human values and maintaining human control. How to imbue algorithms and systems with human values is a recent research topic.626262http://www.valuesincomputing.org/ The above appear to provide a morally sound path for AI R&D and AI applications, and for IoT devices with AI capabilities. Codes of ethics for the IoT are also being discussed.636363See the interview at https://www.theatlantic.com/technology/archive/2017/05/internet-of-things-ethics/524802/, and EU discussions at http://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupDetailDoc&id=7607&no=4 An IoT design manifesto646464https://www.iotmanifesto.com/ presents a range of general design principles for IoT developers. The IoT Alliance Australia has provided security guidelines.656565https://www.iot.org.au/wp/wp-content/uploads/2016/12/IoTAA-Security-Guideline-V1.2.pdf

The German Federal Minister of Transport and Digital Infrastructure appointed a national ethics committee for automated and connected driving, which presented a code of ethics for automated and connected driving [57]. The ethical guidelines highlight a range of principles, including “…Technological development obeys the principle of personal autonomy, which means that individuals enjoy freedom of action for which they themselves are responsible”, i.e., personal autonomy is a key principle for ethical technology. Also, autonomous cars can improve the mobility of people with disabilities, and so have ethical benefits. Another guideline stresses that official licensing and monitoring are required for automated driving systems - a direction that may also be required for autonomous robotic things in public, from drones to delivery robots. A controversially debated guideline states that “..General programming to reduce the number of personal injuries may be justifiable” even at the cost of harm to some others, which adopts a somewhat utilitarian view of ethics that may not be agreeable to all. Another guideline, on accountability - “that manufacturers or operators are obliged to continuously optimize their systems and also to observe systems they have already delivered and to improve them where this is technologically possible and reasonable” - applies to automated vehicles, but suggests possible implications for ethical IoT in general, raising the question of the responsibility of manufacturers or operators for maintenance and continual upgrades of IoT devices post-deployment. Privacy-by-design is a principle suggested for data protection in connected driving.

It remains to be seen how ethical principles can be software-engineered into future systems and whether certification requirements by law are possible, especially in relation to data handling by IoT devices.666666https://www.researchgate.net/publication/322628457_The_Legal_Challenges_of_Internet_of_Things

The CTIA Cybersecurity Certification Test Plan for IoT devices676767https://api.ctia.org/wp-content/uploads/2018/08/CTIA-IoT-Cybersecurity-Certification-Test-Plan-V1_0.pdf defines tests to be conducted on IoT devices so that they can be certified at one of three levels of built-in security features. Other standards and guidelines for IoT data privacy and device security are also being proposed and developed.686868https://www.schneier.com/blog/archives/2017/02/security_and_pr.html

A comprehensive framework to help researchers identify and take into account challenges in ethics and law when researching IoT technologies, or more generally, heterogeneous systems, is given in [40]. Review of research projects by an ethics review board, consideration of national/regulatory guidelines and regulatory frameworks, and wider community engagement are among the suggested actions.

On a more philosophical note is the question: what guidelines and strategies (or pro-social thinking) for the addition of each new device to the Internet of Things can encourage favourable evolution of the Internet of Things even as it is being built? This is a generally challenging issue, especially in a competitive world, but the mechanisms of reciprocity, spatial selection, multilevel selection and kin selection are known to encourage cooperation [76]. Prosocial preferences do not always explain human cooperation [16], and the question of how favourable human cooperation can arise continues to be explored, including via models from statistical physics [71].

The work in [90] argues for embedding ethics into the development of robotics in healthcare via the concept of Responsible Research and Innovation (RRI). RRI provides a toolkit of questions that help identify, understand and address ethical questions when applied to this domain.

3.7 Summary and Discussion

In summary, we make the following observations:

  • A multi-pronged approach. Table 1 summarises the above discussion, detailing the ideas and their main methods, together with their key advantages and technical challenges. Each idea has advantages and challenges, and they could complement each other, so combinations of ideas could be a way forward. Combining process and artifact strategies would mean taking into account ethical guidelines and practices in the development of IoT devices, and, where applicable, also building functionality into the device that allows it to behave in an ethical manner (according to agreed criteria) during operation. Devices can be built to work within the constraints of their enveloping environment, with user-informed limitations and clear expectations in terms of applicability, configurability, and behaviour. Developers could encode rules for ethical behaviour, but only after engagement and consultation with the community and stakeholders on what rules are relevant, based on a transparent and open process (e.g., consultative processes, technology trials, crowdsourcing viewpoints or online workshops). White- or grey-boxed devices could allow end-user intelligibility, consent and configurability, so that users retain a desired degree of control. Individual IoT devices should be secured against cyber-attacks; the data they collect should be handled in a way that is intelligible and configurable by the user, according to best-practice standards; and when they take action, it should be in agreement with acceptable social norms, and auditable.

  • Context is key. Many people could be involved in an IoT ecosystem, including developers, IoT device retailers, IoT system administrators, IoT system maintainers, end-users, local community and society at large. Society and communities can be affected by the deployment of such IoT systems in public, e.g., autonomous vehicles and robots in public, and so, the broader context of deployment needs to be considered.

    Moreover, what is considered ethical behaviour might depend on the context of operation and the application - a device’s action might be considered ethical in one context but unethical in another, as also noted in [1] with regard to the use of location-based services. Broader contexts of operation include local culture, norms, and the application domain (e.g., IoT in health, transport, or finance would have different rules for ethical behaviour); hence, multiple levels of norms and ethical rules would be required to guide the design and development of IoT devices and ecosystems: a basic ethical standard could apply (e.g., basic security built into devices, basic user-definable data handling options, and basic action tracking), with additional configurable options for context-specific ethical behaviour added on top (a minimal sketch of such layering follows this item).
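
    The following sketch illustrates the layering just described: a base standard that applies to every device, refined by context-specific overlays. The rule names, values and contexts are hypothetical.

```python
# Base standard that applies to every device, plus context-specific overlays.
BASE_STANDARD = {
    "device_security": "firmware_signing_required",
    "data_handling": "user_configurable",
    "action_tracking": "enabled",
}

CONTEXT_OVERLAYS = {
    "health":    {"data_handling": "explicit_consent_per_share", "retention_days": 30},
    "transport": {"action_tracking": "tamper_evident_log", "manual_override": "always_available"},
}

def effective_rules(context):
    rules = dict(BASE_STANDARD)                      # basic ethical standard applies everywhere
    rules.update(CONTEXT_OVERLAYS.get(context, {}))  # context-specific rules refine or add to it
    return rules

print(effective_rules("health"))
print(effective_rules("transport"))
```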

  • Ethical considerations with autonomy. Guidelines for developers could encourage thinking through the following, and what is built into an artifact to achieve ethical algorithmic behaviour could incorporate features that take into account at least the following:

    • security of data and physical security as impacted on by device actions,

    • privacy of user data and device actions that impinge on privacy,

    • consequences of over-reliance or human attachment to IoT devices,

    • algorithmic bias and design bias, and fairness of device actions,

    • the possible need to engage not just end-users but anyone affected by the IoT deployment, e.g., via crowdsourcing viewpoints pre-development, and obtaining feedback from users and society at large post-deployment,

    • user choice and freedom retained, including allowing user adjustments to ethical behaviour (e.g., opt in and out, adequate range of options, and designing devices with ethical settings),

    • end-user experience including user intelligibility, scrutability and explainability when needed, usability not just for certain groups of people, user control over data management and device behaviour, and appropriate manual overrides696969https://cacm.acm.org/blogs/blog-cacm/238899-the-autocracy-of-autonomous-systems/fulltext,

    • accountability for device actions, including legal and moral responsibilities, and support for traceability of actions,

    • implications and possible unintended effects of cooperation among devices, e.g., where physical actions from multiple devices could mutually interfere, and the extent of data sharing during communications,

    • deployment for long-term use (if applicable) and updatability, arising from security updates, improvements from feedback, adapting to changing human needs, policy changes, and

    • ethical consequences of autonomous action in IoT deployments (from physical movements to driving in certain ways).

Build Behaviour into Artifact & Validate

  • Designing and Programming Ethical Behaviour. Methods: rule-based approaches, game-theoretic calculations, ethics settings, ethical design templates. Key advantages: algorithmic or declarative representation of ethical behaviour; user control explicitly considered in artifact design. Key challenges: difficult for a set of rules to be complete; data used in development (e.g., to train machine learning models used in IoT devices) might be inadequate; hard for situations to be quantified; raises the question of who decides what is ethical. Selected related work: [7, 6, 2, 88, 83, 48, 19, 36, 12, 85].

  • Enveloping. Methods: setting physical / cyber-physical boundaries of operation. Key advantages: reduces complexity of operating environments; sets expectations about behaviour and contexts of trustworthy operation. Key challenges: may be hard to create suitable envelopes that do not hinder the functioning of IoT systems. Selected related work: [33] (though originally proposed to achieve better AI systems).

  • Whitebox Algorithms. Methods: improve transparency, detect algorithmic bias. Key advantages: greater traceability and accountability (possibly allowing engagement with non-developers). Key challenges: transparency does not equate to understandability; scrutability does not equate to user control. Selected related work: [104, 61, 87, 94].

  • Blackbox Validation. Methods: cognitive testing, simulation, heuristic evaluation. Key advantages: applicable where white-boxing is difficult; basis for certification. Key challenges: difficult to consider all cases and situations. Selected related work: [47, 26, 66].

  • Algorithmic Social Contracts. Methods: crowdsourcing ethics, processes for algorithmic regulation. Key advantages: wider engagement (possibly with non-developers). Key challenges: complex; may be hard to create suitable efficient processes or to obtain adequate participation. Selected related work: [75, 49].

Guide Developers

  • Code of Ethics and Guidelines for IoT Developers. Methods: formal guidelines, regulations, community best practice for developers. Key advantages: highlights ethical considerations in development. Key challenges: application- or domain-specific considerations required. Selected related work: German ethics code for automated and connected driving [57]; IoT data privacy guidelines and regulations [100]; (also, Code of Ethics for Robotics Engineers, Asilomar Principles, IoT design manifesto, IoT Alliance Australia Security Guideline, design-for-responsibility); RRI [90].

Table 1: Summary of ideas to achieve ethical algorithmic behaviour in the IoT, with key advantages and challenges.

4 Conclusion and Future Work

This paper has reviewed a range of ethical concerns with the IoT, including concerns that arise when IoT technology is combined with robotics and AI technology (including machine learning) and autonomous vehicles. The concerns include information security, data privacy, moral dilemmas, roboethics, algorithmic bias when algorithms are used for decision-making and control of IoT devices, as well as risks in cooperative IoT.

The paper has also reviewed approaches that have been proposed to address these ethical concerns, including

  • programming approaches to add ethical behaviour to devices, including adding moral reasoning capabilities to machines, and configuring devices with user ethics preferences,

  • detection and prevention of algorithmic bias, via accountability models and transparency,

  • behaviour-based validation techniques,

  • the notion of algorithmic social contracts, and crowdsourcing solutions to ethical issues,

  • the idea of enveloping systems, and

  • developing guidelines and proposals for regulations, and codes of ethics, to encourage ethical developers and ethical development of IoT devices, and requiring security and privacy measures in such devices. Suitable data privacy laws in the IoT context, secure-by-design, privacy-by-design, ethical-by-design and design-for-responsibility principles will also be needed.

A multi-pronged approach could be explored to achieve ethical IoT behaviour in a specific context. More research is required to explore combined approaches, and to create a framework of multiple levels of ethical rules and guidelines that could cater for the context-specific nature of what constitutes ethical behaviour.

This paper has not considered in detail legislation and the law involving robots and AI, approaches from which could be considered for intelligent IoT systems and which are addressed in depth elsewhere [70]. Also, the notion of IoT policing has not been discussed, in the sense of run-time monitoring of devices to detect misbehaving devices, perhaps with the use of sentinel devices, as well as policy enforcement and penalties imposed on anti-social IoT devices (e.g., game-theoretic grim-trigger type strategies, and other types of sanctions for autonomous systems [64]). Social equity and social inequality are two concerns of the social ethics of the Internet of Things which have been discussed elsewhere [82] but not detailed here. Sustainability of IoT deployments [92]707070https://www.computerworld.com.au/article/561064/hidden-environmental-cost-internet-things/ and the use of the IoT for sustainability [14], both of which have socio-ethical implications, have also not been extensively discussed here.

The challenge of building ethical things in the IoT that act autonomously yet ethically will also benefit from on-going research in building ethics into AI decision-making as reviewed in [107], which includes individual ethical decision frameworks, collective ethical decision frameworks, ethics in human-AI interactions and systems to explore ethical dilemmas.

Outstanding socio-technical challenges remain if IoT devices are to behave ethically and be used ethically, for IoT developers and IoT users. Ethical considerations would need to be factored into future IoT software and hardware development processes, according to upcoming certification practices, ethics policies, and regulatory frameworks, which are still to be developed. Particular domains or contexts would require domain-specific guidelines and ethical considerations.

While we have addressed mainly ethical behaviour for IoT device operations and the algorithms therein, there are ethical issues concerning the post-deployment and maintenance of IoT devices, where retailers or manufacturers could take responsibility.

References

  • [1] R. Abbas, K. Michael, and M. Michael (2014) Using a social-ethical framework to evaluate location-based services in an internet of things world. International Review of Information Ethics 22, pp. 42–73. Cited by: §2.7, 2nd item.
  • [2] N. Ajmeri, H. Guo, P. K. Murukannaiah, and M. P. Singh (2018-03) Designing ethical personal agents. IEEE Internet Computing 22 (2), pp. 16–22. External Links: Document, ISSN 1089-7801 Cited by: §3.1.1, Table 1.
  • [3] F. A. Alaba, M. Othman, I. A. T. Hashem, and F. Alotaibi (2017) Internet of things security: a survey. Journal of Network and Computer Applications 88, pp. 10 – 28. Cited by: §2.1.
  • [4] M. S. Ali, K. Dolui, and F. Antonelli (2017) IoT data privacy via blockchains and ipfs. In Proceedings of the Seventh International Conference on the Internet of Things, IoT ’17, New York, NY, USA, pp. 14:1–14:7. External Links: ISBN 978-1-4503-5318-2, Link, Document Cited by: §2.2.
  • [5] F. Allhoff and A. Henschke (2018) The internet of things: foundational ethical issues. Internet of Things 1-2, pp. 55 – 66. Cited by: §1.1.
  • [6] M. Anderson and S. L. Anderson (2008) ETHEL: toward a principled ethical eldercare system. See DBLP:conf/aaaifs/2008-2, pp. 4–11. External Links: Link Cited by: §3.1.1, Table 1.
  • [7] M. Anderson and S. L. Anderson (2010) ROBOT be good. Scientific American 303 (4), pp. 72–77. External Links: ISSN 00368733, 19467087, Link Cited by: §3.1.1, Table 1.
  • [8] M. Anderson and S. L. Anderson (2011) Machine ethics. 1st edition, Cambridge University Press, New York, NY, USA. External Links: ISBN 0521112354, 9780521112352 Cited by: §3.1.1.
  • [9] Y. Atwady and M. Hammoudeh (2017) A survey on authentication techniques for the internet of things. In Proceedings of the International Conference on Future Networks and Distributed Systems, ICFNDS ’17, New York, NY, USA. External Links: ISBN 978-1-4503-4844-7, Link, Document Cited by: §2.1.
  • [10] E. Awad, S. Dsouza, R. Kim, J. Schulz, J. Henrich, A. Shariff, J. Bonnefon, and I. Rahwan (2018) The moral machine experiment. Nature 563 (7729), pp. 59–64. Cited by: §3.1.1.
  • [11] S. A. Bagloee, M. Tavana, M. Asadi, and T. Oliver (2016-12-01) Autonomous vehicles: challenges, opportunities, and future implications for transportation policies. Journal of Modern Transportation 24 (4), pp. 284–303. External Links: ISSN 2196-0577, Document, Link Cited by: §2.3.
  • [12] G. Baldini, M. Botterman, R. Neisse, and M. Tallacchini (2016-01-21) Ethical design in the internet of things. Science and Engineering Ethics. External Links: ISSN 1471-5546, Document, Link Cited by: §3.1.4, §3.1.4, Table 1.
  • [13] F. Berman and V. G. Cerf (2017-01) Social and ethical behavior in the internet of things. Commun. ACM 60 (2), pp. 6–7. External Links: ISSN 0001-0782, Link, Document Cited by: §1.
  • [14] S. E. Bibri (2018) The iot for smart sustainable cities of the future: an analytical framework for sensor-based big data applications for environmental sustainability. Sustainable Cities and Society 38, pp. 230 – 253. External Links: ISSN 2210-6707, Document, Link Cited by: §4.
  • [15] J. Bonnefon, A. Shariff, and I. Rahwan (2016) The social dilemma of autonomous vehicles. Science 352 (6293), pp. 1573–1576. External Links: Document, ISSN 0036-8075, Link, http://science.sciencemag.org/content/352/6293/1573.full.pdf Cited by: §2.3.
  • [16] M. N. Burton-Chellew and S. A. West (2013) Prosocial preferences do not explain human cooperation in public-goods games. Proceedings of the National Academy of Sciences 110 (1), pp. 216–221. Cited by: §3.6.
  • [17] X. Caron, R. Bosua, S. B. Maynard, and A. Ahmad (2016) The internet of things (iot) and its impact on individual privacy: an australian perspective. Computer Law & Security Review 32 (1), pp. 4 – 15. Cited by: §2.1.
  • [18] A. Chamberlain, A. Crabtree, H. Haddadi, and R. Mortier (2017-08-14) Special theme on privacy and the internet of things. Personal and Ubiquitous Computing. External Links: ISSN 1617-4917, Document, Link Cited by: 2nd item.
  • [19] G. Contissa, F. Lagioia, and G. Sartor (2017-09-01) The ethical knob: ethically-customisable automated vehicles and the law. Artificial Intelligence and Law 25 (3), pp. 365–378. External Links: Document, ISSN 1572-8382, Link Cited by: §3.1.3, Table 1.
  • [20] V. Cristea, C. Dobre, and F. Pop (2013) Context-aware environments for the internet of things. In Internet of Things and Inter-cooperative Computational Technologies for Collective Intelligence, N. Bessis, F. Xhafa, D. Varvarigou, R. Hill, and M. Li (Eds.), pp. 25–49. External Links: ISBN 978-3-642-34952-2, Document, Link Cited by: 7th item.
  • [21] D. Danks and A. J. London (2017) Algorithmic bias in autonomous systems. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI’17, pp. 4691–4697. External Links: ISBN 978-0-9992411-0-3, Link Cited by: §2.5.
  • [22] L. Dennis, M. Fisher, M. Slavkovik, and M. Webster (2016) Formal verification of ethical choices in autonomous systems. Robotics and Autonomous Systems 77, pp. 1 – 14. External Links: Document, ISSN 0921-8890, Link Cited by: §3.1.1.
  • [23] P. Desai, S. W. Loke, A. Desai, and J. Singh (2013) CARAVAN: congestion avoidance and route allocation using virtual agent negotiation. IEEE Trans. Intelligent Transportation Systems 14 (3), pp. 1197–1207. External Links: Link, Document Cited by: §2.6.
  • [24] M. Díaz, C. Martín, and B. Rubio (2016) State-of-the-art, challenges, and open issues in the integration of internet of things and cloud computing. Journal of Network and Computer Applications 67, pp. 99 – 117. Cited by: §2.1, §2.2.
  • [25] S. Dutta (2017) Striking a balance between usability and cyber-security in iot devices. Master’s Thesis, Massachusetts Institute of Technology, USA. Cited by: §2.1.
  • [26] C. Ebert and M. Weyrich (2019-Sep.) Validation of autonomous systems. IEEE Software 36 (5), pp. 15–23. External Links: Document, ISSN Cited by: §3.4, Table 1.
  • [27] C. Ebert and M. Weyrich (2019-09-01) Validation of automated and autonomous vehicles. ATZelectronics worldwide 14 (9), pp. 26–31. External Links: ISSN 2524-8804, Document, Link Cited by: §3.4.
  • [28] C. Enemark (2013) Armed drones and the ethics of war: military virtue in a post-heroic age. Routledge. Cited by: §2.4.4.
  • [29] V. Eubanks (2018) Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press. Cited by: §2.5.
  • [30] I. Farris, R. Girau, L. Militano, M. Nitti, L. Atzori, A. Iera, and G. Morabito (2015-11) Social virtual objects in the edge cloud. IEEE Cloud Computing 2 (6), pp. 20–28. External Links: Document, ISSN 2325-6095 Cited by: 1st item.
  • [31] T. M. Fernandez-Carames and P. Fraga-Lamas (2018) A review on the use of blockchain for the internet of things. IEEE Access 6 (), pp. 32979–33001. External Links: Document, ISSN 2169-3536 Cited by: §2.2.
  • [32] I. Fister, K. Ljubič, P. N. Suganthan, M. Perc, and I. Fister (2015) Computational intelligence in sports: challenges and opportunities within a new research domain. Applied Mathematics and Computation 262, pp. 178 – 186. Cited by: §2.2.
  • [33] L. Floridi (2019-03) What the near future of artificial intelligence could be. Philosophy & Technology 32 (1), pp. 1–15. Cited by: §3.2, Table 1.
  • [34] M. Ge, J. B. Hong, W. Guttmann, and D. S. Kim (2017) A framework for automating security analysis of the internet of things. Journal of Network and Computer Applications 83, pp. 12 – 27. Cited by: §2.1.
  • [35] J. C. Gerdes and S. M. Thornton (2016) Implementable ethics for autonomous vehicles. In Autonomous Driving: Technical, Legal and Social Aspects, M. Maurer, J. C. Gerdes, B. Lenz, and H. Winner (Eds.), pp. 87–102. External Links: ISBN 978-3-662-48847-8, Document, Link Cited by: §3.1.1.
  • [36] J. Gogoll and J. F. Müller (2017-06-01) Autonomous cars: in favor of a mandatory ethics setting. Science and Engineering Ethics 23 (3), pp. 681–700. External Links: Document, ISSN 1471-5546, Link Cited by: §3.1.3, Table 1.
  • [37] K. N. Griggs, O. Ossipova, C. P. Kohlios, A. N. Baccarini, E. A. Howson, and T. Hayajneh (2018-06) Healthcare blockchain system using smart contracts for secure automated remote patient monitoring. Journal of Medical Systems 42 (7), pp. 130. Cited by: §2.2.
  • [38] D. J. Gunkel (2017-10-17) The other question: can and should robots have rights?. Ethics and Information Technology. External Links: Document, ISSN 1572-8439, Link Cited by: §2.4.1.
  • [39] D. Halperin, T. S. Heydt-Benjamin, B. Ransford, S. S. Clark, B. Defend, W. Morgan, K. Fu, T. Kohno, and W. H. Maisel (2008) Pacemakers and Implantable Cardiac Defibrillators: Software Radio Attacks and Zero-Power Defenses. In Proceedings of the 2008 IEEE Symposium on Security and Privacy (SP 2008), Vol. , pp. 129–142. Cited by: §2.2.
  • [40] J. Happa, J. R. C. Nurse, M. Goldsmith, S. Creese, and R. Williams (2018-03) An ethics framework for research into heterogeneous systems. In Proceedings of the Living in the Internet of Things: Cybersecurity of the IoT - 2018, Vol. , pp. 1–8. External Links: Document, ISSN Cited by: §3.6.
  • [41] S. Hermsen, J. H. Frost, E. Robinson, S. Higgs, M. Mars, and R. C. J. Hermans (2016-2018/09/26) Evaluation of a smart fork to decelerate eating rate. Journal of the Academy of Nutrition and Dietetics 116 (7), pp. 1066–1067. Cited by: 2nd item.
  • [42] A. Humayed, J. Lin, F. Li, and B. Luo (2017-12) Cyber-physical systems security - a survey. IEEE Internet of Things Journal 4 (6), pp. 1802–1831. External Links: Document, ISSN Cited by: §2.1.
  • [43] P. P. Jayaraman, X. Yang, A. Yavari, D. Georgakopoulos, and X. Yi (2017) Privacy preserving internet of things: from privacy techniques to a blueprint architecture and efficient implementation. Future Generation Computer Systems 76, pp. 540 – 549. Cited by: §2.1, §2.2.
  • [44] A. Kadomura, C. Li, Y. Chen, K. Tsukada, I. Siio, and H. Chu (2013) Sensing fork: eating behavior detection utensil and mobile persuasive game. In Proceedings of the CHI’13 Extended Abstracts on Human Factors in Computing Systems, CHI EA ’13, New York, NY, USA, pp. 1551–1556. External Links: ISBN 978-1-4503-1952-2, Link, Document Cited by: 2nd item.
  • [45] P. Koopman, U. Ferrell, F. Fratrik, and M. Wagner (2019) A safety standard approach for fully autonomous vehicles. In Computer Safety, Reliability, and Security, A. Romanovsky, E. Troubitsyna, I. Gashi, E. Schoitsch, and F. Bitsch (Eds.), Cham, pp. 326–332. External Links: ISBN 978-3-030-26250-1 Cited by: §3.4.
  • [46] F. Kraemer, K. van Overveld, and M. Peterson (2011-09-01) Is there an ethics of algorithms?. Ethics and Information Technology 13 (3), pp. 251–260. External Links: Document, ISSN 1572-8439, Link Cited by: §2.5.2.
  • [47] K. Kufieta and M. Ditze (2017) A virtual environment for the development and validation of highly automated driving systems. In 17. Internationales Stuttgarter Symposium, M. Bargende, H. Reuss, and J. Wiedemann (Eds.), Wiesbaden, pp. 1391–1401. External Links: ISBN 978-3-658-16988-6 Cited by: §3.4, Table 1.
  • [48] D. Leben (2017-06) A rawlsian algorithm for autonomous vehicles. Ethics and Information Technology 19 (2), pp. 107–115. External Links: ISSN 1388-1957, Link, Document Cited by: §3.1.2, Table 1.
  • [49] H. Lieberman, K. Dinakar, and B. Jones (2013) Crowdsourced ethics with personalized story matching. In Proceedings of the CHI’13 Extended Abstracts on Human Factors in Computing Systems, CHI EA ’13, New York, NY, USA, pp. 709–714. External Links: ISBN 978-1-4503-1952-2, Link, Document Cited by: §3.5, Table 1.
  • [50] A. Lima, F. Rocha, M. Völp, and P. Esteves-Veríssimo (2016) Towards safe and secure autonomous and cooperative vehicle ecosystems. In Proceedings of the 2Nd ACM Workshop on Cyber-Physical Systems Security and Privacy, CPS-SPC ’16, New York, NY, USA, pp. 59–70. External Links: ISBN 978-1-4503-4568-2, Link, Document Cited by: §2.3.
  • [51] P. Lin, K. Abney, and G. A. Bekey (2014) Robot ethics: the ethical and social implications of robotics. The MIT Press. External Links: ISBN 026252600X, 9780262526005 Cited by: §2.4.
  • [52] P. Lin, K. Abney, and R. Jenkins (2017) Robot ethics 2.0: from autonomous cars to artificial intelligence. Oxford University Press. Cited by: §2.4.
  • [53] P. Lin (2016) Why ethics matters for autonomous cars. In Autonomous Driving: Technical, Legal and Social Aspects, M. Maurer, J. C. Gerdes, B. Lenz, and H. Winner (Eds.), pp. 69–85. External Links: ISBN 978-3-662-48847-8, Document, Link Cited by: §2.3.
  • [54] Z. Lin and L. Dong (2017) Clarifying trust in social internet of things. CoRR abs/1704.03554. External Links: Link, 1704.03554 Cited by: 1st item, §2.6.
  • [55] H. Lipson and M. Kurman (2016) Driverless: Intelligent Cars and the Road Ahead. MIT Press. Cited by: §1.1.
  • [56] S. Loke (2006) Context-aware pervasive systems. Auerbach Publications, Boston, MA, USA. External Links: ISBN 0849372550 Cited by: 7th item.
  • [57] C. Luetge (2017) The german ethics code for automated and connected driving. Philosophy and Technology 30 (4), pp. 547–558. Cited by: §3.6, Table 1.
  • [58] L. Malina, J. Hajny, R. Fujdiak, and J. Hosek (2016) On perspective of security and privacy-preserving solutions in the internet of things. Computer Networks 102, pp. 83 – 95. Cited by: §2.1.
  • [59] T. May (1994) The concept of autonomy. American Philosophical Quarterly 31 (2), pp. 133–144. External Links: ISSN 00030481, Link Cited by: §2.7.
  • [60] R. Minerva, A. Biru, and D. Rotondi (2015) Towards a definition of the Internet of Things (IoT). IEEE Internet Initiative. External Links: Link Cited by: §1.
  • [61] S. Mirzamohammadi, J. A. Chen, A. A. Sani, S. Mehrotra, and G. Tsudik (2017) Ditio: trustworthy auditing of sensor activities in mobile & IoT devices. In Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems, SenSys ’17, New York, NY, USA, pp. 28:1–28:14. External Links: ISBN 978-1-4503-5459-2, Link, Document Cited by: §3.3.1, Table 1.
  • [62] B.D. Mittelstadt, P. Allo, M. Taddeo, S. Wachter, and L. Floridi (2016) The ethics of algorithms: mapping the debate. Big Data & Society 3 (2), pp. 2053951716679679. External Links: Document, https://doi.org/10.1177/2053951716679679, Link Cited by: §2.5.
  • [63] B. Mittelstadt (2017) Ethics of the health-related internet of things: a narrative review. Ethics and Information Technology 19 (3), pp. 157–175. External Links: ISSN 1572-8439 Cited by: §2.2.
  • [64] L. G. Nardin, T. Balke-Visser, N. Ajmeri, A. K. Kalia, J. S. Sichman, and M. P. Singh (2016) Classifying sanctions and designing a conceptual sanctioning process model for socio-technical systems.. Knowledge Eng. Review 31 (2), pp. 142–166. External Links: Link Cited by: §4.
  • [65] A. M. Nasser, D. Ma, and P. Muralidharan (2017) An approach for building security resilience in autosar based safety critical systems. Journal of Cyber Security and Mobility 6 (3), pp. 271–304. External Links: Link Cited by: §2.3.
  • [66] F. Naujoks, K. Wiedemann, N. Schömig, S. Hergeth, and A. Keinath (2019) Towards guidelines and verification methods for automated vehicle hmis. Transportation Research Part F: Traffic Psychology and Behaviour 60, pp. 121 – 136. External Links: ISSN 1369-8478, Document, Link Cited by: §3.4, Table 1.
  • [67] T. Nawrath, D. Fischer, and B. Markscheffel (2016-12) Privacy-sensitive data in connected cars. In Proceedings of the 2016 11th International Conference for Internet Technology and Secured Transactions (ICITST), Vol. , pp. 392–393. External Links: Document, ISSN Cited by: §2.3.
  • [68] S. U. Noble (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press. Cited by: §2.5.
  • [69] C. O’Neil (2016) Weapons of Math Destruction. Crown Books. Cited by: §2.5.
  • [70] U. Pagallo (2013) The laws of robots - crimes, contracts, and torts. Law, Governance and Technology Series, Vol. 10, Springer. External Links: Link, Document, ISBN 978-94-007-6563-4 Cited by: §4.
  • [71] M. Perc, J. J. Jordan, D. G. Rand, Z. Wang, S. Boccaletti, and A. Szolnoki (2017) Statistical physics of human cooperation. Physics Reports 687, pp. 1 – 51. Cited by: §3.6.
  • [72] D. Popescul and M. Georgescu (2013) INTERNET of things - some ethical issues. The USV Annals of Economics and Public Administration 13 (2(18)), pp. 210–216. External Links: Link Cited by: §2.2.
  • [73] M. Pticek, V. Podobnik, and G. Jezic (2016) Beyond the internet of things: the social networking of machines. International Journal of Distributed Sensor Networks 12 (6), pp. 8178417. External Links: Document, https://doi.org/10.1155/2016/8178417, Link Cited by: 1st item.
  • [74] I. Rahwan, M. Cebrian, N. Obradovich, J. Bongard, J. Bonnefon, C. Breazeal, J. Crandall, N. Christakis, I. Couzin, M. Jackson, N. Jennings, E. Kamar, I. Kloumann, H. Larochelle, D. Lazer, R. McElreath, A. Mislove, D. Parkes, A. Pentland, and M. Wellman (2019-04) Machine behaviour. Nature 568, pp. 477–486. External Links: Document Cited by: §3.4.
  • [75] I. Rahwan (2017-08-17) Society-in-the-loop: programming the algorithmic social contract. Ethics and Information Technology. External Links: Document, ISSN 1572-8439, Link Cited by: §3.5, Table 1.
  • [76] D. G. Rand and M. A. Nowak (2013) Human cooperation. Trends in Cognitive Sciences 17 (8), pp. 413 – 425. Cited by: §3.6.
  • [77] P. P. Ray (2016) Internet of robotic things: concept, technologies, and challenges. IEEE Access 4 (), pp. 9489–9500. External Links: Document, ISSN Cited by: §1.1.
  • [78] A. Reyna, C. Martín, J. Chen, E. Soler, and M. Díaz (2018) On blockchain and its integration with iot. challenges and opportunities. Future Generation Computer Systems 88, pp. 173 – 190. Cited by: §2.2.
  • [79] M. Seliem, K. Elgazzar, and K. Khalil (2018-11) Towards privacy preserving iot environments: a survey. Wireless Communications and Mobile Computing 2018, pp. 15. External Links: Document Cited by: §3.1.4.
  • [80] A. R. Sfar, E. Natalizio, Y. Challal, and Z. Chtourou (2018) A roadmap for security challenges in the internet of things. Digital Communications and Networks 4 (2), pp. 118 – 137. Cited by: §2.1.
  • [81] K. Sha, W. Wei, T. A. Yang, Z. Wang, and W. Shi (2018) On security challenges and open issues in internet of things. Future Generation Computer Systems 83, pp. 326 – 337. Cited by: §2.1.
  • [82] A. Shahraki and Ø. Haugen (2018-05) Social ethics in internet of things: an outline and review. In Proceedings of the 2018 IEEE Industrial Cyber-Physical Systems (ICPS) Conference, Vol. , pp. 509–516. External Links: Document, ISSN Cited by: §4.
  • [83] S. Shalev-Shwartz, S. Shammah, and A. Shashua (2017) On a formal model of safe and scalable self-driving cars. CoRR abs/1708.06374. External Links: Link, 1708.06374 Cited by: §3.1.1, Table 1.
  • [84] S. Sicari, A. Rizzardi, L.A. Grieco, and A. Coen-Porisini (2015) Security, privacy and trust in internet of things: the road ahead. Computer Networks 76, pp. 146 – 164. Cited by: §2.1, §2.2.
  • [85] S. Sicari, A. Rizzardi, D. Miorandi, and A. Coen-Porisini (2017-12) Dynamic policies in internet of things: enforcement and synchronization. IEEE Internet of Things Journal 4 (6), pp. 2228–2238. External Links: Document, ISSN Cited by: §3.1.4, Table 1.
  • [86] P. Simoens, M. Dragone, and A. Saffiotti (2018) The internet of robotic things: a review of concept, added value and applications. International Journal of Advanced Robotic Systems 15 (1), pp. 1729881418759424. Cited by: §1.1.
  • [87] J. Singh, C. Millard, C. Reed, J. Cobbe, and J. Crowcroft (2018-07) Accountability in the IoT: systems, law, and ways forward. Computer 51 (7), pp. 54–65. External Links: Document, Link, ISSN 0018-9162 Cited by: §3.3.1, Table 1.
  • [88] M. P. Singh and A. K. Chopra (2017-06) The internet of things and multiagent systems: decentralized intelligence in distributed computing. In Proceedings of the 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS), Vol. , pp. 1738–1747. External Links: Document, ISSN 1063-6927 Cited by: §1.1, §3.1.1, Table 1.
  • [89] R. Sparrow and M. Howard (2017) When human beings are like drunk robots: driverless vehicles, ethics, and the future of transport. Transportation Research Part C: Emerging Technologies 80, pp. 206 – 215. External Links: ISSN 0968-090X, Document, Link Cited by: §2.3.
  • [90] B. C. Stahl and M. Coeckelbergh (2016) Ethics of healthcare robotics: towards responsible research and innovation. Robotics and Autonomous Systems 86, pp. 152 – 161. External Links: ISSN 0921-8890, Document, Link Cited by: §3.6, Table 1.
  • [91] B. C. Stahl, J. Timmermans, and B. D. Mittelstadt (2016-02) The ethics of computing: a survey of the computing-oriented literature. ACM Comput. Surv. 48 (4), pp. 55:1–55:38. External Links: ISSN 0360-0300, Link, Document Cited by: §1.1.
  • [92] M. Stead, P. Coulton, J. Lindley, and C. Coulton (2019-02) The little book of sustainability for the internet of things. External Links: ISBN 978-1-86220-360-0 Cited by: §4.
  • [93] A. Taivalsaari and T. Mikkonen (2017-01) A roadmap to the programmable world: software challenges in the iot era. IEEE Software 34 (1), pp. 72–80. External Links: Document, ISSN 0740-7459 Cited by: 1st item.
  • [94] S. Tan, R. Caruana, G. Hooker, and Y. Lou (2017) Detecting bias in black-box models using transparent model distillation. CoRR abs/1710.06169. External Links: Link, 1710.06169 Cited by: §3.3.2, Table 1.
  • [95] B. K. Tripathy, D. Dutta, and C. Tazivazvino (2016) On the research and development of social internet of things. In Internet of Things (IoT) in 5G Mobile Technologies, C. X. Mavromoustakis, G. Mastorakis, and J. M. Batalla (Eds.), pp. 153–173. External Links: Document, ISBN 978-3-319-30913-2, Link Cited by: 1st item.
  • [96] M. Trnka, T. Cerny, and N. Stickney (2018) Survey of authentication and authorization for the internet of things. Security and Communication Networks 2018. External Links: Link Cited by: §2.1.
  • [97] C. A. Tschider (2018-02) Regulating the iot: discrimination, privacy, and cybersecurity in the artificial intelligence age. Denver University Law Review. External Links: Document, Link Cited by: §2.5.
  • [98] S. G. Tzafestas (2016) Roboethics: a navigating overview. Springer. Cited by: §2.4.
  • [99] D. Vanderelst and A. Winfield (2018) An architecture for ethical robots inspired by the simulation theory of cognition. Cognitive Systems Research 48, pp. 56 – 66. Note: Cognitive Architectures for Artificial Minds External Links: Document, ISSN 1389-0417, Link Cited by: §3.1.1.
  • [100] L. Vegh (2018-06) A survey of privacy and security issues for the internet of things in the GDPR era. In Proceedings of the 2018 International Conference on Communications (COMM), Vol. , pp. 453–458. External Links: Document, ISSN Cited by: §2.1, §2.2, Table 1.
  • [101] W. Wallach and C. Allen (2010) Moral machines: teaching robots right from wrong. Oxford University Press, Inc., New York, NY, USA. External Links: ISBN 0199737975, 9780199737970 Cited by: §3.1.1.
  • [102] R. H. Weber (2015) Internet of things: privacy issues revisited. Computer Law & Security Review 31 (5), pp. 618 – 627. Cited by: §2.1.
  • [103] B. D. Weinberg, G. R. Milne, Y. G. Andonova, and F. M. Hajjat (2015) Internet of things: convenience vs. privacy and secrecy. Business Horizons 58 (6), pp. 615 – 624. Cited by: §2.1.
  • [104] J. Wilson, R. S. Wahby, H. Corrigan-Gibbs, D. Boneh, P. Levis, and K. Winstein (2017) Trust but verify: auditing the secure internet of things. In Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services, MobiSys ’17, New York, NY, USA, pp. 464–474. External Links: ISBN 978-1-4503-4928-4, Link, Document Cited by: §3.3.1, Table 1.
  • [105] Q. Wu, G. Ding, Y. Xu, S. Feng, Z. Du, J. Wang, and K. Long (2014-04) Cognitive internet of things: a new paradigm beyond connection. IEEE Internet of Things Journal 1 (2), pp. 129–143. External Links: Document, ISSN 2327-4662 Cited by: 3rd item, §1.1.
  • [106] B. Yu, J. Wright, S. Nepal, L. Zhu, J. Liu, and R. Ranjan (2018-07) IoTChain: establishing trust in the internet of things ecosystem using blockchain. IEEE Cloud Computing 5 (4), pp. 12–23. External Links: Document, ISSN 2325-6095 Cited by: §2.2.
  • [107] H. Yu, Z. Shen, C. Miao, C. Leung, V. R. Lesser, and Q. Yang (2018-07) Building ethics into artificial intelligence. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pp. 5527–5533. Cited by: §1.1, §4.