AI reputational risk management
The benefits of AI are many. It can help tackle climate change, strengthen cybersecurity, improve customer service and stop people making abusive comments on Instagram, amongst all manner of other applications. However, AI also poses substantial risks, including: unfair or discriminatory algorithms; unreliable or malfunctioning outcomes; misuse of personal or confidential data; greater exposure to cyberattacks; loss of jobs; legal risks and liabilities; and direct and indirect reputational risks, including malicious deepfakes.

These risks are likely to become greater and more reputational in nature as the adoption of AI technologies becomes more mainstream, as awareness diversifies and grows, and as public opinion consolidates. Appreciating the scope of public skepticism and distrust, and under pressure from governments, politicians and regulators, the AI industry is now making some headway on AI ethics. In addition, the risk management industry is looking at AI from a risk perspective, and the PR/communications industry from a communications perspective. However, little exists on the reputational threats posed by AI, or on how these should be managed should an incident or crisis occur – an important topic given the volume of AI controversies and the general focus on corporate behaviour and governance.

Accordingly, I am pulling together examples of controversies driven by or relating to artificial intelligence for an initial report and a possible quantitative study and white paper on the topic. Your contribution is welcome. Given the sensitivity of these types of events, please note that all contributions should be fair, accurate and supportable.