Suspect AI: Vibraimage, Emotion Recognition Technology, and Algorithmic Opacity

09/01/2020
by James Wright

Vibraimage is a digital system that quantifies a subject's mental and emotional state by analysing video footage of the movements of their head. Vibraimage is used by police, nuclear power station operators, airport security and psychiatrists in Russia, China, Japan and South Korea, and has been deployed at an Olympic Games, a FIFA World Cup and a G7 Summit. Yet there is no reliable evidence that the technology is actually effective; indeed, many claims made about its effects seem unprovable. What exactly does vibraimage measure, and how has it acquired the power to penetrate the highest-profile and most sensitive security infrastructure across Russia and Asia? I first trace the development of the emotion recognition industry before examining attempts by vibraimage's developers and affiliates to legitimate the technology scientifically, concluding that the disciplining power and corporate value of vibraimage are generated through its very opacity, in contrast to increasing demands across the social sciences for transparency. I propose the term 'suspect AI' to describe the growing number of systems like vibraimage that algorithmically classify suspects/non-suspects, yet are themselves deeply suspect. Popularising this term may help resist such technologies' reductivist approaches to 'reading', and exerting authority over, emotion, intentionality and agency.

