Emotion Filtering at the Edge

by Ranya Aloufi, et al.

Voice-controlled devices and services have become very popular in the consumer IoT. Cloud-based speech analysis services extract information from voice inputs using speech recognition techniques. Service providers can thus build very accurate profiles of users' demographic categories, personal preferences, emotional states, etc., and may therefore significantly compromise their privacy. To address this problem, we have developed a privacy-preserving intermediate layer between users and cloud services to sanitize voice input directly at edge devices. We use CycleGAN-based speech conversion to remove sensitive information from raw voice input signals before regenerating neutralized signals for forwarding. We implement and evaluate our emotion filtering approach using a relatively cheap Raspberry Pi 4, and show that performance accuracy is not compromised at the edge. In fact, signals generated at the edge differ only slightly (~0.16%) from cloud-based approaches for speech recognition. Experimental evaluation of generated signals shows that identification of the emotional state of a speaker can be reduced by 91%.






1. Introduction

Figure 1. The data-flow of the proposed framework during the training and testing phases


Voice-controlled IoT devices and smart home assistants have gained huge popularity. Seamless interaction between users and services is enabled by speech recognition. Many IoT devices such as home assistants, smartphones, and smart watches have built-in microphones that listen for user commands, and due to resource limitations on edge devices, speech analysis is usually outsourced to cloud services. However, service providers aim to expand their ability to extract additional information about speakers by developing models that process voice input and detect the speaker's current condition, for instance through emotion classification and analysis of physical and mental wellbeing. They can collect sensitive behaviour patterns from voice input, which embeds various metadata such as "the who, when, where, what and how" that may violate user privacy in numerous ways. They may infer a user's mental state, stress level, smoking habits, overall health conditions, indications of Parkinson's disease, sleep patterns, and levels of exercise (Peppet, 2014). For instance, Amazon has patented technology that can analyze users' voices to identify emotions and/or mental health conditions, allowing it to understand speaker commands and characterize their emotions in order to provide highly targeted content (Jin and Wang, 2018). Similarly, Affectiva has developed multimodal artificial emotional intelligence that combines the analysis of face and speech as complementary signals to understand human emotional expression (Aff, [n. d.]). Privacy-preserving speech analysis therefore plays an especially important role when it comes to advertising content related to physical or emotional states.

Emotions are a universal aspect of human speech that conveys behaviour. Listening to users' voices and monitoring their emotions can drive critical decision-making that affects their lives, ranging from fitness tracking for well-being to suitability for recruitment, and opens many new privacy issues. In this paper, we propose a privacy-preserving architecture for speech analysis and evaluate its ability to mitigate the privacy risks of cloud-based voice analysis services. It masks the sensitive emotional patterns in the voice input to prevent service providers from monitoring the users' emotions associated with their voice.

Our proposed solution is a feature learning and data reconstruction framework that bridges the communication between a user's edge device and a service provider's cloud. It performs emotion filtering on low-cost edge devices while maintaining the usability of the voice input for cloud-based services. It includes three components: a pre-processor, an emotion filter, and a generator. First, the pre-processor extracts the sensitive features to be hidden, which are then used as a target to train the transformation model. The emotion filter is an embedded-specific model that uses the CycleGAN architecture (Zhu et al., 2017) to transform the speaking style of the voice input. Finally, the generator uses the output features to re-generate the voice files based on the state-of-the-art WORLD vocoder (Morise et al., 2016). To evaluate the trade-off between data utility and privacy, the proposed method is tested on an emotion recognition task using the RAVDESS dataset (Livingstone and Russo, 2018). The results show that the proposed solution can decrease the accuracy of emotion recognition applications while affecting the accuracy of speech recognition and speaker identification techniques only minimally. The contributions of this paper can be summarized as follows:

  • A privacy-preserving emotion filter using CycleGAN that learns to replace sensitive emotional features of the voice input with corresponding neutral ones.

  • An implementation and evaluation at the edge versus the cloud, showing that edge deployment retains similar performance accuracy to cloud-based approaches in protecting sensitive information.

Filtering the affective content of the voice signal is a critical task to ensure appropriate protections for users of cloud-based voice analysis services. The proposed framework is the first privacy-preserving emotion filter for voice input at edge devices that protects the private paralinguistic information of the speaker. It enables users to protect their sensitive emotional data while benefiting from sharing their non-sensitive data with cloud-based voice analysis services. In addition, we make our code and results available online at https://github.com/RanyaJumah/Embedded-PP-Speech-Analysis

2. Related Work

Voice versus Privacy Voice is considered unique biometric information that has been widely used in various IoT applications. It is a rich resource that discloses several possible states of a speaker, such as emotional state (Schuller et al., 2013), confidence and stress levels, physical condition (Mporas and Ganchev, 2009; Schuller et al., 2013; Sell et al., 2010), age (Krauss et al., 2002), gender, and personal traits. For example, Mairesse et al. (Mairesse et al., 2007) proposed classification, regression and ranking models to learn the Big Five personality traits of a speaker. Previous studies on voice input privacy have focused on two main aspects: breaches of voice-enabled systems, and revealing users' private information by analyzing their communications. By spoofing voice-based authentication systems, attackers can gain unauthorized access to the private information of these systems' users (Wu et al., 2015). Alepis and Patsakis (Alepis and Patsakis, 2017) presented and analyzed the potential risks of voice assistants on mobile devices, showing how urgent it is to develop privacy-preserving architectures for speech analysis that extract the distinguishing features from speech without compromising individual privacy.

Edge Computing and Privacy-preserving Deep Learning One of the primary roles of edge computing is to filter data locally before sending it to the cloud, which can be used to protect user privacy. In (Osia et al., 2017), a hybrid framework for privacy-preserving analytics is presented that splits a deep neural network into a feature extractor module on the user side and a classifier module on the cloud side. It protects user privacy by removing undesired sensitive information from the extracted features. Generative adversarial networks (GANs) (Goodfellow et al., 2014) are deep learning models that have recently been applied to filter sensitive information from raw data and regenerate the filtered data. For example, on-device transformation of sensor data was proposed by Malekzadeh et al. in (Malekzadeh et al., 2019). They use convolutional auto-encoders (CAEs) as a sensor data anonymizer to remove user-identifiable features locally and then share the filtered sensor data with specific applications such as daily activity monitoring apps.

Privacy-preserving Voice Analysis on the Edge Voice conversion is one of the privacy preservation approaches that has been used to protect speaker identity. For example, VoiceMask was proposed to mitigate the security and privacy risks of voice input on mobile devices by concealing voiceprints and adding differential privacy (Qian et al., 2018). It sanitizes the audio signal received from the microphone by hiding the speaker's identity and then sends the perturbed speech to voice input apps or the cloud. Nautsch et al. (Nautsch et al., 2019) investigate the gap in the development of privacy-preserving technologies for speech signals and show the essential need to apply these technologies to protect speaker and speech characterisation in speech recordings.

3. System Framework

Figure 2. The emotion filter is trained on the cloud, and then the pre-trained filter is used on the edge side for speaking style transformation


Leveraging non-parallel voice conversion (VC) technology (Kaneko et al., 2019), our framework aims to protect users' privacy from voice analysis services. The framework consists of three main components: (i) a pre-processor, (ii) an emotion filter, and (iii) a generator. The proposed framework is presented in Figure 1.

Pre-processor The raw voice input is pre-processed to extract a distinguishing signal representation by applying transformation functions to the voice input and using the resulting outcomes as labels (Doersch and Zisserman, 2017). Prosody features such as spectral envelopes (SPs) are the most effective features in emotion recognition tasks (Trigeorgis et al., 2016), and they can be computed directly from the signal by applying specific transformation functions, which minimizes the computational overhead. The WORLD vocoder (Morise et al., 2016) is used to extract these features at frame level (a frame denotes a number of samples with the same time-stamp, one per channel) from both the source and the corresponding target signals.
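To make the frame-level analysis concrete, the following numpy sketch slices a signal into overlapping frames and computes a windowed log-magnitude spectrum per frame. This is only an illustrative stand-in for the smoothed spectral envelope that WORLD (CheapTrick) actually estimates; the frame length and hop size are arbitrary example values, not the paper's settings.

```python
import numpy as np

def frame_signal(x, frame_len=1024, hop=256):
    """Slice a 1-D signal into overlapping frames (one row per frame)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    return x[idx]

def log_spectral_frames(x, frame_len=1024, hop=256):
    """Windowed log-magnitude spectrum per frame -- a crude stand-in for
    the smoothed spectral envelope estimated by WORLD's CheapTrick."""
    frames = frame_signal(x, frame_len, hop) * np.hanning(frame_len)
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return np.log(mag + 1e-10)

# Toy 16 kHz signal: a 200 Hz sine lasting 0.5 s.
fs = 16000
t = np.arange(fs // 2) / fs
x = np.sin(2 * np.pi * 200 * t)
sp = log_spectral_frames(x)
print(sp.shape)  # (28, 513): one row per frame, frame_len // 2 + 1 bins
```

In the real pipeline these per-frame features (SPs and log F0) are what the emotion filter transforms, not the waveform itself.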

Emotion Filter To learn the sensitive representations in the voice input, CycleGAN-based speaking style conversion is used to transform the raw voice's emotional features into corresponding neutral ones. CycleGAN (Zhu et al., 2017) is a variant of GANs that uses two generators and two discriminators. Considering X and Y as different domains, generator G maps from domain X to Y and generator F maps from Y to X. In addition, there are two adversarial discriminators, D(X) and D(Y): D(X) aims to distinguish between objects in domain X and the outputs of F(Y), while D(Y) aims to discriminate between objects in Y and the outputs of G(X). CycleGAN was introduced to overcome the difficulty of preparing paired datasets in style conversion applications (Zhu et al., 2017). Therefore, changing emotional style with CycleGAN makes it possible to transfer between emotional and neutral speaking styles without paired training data.
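The key training signal that makes unpaired conversion possible is the cycle-consistency loss ||F(G(x)) - x||₁ + ||G(F(y)) - y||₁. As a minimal numpy sketch (toy linear maps standing in for the gated-CNN generators of CycleGAN-VC; everything here is illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "generators": G maps emotional features X -> neutral Y,
# F maps back Y -> X. Here F is constructed as G's exact inverse,
# so the cycle loss is (numerically) zero by design.
W_g = rng.normal(size=(4, 4))
W_f = np.linalg.inv(W_g)

def G(x): return x @ W_g
def F(y): return y @ W_f

def cycle_consistency_loss(x, y):
    """Mean L1 cycle loss: |F(G(x)) - x| + |G(F(y)) - y|."""
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

x = rng.normal(size=(8, 4))   # batch of "emotional" feature frames
y = rng.normal(size=(8, 4))   # batch of "neutral" feature frames
loss = cycle_consistency_loss(x, y)
print(loss)  # ~0, since F exactly inverts G
```

During real training this term is added to the two adversarial losses, pushing G and F to be (approximate) inverses so that linguistic content survives the round trip while only style is changed.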

Consistent with the TinyML (Tin, [n. d.]) objectives of trading off machine learning accuracy against resource efficiency and optimizing the performance cost of data analysis on constrained platforms such as IoT devices, we propose to implement the emotion filter on the edge to enable on-device privacy-preserving speech analysis. A pre-trained CycleGAN model is frozen by combining the model graph structure with its weights to create an embedded version that can fit on edge devices for the feature transformation task.

Generator The WORLD synthesis algorithm is used to re-generate a high-quality synthesized voice input by using the output features from the emotion filtering model. The generated voices are able to preserve the content of the voice input and project away sensitive representations such as emotional patterns.

In this way, the sensitive patterns in the voice input are not disclosed to cloud-based voice input service providers, who only have access to the synthesized voice. The output of the proposed framework protects the speaker's privacy by preserving the linguistic content while hiding the private non-linguistic content (emotional patterns).

4. Experimental Evaluation

                      Speech Recognition    Speaker Recognition
                      Word Error Rate (%)   Equal Error Rate (%)
Raw Voice             5.27                  0.06
NVIDIA Quadro P1000   20.36                 0.120
Intel Core i7         20.66                 0.121
ARM Cortex-A72        20.67                 0.124
Table 1. A comparison of the accuracy between raw and generated voice across speech and speaker recognition tasks.
Figure 3. The emotion recognition accuracy of the raw and generated voices: similar performance accuracy by NVIDIA Quadro P1000, Intel Core i7, and ARM Cortex-A72 processor in hiding sensitive emotional patterns.


Figure 4. Spectrogram analysis of the raw (happy) and generated (neutral) waveforms in terms of (top) amplitude (the size of the oscillations of the vocal folds), (middle) intensity (acoustic intensity), and (bottom) fundamental frequency (vocal fold vibration property)


The sensitive information to be hidden is the speaker's emotion. As our framework aims to decrease emotion recognition accuracy while maintaining speech and speaker recognition accuracy, the following subsections describe the experimental setting, the speech analysis tasks selected for evaluation, and the evaluation results.

4.1. Experimental Setting

We conduct the experiments by running the proposed framework on an NVIDIA Quadro P1000 with GP107 graphics processor and 4 GB memory, a MacBook Pro with a 2.7 GHz Intel Core i7 processor and 8 GB memory, and a Raspberry Pi 4 with an ARM Cortex-A72 and 4 GB memory. The experiments use speech audio-only files in .wav format from the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) (Livingstone and Russo, 2018) with a 48 kHz/16-bit sampling rate. It contains recordings from 24 professional actors (12 female, 12 male) vocalizing two lexically-matched statements in a neutral North American accent with eight speech emotions: 0 = neutral, 1 = calm, 2 = happy, 3 = sad, 4 = angry, 5 = fearful, 6 = disgust, 7 = surprised. A subset of this dataset is used to evaluate the effectiveness of the proposed framework. We select 118 files, of which 96 are used for training and 22 for testing, covering three emotions: neutral, happy, and angry. The training data is organised as (24 × 2) emotion files and (24 × 2) neutral files. To avoid model over-fitting, different texts were chosen for the training and testing sets.
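RAVDESS encodes each recording's metadata in its filename as seven hyphenated two-digit fields (modality, vocal channel, emotion, intensity, statement, repetition, actor), which makes selecting such subsets straightforward. A small parser sketch (the function name and returned fields are our own choices, not from the paper):

```python
# RAVDESS emotion codes as used in the dataset's filename convention.
EMOTIONS = {1: "neutral", 2: "calm", 3: "happy", 4: "sad",
            5: "angry", 6: "fearful", 7: "disgust", 8: "surprised"}

def parse_ravdess(filename):
    """Decode a RAVDESS filename such as '03-01-03-01-01-01-12.wav':
    modality-channel-emotion-intensity-statement-repetition-actor."""
    parts = [int(p) for p in filename.removesuffix(".wav").split("-")]
    modality, channel, emotion, intensity, statement, repetition, actor = parts
    return {"emotion": EMOTIONS[emotion],
            "statement": statement,
            "actor": actor,
            "sex": "female" if actor % 2 == 0 else "male"}

meta = parse_ravdess("03-01-03-01-01-01-12.wav")
print(meta)  # emotion 'happy', statement 1, actor 12 (female)
```

Filtering filenames with this parser for emotion in {neutral, happy, angry} reproduces the kind of subset selection used above.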

As depicted in Figure 2, the training phase begins at the cloud end (NVIDIA Quadro P1000 or MacBook Pro) by downsampling the .wav files to 16 kHz while preserving the signal's content information. Then, spectral envelopes (SPs) and the logarithmic fundamental frequency (log F0) are extracted as acoustic prosody features, which are the features most related to emotion recognition. These features are mapped from utterances spoken in an emotional style to the corresponding features of neutral utterances using a CycleGAN with a network architecture similar to (Kaneko et al., 2019). The emotion filter is trained for 7500 iterations with learning rates of 0.0002 for the generator and 0.0001 for the discriminator. On the edge side (Raspberry Pi 4), the pre-trained emotion filter is exported to apply on-device emotion filtering. The raw voice signal is pre-processed to extract the prosody features, and the emotionless speaking style is achieved by using the pre-trained filter to convert these features in the raw signal. Finally, the outputs of the conversion phase are converted to neutral speech waveforms using the WORLD synthesizer.
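The 48 kHz to 16 kHz downsampling step can be sketched with a naive linear-interpolation resampler. This is only illustrative: a production pipeline should low-pass filter before decimating (e.g. with librosa or scipy resamplers) to avoid aliasing, and the paper does not specify its resampling tool.

```python
import numpy as np

def resample_linear(x, fs_in, fs_out):
    """Naive linear-interpolation resampler (no anti-alias filtering --
    fine as a sketch, not for production audio)."""
    n_out = int(len(x) * fs_out / fs_in)
    t_in = np.arange(len(x)) / fs_in       # input sample times (s)
    t_out = np.arange(n_out) / fs_out      # output sample times (s)
    return np.interp(t_out, t_in, x)

x48 = np.random.default_rng(1).normal(size=48000)  # 1 s at 48 kHz
x16 = resample_linear(x48, 48000, 16000)
print(len(x16))  # 16000 samples: 1 s at 16 kHz
```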

4.2. Speech Analysis Tasks

We conducted objective evaluations of the generated voice on three speech analysis tasks. First, each re-generated .wav file is evaluated by an emotion recognition model trained on the RAVDESS dataset to identify the emotional state of the sanitized voice. Then, we perform speech and speaker recognition on the sanitized voices and evaluate the accuracy. The tools used in the three evaluation tasks are described as follows.

Speech Recognition The IBM Watson speech-to-text service is used to convert the generated speech into text (IBM, 2019). The performance of speech recognition on the sanitized voices is measured by the word error rate (WER), a common metric of speech recognition performance that measures the word-level difference between two spoken sequences.
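WER is the word-level Levenshtein distance between the reference transcript and the hypothesis, normalized by the reference length. A self-contained sketch of the standard computation (the example sentence is one of the two RAVDESS statements):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed via word-level Levenshtein distance with dynamic programming."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

wer = word_error_rate("kids are talking by the door",
                      "kids are walking by the door")
print(wer)  # one substitution over six words: 1/6
```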

Speaker Recognition To ensure that the person speaking can still be correctly identified with high confidence, a VGG speaker recognition model trained on VoxCeleb2 (Xie et al., 2019) is used. All audio files are converted to 16-bit streams at 16 kHz for consistency. The accuracy of speaker recognition is measured by the equal error rate (EER), the rate at which acceptance and rejection errors are equal.
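The EER is found by sweeping the verification threshold until the false-accept rate (impostor scores above the threshold) equals the false-reject rate (genuine scores below it). A minimal numpy sketch with made-up example scores:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep candidate thresholds and return the error rate at the point
    where false-accept rate and false-reject rate are closest."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([np.mean(impostor >= t) for t in thresholds])  # false accepts
    frr = np.array([np.mean(genuine < t) for t in thresholds])    # false rejects
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

# Hypothetical similarity scores, higher = more likely same speaker.
genuine = np.array([0.9, 0.8, 0.85, 0.7, 0.95])   # same-speaker trials
impostor = np.array([0.1, 0.3, 0.2, 0.4, 0.75])   # different-speaker trials
eer = equal_error_rate(genuine, impostor)
print(eer)  # 0.2: FAR = FRR = 1/5 at threshold 0.75
```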

Emotion Recognition To automatically identify the emotional state of the users, an emotion classification model based on the RAVDESS dataset is used to predict eight emotion classes: 0 = neutral, 1 = calm, 2 = happy, 3 = sad, 4 = angry, 5 = fearful, 6 = disgust, 7 = surprised (Marcogdepinto, 2019). The accuracy of emotion recognition is defined as the success rate of correctly identified emotions.

4.3. Evaluation and Discussion

From the experimental results, we can summarize the following:

Results accuracy and privacy We compare the accuracy of the speech and speaker recognition models on raw and transformed voices, and find that the utility of the signal remains acceptable while the emotion recognition accuracy drops sharply, see Figure 3. The speech recognition performance is evaluated using the average WER, which is 20.36%, 20.66%, and 20.67% on the NVIDIA Quadro P1000, Intel Core i7, and ARM Cortex-A72, respectively. In addition, the speaker recognition performance is measured by the EER, and the average error rate is 0.12 on all three platforms, as shown in Table 1. As a result, the proposed framework shows an insignificant difference in performance accuracy between edge and cloud-based resources. The speech recognition accuracy could be further improved by increasing the dataset size, refining the feature set, and adjusting the model architecture.

Model optimization on the edge With a relatively cheap ARM Cortex-A72 board, we show that the proposed framework can be implemented with similar performance accuracy as on the NVIDIA Quadro P1000 and Intel Core i7. Spectrogram analysis of the raw and transformed speech, illustrated in Figure 4, demonstrates similar changes in the amplitude, intensity, and fundamental frequency of speech transformed using cloud and edge resources, which leads to comparable accuracy across the speech analysis tasks. Further model optimization will be considered through approaches such as weight pruning, compression, and quantization to enhance the model performance on the edge.

Resource limitation and scalability Computational performance is limited by various resource constraints such as memory capacity. Figure 5 compares the execution time and memory usage of the emotion conversion model running on the NVIDIA Quadro P1000 and Intel Core i7 versus the ARM Cortex-A72. The NVIDIA Quadro P1000 and Intel Core i7 outperform the ARM Cortex-A72 in execution time; specifically, the ARM Cortex-A72 implementation consumes twice as much time as the Intel Core i7. Nevertheless, we managed to significantly reduce the execution time and memory usage of running the proposed framework on edge devices.

Privacy overhead on the edge We performed a privacy overhead analysis for the emotion filter on the edge device, using two types of speech analysis experiments: a baseline experiment without the emotion filter and one integrated with it, as described in Table 2. In the first experiment, we disable the emotion filter and measure the overhead incurred purely by configuring the edge device (Raspberry Pi), loading the .wav file, and uploading the file to the cloud. The second experiment measures the overhead incurred by running the speech analysis with the emotion filter. Precisely, we measure the overhead of: (1) configuring the edge device, (2) loading the .wav file, (3) pre-processing, filtering, and generating the .wav file, and (4) uploading the file to the cloud. Figure 6 shows the results. The Raspberry Pi needs about 40 seconds to boot up. The average power consumption is 0.45 W and the average energy consumption is 31.2 J. The baseline latency is about 20 seconds, while the latency with the emotion filter is about 41 seconds.
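Per-stage latency accounting of this kind can be instrumented with a simple timing wrapper around each pipeline stage. The stage functions below are hypothetical stand-ins (using sleeps) for loading, filtering, and uploading; only the timing pattern is the point.

```python
import time

def timed_stage(label, fn, *args, timings=None, **kwargs):
    """Run one pipeline stage and record its wall-clock latency."""
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    timings[label] = time.perf_counter() - t0
    return result

# Hypothetical stand-ins for the real pipeline stages.
def load_wav(): time.sleep(0.01); return "raw"
def filter_emotion(x): time.sleep(0.02); return "neutralized"
def upload(x): time.sleep(0.01); return "ok"

timings = {}
x = timed_stage("load", load_wav, timings=timings)
x = timed_stage("filter", filter_emotion, x, timings=timings)
timed_stage("upload", upload, x, timings=timings)
total = sum(timings.values())
print(sorted(timings))  # stage labels: filter, load, upload
```

Subtracting the baseline run (without the filter stage) from the filtered run isolates the filter's latency contribution, which is how the ~21-second overhead above is attributed.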

5. Discussion and Future Work

Figure 5. A comparison between the execution time and memory usage of running the model in NVIDIA Quadro P1000, Intel Core i7, and ARM Cortex-A72


In this paper we presented a framework for privacy-preserving speech analytics that consists of a pre-processor, an emotion filter, and a generator. It protects user privacy by hiding undesired sensitive information in the extracted features, transforming the features corresponding to emotional patterns while leaving the features corresponding to speech content and speaker identity unchanged. Therefore, on the cloud side, only non-sensitive filtered features, such as the linguistic content, can be inferred. Evaluating our framework by distributing the training and testing execution between the edge and the cloud, we achieved a large decrease of 91% in emotion recognition accuracy, with only a slight decrease for other tasks such as speech and speaker recognition. Protecting users' privacy in speech analysis is a very challenging task; the challenge is how to sanitize the speech without decreasing the speech recognition accuracy. We will focus on extending the proposed framework by including a speech content filter to prevent similar inferences via other techniques, such as sentiment analysis, to strengthen user privacy. In addition, we will include an in-the-wild emotional speech dataset and further investigate privacy-preserving deep learning architectures.

Time  Baseline          Emotionless Filter
T0    Pi on             Pi on
T1    Load .wav         Load .wav
T2    Cloud uploading   Pre-processor (PP), emotion filter (EF), generator (G)
T3    -                 Cloud uploading
Table 2. Privacy Overhead Analysis Experiments
Figure 6. Power and Energy Consumption



References
  • Aff ([n. d.]) [n. d.]. Emotion AI. https://www.affectiva.com/emotion-ai-overview/
  • Tin ([n. d.]) [n. d.]. TinyML. https://sites.google.com/site/rankmap/
  • Alepis and Patsakis (2017) Efthimios Alepis and Constantinos Patsakis. 2017. Monkey says, monkey does: security and privacy on voice assistants. (2017).
  • Doersch and Zisserman (2017) Carl Doersch and Andrew Zisserman. 2017. Multi-task self-supervised visual learning.
  • Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets.
  • IBM (2019) IBM. 2019. IBM Watson Speech to Text. https://speech-to-text-demo.ng.bluemix.net
  • Jin and Wang (2018) Huafeng Jin and Shuo Wang. 2018. Voice-based determination of physical and emotional characteristics of users.
  • Kaneko et al. (2019) Takuhiro Kaneko, Hirokazu Kameoka, Kou Tanaka, and Nobukatsu Hojo. 2019. CycleGAN-VC2: Improved CycleGAN-based Non-parallel Voice Conversion.
  • Krauss et al. (2002) Robert M Krauss, Robin Freyberg, and Ezequiel Morsella. 2002. Inferring speakers’ physical attributes from their voices. (2002).
  • Livingstone and Russo (2018) Steven R Livingstone and Frank A Russo. 2018. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. (2018).
  • Mairesse et al. (2007) François Mairesse, Marilyn A Walker, Matthias R Mehl, and Roger K Moore. 2007. Using linguistic cues for the automatic recognition of personality in conversation and text. (2007).
  • Malekzadeh et al. (2019) Mohammad Malekzadeh, Richard G. Clegg, Andrea Cavallaro, and Hamed Haddadi. 2019. Mobile Sensor Data Anonymization.
  • Marcogdepinto (2019) Marcogdepinto. 2019. marcogdepinto/Emotion-Classification-Ravdess. https://github.com/marcogdepinto/Emotion-Classification-Ravdess
  • Morise et al. (2016) Masanori Morise, Fumiya Yokomori, and Kenji Ozawa. 2016. WORLD: a vocoder-based high-quality speech synthesis system for real-time applications. (2016).
  • Mporas and Ganchev (2009) Iosif Mporas and Todor Ganchev. 2009. Estimation of unknown speaker’s height from speech. (2009).
  • Nautsch et al. (2019) Andreas Nautsch, Abelino Jiménez, Amos Treiber, Jascha Kolberg, Catherine Jasserand, Els Kindt, Héctor Delgado, Massimiliano Todisco, Mohamed Amine Hmani, Aymen Mtibaa, et al. 2019. Preserving Privacy in Speaker and Speech Characterisation. (2019).
  • Osia et al. (2017) Seyed Ali Osia, Ali Shahin Shamsabadi, Ali Taheri, Kleomenis Katevas, Sina Sajadmanesh, Hamid R Rabiee, Nicholas D Lane, and Hamed Haddadi. 2017. A hybrid deep learning architecture for privacy-preserving mobile analytics. (2017).
  • Peppet (2014) Scott R Peppet. 2014. Regulating the internet of things: first steps toward managing discrimination, privacy, security and consent. (2014).
  • Qian et al. (2018) Jianwei Qian, Haohua Du, Jiahui Hou, Linlin Chen, Taeho Jung, and Xiang-Yang Li. 2018. Hidebehind: Enjoy Voice Input with Voiceprint Unclonability and Anonymity.
  • Schuller et al. (2013) Björn Schuller, Stefan Steidl, Anton Batliner, Alessandro Vinciarelli, Klaus Scherer, Fabien Ringeval, Mohamed Chetouani, Felix Weninger, Florian Eyben, Erik Marchi, et al. 2013. The INTERSPEECH 2013 computational paralinguistics challenge: Social signals, conflict, emotion, autism.
  • Sell et al. (2010) Aaron Sell, Gregory A Bryant, Leda Cosmides, John Tooby, Daniel Sznycer, Christopher Von Rueden, Andre Krauss, and Michael Gurven. 2010. Adaptations in humans for assessing physical strength from the voice. (2010).
  • Trigeorgis et al. (2016) George Trigeorgis, Fabien Ringeval, Raymond Brueckner, Erik Marchi, Mihalis A Nicolaou, Björn Schuller, and Stefanos Zafeiriou. 2016. Adieu features? end-to-end speech emotion recognition using a deep convolutional recurrent network.
  • Wu et al. (2015) Zhizheng Wu, Nicholas Evans, Tomi Kinnunen, Junichi Yamagishi, Federico Alegre, and Haizhou Li. 2015. Spoofing and countermeasures for speaker verification: A survey. (2015).
  • Xie et al. (2019) Weidi Xie, Arsha Nagrani, Joon Son Chung, and Andrew Zisserman. 2019. Utterance-level Aggregation For Speaker Recognition In The Wild. (2019).
  • Zhu et al. (2017) Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks.