Electro-Magnetic Side-Channel Attack Through Learned Denoising and Classification

10/16/2019
by   Florian Lemarchand, et al.

This paper proposes an upgraded electro-magnetic side-channel attack that automatically reconstructs the intercepted data. A novel system is introduced, running in parallel with leakage signal interception and catching compromising data in real time. Based on deep learning and character recognition, the proposed system retrieves more than 57% of the characters present in intercepted signals regardless of signal type: analog or digital. The approach is also extended to a protection system that triggers an alarm if the system is compromised, demonstrating a success rate over 95%. Based on software-defined radio and graphics processing unit architectures, this solution can be easily deployed onto existing information systems where information shall be kept secret.


1 Introduction

All electronic devices produce electro-magnetic (EM) emanations that not only interfere with radio devices but also compromise the data handled by the information system. A third party may perform a side-channel analysis and recover the original information, hence compromising the system's privacy. While pioneering work in the domain focused on analog signals [21], recent studies extend the eavesdropping exploit using an EM side-channel attack to digital signals and embedded circuits [10]. The attacker's profile is also taking on a new dimension with the increased performance of software-defined radio (SDR). With recent advances in radio equipment, an attacker can leverage advanced signal processing to further stretch the limits of the side-channel attack using EM emanations [3]. With the fast evolution of deep neural networks, an attacker can extract patterns or even the full structured content of the intercepted data with a high degree of confidence and a limited execution time.

In this paper, a learning-based method is proposed that specializes Mask R-CNN [6] as a denoiser and classifier. A complete system is demonstrated, combining SDR and deep learning, that detects and recovers leaked information at a distance of several tens of meters. It provides an automated solution where the data is interpreted directly. The solution is compared to other system setups.

The paper is organized as follows. Section 2 presents existing methods to recover information from EM emanations. Section 3 describes the proposed method for automatic character retrieval. Experimental results and detailed performance figures are presented in Section 4. Section 5 concludes the paper.

2 Related Work

This research builds on two areas: EM side-channel attacks on information systems, and learning-based techniques that can recover information from noisy environments.

Van Eck [21] published the first technical report revealing how involuntary emissions originating from electronic devices can be exploited to compromise data. While the original work in the domain targeted CRT screens and analog signals, Kuhn [10] proposes to use side-channel attacks to extract confidential data from LCD screens, targeting digital data. Subsequently, other types of systems have been attacked. Vuagnoux and Pasini [23] extend the principle of the EM side-channel attack to capture data from keyboards and, in their more recent work, Hayashi et al. present interception methods based on SDR targeting laptops, tablets [4] and smartphones [5]. The use of SDR widens the attack surface, extending the attacker profile from military organizations to individual hackers. It also opens up new post-processing opportunities that improve attack characteristics. De Meulemeester et al. [2] leverage SDR to enhance the performance of the attack and automatically recover the structure of the captured data. When the intercepted emanation is originally two-dimensional, retrieving the synchronization parameters of the targeted information system allows the captured EM signal to be transformed from a vector into an image, reconstructing the sensitive 2D visual information. That step is called rastering.

Figure 1: Experimental setup: the attacked system includes an eavesdropped screen (1) displaying sensitive information, connected to an information system (2). An interception chain including an SDR receiver (3) sends samples to a host computer (4) that implements signal processing, including a deep-learning denoiser and character recognition.

When retrieving visual information from an EM signal, an important part of the original information is lost through the leakage and interception process. This loss leads to a drop in SNR and, in the case of image data, a deterioration of spatial coherence in the reconstructed samples. Hence, denoising methods are needed. Image denoising by signal processing techniques has been extensively studied since it is an important step in many computer vision applications. BM3D [1], proposed by Dabov et al., is a state-of-the-art method for additive white Gaussian noise (AWGN) removal using non-learned processing. BM3D uses thresholding and Wiener filtering in the transform domain. It is used in the experiments of Section 4.

Deep learning algorithms have recently stood out for solving many signal processing problems. These trained models have an extreme ability to fit complex problems. Recent GPU architectures have been optimized to support deep learning workloads and have fostered ever deeper networks, mining structured information from data and providing results where classical algorithms fail. The spread of deep learning has reached the domain of image denoising, and several models initially developed for other applications have been turned into denoisers. DnCNN [25] is a CNN designed to blindly remove AWGN, without prior knowledge of the noise level. Other techniques, such as denoising autoencoders [22, 18], are able to denoise images without restriction on the type of noise. Autoencoder algorithms learn to map their input to a latent space (encoding) and project the latent representation back to the input space (decoding). Autoencoders learn a denoising model by minimizing a loss function that evaluates the difference between the autoencoder output and the reference. Advanced methods, such as Noise2Noise [11], infer denoising strategies without any clean reference data. The Noise2Noise algorithm learns a representation of the noise by looking only at noisy samples.

Learning-based models perform well on various denoising tasks, but under strong hypotheses regarding the distribution of the noise to be removed; the AWGN assumption is often used. In the considered problem, certain components of the noise are non-randomly distributed and have a spatial coherence (between pixels). Additionally, information is damaged (partially lost and spread over several pixels) by the interception/rastering process. None of the previously exposed methods is tailored for such noise and distortion, calling for a novel experimental setup.

Conventional approaches exist to protect devices from eavesdropping. Such approaches appear under different code names, such as TEMPEST [16] or EMSEC, and consist of shielding devices [10] to nullify the emanations, or of using fonts that minimize the EM emanations [9]. However, these approaches are either costly or technically hard to use in practice, especially when it comes to ensuring data privacy throughout the life-cycle of a complex information system. The next section details the proposed method to enhance the EM side-channel attack.

3 Proposed Side-Channel Attack

3.1 System Description

Figure 1 shows the proposed end-to-end solution. The method automatically reconstructs leaked visual information from compromising emanations. The setup is composed of two main elements. First, the antenna and SDR front-end capture, in the RF domain, the leaked information originating from the displayed video. Then, the demodulated signal is processed by the host computer, recovering a noisy version of the original image [10] and leaving room for advanced image processing techniques. On top of proposing an end-to-end solution from the capture to the data itself, the method uses a learning-based approach: it exploits the captured compromising signals and automatically recognizes the leaked data. A first step, based on a Mask R-CNN architecture, embeds denoising, segmentation, character detection/localization, and character recognition. A second step post-processes the Mask R-CNN output: a Hough transform is applied for text-line detection, and a bit-vector approximate string-matching algorithm [15] is applied to match the recovered text. This setup detects several forms of compromising emanations (analog or digital) and automatically triggers an alarm if critical information is leaking. The next sections detail how the method is trained and integrated.
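The alarm step can be illustrated with a short sketch. The snippet below is not the authors' implementation: it uses Sellers' dynamic-programming formulation of approximate substring matching, a simpler cousin of the bit-parallel algorithm of [15], to decide whether a critical keyword appears in the recovered text despite a few character errors. The keyword list and error budget are illustrative assumptions.

```python
def approx_substring_distance(text: str, pattern: str) -> int:
    """Minimum edit distance between `pattern` and any substring of `text`
    (Sellers' algorithm: row 0 stays zero, so a match may start anywhere)."""
    m = len(pattern)
    prev = list(range(m + 1))          # DP column for the empty text prefix
    best = prev[m]
    for c in text:
        cur = [0] * (m + 1)            # cur[0] = 0: free restart at any position
        for i in range(1, m + 1):
            cost = 0 if pattern[i - 1] == c else 1
            cur[i] = min(prev[i - 1] + cost,   # match / substitution
                         prev[i] + 1,          # insertion in the text
                         cur[i - 1] + 1)       # deletion from the text
        best = min(best, cur[m])
        prev = cur
    return best

def leaks_critical_data(recovered: str, keywords, max_errors: int = 1) -> bool:
    """Trigger the alarm when any keyword is found within `max_errors` edits."""
    return any(approx_substring_distance(recovered.lower(), kw.lower()) <= max_errors
               for kw in keywords)

# Hypothetical usage: text recovered by the classifier contains one OCR error.
assert leaks_critical_data("CONFIDENTIAL R3PORT", ["report"], max_errors=1)
```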

3.2 Training Dataset Construction

A substantial effort has been made on building a process that semi-automatically generates and labels datasets for supervised training. Each sample image is made up of a uniform background on which varied characters are printed. Using that process, an open data corpus of 123,610 labeled samples, specific to the problem at hand, has been created to be used as training, validation and test datasets. This dataset is available online (https://github.com/opendenoising/interception_dataset) to train denoiser architectures in difficult conditions.

The proposed setup, once trained, denoises the intercepted sample images and extracts their content, i.e. the detected characters and their positions. The input space that should be covered by the training dataset is large, and three main types of interception variability can be observed. Firstly, interception induces an important loss of the information originally present in the intercepted data. The noise level is directly linked to the distance between the antenna and the target; several noise levels are generated by adding RF attenuation after the antenna. That loss itself causes inconsistencies in the rastering stage. Secondly, EM emanations can come from different sources, using different technologies, implying in turn different intercepted samples for the same reference image. The dataset covers VGA, DP-to-DVI and HDMI cables and connectors. Thirdly, besides this unwanted variability, a synthetic type of variability is introduced to support character retrieval. Many different characters are introduced in the corpus to be displayed on the attacked screen. They range from 11 to 70 points in size and include both digits and letters, the letters in both upper and lower case. Varied fonts, character colors and background colors, as well as varied character positions in the sample, are used. Considering these different sources of variability, the dataset is built to approach an equal representation of the different interception conditions.

Figure 2: A reference sample is displayed on the target screen (top-left). The interception module outputs uncalibrated samples. The vertical and horizontal porches (red) help alignment and porch removal (top-right). Samples are rescaled and split into patches to obtain the same layout as the reference set.

The choice has been made to display on the target screen a sample containing fixed-size patches (top-left image of Figure 2). For building the dataset, having multiple patches speeds the process up, because smaller samples can be derived from a single screen interception and more variability can be introduced in the dataset. The main challenge when creating the dataset lies in the sample acquisition itself. Indeed, once intercepted, the samples are not directly usable. The interception process outputs samples such as the one of Figure 2 (middle-top), where intercepted characters are not aligned (temporally and spatially) with their respective reference samples. An automated method is introduced that uses the porches, artificially colored in red in Figure 2 (middle-top), to spatially align the samples. Porches are detected using a brute-force search for large horizontal and vertical gradients (to find vertical and horizontal porches, respectively), as sketched below. A validation step ensures the temporal alignment, based on the insertion of a QR code in the upper-left patch. If the QR code is identical between the reference and the intercepted image, the image patches are introduced into the dataset.
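The porch search can be sketched as follows, assuming the rastered frame is available as a NumPy array; this is an illustrative interpretation of the gradient search described above, not the authors' code. The blanking porches show up as strong intensity discontinuities, so the largest averaged gradient along each axis gives a candidate cut position.

```python
import numpy as np

def find_porch_offsets(frame):
    """Estimate the column and row where the vertical and horizontal porches
    sit, via the largest mean absolute gradient along each axis."""
    gray = frame.mean(axis=2) if frame.ndim == 3 else frame.astype(float)
    col_grad = np.abs(np.diff(gray.mean(axis=0)))  # column profile -> vertical porch
    row_grad = np.abs(np.diff(gray.mean(axis=1)))  # row profile -> horizontal porch
    return int(np.argmax(col_grad)), int(np.argmax(row_grad))

def realign(frame):
    """Roll the frame so that both porches end up on the image border."""
    x, y = find_porch_offsets(frame)
    return np.roll(np.roll(frame, -x - 1, axis=1), -y - 1, axis=0)
```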

Data augmentation [14] is used to enhance the dataset coverage. It is applied to the patches to add variability to the dataset and reinforce its learning capacity. Conventional transformations are applied to raw samples (Gaussian and median blur, salt-and-pepper noise, color inversion and contrast normalization), as in the sketch below.
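The listed augmentations can be reproduced with standard tools; the following sketch uses OpenCV and NumPy (the authors may well have used a dedicated augmentation library instead), applying one randomly chosen transformation per uint8 patch. Parameter values are assumptions.

```python
import random
import cv2
import numpy as np

def augment(patch: np.ndarray) -> np.ndarray:
    """Apply one of the augmentations listed above to a uint8 image patch."""
    choice = random.choice(["gauss", "median", "snp", "invert", "contrast"])
    if choice == "gauss":
        return cv2.GaussianBlur(patch, (5, 5), 0)
    if choice == "median":
        return cv2.medianBlur(patch, 3)
    if choice == "snp":  # salt-and-pepper noise on roughly 2% of the pixels each
        out = patch.copy()
        out[np.random.rand(*patch.shape[:2]) < 0.02] = 0    # pepper
        out[np.random.rand(*patch.shape[:2]) < 0.02] = 255  # salt
        return out
    if choice == "invert":
        return 255 - patch
    # contrast normalization: stretch intensities around the mean
    mean = patch.mean()
    return np.clip((patch.astype(float) - mean) * 1.5 + mean, 0, 255).astype(np.uint8)
```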

3.3 Implemented Solution to Catch Compromising Data

In order to automate the interception of compromising data, Mask R-CNN has been turned into a denoiser and classifier. The implementation is based on the one proposed by W. Abdulla (https://github.com/matterport/Mask_RCNN). Other learning-based and classical signal processing methods, discussed in Section 4.2, are also implemented to assess the quality of the proposed framework. Mask R-CNN is a framework adapted from the earlier Faster R-CNN [17]. The network consists of two stages. The first stage, also known as the backbone network, is a ResNet101 convolutional network [7] extracting features from the input samples. Based on the extracted features, a region proposal network (RPN) proposes regions of interest (RoIs), i.e. regions of the sample where information deserves greater attention. The second stage, called the head network, classifies the content and returns bounding-box coordinates for each RoI. The main difference between Faster R-CNN and Mask R-CNN lies in an additional FCN branch [19] running in parallel with the classification and extracting a binary mask for each RoI, to provide a more accurate localization of the object of interest.

Mask R-CNN is not originally designed for denoising but rather for instance segmentation. However, it fits the targeted problem well. Indeed, the problem is similar to a segmentation where signal has to be separated from noise. As a consequence, when properly feeding a trained Mask R-CNN network with noisy samples containing characters, one obtains lists of labels (character recognition), bounding boxes (character localization) and binary masks representing the content of the original clean sample. The setup of the classification branch makes the approach language-independent and allows classes other than characters to be added.

Figure 3: The output of Mask R-CNN may be used in two ways. The segmentation masks can be drawn (left) and further processed by an OCR engine, or the Mask R-CNN classifier can directly infer the sample content (right) along with display and confidence information.

Two strategies can be employed to exploit the Mask R-CNN components for the problem. The first is to draw the output masks of the Mask R-CNN segmentation (Figure 3, left-hand side) and request an OCR engine to retrieve characters from the masks. The second is to make use of the classification faculty of Mask R-CNN (Figure 3, right-hand side) and obtain a list of labels without using an OCR engine. The second method, using the classifier of Mask R-CNN, proves better in practice, as shown in Section 4.2.
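With the Matterport implementation referenced above, the second strategy amounts to reading the classifier output directly. The sketch below (the class-name list and score threshold are assumptions, not the authors' exact code) sorts the detections left to right to rebuild the character string:

```python
# Assumes a trained Matterport Mask R-CNN `model` and a `class_names` list
# where index 0 is the background class, e.g. ["BG", "0", "1", ..., "z"].
def read_characters(model, image, class_names, min_score=0.7):
    """Run detection on one rastered sample and return its characters,
    ordered by the horizontal position of their bounding boxes."""
    r = model.detect([image], verbose=0)[0]   # keys: rois, class_ids, scores, masks
    detections = [
        (box[1], class_names[cid])            # box = (y1, x1, y2, x2): sort on x1
        for box, cid, score in zip(r["rois"], r["class_ids"], r["scores"])
        if score >= min_score
    ]
    return "".join(char for _, char in sorted(detections))
```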

The training strategy is to initialize the training process using weights pre-trained [13] on the MS COCO dataset [12]. First, the weights of the backbone are frozen and the head is trained to adapt to the application. Then, the weights of the backbone are relaxed and both backbone and head are trained together until convergence. This process ensures convergence and speeds up training.
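Under the Matterport API, this two-phase schedule maps onto the `layers` argument of `model.train`. The sketch below illustrates it; the configuration values, epoch counts and file paths are assumptions, and the class count follows Section 3.2 (digits plus upper- and lower-case letters).

```python
from mrcnn.config import Config
from mrcnn import model as modellib

class CharConfig(Config):
    """Illustrative configuration; the values are assumptions, not the paper's."""
    NAME = "em_chars"
    NUM_CLASSES = 1 + 62      # background + 10 digits + 52 letters (Section 3.2)
    IMAGES_PER_GPU = 2        # assumed batch sizing

def build_and_train(dataset_train, dataset_val, coco_weights="mask_rcnn_coco.h5"):
    """Two-phase schedule over prepared mrcnn.utils.Dataset objects:
    train the heads first, then fine-tune the whole network."""
    config = CharConfig()
    model = modellib.MaskRCNN(mode="training", config=config, model_dir="logs/")
    # Start from MS COCO weights, dropping the COCO-specific output layers.
    model.load_weights(coco_weights, by_name=True,
                       exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                                "mrcnn_bbox", "mrcnn_mask"])
    # Phase 1: backbone frozen, only the head layers are updated.
    model.train(dataset_train, dataset_val,
                learning_rate=config.LEARNING_RATE, epochs=20, layers="heads")
    # Phase 2: backbone relaxed, backbone and head trained jointly.
    model.train(dataset_train, dataset_val,
                learning_rate=config.LEARNING_RATE / 10, epochs=40, layers="all")
    return model
```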

Figure 4: Three samples (left, middle, right) displayed at different stages of the interception/denoising pipeline. From top to bottom: the reference patch displayed on the screen; the patch after rastering (raw patch); the patches denoised with BM3D, the autoencoder and Mask R-CNN.

4 Experimental Results

4.1 Experimental Setup

The experimental setup is defined as follows: the eavesdropped display is 10 meters away from the interception antenna. An RF attenuator, ranging from 0 dB to 24 dB, is inserted after the antenna to generate a wide range of noise levels and simulate larger interception distances. Compromising emanations are issued either by a VGA display, a DP-to-DVI cable or an HDMI connector. The interception system is depicted in Figure 1: the antenna is a bilog antenna, and the SDR device, automatically recovering the rastering parameters [2], is an Ettus X310 receiving with a 100 MHz bandwidth to recover the compromised information with a fine granularity [10]. The host computer running the post-processing has a Linux operating system, an Intel Xeon W-2125 CPU and an Nvidia GTX 1080 Ti GPU. The host computer rasters the compromising data using the CPU, while the proposed learning-based denoiser/classifier runs on the GPU.

4.2 Performance Comparison Between Data Catchers

The purpose of the proposed method is to analyze compromising emanations. Once a signal is detected and rastered, intercepted emanations should be classified as compromising or not. Figure 4 illustrates the outputs of the different implemented denoisers; more examples are available at https://github.com/opendenoising/extension. It is proposed to assess the data leak according to the ability of a model to retrieve the original information. The metric used is the ratio between the number of characters that a method correctly classifies from an intercepted sample and the true number of characters in the corresponding clean reference.

The quality assessment method is the following. First, a sample containing a large number of characters is pseudo-randomly generated (as in the dataset construction). The sample is displayed on the eavesdropped screen and the EM emanations are intercepted. The proposed denoising/retrieval method is applied and the obtained results are compared to the reference sample. The method using Mask R-CNN directly produces a list of retrieved characters. The other methods, implemented to compare the efficiency of the proposal, use denoising in combination with the Tesseract [20] OCR engine. Tesseract is a well-performing OCR engine that retrieves characters from images; it produces a list of characters retrieved from a denoised sample. As the output of Tesseract is of the same type as the output of the Mask R-CNN classification, metrics can be extracted to fairly compare the methods.

An end-to-end evaluation is used, measuring the quality of the character classification. An F-score, classically used to evaluate classification models, is computed from precision and recall as F = 2 · (precision · recall) / (precision + recall). Precision is the number of true positives divided by the number of all positive predictions. Recall is the number of true positives divided by the number of relevant samples, the set of relevant samples being the union of true positives and false negatives. For simplification, and to avoid an alignment process, a true positive is here defined as the recognition of a character truly existing in the reference sample.
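Concretely, under this alignment-free definition, the scores can be computed from character multisets. The following illustrative snippet applies that definition (it is not necessarily the authors' evaluation script):

```python
from collections import Counter

def retrieval_scores(predicted: str, reference: str):
    """Precision, recall and F-score under alignment-free matching:
    a true positive is a predicted character also present in the reference."""
    pred, ref = Counter(predicted), Counter(reference)
    tp = sum((pred & ref).values())          # multiset intersection
    precision = tp / max(sum(pred.values()), 1)
    recall = tp / max(sum(ref.values()), 1)
    f_score = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f_score
```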

Denoiser      OCR          F-score   Precision   Recall
Raw           Tesseract    0.04      0.20        0.02
BM3D          Tesseract    0.13      0.22        0.09
Noise2Noise   Tesseract    0.17      0.25        0.12
AutoEncoder   Tesseract    0.24      0.55        0.15
RaGAN         Tesseract    0.24      0.42        0.18
UNet          Tesseract    0.35      0.62        0.25
Mask R-CNN    Tesseract    0.55      0.82        0.42
Mask R-CNN    Mask R-CNN   0.68      0.81        0.57
Table 1: Character recognition performance for several data catchers using either denoising plus Tesseract, or Mask R-CNN classification. The Mask R-CNN classifier outperforms the other methods, with an F-score of 0.68 on the test set.

Table 1 presents the results of the different data catchers on a test set of 12,563 patches. All denoising methods are tested using Tesseract and compared to Mask R-CNN classification used as the OCR. Tesseract is first applied to raw (non-denoised) samples as a point of reference. BM3D is the only classical denoising solution tested. Noise2Noise, AutoEncoder, RaGAN and UNet are different deep learning networks configured as denoisers. As shown in Table 1, Mask R-CNN classification outperforms all other methods. The version of Mask R-CNN using its own classifier is better than the Tesseract OCR engine applied to the Mask R-CNN segmentation mask output. It is also interesting to look at the precision and recall scores that compose the F-score. Both Mask R-CNN methods perform better than the other methods on the two indices. Precision is almost the same for both, meaning that they present the same ratio of correct decisions. The difference lies in the recall score: the recall of the version using Tesseract is lower than that of the version using its own classifier, indicating that the latter misses fewer characters. The main advantage of Mask R-CNN is that the processing tasks serving the final aim of textual information recovery are jointly optimized.

Another key performance indicator of learning-based algorithms is inference time (Table 2). The proposed implementation using Mask R-CNN infers results from a full input sample in roughly 4 s on average. This inference time, although lower than the BM3D latency, is admittedly higher than that of the other neural networks and hardly real-time. Nevertheless, the inference time of Mask R-CNN includes the whole denoising/OCR process and provides a largely better retrieval score. In the context of continuous listening to EM emanations, it provides an acceptable trade-off between processing time and interception performance. Optimizing the inference time could be considered as future work, given the recent advances in accelerating neural network inference [24, 8].

Denoiser      OCR          Inference time (s)
Raw           Tesseract    0.19
BM3D          Tesseract    21.8
Autoencoder   Tesseract    1.15
Mask R-CNN    Tesseract    4.22
Mask R-CNN    Mask R-CNN   4.04
Table 2: Inference time for several data catchers using Tesseract or Mask R-CNN classification as OCR. Input samples are split into patches before processing. The Mask R-CNN classifier is slower than the autoencoder but still faster than BM3D.

5 Conclusions

Handling data while ensuring trust and privacy is challenging for information system designers. This paper presents how the attack surface can be enlarged with the introduction of deep learning into an EM side-channel attack. The proposed method uses Mask R-CNN as denoiser and classifier, and automatically recovers more than 57% of the leaked information for a wide range of interception distances. The proposal is software-based and runs on the host computer of an off-the-shelf SDR platform.

References

  • [1] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian (2007-08) Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering. IEEE Transactions on Image Processing 16 (8), pp. 2080–2095 (en). External Links: ISSN 1057-7149, 1941-0042, Link, Document Cited by: §2.
  • [2] P. De Meulemeester, L. Bontemps, B. Scheers, and G. A. E. Vandenbosch (2018) Synchronization retrieval and image reconstruction of a video display unit exploiting its compromising emanations. In 2018 International Conference on Military Communications and Information Systems (ICMCIS), Warsaw, pp. 1–7 (en). External Links: ISBN 978-1-5386-4559-8, Document Cited by: §2, §4.1.
  • [3] D. Genkin, M. Pattani, R. Schuster, and E. Tromer (2018) Synesthesia: Detecting Screen Content via Remote Acoustic Side Channels. arXiv:1809.02629 (en). Cited by: §1.
  • [4] Y. Hayashi, N. Homma, M. Miura, T. Aoki, and H. Sone (2014) A Threat for Tablet PCs in Public Space: Remote Visualization of Screen Images Using EM Emanation. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security - CCS ’14, Scottsdale, Arizona, USA, pp. 954–965 (en). External Links: ISBN 978-1-4503-2957-6, Link, Document Cited by: §2.
  • [5] Y. Hayashi, N. Homma, Y. Toriumi, K. Takaya, and T. Aoki (2017) Remote Visualization of Screen Images Using a Pseudo-Antenna That Blends Into the Mobile Environment. IEEE Transactions on Electromagnetic Compatibility 59 (1), pp. 24–33 (en). External Links: ISSN 0018-9375, 1558-187X, Document Cited by: §2.
  • [6] K. He, G. Gkioxari, P. Dollar, and R. Girshick (2017) Mask R-CNN. In 2017 IEEE International Conference on Computer Vision (ICCV), Venice, pp. 2980–2988 (en). External Links: ISBN 978-1-5386-1032-9, Link, Document Cited by: §1.
  • [7] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 770–778 (en). External Links: ISBN 978-1-4673-8851-1, Link, Document Cited by: §3.3.
  • [8] Y. He, J. Lin, Z. Liu, H. Wang, L-J. Li, and S. Han (2018) AMC: AutoML for Model Compression and Acceleration on Mobile Devices. In Computer Vision – ECCV 2018, V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss (Eds.), Vol. 11211, pp. 815–832 (en). External Links: ISBN 978-3-030-01233-5 978-3-030-01234-2, Link, Document Cited by: §4.2.
  • [9] M. G. Kuhn and R. J. Anderson (1998) Soft Tempest: Hidden Data Transmission Using Electromagnetic Emanations. In Information Hiding, D. Aucsmith (Ed.), Vol. 1525, pp. 124–142 (en). External Links: ISBN 978-3-540-65386-8 978-3-540-49380-8, Link, Document Cited by: §2.
  • [10] M. G. Kuhn (2013) Compromising Emanations of LCD TV Sets. IEEE Transactions on Electromagnetic Compatibility 55 (3), pp. 564–570 (en). External Links: ISSN 0018-9375, 1558-187X, Document Cited by: §1, §2, §2, §3.1, §4.1.
  • [11] J. Lehtinen, J. Munkberg, J. Hasselgren, S. Laine, T. Karras, M. Aittala, and T. Aila (2018) Noise2Noise: Learning Image Restoration without Clean Data. CoRR (en). External Links: Link Cited by: §2.
  • [12] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft COCO: Common Objects in Context. In Computer Vision – ECCV 2014, Lecture Notes in Computer Science, pp. 740–755 (en). External Links: ISBN 978-3-319-10602-1 Cited by: §3.3.
  • [13] D. Mahajan, R. Girshick, V. Ramanathan, K. He, M. Paluri, Y. Li, A. Bharambe, and L. Van Der Maaten (2018) Exploring the Limits of Weakly Supervised Pretraining. In Computer Vision – ECCV 2018, M. Hebert, C. Sminchisescu, and Y. Weiss (Eds.), Vol. 11206, pp. 185–201 (en). External Links: ISBN 978-3-030-01215-1 978-3-030-01216-8, Link, Document Cited by: §3.3.
  • [14] A. Mikolajczyk and M. Grochowski (2018) Data augmentation for improving deep learning in image classification problem. In 2018 International Interdisciplinary PhD Workshop (IIPhDW), Swinoujście, pp. 117–122 (en). External Links: ISBN 978-1-5386-6143-7, Link, Document Cited by: §3.2.
  • [15] G. Myers (1999) A Fast Bit-vector Algorithm for Approximate String Matching Based on Dynamic Programming. J. ACM 46 (3), pp. 395–415. External Links: ISSN 0004-5411, Link, Document Cited by: §3.1.
  • [16] National Security Agency (1982) NACSIM 5000 TEMPEST FUNDAMENTALS. Cited by: §2.
  • [17] S. Ren, K. He, R. Girshick, and J. Sun (2017) Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (6), pp. 1137–1149 (en). External Links: ISSN 0162-8828, 2160-9292, Link, Document Cited by: §3.3.
  • [18] O. Ronneberger, P. Fischer, and T. Brox (2015) U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Lecture Notes in Computer Science, pp. 234–241 (en). External Links: ISBN 978-3-319-24574-4 Cited by: §2.
  • [19] E. Shelhamer, J. Long, and T. Darrell (2017-04) Fully Convolutional Networks for Semantic Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (4), pp. 640–651 (en). External Links: ISSN 0162-8828, 2160-9292, Link, Document Cited by: §3.3.
  • [20] R. Smith (2007-09) An Overview of the Tesseract OCR Engine. In Ninth International Conference on Document Analysis and Recognition (ICDAR 2007) Vol 2, Curitiba, Parana, Brazil, pp. 629–633 (en). External Links: ISBN 978-0-7695-2822-9, Document Cited by: §4.2.
  • [21] W. Van Eck (1985) Electromagnetic radiation from video display units: An eavesdropping risk?. Computers & Security 4 (4), pp. 269–286 (en). External Links: ISSN 01674048, Document Cited by: §1, §2.
  • [22] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.A. Manzagol (2010) Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion. Journal of Machine Learning Research 11, pp. 3371–3408 (en). Cited by: §2.
  • [23] M. Vuagnoux and S. Pasini (2009) Compromising Electromagnetic Emanations of Wired and Wireless Keyboards. Proceedings of the 18th USENIX Security Symposium, pp. 1–16 (fr). External Links: Link Cited by: §2.
  • [24] C. Zhang, Z. Fang, P. Zhou, P. Pan, and J. Cong (2016) Caffeine: towards uniformed representation and acceleration for deep convolutional neural networks. In Proceedings of the 35th International Conference on Computer-Aided Design - ICCAD ’16, Austin, Texas, pp. 1–8 (en). External Links: ISBN 978-1-4503-4466-1, Link, Document Cited by: §4.2.
  • [25] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang (2017-07) Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising. IEEE Transactions on Image Processing 26 (7), pp. 3142–3155 (en). External Links: ISSN 1057-7149, 1941-0042, Link, Document Cited by: §2.