CT-GAN: Malicious Tampering of 3D Medical Imagery using Deep Learning

In 2018, clinics and hospitals were hit with numerous attacks leading to significant data breaches and interruptions in medical services. An attacker with access to medical records can do much more than hold the data for ransom or sell it on the black market. In this paper, we show how an attacker can use deep learning to add or remove evidence of medical conditions from volumetric (3D) medical scans. An attacker may perform this act in order to stop a political candidate, sabotage research, commit insurance fraud, perform an act of terrorism, or even commit murder. We implement the attack using a 3D conditional GAN and show how the framework (CT-GAN) can be automated. Although the body is complex and 3D medical scans are very large, CT-GAN achieves realistic results and can be executed in milliseconds. To evaluate the attack, we focus on injecting and removing lung cancer from CT scans. We show how three expert radiologists and a state-of-the-art deep learning AI could not differentiate between tampered and non-tampered scans. We also evaluate state-of-the-art countermeasures and propose our own. Finally, we discuss the possible attack vectors on modern radiology networks and demonstrate one of the attack vectors on an active CT scanner.




1 Introduction

Medical imaging is the non-invasive process of producing internal visuals of a body for the purpose of medical examination, analysis, and treatment. In some cases, volumetric (3D) scans are required to diagnose certain conditions. The two most common techniques for producing detailed 3D medical imagery are Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). Both MRI and CT scanners are essential tools in the medical domain. In 2016, there were approximately 38 million MRI scans and 79 million CT scans performed in the United States [1].¹

¹245 CT scans and 118 MRI scans per 1,000 inhabitants.

MRI and CT scanners are similar in that they both create 3D images by taking many 2D scans of the body over the axial plane (from front to back) along the body. The difference between the two is that MRIs use powerful magnetic fields and CTs use X-Rays. As a result, the two modalities capture body tissues differently: MRIs are used to diagnose issues with bone, joint, ligament, cartilage, and herniated discs. CTs are used to diagnose cancer, heart disease, appendicitis, musculoskeletal disorders, trauma, and infectious diseases [2].

Today, CT and MRI scanners are managed through a picture archiving and communication system (PACS). A PACS is essentially an Ethernet-based network involving a central server which (1) receives scans from connected imaging devices, (2) stores the scans in a database for later retrieval, and (3) retrieves the scans for radiologists to analyze and annotate. The digital medical scans are saved through the system using the standardized DICOM format.²

²https://www.dicomstandard.org/about/
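As a concrete aside, DICOM files can be recognized by their standard on-disk layout: a 128-byte preamble followed by the magic bytes "DICM". A minimal stdlib sketch (the helper name is ours, not part of any DICOM library):

```python
import os
import tempfile

def looks_like_dicom(path):
    """Return True if the file has the standard DICOM header:
    a 128-byte preamble followed by the magic marker b'DICM'."""
    with open(path, "rb") as f:
        header = f.read(132)
    return len(header) == 132 and header[128:132] == b"DICM"

# Demo with a synthetic file: 128 filler bytes + magic marker.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\x00" * 128 + b"DICM")
print(looks_like_dicom(path))  # → True
os.remove(path)
```

A real parser would go on to read the File Meta Information group that follows the marker; this check is only the first gate.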

1.1 The Vulnerability

The security of health-care systems has been lagging behind modern standards [3, 4, 5]. This is partially because health-care security policies mostly address data privacy (access control) but not data security (vulnerabilities) [6]. Some PACS are intentionally or accidentally exposed to the Internet via web access solutions. Some example products include Centricity PACS (GE Healthcare), IntelliSpace (Philips), Synapse Mobility (FujiFilm), and PowerServer (RamSoft). A quick search on Shodan.io reveals 1,849 medical image (DICOM) servers and 842 PACS servers exposed to the Internet. Recently, a researcher at McAfee demonstrated how these web portals can be exploited to view and modify a patient's 3D DICOM imagery [7]. PACS which are not directly connected to the Internet are indirectly connected via the facility's internal network [8]. They are also vulnerable to social engineering attacks, physical access, and insiders [9].

Therefore, a motivated attacker will likely be able to access a target PACS and the medical imagery within it. Later in section 4 we will discuss the attack vectors in greater detail.

1.2 The Threat

An attacker with access to medical imagery can alter its contents to cause a misdiagnosis. Concretely, the attacker can add or remove evidence of a medical condition. Fig. 1 illustrates this process, where an attacker injects or removes lung cancer from a scan. Since 3D medical scans provide strong evidence of medical conditions, this power extends to conditions such as aneurysms, heart disease, blood clots, infections, arthritis, cartilage problems, torn ligaments or tendons, tumors in the brain, heart, or spine, and other cancers.

There are many reasons why an attacker would want to alter medical imagery. Consider the following scenario: an individual or state adversary wants to affect the outcome of an election. To do so, the attacker adds cancer to a CT scan performed on a political candidate (the appointment/referral can be pre-existing or set up via social engineering). After learning of the cancer, the candidate steps down from his or her position. The same scenario can be applied to existing leadership.

Figure 1: By tampering with the medical imagery between the investigation and diagnosis stages, both the radiologist and the reporting physician believe the fallacy set by the attacker.

Another scenario to consider is that of ransomware: An attacker seeks out monetary gain by holding the integrity of the medical imagery hostage. The attacker achieves this by altering a few scans and then by demanding payment for revealing which scans have been affected.

Furthermore, consider the case of insurance fraud: somebody alters his own medical records in order to receive money from his insurance company. In this case, there is no risk of physical injury, and the payout can be very large.

There are many more reasons why an attacker would want to tamper with the imagery. For example: falsifying research evidence, sabotaging another company’s research, job theft, terrorism, assassination, and even murder. Table 1 summarizes attacker’s motivations, goals, and effects by injecting or removing lung cancer from a CT scan.

In this paper we will focus on the injection and removal of lung cancer from CT scans. We investigate this attack because lung cancer is common and has the highest mortality rate [10]. Therefore, due to its impact, an attacker is likely to manipulate lung cancer to achieve his or her goal. We note that the threat, attack, and countermeasures proposed in this paper also apply to MRIs and all of the medical conditions listed above.

Table 1: Summary of an Attacker’s Motivation and Goals for Injecting or Removing Lung Cancer

1.3 The Attack

With the help of machine learning, the domain of image generation has advanced significantly over the last ten years [survey]. In 2014, there was a breakthrough in the domain when Goodfellow et al. introduced a special kind of deep neural network called a generative adversarial network (GAN). GANs consist of two neural networks which work against each other: the generator and the discriminator. The generator creates fake samples with the aim of fooling the discriminator, and the discriminator learns to differentiate between real and fake samples. When applied to images, the result of this game helps the generator create fake imagery which is photorealistic. While GANs have been used for positive tasks, researchers have also shown how they can be used for malicious tasks such as malware obfuscation [12, 13] and misinformation (e.g., deepfakes [14]).

In this paper, we show how an attacker can realistically inject and remove lung cancer from 3D CT scans. The framework, called CT-GAN, uses two conditional GANs (cGANs) to perform in-painting (image completion) [15] on 3D imagery. For cancer injection, we trained a cGAN on cancerous lung nodules so that the generator will always complete the images with cancer. Conversely, for cancer removal, we trained another cGAN on benign lung nodules only.

To make the process efficient and the output anatomically realistic, we perform the following steps: (1) locate where the cancer should be injected/removed, (2) cut out a rectangular cuboid from that location, (3) interpolate (scale) the cuboid, (4) modify the cuboid with the GAN, (5) rescale it, and (6) paste it back into the original scan. By dealing with a small portion of the scan, the problem complexity is reduced: the GAN focuses on the relevant area of the body (as opposed to the entire CT), and the algorithm processes far fewer inputs (pixels) and concepts (anatomical features).³ This results in fast execution and high anatomical realism. The interpolation step is necessary because the scan's scale and resolution settings are adjusted for each patient. To compensate for the resulting interpolation blur, we mask the relevant content according to water density in the tissue (Hounsfield units) and hide the smoothness by adding Gaussian white noise. To assist the GAN in generating realistic features, histogram equalization is performed on the input samples. We found that this transformation helps the 3D convolutional neural networks in the GAN learn to generate the subtle features found in the human body. Furthermore, the entire process can be automated by having the algorithm choose a candidate injection/removal location using an existing nodule detection algorithm. This means that the attack can be deployed in an air-gapped PACS.

³A 3D CT scan can have over 157 million pixels, whereas the latest advances in GANs can only handle about 2 million pixels (HD images).

To verify the threat of this attack, we hired three radiologists who specialize in lung cancer to diagnose a mix of 50 tampered and 50 authentic CT scans. The radiologists diagnosed the injected cancers and missed the removed cancers. In addition to the radiologists, we also show that the proposed technique is an effective adversarial machine learning attack. We accomplish this by evaluating a state-of-the-art deep learning-based lung cancer screening algorithm on the same dataset. We found that the cancer screening tools used by some radiologists are also vulnerable to this attack.

1.4 The Contribution

To the best of our knowledge, it has not been shown how an attacker can maliciously alter the content of a medical image in a realistic and automated way. Therefore, this is the first comprehensive research which exposes, demonstrates, and verifies the threat of an attacker manipulating 3D medical imagery. In summary, the contributions of this paper are as follows:

The Attack Model

We are the first to present how an attacker can infiltrate a PACS network and then use malware to automatically tamper with 3D medical imagery. We also provide a systematic overview of the attack: the vulnerabilities, attack vectors, motivations, and attack goals. Finally, we demonstrate one possible attack vector by covertly connecting a MitM device (Raspberry Pi) to an actual CT scanner.

Attack Implementation

We are the first to demonstrate how GANs, with the proper preprocessing, can be used to efficiently and realistically inject/remove lung cancer into/from large 3D CT scans. We also evaluate how well the algorithm can deceive both humans and machines: expert radiologists and state-of-the-art deep learning-based solutions. We also show how this implementation may be used by an attacker, since it can be automated (in the case of an air-gapped system) and fast (in the case of an infected DICOM viewer).


Countermeasures

We enumerate various countermeasures which can be used to mitigate the threat. We also evaluate state-of-the-art tamper detection and propose a novel method for detecting the attack. We also provide the reader with best practices and configurations which can be implemented immediately to help prevent this attack.

For reproducibility and further investigation, we have published our models, datasets, and source code online.⁴

⁴[redacted for final print]

The remainder of the paper is organized as follows: In section 3 we review related works and contrast them with our own. In section 4 we present the attack model and demonstrate one of the attack vectors. In section 5 we present the GAN architecture used and the attack process, and provide samples of the network's 3D generated images. In section 6 we evaluate the attack.

2 Background: GANs

The most basic GAN consists of two neural networks: the generator (G) and the discriminator (D). The objective of the GAN is to generate new images which are visually similar to real images in a sample data distribution X (i.e., a set of images). The input to G is a random noise vector z drawn from a prior distribution p_z (e.g., a Gaussian distribution). The output of G, denoted x_g = G(z), is an image which is expected to have visual similarity with those in X. Let the non-linear function learned by G, parametrized by θ_g, be denoted G(z; θ_g). The input to D is either a real image x_r ∈ X or a generated image x_g. The output of D is the probability that the input is real or fake. Let the non-linear function learned by D, parametrized by θ_d, be denoted D(x; θ_d). The top of Fig. 2 illustrates the configuration of a classic GAN.

Training a GAN is performed over multiple iterations of the following procedure:

Training Procedure for GANs. Repeat for t training iterations:

  1. Train D:

    a. Pull a random batch of real samples x_r ∈ X, forward propagate the samples through D, compute the error given the label y = 1 (real), and back propagate the error through D to update θ_d (using gradient descent or some variant).

    b. Pull a random batch of generated samples x_g = G(z), forward propagate the samples through D, compute the error given the label y = 0 (fake), and back propagate the error through D to update θ_d.

  2. Train G. Using a random batch of noise vectors z:

    a. Forward propagate z through G and D, compute the error at the output of D given the label y = 1 (real), back propagate the error through D to G without updating θ_d, and continue the back propagation through G while updating θ_g.

It can be seen that G and D are playing a zero-sum game, where G is trying to produce better (more realistic) samples to fool D, while D is learning to catch every fake sample generated by G. At the end of training, D is discarded and G is used to generate new samples.
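As a concrete toy illustration of this adversarial loop, the sketch below trains a linear generator G(z) = a*z + b against a logistic discriminator D(x) = sigmoid(w*x + c) on scalar data drawn from N(4, 1). The model choices, learning rate, and the non-saturating generator loss are our own simplifications for exposition, not the networks used by CT-GAN.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Toy data distribution X: scalars from N(4, 1).
a, b = 1.0, 0.0          # theta_g: generator G(z) = a*z + b
w, c = 0.1, 0.0          # theta_d: discriminator D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(2000):
    # --- Train D: a real batch (label 1), then a fake batch (label 0) ---
    x_r = rng.normal(4, 1, batch)
    x_g = a * rng.normal(0, 1, batch) + b
    for x, label in ((x_r, 1.0), (x_g, 0.0)):
        p = sigmoid(w * x + c)
        g_logit = p - label                 # d(BCE)/d(logit)
        w -= lr * np.mean(g_logit * x)
        c -= lr * np.mean(g_logit)
    # --- Train G: push D(G(z)) toward label 1, freezing theta_d ---
    z = rng.normal(0, 1, batch)
    x_g = a * z + b
    g_logit = sigmoid(w * x_g + c) - 1.0    # error at D's output, label 1
    a -= lr * np.mean(g_logit * w * z)      # back-prop through D into G
    b -= lr * np.mean(g_logit * w)

print(round(b, 1))  # G's offset should drift toward the data mean (~4)
```

The two inner updates mirror steps 1a/1b and 2a of the procedure above: D's parameters move on both real and fake batches, while G's update propagates the error through a frozen D.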

A cGAN is a GAN which has its generator and discriminator conditioned on an additional input (e.g., class labels). This input acts as an extension to the latent space and can help generate and discriminate images better. In [15], the authors propose an image-to-image translation framework using cGANs (a.k.a. pix2pix). There, the authors showed how deep convolutional cGANs can be used to translate images from one domain to another; for example, converting casual photos to Van Gogh paintings.

One application of the pix2pix framework is in-painting: the process of completing a missing part of an image. When using pix2pix for in-painting, the generator tries to fill in the missing part of an image based on the surrounding context and its past experience (other images seen during training), while the discriminator tries to differentiate between completed images and original images, given the surrounding context. Concretely, the input to G is a copy of x_r in which the missing regions have been replaced with zeros. We denote this masked input as x_r*. The output of G is the completed image x_g, visually similar to those in X. The input to D is either the concatenation (x_r*, x_r) or (x_r*, x_g). The bottom of Fig. 2 illustrates this cGAN as used in this paper for in-painting. The process for training this kind of GAN is as follows:

Figure 2: A schematic view of a classic GAN (top) and a cGAN setup for in-painting.

Training Procedure for cGAN In-painting. Repeat for t training iterations:

  1. Pull a random batch of samples x_r ∈ X, and mask the samples with zeros to produce the respective x_r*.

  2. Train D:

    a. Forward propagate (x_r*, x_r) through D, compute the error given the label y = 1 (original), and back propagate the error through D to update θ_d.

    b. Forward propagate (x_r*, G(x_r*)) through D, compute the error given the label y = 0 (completed), and back propagate the error through D to update θ_d.

  3. Train G:

    a. Forward propagate x_r* through G and then (x_r*, G(x_r*)) through D, compute the error at the output of D given the label y = 1 (original), back propagate the error through D to G without updating θ_d, and continue the back propagation through G while updating θ_g.

Although pix2pix does not use a latent random input z, it avoids deterministic outputs by performing random dropout in the generator during training. This forces the network to learn multiple representations of the data.
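The assembly of the in-painting inputs described above can be sketched in a few lines of NumPy. The mask location and fraction, the image size, and the identity stand-in for G's completion are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def mask_center(x, frac=0.5):
    """Produce x_r*: copy x_r and zero out a central square region."""
    x_star = x.copy()
    h, w = x.shape
    dh, dw = int(h * frac), int(w * frac)
    x_star[(h - dh) // 2:(h + dh) // 2, (w - dw) // 2:(w + dw) // 2] = 0.0
    return x_star

x_r = rng.random((32, 32))          # a real sample from X
x_star = mask_center(x_r)           # masked input fed to G
x_g = x_star.copy()                 # stand-in for G's completed output

# D receives the masked context concatenated (channel-wise) with either
# the real image (label y = 1) or the completed image (label y = 0).
d_real = np.stack([x_star, x_r])
d_fake = np.stack([x_star, x_g])
print(d_real.shape)  # (2, 32, 32)
```

In the paper's 3D setting the same construction applies, with cuboids in place of 2D images.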

We note that there is a GAN called CycleGAN [16] that can directly translate images between two domains (e.g., benign to malignant). However, we found that the CycleGAN was unable to inject realistic cancer into 3D samples. Therefore, we opted for the pix2pix model with in-painting, which produced much better results.

3 Related Work

The concept of tampering with imagery, and the use of GANs on medical imagery, is not new. In this section we briefly review these subjects and compare them to our work.

3.1 Tampering Medical Images

Many works have proposed methods for detecting forgeries in medical images [17], but none have focused on the attack itself. The most common methods of image forgery are: copying content from one image to another (image splicing), duplicating content within the same image to cover up or add something (copy-move), and enhancing an image to give it a different feel (image retouching) [18].

Copy-move attacks can be used to remove cancer or duplicate an existing cancer. However, duplicating an existing cancer will raise suspicion because radiologists closely analyze each sample. Image-splicing can be used to inject cancer into healthy lungs. However, CT scanners have distinct local noise patterns which are visually noticeable [19, 20]. The copied patterns would not fit the local pattern and thus raise suspicion.

Both copy-move and image-splicing techniques are performed using 2D image editing software such as Photoshop. These tools require a digital artist to alter the contents of the scan. Furthermore, even with a digital artist, it is hard to inject and remove cancer realistically. This is because human bodies are complex and diverse. For example, cancers are usually attached to nearby anatomy (lung walls, bronchi, etc.), which may be hard to alter accurately under the scrutiny of expert radiologists. This is especially true in the case of 3D imagery such as CT scans. Furthermore, an attacker would likely need to automate the entire process as a malware, since (1) many PACS are not directly connected to the Internet and (2) the diagnosis may occur immediately after the scan is performed.

In contrast to the photoshopping approach, our method (1) works on 3D medical imagery, which provides more definitive evidence than a 2D scan, (2) realistically alters the contents of a 3D scan while considering nearby anatomy, and (3) is completely autonomous.

3.2 GANs in Medical Imagery

Since 2016, over 100 papers relating to GANs and medical imaging have been published [21]. These publications mostly relate to image reconstruction, denoising, image generation (synthesis), segmentation, detection, classification, and registration. We will focus on the use of GANs to generate medical images.

Due to privacy laws, it is hard to acquire medical scans for training models and students. As a result, the main focus of GANs in this domain has been towards augmenting (increasing) datasets. One approach is to convert imagery from one modality to another. For example, in [22] the authors used cGANs to convert 2D slices of CT images to Positron Emission Tomography (PET) images. In [23, 24] the authors demonstrated a similar concept using a fully convolutional network combined with a cGAN architecture. In [25], the authors converted MRI images to CT images using domain adaptation. In [26], the authors converted MRI to CT images and vice versa using a CycleGAN.

Another approach to augmenting medical datasets is the generation of new instances. In [27], the authors use a DCGAN to generate 2D brain MRI images with a resolution of 220x172. In [28], the authors used a DCGAN to generate 2D liver lesions with a resolution of 64x64. In [29], the authors generated 3D blood vessels using a Wasserstein GAN (WGAN) [30]. In [31], the authors use a Laplacian pyramid GAN (LAPGAN) to generate skin lesion images with 256x256 resolution. In [32], the authors train two DCGANs for generating 2D chest X-rays (one for malign and the other for benign). However, the generated samples were downsampled to 128x128 in resolution, since this approach could not be scaled to the original resolution of 2000x3000. In [33], the authors generated 2D images of pulmonary lung nodules (lung cancer) with 56x56 resolution. The authors' motivation was to create realistic datasets for doctors to practice on. The samples were generated using a deep convolutional GAN (DCGAN), and their realism was assessed with the help of two radiologists. The authors found that the radiologists were unable to differentiate between real and fake samples.

These works contrast with our work in the following ways:

  1. We are the first to introduce the use of GANs as a way to tamper with 3D imagery. The other works focus on synthesizing cancer samples for boosting classifiers experiments and training students, but not for malicious attacks. We also provide an overview of how the attack can be accomplished in a modern medical system.

  2. All of the above works either generate small regions of a scan without the context of a surrounding body, or generate full 2D scans in very low resolution. Samples which are generated without context cannot be realistically 'pasted' back into an arbitrary medical scan; we generate/remove regions realistically within an existing body. Moreover, very low resolution images of full scans cannot replace existing ones without raising suspicion (especially if the body doesn't match the actual person). Our approach can modify full resolution 3D scans⁵ (and can be generalized to 2D as well).

⁵The CT scans we used had a standard full resolution: 512x512x700.

  3. In the context of a full 3D scan, we are the first to evaluate how well a GAN can fool expert radiologists and state-of-the-art AI in lung cancer screening. Moreover, the radiologists and the AI could scrutinize how well the cancer was attached to the surrounding anatomy.

4 The Attack Model

In this section we explore the attack surface by first presenting the network topology and then by discussing the possible vulnerabilities and attack vectors. We also demonstrate one of the attack vectors on an actual CT scanner.

Figure 3: A network overview of a PACS in a hospital. 1-3: points where an attacker can tamper with all scans. 4-5: points where an attacker can tamper with a subset of scans.

4.1 Network Topology

In order to discuss the attack vectors, we must first present the PACS network topology. Fig. 3 presents the network configuration of PACS found in hospitals. The topology is based on PACS literature [34, 35, 36, 8], PACS enterprise solutions (e.g., Carestream), and our own surveys conducted on local hospitals. We note that private medical clinics may have much simpler topologies and are sometimes directly connected to the Internet [7].

The basic elements are as follows:

PACS Server.

The heart of the PACS system. It is responsible for storing, organizing, and retrieving DICOM imagery and reports. PACS servers commonly use SQL for querying the stored DICOM files. Although the majority of facilities use local systems, a few hospitals have transitioned to cloud storage [37].

RIS Server.

The radiology information system (RIS) is responsible for managing medical imagery and its associated data. Its primary use is for tracking radiology imaging orders and the reports of the radiologists. Doctors in the hospital's internal network interface with the RIS to order scans and receive the resulting reports and DICOM scans. We note that not all hospitals employ a RIS, but nearly all hospitals use a PACS storage server [38].

Modality Workstation.

A PC (typically Windows) which is used to control an imaging modality such as a CT scanner. During an appointment, the attending technician configures and captures the imagery via the workstation. The workstation sends all imagery in DICOM format to the PACS server for storage.

Radiologist Workstation.

A radiologist can retrieve and view contents from the PACS from various locations. The most common location is a viewing workstation within the department. Other locations include the radiologist’s personal PC (local or remote via VPN), and sometimes on a mobile device (via the Internet or within the local network).

Web Server.

An optional feature which enables radiologists to view DICOM scans (in the PACS) over the Internet. The content may be viewed through a web browser (e.g., medDream and Orthanc [39]), an app on a mobile device (e.g., FujiFilm's Synapse Mobility), or accessed via an API (e.g., Dicoogle [40]).

Secretary’s PC.

This workstation has both Internet access (e.g., for emails) and access to the PACS network. Access to the PACS is enabled so that the secretary can maintain the devices’ schedules: When a patient arrives at the imaging modality, for safety reasons, the technician confirms the patient’s identity with the details sent to the modality’s workstation (entered by the secretary). This ensures that the scans are not accidentally mixed up between the patients.

Other Connected Elements.

Other departments within the hospital usually have access to the PACS network. For example, Oncology, Cardiology, Pathology, and OR/Surgery. In these cases, various workstations around the hospital can load DICOM files from the server given the right credentials. Furthermore, it is common for a hospital to deploy Wi-Fi access points, which are connected to the internal network, for employee access.

4.2 Attack Scenario

The attack scenario is as follows: An attacker wants to achieve one of the goals listed in Table 1. In order to cause the target effect, the attacker will alter the contents of the target CT scan(s) before the radiologist performs his or her diagnosis. The attacker will achieve this by either targeting the data at rest or in motion.

Data at Rest.

This refers to the DICOM files stored on the PACS server, or on the radiologist's personal computer (saved for later viewing). In some cases, DICOM files are stored on DVDs and transferred to the hospital by the patient or an external doctor. Although the DVD may be swapped by the attacker, it is more likely that the interaction will be virtual.

Data in Motion.

This refers to DICOM files being transferred across the network, or loaded into volatile memory by an application.

Once the data has been altered, the radiologist will perform the diagnosis and unwittingly write a false report. Finally, the false report will ultimately cause a physical, mental, or monetary effect on the patient, thus achieving the attacker’s goal.

We note that this scenario does not apply to the case where the goal is to falsify or sabotage research. Moreover, for insurance fraud, an attacker will have a much easier time targeting a small medical clinic. For simplicity, we will assume the target PACS is in a hospital.

4.3 Target Assets

To capture or modify a medical scan, an attacker must compromise at least one of the assets numbered in Fig. 3. By compromising one of the assets (1-4), the attacker gains access to every scan. By compromising (5) or (6), the attacker only gains access to a subset of scans. The RIS (3) can give the attacker full control over the PACS server (2), but only if the attacker can obtain the right credentials or exploit the RIS software. The network wiring between the modalities and the PACS server (4) can be used to install a man-in-the-middle device. This device can modify data in motion if that data is not encrypted (or if the protocol is flawed).

In all cases, it is most likely that the attacker will infect the target asset with a custom malware, outlined in Fig. 4. The tampered image can be validated using an open source and state-of-the-art lung cancer screening algorithm (see the 2017 Kaggle competition [41]). However, these algorithms can take up to a few minutes to process an entire scan. Therefore, the validation step is only applicable in cases where the attack is performed on data at rest (the PACS server) or off-site. In our evaluation, we found it was not necessary to perform this validation step to fool expert radiologists.

Figure 4: The tampering process of an autonomous malware.

4.4 Attack Vectors

There are many ways in which an attacker can reach the assets marked in Fig. 3. In general, the attack vectors involve either remote or local infiltration of the facility’s network.

Remote Infiltration. If the attacker is located on the Internet, he or she may be able to exploit vulnerabilities in Internet-facing devices in the PACS network to gain direct access (e.g., [7]). Another option is to perform social engineering attacks; for example, a spear phishing attack on the department's secretary to infect his/her workstation with a backdoor, or on the scanner's technician to have him install fraudulent updates.

If the PACS is not connected to the Internet, then the attacker may try to gain access to the hospital's internal network and then perform lateral movement to the PACS. The internal network is typically connected to the Internet (evident from the recent wave of cyber attacks on medical facilities [42, 43, 44, 45]). Moreover, the PACS is usually connected to the internal network in order for doctors to view scans/reports, and for the department's secretaries to manage patient referrals [8]. Another way for the attacker to gain access to the internal network is by compromising remote sites linked to the hospital, such as a partnered hospital or clinic. The attacker can also try to infect a doctor's laptop or phone with malware which will open a back door for the attacker.

If the attacker knows that the radiologist analyzes scans on his personal computer, then the attacker can infect the radiologist's device or DICOM viewer with the malware.

Local Infiltration. The attacker can gain physical access to the premises under a false pretext, such as being a technician from Philips who needs to run a diagnostic on the CT scanner. The attacker may also hire an insider or even be an insider. A recent report shows that a significant share of cyber attacks on the healthcare industry come from internal threats [9].

Once inside, the attacker can plant the malware or a back door by connecting a device to exposed network infrastructure (ports, wires, …) [46] or by accessing unlocked workstations. Without entering any restricted areas, the attacker can gain access to the internal network by hacking Wi-Fi access points using existing vulnerabilities such as 'KRACK' [47] or the more recent 'BleedingBit' vulnerabilities, which have affected many hospitals [48].

Compromising the PACS. Once access to the PACS has been achieved, there are numerous ways an attacker can compromise a target asset. Aside from exploiting misconfigurations or default credentials, the attacker can install the malware by exploiting software vulnerabilities. For example, some PACS servers disclose private information/credentials, can be exploited remotely to create admin accounts, and ship with hard-coded credentials (CVE-2017-14008 and CVE-2018-17906). A quick search on exploit-db.com reveals seven implemented exploits for PACS servers in 2018 alone.

Recently, modality workstations have been found to have significant vulnerabilities [49]. For example, in 2018 the US Department of Homeland Security exposed 'low skill' vulnerabilities in Philips' Brilliance CT scanners [50], including improper authentication, OS command injection, and hard-coded credentials (CVE-2018-8853, CVE-2018-8857, and CVE-2018-8861). Other recently disclosed vulnerabilities also involve hard-coded credentials (CVE-2017-9656).

Given the state of health-care security, and that systems such as CT scanners are rarely given software updates [51], it is likely that many more vulnerabilities exist. Once the target asset in the PACS has been compromised, the attacker will be able to install the malware and manipulate the scans of the target patient(s).

4.5 Example Proof of Concept

[redacted for final print]

Figure 5: The network architecture, layers, and parameters used for both the injection and removal networks.
Figure 6: Predicting cancer from the condition after 150 epochs of training. Only the middle slice is shown.

5 The Attack

In this section, we present the technique and models which an attacker can use to manipulate cancer in the obtained CT scans. First we present the CT-GAN architecture and how to train it. Then we describe the entire tampering process and present some sample results.

It is important to note that there are many types of lung cancer. A common type forms a round mass of tissue called a solitary pulmonary nodule. Most nodules with a diameter less than 8mm are benign. However, larger nodules may indicate a malignant growth. Moreover, if numerous nodules larger than 8mm in diameter are found, then the patient has an increased risk of primary cancer [52]. For this attack, we focus on injecting and removing multiple solitary pulmonary nodules.

5.1 CT-GAN: The Neural Architecture

A single slice in a CT scan has a resolution of 512x512 pixels. Each pixel in a slice measures the radiodensity at that location in Hounsfield units (HU). The CT scan of a human's lungs can have over 157 million voxels (512x512x600); a voxel is the three-dimensional equivalent of a pixel. In order to train a GAN on an image of this size, we first locate a candidate location (voxel) and then cut out a small region around it (cuboid) for processing. The selected region is slightly larger than needed in order to provide the cGAN with context of the surrounding anatomy. This enables the cGAN to generate/remove lung cancers which connect to the body in a realistic manner.
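The cut-out of a context region around a candidate voxel might look like the following sketch; the 32³ cuboid size, the helper name, and the scaled-down stand-in volume are illustrative assumptions, not the paper's exact dimensions:

```python
import numpy as np

def cut_cuboid(scan, center, shape=(32, 32, 32)):
    """Cut a context region around a candidate voxel (z, y, x). The
    region is slightly larger than the nodule so the cGAN sees the
    surrounding anatomy; 32^3 is an assumed size, not the paper's."""
    lo = [c - s // 2 for c, s in zip(center, shape)]
    hi = [l + s for l, s in zip(lo, shape)]
    # Clamp to the scan volume so slicing never leaves the array.
    lo = [max(0, l) for l in lo]
    hi = [min(d, h) for d, h in zip(scan.shape, hi)]
    return scan[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]].copy()

# A scaled-down synthetic stand-in for a full 512x512x600 scan.
scan = np.zeros((120, 128, 128), dtype=np.int16)
cuboid = cut_cuboid(scan, center=(60, 64, 64))
print(cuboid.shape)  # (32, 32, 32)
```

Working on such a cuboid rather than the full 157-million-voxel volume is what makes training and inference tractable.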

To accurately capture the concepts of injection and removal, we use a framework consisting of two GANs: one for injecting cancer and one for removing it. To inject a large pulmonary nodule into a CT scan, we train a deep 3D convolutional cGAN to perform in-painting on small cuboid samples cut from the scan. For the completion mask, we zero-out a cube in the center of each sample. Finally, we train the GAN on cancer samples which have a diameter of at least 10mm. As a result, the trained generator will always complete (inject) cuboids taken from CT scans with similar nodules. A separate cGAN is trained for cancer removal using the same architecture, but with samples containing only benign micro nodules.

The model architecture (layers and configurations) used for both networks is illustrated in Fig. 5, together with their trainable parameter counts.

We note that follow-up CT scans are usually ordered when a large nodule is found. This is because nodule growth is a strong indicator of cancer [52]. We were able to simulate this growth by conditioning each cancerous training sample on the nodule's diameter. However, our objective is to show how GANs can produce realistic cancer. Therefore, for the sake of simplicity, we have omitted this 'feature' from the above model.

Figure 7: Top: the complete cancer injection/removal process. Bottom: sample images from the injection process. The grey numbers indicate from which step the image was taken. The sample 2D images are the middle slice of the respective 3D cuboid.

5.2 Training CT-GAN

To train the GANs, we used a free dataset of 888 CT scans collected in the LIDC-IDRI lung cancer screening trial [53]. The dataset came with annotations from radiologists: the locations and diameters of pulmonary nodules. In total, 1186 nodules were listed in the annotations.

To create the training set for injection, we extracted from the CT scans all nodules with a diameter between 10mm and 16mm (169 in total). To increase the number of training samples, we performed data augmentation: for each of the 169 cuboid samples, we (1) flipped the cuboid on each of its three planes, (2) shifted the cuboid by 4 pixels in each direction on the xy plane, and (3) rotated the cuboid 360 degrees at 6-degree intervals. This produced an additional 66 instances for each sample. The final training set had 11,323 training samples.
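The augmentation described above can be sketched as follows. This is a simplified illustration: `np.roll` wraps shifted voxels around rather than padding, and the exact flip/shift conventions are our assumptions; the counts (3 flips + 4 shifts + 59 rotations = 66) match the text.

```python
import numpy as np
from scipy import ndimage

def augment(cuboid):
    """Return augmented copies of a (z, y, x) cuboid: 3 flips, 4 in-plane
    shifts, and 59 in-plane rotations = 66 extra instances per sample."""
    out = []
    for axis in range(3):                              # flip each plane
        out.append(np.flip(cuboid, axis=axis))
    for dy, dx in [(4, 0), (-4, 0), (0, 4), (0, -4)]:  # 4-pixel xy shifts
        # np.roll wraps voxels around; the paper's shift handling may differ.
        out.append(np.roll(cuboid, shift=(dy, dx), axis=(1, 2)))
    for deg in range(6, 360, 6):                       # 6-degree steps
        out.append(ndimage.rotate(cuboid, deg, axes=(1, 2),
                                  reshape=False, order=1))
    return out

sample = np.random.rand(8, 8, 8)
print(len(augment(sample)))  # 66
```

With 66 augmented copies plus the original, 169 samples yield 169 × 67 = 11,323 training samples, as stated above.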

To create the training set for removal, we first selected clean CT scans in which the radiologists had detected no nodules. On these scans we used the nodule detection algorithm from [54] (also provided in the dataset's annotations) to find benign micro nodules. Of the detected micro nodules, we selected 867 at random and performed the same data augmentation as above. The final training set had 58,089 samples.

Prior to training the GANs, all of the samples were preprocessed with scaling, equalization, and normalization (described in detail in the next section). Both of the GANs were trained on their respective datasets for 200 epochs with a batch size of 50 samples. Each GAN took 26 hours to complete its training on an NVIDIA GeForce GTX TITAN X using all of the GPU's memory. Fig. 6 shows how well the injection network was able to predict cancer patterns after 200 epochs.

5.3 Execution: The Tampering Process

In order to inject/remove lung cancer, pre/post-processing steps are required. The following describes the entire injection/removal process as illustrated in Fig. 7:

  1. Capture Data. The CT scan is captured (as data at rest or in motion) in either raw or DICOM format using one of the attack vectors from Section 4.

  2. Localize & Cut. A candidate location is selected where cancer will be injected/removed, and then the cuboid is cut out around it.

    • Injection: An injection location can be selected in one of two ways. The fastest way is to take one of the middle slices of the CT scan and select a random location near the middle of the left or right half (see Fig. 9 in the appendix). Across CT scans, this strategy gave us a high success rate. A more precise way is to execute an existing nodule detection algorithm to find a random micro nodule. To improve speed, the algorithm can be given only a few slices and implemented with early stopping. In our evaluation, we used the algorithm in [54], though many other options are available.

    • Removal: A removal location can be selected by using [54] and then selecting the largest nodule, or by using a pre-trained state-of-the-art deep learning model (pre-trained models are available online).

  3. Scale. The cuboid is scaled to the original 1:1:1 ratio using 3D spline interpolation (in Python: scipy.ndimage.interpolation.zoom). The ratio information is available in the DICOM metadata under the tags (0x0028,0x0030) and (0x0018,0x0050). Scaling is necessary because the sampling ratio can differ between scans, and the GAN needs a consistent unit of reference to produce accurate results. To minimize the computations, the cuboid in step 2 is cut with the exact dimensions such that the rescaling process produces a cube.

  4. Equalize & Normalize. Histogram equalization is applied to the cube to increase contrast. This is a critical step since it enables the GAN to learn subtle features in the anatomy (see Fig. 8 in the appendix for a visual example). Normalization is then applied so that all values fall within a fixed range, which helps the GAN learn the features better. The output of this process is the preprocessed cube.

  5. Mask. A cube in the center of the preprocessed sample is masked with zeros to form the generator's input.

  6. Inject/Remove. The masked sample is passed through the chosen generator (injection or removal), creating a new sample with new 3D generated content in place of the mask.

  7. Reverse Preprocessing. The generated sample is unnormalized, unequalized, and then rescaled with spline interpolation back to its original proportions.

  8. Touch-up. The result of the interpolation usually blurs the imagery. In order to hide this artifact from the radiologists, we added Gaussian noise to the sample: the mean was set to zero and the standard deviation to the sampled standard deviation of the background values in the generated cuboid. To get a clean sample of the noise, we only measured voxels with low (air-range) HU values. Moreover, to copy the relevant content into the scan, we merged the original cuboid with the generated one using a sigmoid weighted average. Each voxel's weight is a sigmoid of its radiodensity, where one parameter sets the HU threshold between wanted and unwanted tissue densities and another controls the smoothness of the cut edges. The weights are further multiplied by a 0-1 normalized Gaussian kernel with the dimensions of the cuboid, which decays the contribution of each voxel the further it is from the cuboid's center. The merging function then takes the weighted average of the source (generated) and destination (original) cuboids. By applying these touch-ups, the final cuboid is produced.

  9. Paste. The final cuboid is pasted back into the CT scan at the selected location.

  10. Repeat. If the attacker is removing cancer, then return to step 2 until no more significant nodules are found. If the attacker is injecting cancer, then (optionally) return to step 2 until four injections have been performed. The reason is that the risk of a patient being diagnosed with cancer is statistically greater in the presence of exactly four solitary pulmonary nodules larger than 8mm in diameter [52].

  11. Return Data. The scan is converted back into the original format (e.g., DICOM) and returned to the source.
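The scaling, normalization, and sigmoid-merge steps above can be sketched as follows. This is a simplified illustration: the min-max form and range of the normalization, the parameter values `beta` and `s`, and the omission of histogram equalization and the Gaussian-kernel decay are our assumptions/simplifications, not the paper's exact method.

```python
import numpy as np
from scipy import ndimage

def scale_to_unit_ratio(cuboid, spacing):
    """Rescale a (z, y, x) cuboid to a 1:1:1 ratio via 3D spline
    interpolation; `spacing` holds the slice thickness and pixel spacing
    read in practice from DICOM tags (0x0018,0x0050) and (0x0028,0x0030)."""
    return ndimage.zoom(cuboid, zoom=spacing, order=3)

def normalize(cuboid):
    """Map values into [-1, 1] with min-max scaling. The exact range and
    formula are our assumption; the paper only states that normalization
    maps all values into a fixed range for the GAN."""
    lo, hi = cuboid.min(), cuboid.max()
    return 2.0 * (cuboid - lo) / (hi - lo) - 1.0

def sigmoid_merge(original, generated, beta=-300.0, s=50.0):
    """Sigmoid-weighted average of the original and generated cuboids.
    `beta` (HU threshold between wanted/unwanted densities) and `s`
    (edge smoothness) are illustrative values, not the paper's; the
    Gaussian-kernel decay toward the cuboid edges is omitted here."""
    w = 1.0 / (1.0 + np.exp(-(generated - beta) / s))
    return w * generated + (1.0 - w) * original

cuboid = np.random.randint(-1000, 400, size=(16, 32, 32)).astype(float)
cube = scale_to_unit_ratio(cuboid, spacing=(2.0, 1.0, 1.0))
norm = normalize(cube)
merged = sigmoid_merge(cube, cube)
print(cube.shape, merged.shape, float(norm.min()), float(norm.max()))
```

Cutting the cuboid so that rescaling yields an exact cube, as noted in the scale step, avoids a second interpolation pass.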

The quality of the process can be viewed in the accompanying figures: the first presents a variety of examples before and after tampering, the second shows all of the slices of a scan before and after executing the injection network, and the third provides a 3D visualization of a cancer being injected and removed.

6 Evaluation

In this section we present our evaluation on how well the CT-GAN attack can fool expert radiologists and state-of-the-art AI.

[redacted for final print]

7 Countermeasures

The tampering of DICOM medical files is a well-known concern. One solution is to enable encryption between the hosts in the PACS network. However, even if administrators verify that proper encryption is enabled along every path in their PACS, the data is still susceptible to tampering after decryption: for example, within the modality workstation, the radiologists' workstation, or the PACS server itself. Fortunately, the DICOM standard has a field for applying a digital signature. Therefore, one should ensure that (1) all imaging devices and DICOM gateways are configured to sign files with proper signatures, and (2) the end devices correctly verify those signatures.
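As a minimal illustration of end-to-end integrity verification (not the DICOM Digital Signatures profile itself, which uses asymmetric signatures over selected data elements), a keyed hash over the pixel data would flag any post-decryption tampering; the key name and data are placeholders:

```python
import hashlib
import hmac

def sign_pixels(pixel_bytes: bytes, key: bytes) -> str:
    """Compute an integrity tag over a scan's pixel data. A real
    deployment would use the DICOM Digital Signatures profile;
    HMAC-SHA256 here only illustrates end-to-end verification."""
    return hmac.new(key, pixel_bytes, hashlib.sha256).hexdigest()

def verify_pixels(pixel_bytes: bytes, key: bytes, tag: str) -> bool:
    # compare_digest avoids timing side channels during verification.
    return hmac.compare_digest(sign_pixels(pixel_bytes, key), tag)

key = b"key-provisioned-to-the-modality"   # hypothetical shared secret
scan = bytes(range(256)) * 4               # stand-in for DICOM pixel data
tag = sign_pixels(scan, key)
tampered = b"\xff" + scan[1:]              # alter a single "voxel"
print(verify_pixels(scan, key, tag))       # True
print(verify_pixels(tampered, key, tag))   # False
```

The crucial point, matching the discussion above, is that verification must happen at the end devices: a tag computed at the modality and checked at the viewer covers every hop in between.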

Another method for testing the integrity of the images is digital watermarking (DW). DW is the process of adding a hidden signal into the image such that tampering corrupts the signal and thus indicates a loss of integrity. For medical images, this subject has been researched in depth [17]. However, the vast majority of medical devices and products do not implement DW techniques. This may be because they add noise to the images, which can harm the medical analysis.
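A toy fragile watermark illustrates the idea: any local edit corrupts the hidden signal at that location. Real medical DW schemes [17] are far more sophisticated and are designed to limit diagnostic impact; the seed-based keying here is purely illustrative.

```python
import numpy as np

def embed_watermark(image, seed=7):
    """Embed a pseudorandom bit pattern into the least significant bits.
    This is a toy fragile watermark; the seed/keying scheme is purely
    illustrative and not a medical-grade DW technique."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, size=image.shape, dtype=image.dtype)
    return (image & ~np.array(1, dtype=image.dtype)) | bits

def watermark_intact(image, seed=7):
    """Check that every LSB still matches the expected pattern."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, size=image.shape, dtype=image.dtype)
    return bool(np.all((image & 1) == bits))

img = np.random.randint(0, 4096, size=(64, 64), dtype=np.uint16)
marked = embed_watermark(img)
print(watermark_intact(marked))   # True
marked[10, 10] ^= 1               # tamper with one pixel
print(watermark_intact(marked))   # False
```

Note the trade-off the text raises: even this minimal scheme perturbs the LSB of every pixel, which is exactly the kind of added noise that makes DW unattractive for diagnostic imagery.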

[remainder of section redacted for final print]

8 Conclusion

In this paper we introduced the possibility of an attacker modifying 3D medical imagery using deep learning. We explained the motivations for this attack, discussed the attack vectors (demonstrating one of them), and presented an automated manipulation framework (CT-GAN). As a case study, we demonstrated how an attacker can use this approach to inject or remove lung cancer from full resolution 3D CT scans using free medical imagery from the Internet. We also evaluated the attack by determining how well CT-GAN can fool humans and machines: expert radiologists and state-of-the-art AI. We found that both were fooled by all fake cancers injected into the lungs, and that the radiologists missed 96% of the removed cancers. (These results are based on current findings; the evaluation is ongoing at the present time.) Finally, we presented known and novel countermeasures and provided some best practices and configurations which can be implemented immediately to help prevent this attack.

In summary, this paper exposes a significant threat. It also demonstrates how we should be wary of closed-world assumptions: even a human expert and an advanced AI can be fooled if they expect their observations to play by the rules. One possible direction is to educate our experts about these threats, and to train our AI to question phenomena which it cannot explain [55].


The source code and models used in this paper are available online at [redacted for final print]. We have also uploaded the datasets (manipulated CT scans) evaluated by the radiologists and AI.


  • [1] Papanicolas I, Woskie LR, and Jha AK. Health care spending in the united states and other high-income countries. JAMA, 319(10):1024–1039, 2018.
  • [2] John R. Haaga. CT and MRI of the Whole Body. Number v. 1 in CT and MRI of the Whole Body. Mosby/Elsevier, 2008.
  • [3] Torsten George. Feeling the pulse of cyber security in healthcare, securityweek.com. https://www.securityweek.com/feeling-pulse-cyber-security-healthcare, 2018. (Accessed on 12/25/2018).
  • [4] Infosec Institute. Cybersecurity in the healthcare industry. https://resources.infosecinstitute.com/cybersecurity-in-the-healthcare-industry, 2016. (Accessed on 12/25/2018).
  • [5] Lynne Coventry and Dawn Branley. Cybersecurity in healthcare: A narrative review of trends, threats and ways forward. Maturitas, 113:48 – 52, 2018.
  • [6] Mohammad S Jalali and Jessica P Kaiser. Cybersecurity in hospitals: A systematic, organizational perspective. Journal of medical Internet research, 20(5), 2018.
  • [7] Christiaan Beek. Mcafee researchers find poor security exposes medical data to cybercriminals, mcafee blogs. https://securingtomorrow.mcafee.com/other-blogs/mcafee-labs/mcafee-researchers-find-poor-security-exposes-medical-data-to-cybercriminals/, 2018. (Accessed on 12/24/2018).
  • [8] H.K. Huang. PACS-Based Multimedia Imaging Informatics: Basic Principles and Applications. Wiley, 2019.
  • [9] Verizon. Protected health information data breach report. white paper, 2018.
  • [10] Freddie Bray, Jacques Ferlay, Isabelle Soerjomataram, Rebecca L Siegel, Lindsey A Torre, and Ahmedin Jemal. Global cancer statistics 2018: Globocan estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: a cancer journal for clinicians, 68(6):394–424, 2018.
  • [11] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
  • [12] Weiwei Hu and Ying Tan. Generating adversarial malware examples for black-box attacks based on gan. arXiv preprint arXiv:1702.05983, 2017.
  • [13] Maria Rigaki and Sebastian Garcia. Bringing a gan to a knife-fight: Adapting malware communication to avoid detection. In 2018 IEEE Security and Privacy Workshops (SPW), pages 70–75. IEEE, 2018.
  • [14] Robert Chesney and Danielle Keats Citron. Deep fakes: A looming challenge for privacy, democracy, and national security. U of Texas Law, Public Law Research Paper No. 692; U of Maryland Legal Studies Research Paper No. 2018-21, 2018.
  • [15] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint, 2017.
  • [16] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint, 2017.
  • [17] Amit Kumar Singh, Basant Kumar, Ghanshyam Singh, and Anand Mohan. Medical Image Watermarking Techniques: A Technical Survey and Potential Challenges, pages 13–41. Springer International Publishing, Cham, 2017.
  • [18] Somayeh Sadeghi, Sajjad Dadkhah, Hamid A. Jalab, Giuseppe Mazzola, and Diaa Uliyan. State of the art in passive digital image forgery detection: copy-move image forgery. Pattern Analysis and Applications, 21(2):291–306, May 2018.
  • [19] A. Kharboutly, W. Puech, G. Subsol, and D. Hoa. Ct-scanner identification based on sensor noise analysis. In 2014 5th European Workshop on Visual Information Processing (EUVIP), pages 1–5, Dec 2014.
  • [20] Yuping Duan, Dalel Bouslimi, Guanyu Yang, Huazhong Shu, and Gouenou Coatrieux. Computed tomography image origin identification based on original sensor pattern noise and 3d image reconstruction algorithm footprints. IEEE journal of biomedical and health informatics, 21(4):1039–1048, 2017.
  • [21] Xin Yi, Ekta Walia, and Paul Babyn. Generative adversarial network in medical imaging: A review. arXiv preprint arXiv:1809.07294, 2018.
  • [22] Lei Bi, Jinman Kim, Ashnil Kumar, Dagan Feng, and Michael Fulham. Synthesis of Positron Emission Tomography (PET) Images via Multi-channel Generative Adversarial Networks (GANs). pages 43–51. Springer, Cham, 2017.
  • [23] Avi Ben-Cohen, Eyal Klang, Stephen P. Raskin, Michal Marianne Amitai, and Hayit Greenspan. Virtual PET Images from CT Data Using Deep Convolutional Networks: Initial Results. pages 49–57. Springer, Cham, 2017.
  • [24] Avi Ben-Cohen, Eyal Klang, Stephen P. Raskin, Shelly Soffer, Simona Ben-Haim, Eli Konen, Michal Marianne Amitai, and Hayit Greenspan. Cross-Modality Synthesis from CT to PET using FCN and GAN Networks for Improved Automated Lesion Detection. 2 2018.
  • [25] Qi Dou, Cheng Ouyang, Cheng Chen, Hao Chen, and Pheng-Ann Heng. Unsupervised Cross-Modality Domain Adaptation of ConvNets for Biomedical Image Segmentations with Adversarial Loss. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, pages 691–697, California, 7 2018. International Joint Conferences on Artificial Intelligence Organization.

  • [26] Cheng-Bin Jin, Hakil Kim, Wonmo Jung, Seongsu Joo, Ensik Park, Ahn Young Saem, In Ho Han, Jae Il Lee, and Xuenan Cui. Deep CT to MR Synthesis using Paired and Unpaired Data. 5 2018.
  • [27] Camilo Bermudez, Andrew J Plassard, Larry T Davis, Allen T Newton, Susan M Resnick, and Bennett A Landman. Learning implicit brain mri manifolds with deep learning. In Medical Imaging 2018: Image Processing, volume 10574, page 105741L. International Society for Optics and Photonics, 2018.
  • [28] Maayan Frid-Adar, Idit Diamant, Eyal Klang, Michal Amitai, Jacob Goldberger, and Hayit Greenspan. GAN-based Synthetic Medical Image Augmentation for increased CNN Performance in Liver Lesion Classification. 3 2018.
  • [29] Jelmer M. Wolterink, Tim Leiner, and Ivana Isgum. Blood Vessel Geometry Synthesis using Generative Adversarial Networks. In 1st Conference on Medical Imaging with Deep Learning (MIDL 2018), Amsterdam, The Netherlands, The Netherlands, 2018.
  • [30] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. 1 2017.
  • [31] Christoph Baur, Shadi Albarqouni, and Nassir Navab. Melanogans: High resolution skin lesion synthesis with gans. arXiv preprint arXiv:1804.04338, 2018.
  • [32] Ali Madani, Mehdi Moradi, Alexandros Karargyris, and Tanveer Syeda-Mahmood. Chest x-ray generation and data augmentation for cardiovascular abnormality classification. In Medical Imaging 2018: Image Processing, volume 10574, page 105741M. International Society for Optics and Photonics, 2018.
  • [33] Maria JM Chuquicusma, Sarfaraz Hussein, Jeremy Burt, and Ulas Bagci. How to fool radiologists with generative adversarial networks? a visual turing test for lung cancer diagnosis. In Biomedical Imaging (ISBI 2018), 2018 IEEE 15th International Symposium on, pages 240–244. IEEE, IEEE, 4 2018.
  • [34] W. Hruby. Digital (R)Evolution in Radiology. Springer Vienna, 2013.
  • [35] A. Peck. Clark’s Essential PACS, RIS and Imaging Informatics. Clark’s Companion Essential Guides. CRC Press, 2017.
  • [36] C. Carter and B. Veale. Digital Radiography and PACS. Elsevier Health Sciences, 2018.
  • [37] Bill Siwicki. Cloud-based pacs system cuts imaging costs by half for rural hospital | healthcare it news. https://www.healthcareitnews.com/news/cloud-based-pacs-system-cuts-imaging-costs-half-rural-hospital. (Accessed on 01/02/2019).
  • [38] Jennifer Bresnick. Picture archive communication system use widespread in hospitals. https://healthitanalytics.com/news/picture-archive-communication-system-use-widespread-in-hospitals, 2016. (Accessed on 01/02/2019).
  • [39] Sébastien Jodogne, Claire Bernard, Magali Devillers, Eric Lenaerts, and Philippe Coucke. Orthanc-a lightweight, restful dicom server for healthcare and medical research. In Biomedical Imaging (ISBI), 2013 IEEE 10th International Symposium on, pages 190–193. IEEE, 2013.
  • [40] Carlos Costa, Carlos Ferreira, Luís Bastião, Luís Ribeiro, Augusto Silva, and José Luís Oliveira. Dicoogle-an open source peer-to-peer pacs. Journal of digital imaging, 24(5):848–856, 2011.
  • [41] Concept to Clinic. Open source ml/ai challenge. https://concepttoclinic.drivendata.org/algorithms, 2017. (Accessed on 01/03/2019).
  • [42] Ladi Adefala. Healthcare experiences twice the number of cyber attacks as other industries. https://www.fortinet.com/blog/business-and-technology/healthcare-experiences-twice-the-number-of-cyber-attacks-as-othe.html, 2018. (Accessed on 12/24/2018).
  • [43] Joram Borenstein Rebecca Weintraub. 11 things the health care sector must do to improve cybersecurity. https://hbr.org/2017/06/11-things-the-health-care-sector-must-do-to-improve-cybersecurity, 2017. (Accessed on 12/25/2018).
  • [44] Charlie Osborne. Us hospital pays $55,000 to hackers after ransomware attack | zdnet. https://www.zdnet.com/article/us-hospital-pays-55000-to-ransomware-operators/, 2018. (Accessed on 01/04/2019).
  • [45] Healthcare IT News. The biggest healthcare data breaches of 2018 (so far). https://www.healthcareitnews.com/projects/biggest-healthcare-data-breaches-2018-so-far, 2019. (Accessed on 01/06/2019).
  • [46] Joseph Muniz and Aamir Lakhani. Penetration testing with raspberry pi. Packt Publishing Ltd, 2015.
  • [47] Mathy Vanhoef and Frank Piessens. Key reinstallation attacks: Forcing nonce reuse in wpa2. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 1313–1328. ACM, 2017.
  • [48] Alfred NG. Security researchers find flaws in chips used in hospitals, factories and stores - cnet. https://www.cnet.com/news/security-researchers-find-flaws-in-chips-used-in-hospitals-factories-and-stores/, 2018. (Accessed on 01/04/2019).
  • [49] Rebecca Myers Robin Henry and Jonathan Corke. Hospitals to struggle for days | news | the sunday times. https://www.thetimes.co.uk/article/nhs-cyberattack-bitcoin-wannacry-hospitals-to-struggle-for-days-k0nhk7p2b, 2017. (Accessed on 01/04/2019).
  • [50] DHS. Philips isite/intellispace pacs vulnerabilities (update a), ics-cert. https://ics-cert.us-cert.gov/advisories/ICSMA-18-088-01, 2018. (Accessed on 12/24/2018).
  • [51] John E Dunn. Imagine you’re having a ct scan and malware alters the radiation levels – it’s doable • the register. https://www.theregister.co.uk/2018/04/11/hacking_medical_devices/, 2018. (Accessed on 01/04/2019).
  • [52] Heber MacMahon, David P Naidich, Jin Mo Goo, Kyung Soo Lee, Ann NC Leung, John R Mayo, Atul C Mehta, Yoshiharu Ohno, Charles A Powell, Mathias Prokop, et al. Guidelines for management of incidental pulmonary nodules detected on ct images: from the fleischner society 2017. Radiology, 284(1):228–243, 2017.
  • [53] Samuel G Armato III, Geoffrey McLennan, Luc Bidaut, Michael F McNitt-Gray, Charles R Meyer, Anthony P Reeves, Binsheng Zhao, Denise R Aberle, Claudia I Henschke, Eric A Hoffman, et al. The lung image database consortium (lidc) and image database resource initiative (idri): a completed reference database of lung nodules on ct scans. Medical physics, 38(2):915–931, 2011.
  • [54] Keelin Murphy, Bram van Ginneken, Arnold MR Schilham, BJ De Hoop, HA Gietema, and Mathias Prokop. A large-scale evaluation of automatic pulmonary nodule detection in chest ct using local image features and k-nearest-neighbour classification. Medical image analysis, 13(5):757–770, 2009.
  • [55] David Gunning. Explainable artificial intelligence (xai). Defense Advanced Research Projects Agency (DARPA), nd Web, 2017.

Appendix A Additional Figures

Figure 8: The effect that histogram equalization has on emphasizing the features in a CT scan.
Figure 9: The average of 888 CT scans' middle slices before scaling to a 1:1:1 ratio. The darkest regions of the lungs are the best areas for an attacker to hard-code locations for injection. The reason so many different bodies overlap is that the CT scanner scales the scan according to the patient's diameter.
Figure 10: Several slices taken from a CT scan with an injected cancer. The apex is in slice 93/130 with a diameter of 16.4mm.