Artificial Intelligence For Breast Cancer Detection: Trends and Directions

10/03/2021, by Shahid Munir Shah, et al.

In the last decade, researchers working in computer vision and Artificial Intelligence (AI) have stepped up their efforts to develop automated frameworks that not only detect breast cancer but also identify its stage. This surge in research activity is mainly due to the advent of robust AI algorithms (deep learning), the availability of hardware capable of training those robust and complex algorithms, and the accessibility of datasets large enough to train them. The imaging modalities that researchers have exploited to automate breast cancer detection are mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. This article analyzes these imaging modalities, presents their strengths and limitations, and lists resources from which their datasets can be accessed for research purposes. It then summarizes the AI and computer vision based state-of-the-art methods proposed in the last decade to detect breast cancer using these modalities. We focus primarily on frameworks that report results on mammograms, as mammography is the most widely used breast imaging modality and serves as the first test that medical practitioners usually prescribe for breast cancer detection. A second reason for this focus is the availability of labeled mammographic datasets. Dataset availability is one of the most important considerations in developing AI-based frameworks, as such algorithms are data hungry and the quality of the dataset generally determines the performance of the resulting models. In a nutshell, this article is intended as a primary resource for the research community working in the field of automated breast imaging analysis.







1 Introduction

Cancer is one of the most fatal diseases, and breast cancer is the most prevalent type of cancer and the biggest cause of mortality among women [1]. According to statistics published by the World Health Organization (WHO), out of 1,350,000 cases of breast cancer each year worldwide, there are 460,000 deaths [2]. In the United States (US) alone, 268,600 cases of breast cancer were reported in 2019, a record figure [3, 4].

Breast cancer occurs because of abnormal growth of cells in the breast [5]. The anatomy of the breast comprises blood vessels, connective tissues, milk ducts, lobules, and lymph vessels [6]. A tumor forms in the milk ducts or lobules when breast tissues grow abnormally and cell division becomes uncontrolled. The developed tumor may be either benign or malignant. Benign tumors arise from minor structural changes in the breast and are classified as noncancerous. Malignant tumors, on the other hand, are classified as cancerous and are further categorized as invasive (invasive carcinoma) or non-invasive (in-situ carcinoma) [7]. Invasive breast tumors spread into surrounding organs and create complications [8], whereas non-invasive tumors remain confined to their region and do not invade neighboring organs [9].

Early detection of a breast tumor and its correct diagnosis as benign or malignant are critical to avoid further advancement and complications. This way, a timely and effective treatment can be planned, which in turn decreases the mortality rate caused by breast cancer.

Several imaging modalities are used for breast cancer detection. X-ray Mammography (MG) [10], Breast Thermography (BT) [11], Magnetic Resonance Imaging (MRI) [12], Positron Emission Tomography (PET), Computed Tomography (CT) [13], 3-D Ultrasound (US) [14] and Histopathology (HP) [15] are some of the popular imaging modalities used to diagnose and detect breast cancer in early stages.

Among these imaging modalities, breast mammograms are the most commonly used [16, 17, 18]. Mammograms are low-dose breast X-rays [19], which are easy to capture and often used as the first test for breast cancer detection [20]. Mammography is also a popular method for breast cancer screening of large-scale populations [21, 22].

Traditionally, for breast cancer detection and diagnosis, radiologists inspect breast images manually (with the naked eye) and finalize their decision after reaching consensus with other medical experts. Manual inspection of breast images is a widely used method; however, certain unavoidable factors may lead to inaccurate detection and prolong the diagnosis process, for example:

  1. Unavailability of experts in remote areas (under-developed countries).

  2. Unavailability of experts with sufficient domain knowledge to precisely analyze multi-class images (images with characteristics of multiple possible ailments).

  3. Inspecting a large number of medical images on a daily basis is an exhausting and cumbersome practice.

  4. The subtle nature of breast tumors and the complex structure of breast tissues make manual analysis more difficult.

  5. Lapses in the concentration of medical experts and other forms of fatigue make the diagnosis harder and more time consuming.

All such factors prolong the diagnosis and lead to false positive or false negative outcomes [23]. There is thus a persistent need for additional methods to increase efficiency and decrease the false prediction rate. Recently, Artificial Intelligence (AI) technology has made great progress in the automated analysis of medical images for anomaly detection, and the same is true for the analysis of breast images for possible breast cancer detection [24, 25, 26]. Compared to manual inspection, AI-based automated image analysis avoids the tedious and time-consuming screening process and efficiently captures valuable and relevant information from massive image data.

AI algorithms (discussed in detail in Section 3.2) can be divided into two categories based on how they interpret data / extract information from images:

  1. Algorithms based on handcrafted features, commonly known as conventional AI / conventional Machine Learning (ML) algorithms (the term ML refers to the study of AI algorithms that learn from experience without being explicitly programmed [27]).


  2. Algorithms that process images and extract information from regions that emerge as salient through mathematical optimization of the classification objective. Such algorithms are commonly characterized as Deep Learning (DL) / Deep Neural Network (DNN) algorithms (DL is a subfield of ML that refers to the study of AI algorithms that learn representations from data with multiple levels of abstraction [28]). The need for handcrafted features is minimized and mostly non-existent, as DL algorithms learn the most salient representation of the data without human intervention.
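The contrast between the two categories can be illustrated with a minimal sketch of the first (conventional) pipeline: handcrafted features are computed from the image and then combined by a classifier. The feature names, thresholds, and toy image below are purely illustrative, and a simple rule stands in for a trained ML model; a DL model would instead learn its features directly from the pixels.

```python
# Category 1 (conventional ML): classify an image from handcrafted
# features. All numbers here are illustrative only.

def handcrafted_features(image):
    """Extract simple hand-designed features from a 2D grayscale image
    (values in [0, 1]): mean intensity and a crude contrast measure."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    contrast = max(pixels) - min(pixels)
    return [mean, contrast]

def rule_based_classifier(features, mean_threshold=0.5):
    """A stand-in for a trained ML classifier: merges the handcrafted
    features into a single suspicious / benign decision."""
    mean, contrast = features
    return "suspicious" if mean > mean_threshold and contrast > 0.4 else "benign"

# A toy 3x3 "image" with a bright region (higher values = brighter).
toy_image = [[0.1, 0.2, 0.1],
             [0.8, 0.9, 0.8],
             [0.7, 0.9, 0.7]]

print(rule_based_classifier(handcrafted_features(toy_image)))  # -> suspicious
```

A Category 2 (DL) algorithm would replace `handcrafted_features` entirely, optimizing convolutional filters so that salient regions emerge from the classification objective itself.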

Both categories of AI algorithms have shown remarkable progress in breast imaging analysis and breast cancer detection; however, DL algorithms have been found more promising than conventional ML algorithms [29, 30] and have proved to be strong candidates for ongoing medical imaging research, particularly breast cancer imaging research.

Keeping in view the aforementioned discussion and the progress made by AI algorithms in breast cancer detection, this article presents a critical review of breast imaging analysis using AI algorithms. The review critically analyzes AI algorithms applied to different breast imaging modalities and compares their performance.

Following are the contributions of this article:

  1. This article presents a critical analysis of commonly used breast imaging modalities. Limitations and strengths of the different imaging modalities are also discussed.

  2. Datasets available for different imaging modalities are presented.

  3. Details of popular DL architectures used for breast imaging analysis are presented along with their results.

The article is structured as follows: Section 2 describes the imaging modalities used to detect breast cancer. Section 3 presents the AI algorithms used for breast cancer detection, discussing the progress made by conventional ML algorithms (Section 3.2.1) and DL algorithms (Section 3.2.2). At the end of that section, details on the Convolutional Neural Network (CNN) (Section 3.2.3), a special DL architecture used to learn data representations in images, are presented. Special emphasis is given to the CNN because this architecture achieves state-of-the-art results for breast cancer detection. Finally, Section 4 concludes the article with a summary and future research directions.

2 Medical imaging modalities used for breast cancer detection

This section outlines popular imaging modalities used for breast cancer screening and detection. Various imaging modalities are used for this purpose (refer to Section 1 for the list), but four are most commonly used: mammograms, ultrasound, magnetic resonance imaging, and histopathology [31]. Beyond individual use, these modalities are also combined in various ways, called multimodalities (refer to Figure 1). Details of each modality are provided below.

Figure 1: Imaging modalities and their sub-types used for breast cancer detection

2.1 Mammograms

As stated earlier, mammograms are the most commonly used images for investigating breast tissues for cancer detection. They are low-dose X-ray images of the human breast. Figure 2 shows the basic structure of a mammogram.

Figure 2: Mammographic X-Ray of Human Breast [32]

As indicated in Figure 2, cancerous tumors and calcium concentrations appear brighter in mammograms than the surrounding masses. Diagnosis is therefore straightforward when an experienced radiologist analyzes these images or when a well-trained (on mammographic images) ML / DL model is used for the analysis.
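Because masses and calcifications appear brighter, even a crude first-pass screen can flag mammograms with an unusually high fraction of bright pixels for closer review. The sketch below is illustrative only: the thresholds and the toy 4x4 "mammogram" are made up, and real computer-aided detection systems use far more robust features than raw intensity.

```python
# Illustrative first-pass screen exploiting the fact that suspicious
# masses / calcifications appear bright in mammograms.

def bright_fraction(image, intensity_threshold=200):
    """Fraction of pixels brighter than the threshold (8-bit grayscale)."""
    pixels = [p for row in image for p in row]
    return sum(p > intensity_threshold for p in pixels) / len(pixels)

def flag_for_review(image, fraction_threshold=0.10):
    """Flag the image for radiologist review if enough pixels are bright."""
    return bright_fraction(image) > fraction_threshold

# Toy 4x4 grayscale "mammogram" with a bright region in the top right.
mammogram = [[30, 40, 220, 230],
             [35, 45, 225, 240],
             [30, 40, 50, 60],
             [25, 35, 45, 55]]

print(flag_for_review(mammogram))  # 4 of 16 pixels are bright -> True
```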

Initially, mammograms were used in their simplest form, Screen Film Mammography (SFM); with advances in imaging technology, however, they evolved into more advanced forms: Full Field Digital Mammograms (FFDM), Digital Breast Tomosynthesis (DBT), and Contrast Enhanced Digital Mammograms (CEDM) (refer to Figure 1). Each category of mammographic image has been widely accepted and used by the research community for breast lesion detection and classification [33, 34, 35, 36, 37, 38, 39]. One important use of mammographic images is in Randomized Mammographic Trials / Randomized Controlled Trials (RMT / RCT), a method through which a large-scale population is screened for breast cancer by analyzing their mammograms. RMT is a globally accepted method, which is the prime reason that mammographic breast images are often considered the first-opinion test for breast cancer detection.

Although mammography is a popular and commonly used method for screening and detecting breast cancer at early stages [40], in some cases it is difficult to detect early breast cancer using mammography alone, and additional screening tests are required alongside mammographic trials / randomized mammographic trials [41]. Especially in developing countries, where healthcare infrastructure is deficient and resources are limited, conducting randomized mammographic trials and providing effective treatment are more difficult [42]. In such cases, breast self-examination (BSE) and clinical breast examination (CBE) are more feasible methods to detect breast cancer at early stages and to reduce mortality. Recent trials have shown that CBE screening successfully reduces the stage at diagnosis and improves breast cancer survival with longer follow-up [43]. Another potential benefit of BSE and CBE is avoiding a substantial harm associated with mammographic screening, namely overdiagnosis, whose magnitude is not exactly known.

Beyond randomized trials on large populations, mammography is also used for detecting breast cancer in individuals. In such cases, however, it is not the preferred method because of its limited capability to detect breast cancer in women with dense breasts. Mammographic X-rays often miss cancerous tissues in young women, who tend to have dense breast tissue. In such cases, Automated Whole Breast Ultrasound (AWBU) / sonography or other imaging techniques are recommended alongside mammographic X-rays to acquire a more detailed view of the breast tissues for thorough investigation [44]. Table 1 provides more detail about the strengths and limitations of using mammograms for breast cancer detection.

2.2 Ultrasound

Although mammograms are considered the main modality for early breast cancer detection, they are not risk free: ionizing radiation poses risks for patients and radiologists, as does radiation overdose for patients [45]. Furthermore, because of low specificity, mammography leads a large proportion of patients (65%-85%) to unnecessary biopsy operations (refer to Section 2.4) [46, 47] (specificity is a test's ability to correctly designate a subject without the disease as negative [48]). Such unnecessary biopsies increase hospitalization costs for individuals and cause them mental stress. Because of these limitations, breast ultrasound imaging is considered a much better option for breast cancer detection [49, 50].
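The sensitivity and specificity definitions referenced above can be made concrete with a short worked example. The screening counts below are hypothetical, chosen only to illustrate how low specificity translates directly into unnecessary biopsies:

```python
# Sensitivity / specificity applied to illustrative (made-up) counts
# from a hypothetical screen of 1000 women, 50 of whom have cancer.

def sensitivity(tp, fn):
    """Ability to correctly designate diseased subjects as positive."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Ability to correctly designate disease-free subjects as negative."""
    return tn / (tn + fp)

tp, fn = 45, 5      # cancers caught / missed
tn, fp = 650, 300   # healthy correctly cleared / sent to biopsy unnecessarily

print(f"sensitivity = {sensitivity(tp, fn):.2f}")   # 0.90
print(f"specificity = {specificity(tn, fp):.2f}")
print(f"unnecessary biopsies = {fp}")
```

With only ~68% specificity in this toy example, 300 of the 950 healthy women are referred for biopsy, which is exactly the kind of cost and stress burden described above.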

Compared to mammograms, breast ultrasound imaging can increase the overall detection rate by 17% and decrease unnecessary biopsy operations by 40% [49]. Breast ultrasound images are also known as sonograms in medical terminology. Sonograms are commonly used for locating suspicious lesions, i.e., the Region of Interest (ROI), in the breast. For automatically locating lesions, ultrasound images are used in three broad forms: simple two-dimensional grayscale ultrasound images, color ultrasound images with shear wave elastography (SWE) features, and Nakagami colored ultrasound images [12, 51]. Elastography in ultrasound imaging measures tissue stiffness for better differentiation between benign and malignant lesions. Similarly, Nakagami ultrasound images provide additional statistical parameters, which are advantageous in extracting the ROI.

Figure 3 and Figure 4 show breast ultrasound images, clearly indicating that a simple breast ultrasound image is just a grayscale image in which irregular masses appear as large black spots within the breast mass. In color ultrasound images (with shear wave elastography or the Nakagami distribution), however, irregular masses appear in color with clearly distinguishable boundaries. From Figures 3 and 4, it can be deduced that colored ultrasound breast images can better detect irregular masses in the breast and can clearly identify the ROI.

Figure 3: Left side: A simple 2D ultrasound image, right side: Shear-wave elastography image [51]
Figure 4: Left side: A simple 2D ultrasound image, right side: Ultrasound image with Nakagami distribution [52]

Beyond grayscale, SWE, and Nakagami ultrasound images, other breast ultrasound imaging techniques have also been reported in the literature, i.e., color Doppler, power Doppler, and 3D ultrasound images (refer to Figure 1 for a list of well-known breast ultrasound imaging techniques reported for breast cancer detection). Power and color Doppler, as well as 3D imaging, are additional ultrasound techniques used to improve the detection accuracy of ultrasound-based breast cancer detection [53, 54]. Studies have shown that 3D breast ultrasound can identify up to 30% more cancers in women with dense breasts than mammograms can [55, 56].

Considering its versatility, safety, and high sensitivity, breast ultrasound imaging can be considered a better alternative to mammograms for breast cancer detection [57]. However, breast lesion detection and classification using ultrasound imaging require radiologists' expertise and experience because of the complex nature of ultrasound images and the presence of speckle noise [58]. Beyond the complex imaging structure, ultrasound-based screening in asymptomatic women causes unacceptable false positive and false negative outcomes [59]. Hence, there is little evidence to support the use of breast ultrasound alone in breast cancer screening and detection. Table 1 provides more detail about the strengths and limitations of using ultrasound imaging for breast cancer detection.

2.3 Magnetic Resonance Imaging

As discussed earlier, randomized mammographic trials are the common imaging surveillance method for the early detection of breast cancer in women, especially those at higher risk of developing it, i.e., with a strong family history of the disease. Women with a family history are at higher risk than others of developing the disease at a younger age, when breast density is much higher than in older women. Along with ionizing effects and other health risks, mammographic X-rays are limited in detecting breast cancer in dense breasts, i.e., most likely in young women [60]. These factors limit the effectiveness of screening by mammography.

In contrast, breast MRI provides higher sensitivity for breast cancer detection in dense breasts [61]. MRI uses a magnetic field along with radio waves to capture clear and detailed images of the body's soft tissues. It captures multiple breast images of a single subject (taken from different angles) and combines them into a detailed view; therefore, breast MRI provides a more detailed view of breast soft tissues than mammograms, ultrasound, or computed tomography images [62]. Figure 5 presents a few samples of breast MRI images.

Figure 5: Example of breast MRI images [63]

As indicated in Figure 5, MRI images are more detailed than those of other imaging modalities; therefore, MRI may detect lesions that are not visible on other modalities or would otherwise be considered benign [64]. To enhance the quality of conventional MRI images (simple grayscale MRI), a contrast agent is usually injected into the patient's body before imaging. This technique yields contrast-enhanced MRI images, which give radiologists the opportunity to examine breast tissues in detail with a clearer view [65].

Since MRI provides a very detailed view of soft tissues such as the human breast, doctors or radiologists usually suggest it once cancer has been diagnosed and further detailed information on the extent of the disease is needed [66]. Sometimes MRI is advised to identify suspicious breast tissues for biopsy, commonly known as MRI-guided biopsy.

Although MRI provides high sensitivity [67], its use for breast cancer detection is limited by its high associated cost [68]. However, recently introduced MRI techniques such as Ultrafast breast MRI (UFMRI) and Diffusion-Weighted Imaging (DWI) provide significantly higher screening specificity with shorter reading time and reduced overall cost [69, 70]. Figure 1 lists some of the popular MRI methods for breast cancer detection reported in the literature. Table 1 provides details on the limitations and strengths of using MRI for breast cancer detection.

2.4 Histopathologic Images

Histopathology refers to the procedure of taking a piece of mass out of a suspicious region of the human body for testing and detailed analysis by pathologists [71]; this process is often termed biopsy in medical terminology. To produce histopathology images, the biopsy samples are fixed on glass slides stained with Haematoxylin and Eosin (H & E) for examination under an instrument such as a microscope [72]. The purpose of staining with H & E is to produce colored histopathologic (HP) images for better visualization and detailed analysis of the tissues. Different tissue structures take up different stains, giving better conceptualization under the microscope.

HP images are available in two forms: (1) Whole Slide Images (WSI), i.e., digital colored images, and (2) image patches extracted from WSI, i.e., ROIs. Figure 6 presents a few samples of HP whole slide images.

Figure 6: Breast histopathologic whole slide images [73]

Stained slides are mostly converted into WSIs for examination by expert pathologists. Pathologists extract ROI patches from WSIs at different zoom factors to diagnose multiple breast cancer types, which is impossible with simple grayscale images. Because of this tissue-level examination, multiple researchers have successfully and accurately used HP images for multi-class breast cancer classification [74, 75, 76]. Breast cancer classification using HP images provides several advantages over mammograms and the other imaging modalities such as ultrasound and MRI. For example, HP images make it possible to classify breast cancer subtypes instead of just binary classes. Furthermore, a large number of ROIs can be created from WSI images, which can better train ML / DL models for breast cancer subtype classification.

Although HP images provide high reliability for breast cancer classification, especially subtype classification, they have some drawbacks. For example, biopsy is an invasive method, requires a long time to convert into digital image form, and requires high expertise to distinguish among breast cancer subtypes. Furthermore, high color variations (arising from the staining process), differing lab protocols, and scanner brightness in the development of HP images complicate training a multiclass ML / DL model efficiently, especially on borderline cases. Refer to Table 1 for a discussion of the strengths and limitations of HP imaging for breast cancer detection.
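One common way to reduce the stain-induced color variation mentioned above is to map each color channel of a slide toward the statistics of a reference slide. The sketch below is a toy channel-wise mean/std normalization on made-up pixel values; real stain-normalization methods (e.g., Macenko-style approaches) work in optical-density space and are considerably more involved.

```python
# Toy stain-variation reduction: shift each RGB channel of an image
# toward a reference slide's mean and spread. Illustrative values only.

def channel_stats(pixels, channel):
    """Mean and standard deviation of one color channel."""
    values = [p[channel] for p in pixels]
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return mean, std

def normalize_to_reference(pixels, ref_means, ref_stds):
    """Map each RGB channel onto the reference slide's mean and spread."""
    stats = [channel_stats(pixels, c) for c in range(3)]
    out = []
    for p in pixels:
        q = tuple(
            ref_means[c] + (p[c] - stats[c][0]) *
            (ref_stds[c] / stats[c][1] if stats[c][1] else 1.0)
            for c in range(3)
        )
        out.append(q)
    return out

# Two made-up pixels from an over-pink slide, pulled toward a reference.
pixels = [(220.0, 120.0, 160.0), (200.0, 100.0, 140.0)]
normalized = normalize_to_reference(
    pixels, ref_means=(180.0, 110.0, 150.0), ref_stds=(20.0, 20.0, 20.0))
print(normalized)
```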

Mammograms (MG)

Strengths:
  • Mammography is a widely used method for breast cancer screening and detection.

  • It is an easy and economical first opinion approach for breast cancer diagnosis [77].

  • Digital Mammography (DM) provides an efficient and cost effective solution to capture, store, and process the breast tissue images [78].

  • Digital mammographic images serve as a database for training AI systems.

  • Compared to HP images, DM does not require high expertise or professional knowledge to diagnose and categorize.

Limitations:

  • Mammographic images are produced from low-dose breast X-rays and therefore have limited capability to capture micro-calcifications in the human breast because of their extremely small size and dispersed shape [79].

  • Mammography has limitations in diagnosing breast cancer in dense breasts, missing cancerous tissues within dense tissue [80].

  • Mammography is not always accurate in diagnosing breast cancer, and therefore additional testing may sometimes be required for an accurate diagnosis [81].

Available datasets: MIAS, DDSM, INBreast, BCDR, CBIS-DDSM
Ultrasound (US)

Strengths:
  • Breast US captures breast images in real time, thus providing flexibility to view a breast lesion from different angles and orientations.

  • US reduces the chance of a false negative diagnosis, as it is capable of capturing breast images from different orientations.

  • US does not expose patients to any type of harmful radiation and is therefore considered extremely safe, particularly for pregnant women [82].

  • US is also considered a safe solution for routine breast cancer screening.

  • US is a particularly useful imaging modality for detecting breast cancer in dense breasts, where mammography has limitations [83].

Limitations:

  • Compared to mammograms, the quality of US images is quite poor.

  • The quality of US images degrades for denser breasts.

  • Breast US can produce a misleading diagnosis if the scanner probe is not moved or pressed properly [51].

  • US waves attenuate in human body muscles and therefore cannot clearly display the tumor contour in the breast [84].

  • Extracting the ROI for further investigation is difficult from US images.

Magnetic Resonance Imaging (MRI)

Strengths:
  • Like ultrasound, MRI does not expose patients to harmful ionizing radiation.

  • MRI images provide a detailed view of internal soft breast tissues and can capture micro-calcifications.

  • MRI can identify suspicious areas that can be further investigated with biopsy, known as MRI guided biopsy.

  • Dynamic Contrast Enhanced MRI (DCE-MRI) provides a clearer and more detailed view of soft breast tissues and hence identifies affected breast regions more easily than normal MRI [85].

Limitations:

  • MRI is expensive compared to mammograms or ultrasound and is therefore not commonly used for breast cancer screening.

  • Although MRI provides a very detailed image of the internal breast tissues, it can still miss cancerous tissues that mammograms can detect [86].

  • MRI is mostly recommended as a second opinion test after the mammographic test has been conducted.

  • MRI is not generally recommended during pregnancy [87].

  • To enhance MRI images, contrast agents are usually injected, which may cause allergies or other complications and are hence not recommended for some patients, especially kidney patients [88].

Available datasets: RIDER Breast MRI, Duke-Breast-Cancer-MRI
Histopathology (HP)

Strengths:
  • HP images are colored tissue images capable of diagnosing various types of cancer.

  • It is a better prognostic method for early stage breast cancer.

  • With the help of HP images, a much more in-depth study of breast tissues is possible; therefore, a more confident diagnosis of breast cancer is obtained than with any other imaging modality.

  • From whole-slide HP images, multiple ROI images can be created, which in turn provides more chances to detect cancerous tissues and thus reduces the FN rate.

Limitations:

  • HP images are obtained through breast biopsy, an invasive method, and therefore carry higher associated risks than the other imaging modalities.

  • HP images are difficult to analyze, and their accurate analysis requires highly experienced, knowledgeable, and expert professional pathologists.

  • Manual analysis of HP images is a highly time-consuming process [89].

  • High care is needed during biopsy sample preparation (tissue extraction from the breast, fixation of tissue samples on microscopic slides, and management of color variations originating from different staining procedures) to decrease the chance of a false diagnosis.

  • During analysis, HP images must be handled carefully by the pathologist to diagnose breast cancer accurately, for example, with correct slide orientation, attention, and an understanding of color variations in stained images [90].

Available datasets: Wisconsin (Diagnostic), BICBH, BreakHis
Table 1: Strengths and limitations of different imaging modalities for breast cancer detection

3 AI in Medical Image Analysis

In recent years, AI, as a multidisciplinary approach, has grown into nearly every business to maximize productivity, efficiency, and accuracy [91]. Advanced computing resources, massive amounts of available data, and outstanding algorithmic performance have made AI technology more capable and results-oriented than ever before. Apart from the healthcare domain, it is used in various application areas, e.g., network intrusion detection [92], image synthesis [93], optical character recognition (OCR) [94], and facial expression recognition [95].

Within the healthcare domain, AI technology is now being used in many important applications such as remote patient monitoring, virtual assistance, hospital management, and drug discovery [96, 97]. In medical image analysis and diagnostics in particular, AI is successfully contributing to recognizing complex patterns in imaging data to provide better quantitative assessment in an automated and robust manner. Various radiological tasks, i.e., risk assessment, disease detection, diagnosis or prognosis, and therapy response [98], are now being accomplished more accurately and easily by integrating AI as a tool to assist radiologists and physicians.

3.1 Why to use AI for medical image analysis

AI for medical applications uses medical imaging (radiology) data to provide better and more timely healthcare services [99]. The primary purpose of using AI in medical imaging is to achieve greater efficacy and efficiency in routine clinical practice and to support the decision-making process.

Through the wide adoption of Electronic Health Record (EHR) systems [100], radiological imaging data is continually growing in size, and it is now beyond the capacity of radiologists to analyze such a huge amount of imaging data manually. Studies reveal that, in some cases, a radiologist has to analyze on average one radiology image every 3 to 4 seconds in an 8-hour workday to meet workload requirements [101]. The disproportionate growth of medical imaging data makes this workload even worse. As the analysis of radiological imaging data demands high visual perception and robust cognitive abilities, radiologists' decisions about medical images remain uncertain [102], and under unmanageable, constrained workload conditions this uncertainty increases further. Using AI for medical image analysis decreases uncertainty and errors in decision making by providing radiologists with pre-screened images with identified features. All these human constraints, together with advancement in both imaging data and computing resources, have motivated healthcare providers to use AI technology in the medical imaging field.
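The workload figure cited above can be checked with back-of-the-envelope arithmetic: at one image every 3 to 4 seconds, an 8-hour workday implies on the order of 7,000-10,000 image reads.

```python
# Back-of-the-envelope check of the cited radiologist workload:
# images read in an 8-hour workday at one image every 3-4 seconds.

workday_seconds = 8 * 60 * 60  # 28,800 seconds in an 8-hour workday
images = {s: workday_seconds // s for s in (3, 4)}
print(images)  # every 3 s -> 9600 reads; every 4 s -> 7200 reads
```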

3.2 AI Algorithms for medical imaging

There are two categories of AI algorithms: algorithms that use handcrafted features and algorithms that process raw data. The first category belongs to traditional / conventional AI, whereas the second belongs to the more recent DL approaches, as shown in Figure 7. Refer to Figure 8 for a graphical representation of the types and sub-types of algorithms employed in various applications (including healthcare) for making or supporting predictions.

Figure 7: ML and DL based approaches for medical imaging data. First row of image denotes standard pipeline used by traditional / conventional AI / Machine Learning (ML) algorithms. Second row of image represents Convolutional Neural Network (CNN) architecture, special case of Deep Learning (DL).
Figure 8: Categories of AI algorithms. Acronyms used in the figure: AI= Artificial Intelligence, ML = Machine Learning & ANN= Artificial Neural Network.

3.2.1 Conventional Machine Learning (ML) algorithms

Machine Learning (ML) is a term first coined by Arthur Samuel in 1959, who described it as a sub-field of AI [27]. It is a technique that recognizes patterns in given inputs, learns those patterns without being explicitly programmed, and solves problems based on the inputs [103, 104, 105]. In the case of medical imaging, manually extracted, handcrafted (engineered) image features, defined in terms of distinguishing image characteristics such as area, shape, region of interest, texture, and the histogram of image pixels [106], serve as input to ML algorithms. The extracted features may further be processed by feature selection algorithms that retain only the most relevant features [92]. Finally, the task of the ML algorithm is to merge the selected input features into a single value, such as a tumor signature, that may represent the likelihood of a disease state [107] (refer to Figure 7 for more details).
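The pipeline just described (handcrafted features, then feature selection, then a classifier that fuses them into a single score) can be sketched with scikit-learn. The patches, labels, and feature names below are synthetic illustrations, not features used by any surveyed system.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def handcrafted_features(img):
    """Toy stand-ins for area / texture / histogram descriptors."""
    return np.array([
        img.mean(),                           # mean intensity
        img.std(),                            # contrast (texture proxy)
        (img > 0.5).mean(),                   # bright-area fraction
        np.abs(np.diff(img, axis=0)).mean(),  # vertical edge energy
    ])

# Synthetic patches: class 1 is simply brighter on average.
y = rng.integers(0, 2, 200)
X = np.vstack([handcrafted_features(rng.random((32, 32)) * (0.4 + 0.6 * label))
               for label in y])

# Feature selection, then a classifier that merges the selected
# features into one "tumor signature" likelihood score.
model = make_pipeline(SelectKBest(f_classif, k=3), SVC(probability=True))
model.fit(X, y)
score = float(model.predict_proba(X[:1])[0, 1])  # single fused score
```

The pipeline object mirrors Figure 7: each stage transforms the feature vector before the final classifier emits one value.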

ML algorithms are categorized into two major types, i.e., supervised learning and unsupervised learning algorithms. Based on the output they generate, these two types are further divided into classification and regression (supervised) and clustering (unsupervised) approaches (refer to Figure 8). In supervised learning, the algorithm is trained using labeled data, meaning that it learns from prior knowledge / from a given probability distribution of classes (in our case, the prediction of a disease). In unsupervised learning, as the name suggests, the algorithm is not provided with labeled data; it learns patterns from the data and associates them with newly discovered clusters of data points [108, 109].
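The supervised / unsupervised distinction can be seen concretely with scikit-learn's built-in Wisconsin breast cancer dataset (tabular features, used here purely as a convenient illustration):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)   # labeled: 0 = malignant, 1 = benign
Xs = StandardScaler().fit_transform(X)

# Supervised: the classifier sees the labels while learning.
sup_acc = LogisticRegression(max_iter=5000).fit(Xs, y).score(Xs, y)

# Unsupervised: k-means never sees the labels; it discovers two
# clusters on its own, which we align with the labels afterwards.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xs)
unsup_acc = max((clusters == y).mean(), (clusters != y).mean())
```

Even without labels, the clustering recovers much of the benign / malignant split, illustrating how unsupervised methods associate data points with novel clusters.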

Various ML algorithms based on handcrafted features have been used for breast cancer detection and analysis. Yassin et al. [110], in their review, outline the conventional ML algorithms employed in the recent past for breast cancer diagnosis. These include, but are not limited to, Decision Tree (DT), Random Forest (RF), Support Vector Machine (SVM), Naive Bayes (NB), K-Nearest Neighbor (KNN), Linear Discriminant Analysis (LDA), and Logistic Regression (LR) [111, 112].
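The classifiers listed above can be compared side by side in a few lines of scikit-learn; the sketch below uses the library's built-in Wisconsin breast cancer dataset and default hyper-parameters, so it is a quick illustration rather than a clinical benchmark.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "DT":  DecisionTreeClassifier(random_state=0),
    "RF":  RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "NB":  GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "LDA": LinearDiscriminantAnalysis(),
    "LR":  LogisticRegression(max_iter=5000),
}
# 5-fold cross-validated accuracy for each conventional classifier.
scores = {name: cross_val_score(make_pipeline(StandardScaler(), m),
                                X, y, cv=5).mean()
          for name, m in models.items()}
```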

Although handcrafted features are considered discriminative in nature, and ML algorithms based on them have, in some cases, attained remarkable results in the medical imaging domain [113, 114, 115, 116, 117, 118, 119, 120], extracting such features is always challenging: it requires expert domain knowledge, and exhaustive preprocessing is needed to make the features suitable as input to ML algorithms [121]. Furthermore, the dependency of such features on expert definitions may cause some contributing factors to be missed, which makes them imperfect for a precise diagnostic process [30]. Since handcrafted features are extracted with feature-extraction algorithms (such as the Gray Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP), Gray-Level Run Length Matrix (GRLM), and Histogram of Oriented Gradients (HOG)), and these algorithms mostly focus on one aspect of the image (e.g., texture or edges), they may fail on different types of images; for example, an algorithm that extracts shape-based features may not be able to capture texture information. Because of this limitation of extracting inadequate information, systems based on such features do not adapt to unknown data (data other than that on which the system was trained) and fail to provide discriminative analysis. Hence, a classifier is always needed to handle the acquired feature space and make classification decisions, and selecting an appropriate classifier is itself a complicated task.
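As a concrete illustration of one such single-aspect texture descriptor, here is a minimal NumPy sketch of the GLCM named above: count how often gray level i sits immediately to the left of gray level j, then derive a contrast feature from the counts. The offset choice and gray-level count are arbitrary simplifications.

```python
import numpy as np

def glcm(img, levels):
    """Co-occurrence counts for the horizontal (0, 1) pixel offset."""
    m = np.zeros((levels, levels), dtype=np.int64)
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[i, j] += 1
    return m

def glcm_contrast(m):
    """Sum of (i - j)^2 weighted by the normalized co-occurrence."""
    p = m / m.sum()
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * p).sum())

flat = np.zeros((8, 8), dtype=int)     # uniform patch: no transitions
stripes = np.tile([0, 3], (8, 4))      # alternating gray levels 0 and 3

c_flat = glcm_contrast(glcm(flat, 4))        # smooth texture -> 0
c_stripes = glcm_contrast(glcm(stripes, 4))  # strong texture -> high
```

Note how the descriptor responds only to horizontal intensity transitions: a shape-based change that leaves local texture untouched would be invisible to it, which is exactly the one-aspect limitation described above.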


3.2.2 Deep Learning (DL) based AI algorithms

To address the limitations of ML algorithms, recent research has been directed towards DL based approaches. DL algorithms process raw data and do not require any manual, explicit extraction of features from the input data to produce an output / prediction. These algorithms can adapt to all kinds of features in the input data (image, text, audio, etc.) and therefore provide more robust results on segmentation and classification problems than conventional ML methods [122, 123, 124, 125, 126]. DL algorithms have also been successfully applied to the evaluation and analysis of medical imaging such as CT, MRI, US, and HP [127, 128, 129].

DL algorithms originated from the Artificial Neural Network (ANN), which is a subset of ML algorithms (refer to Figure 8 for more detail on the taxonomy of algorithms). An ANN is a network of artificial neurons that imitates the basic working of the brain's biological neurons [130]. The artificial neuron is the basic building block of an ANN. Figure 9 depicts its basic structure: weighted inputs are fed to the neuron to produce an output or prediction. The neuron sums the received inputs and applies a nonlinear activation function (Sigmoid, Tanh, Gaussian, ReLU, etc.) to obtain the output response (usually between 0 and 1).
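The weighted-sum-plus-activation computation just described can be written directly; the input and weight values below are arbitrary illustrations.

```python
import numpy as np

def sigmoid(z):
    """Squashes any real value into the (0, 1) range."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b, activation=sigmoid):
    """out = activation(w . x + b): weighted sum, bias, nonlinearity."""
    return float(activation(np.dot(w, x) + b))

x = np.array([0.5, -1.0, 2.0])   # inputs (e.g. image-derived values)
w = np.array([0.8, 0.2, -0.5])   # learned weights
out = neuron(x, w, b=0.1)        # single output response in (0, 1)
```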


Figure 9: Basic Structure of a Neuron.
Figure 10: A schematic of Illustration of Artificial Neural Network

In an ANN, nodes (neurons) are connected to one another in single or multiple hierarchical layers and can send and receive signals, as shown in Figure 10. The response (rejection or acceptance) to these signals depends on the output of the nonlinear activation function. A weight is associated with each input of every neuron in the network; it modulates the given input and helps transfer the data towards the output layer. The input layer transmits the inputs, in the form of a feature vector with weighted values, to the hidden layer. The hidden layer, composed of activation units, takes the weighted feature vector from the first layer and performs the calculations that produce its output. The output layer is also made up of activation units, each corresponding to a label / class present in the dataset; it takes the weighted output of the hidden layer and predicts the corresponding class [131]. During the training phase, an ANN uses back-propagation [132] to reduce the error function; the error is reduced by updating the weight values in each layer.
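The forward pass and back-propagation weight updates described above can be sketched as a tiny NumPy network trained on the classic XOR problem; the layer sizes, learning rate, and iteration count are arbitrary choices for illustration, not values from the surveyed systems.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a problem a single neuron cannot solve, but a hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0.0, 1.0, (2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(0.0, 1.0, (8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses, lr = [], 1.0
for _ in range(2000):
    # Forward pass through the layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    # Back-propagation: push the error gradient back layer by layer
    # and update every weight (the error-reduction step above).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)
```

Tracking `losses` shows the error function shrinking as the weights in each layer are updated.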

For breast cancer classification, mainly two types of ANN have been used, i.e., the Shallow Neural Network (SNN) [133] and the Deep Neural Network (DNN) [134] (refer to Figure 11 for details). Of the two, the DNN is the more popular and has been used extensively for medical image analysis and diagnosis [135]. A DNN can automatically extract relevant features from raw input data without expert knowledge or human intervention. Owing to these characteristics, DL approaches provide substantial improvements in diagnosis, analysis, and clinical decision-making processes based on medical imaging data.

Figure 11: Types of Artificial Neural Network used for Breast Cancer Classification.

After achieving state-of-the-art results in the domain of image analysis, multiple variants of the DNN have been employed for breast cancer classification in the literature. The Multi-Layer Neural Network (ML-NN) [136, 137], Deep Belief Network (DBN) [138, 139], Principal Component Analysis Network (PCANet), Stacked Denoising Autoencoder (SDAE) [127], and various variations of the Convolutional Neural Network (CNN) [141, 142, 143, 144] are some of the DNN variants that have been used for breast cancer classification on breast imaging data. Every DL algorithm has its own strengths and limitations when used for medical imaging analysis; in the literature, variants of the CNN are used most often to analyze breast images for cancer detection, owing to their robust performance.

Table 2 summarizes strengths and limitations of DNN models when employed for medical imaging analysis.

Model Description Advantages Limitations
 SNN [133]  Single hidden layer feed forward network
  • Due to the small network size, fewer computing resources and less training time and memory are required.

  • Optimization of hyper-parameters for training is not very challenging.

  • Able to produce reasonable performance on small datasets.

  • Good performance on low dimensional data.

  • Poor performance on high dimensional data.

  • Low performance on multi-class problems.

  • Not easy to generalize the predictive result.

 DNN [134]  Feed forward network with multiple hidden layers
  • Generalization performance can be increased by increasing the hidden layers.

  • Computationally complex due to the large network and many hyper-parameters.

  • Optimization of hyper-parameters for training is very challenging.

  • Requires large datasets to train and attain good performance.

 SDAE [127]  Extracts discriminant, representative hidden patterns from the data using an intrinsic data-reconstruction method
  • Possesses noise-reduction ability.

  • Discriminative hidden patterns can be extracted through the data-reconstruction mechanism.

  • Easy regularization and optimization of training parameters.

  • The automatic noise-elimination ability helps extract relevant features.

  • Poor performance on low dimensional data.

  • Poor correlation among the dimensions.

  • The automatic noise-reduction property is sometimes not suitable for low dimensional data.

 DBN [138, 139]  Generative model comprising several layers that follow greedy layer-wise feature learning and training; all hidden layers are trained one layer at a time. It can work in a semi-supervised manner
  • The layer-by-layer training practice helps improve feature generalization.

  • Easy to optimize the hyper-parameters of each layer.

  • Layers of the network can be trained in an unsupervised manner, which is in high demand for BrC (breast cancer) classification.

  • Shows better training performance when only a small number of annotated images is available.

  • DBN cannot track the loss.

 CNN (De novo) [143]  The layers (input, convolutional, pooling, and fully-connected) are hierarchically arranged in a customizable manner; the model is trained from scratch in a supervised manner.
  • A customized model can be developed according to the type and number of images.

  • Preferred when source images are not enough for training.

  • For datasets from different domains, it is difficult to optimize the model training.

  • Hard to solve multi-class problems if the number of available images is small.

 CNN (Pre-trained) [144]  The model is trained via transfer learning (TL) using available pre-trained networks. The network's layers are the same as in CNN (De novo), i.e., input, convolutional, pooling, and FC layers, but in a different hierarchical arrangement.
  • The model can be trained more quickly than de novo when limited resources are available.

  • Shows reasonable performance on small target datasets.

  • Produces unreliable results on very small datasets.

  • Hard to handle and optimize newly appended layers.

Table 2: Strengths and limitations of DL algorithms used for medical imaging analysis

Among the aforementioned DL architectures, the CNN is the most powerful and effective, and the most extensively used for medical imaging analysis, particularly breast imaging analysis [145]. In the subsequent subsections, the CNN architecture is discussed in detail in connection with breast imaging analysis and breast cancer detection.

3.2.3 Convolutional Neural Network (CNN) for breast imaging analysis

As discussed, CNNs are a sub-type of deep, feed-forward neural networks that have shown robust results in applications involving visual input, i.e., images [146]. Unlike a typical ANN, a CNN architecture can take the entire datum (i.e., the image) as input. Two techniques are usually used to train CNN architectures: transfer learning (TL) and de novo training (refer to Figure 11 for details). In the de novo technique, the CNN architecture is trained from scratch, and different trained models may be combined to obtain an optimal model, while the transfer learning method adapts pre-trained models for the analysis [146]. Transfer learning is a convenient method, particularly when only limited or small datasets are available [147].
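The de novo vs. transfer-learning distinction can be illustrated compactly with scikit-learn; in this sketch, PCA stands in for a pre-trained CNN backbone (an assumption made purely for illustration, not how the surveyed systems were built): the backbone is fitted once on "source" data, frozen, and only a new classifier head is trained on the small "target" set.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
# "Source" split stands in for a large pre-training corpus; the small
# "target" split plays the role of a scarce medical dataset.
X_src, X_tgt, y_src, y_tgt = train_test_split(X, y, test_size=0.2,
                                              random_state=0)

# Frozen "backbone": fitted once on source data, then reused as-is.
scaler = StandardScaler().fit(X_src)
backbone = PCA(n_components=10).fit(scaler.transform(X_src))
extract = lambda data: backbone.transform(scaler.transform(data))

# Transfer learning: only the new classification head is trained on
# the target data; de novo would instead fit everything from scratch.
head = LogisticRegression(max_iter=5000).fit(extract(X_tgt), y_tgt)
acc = head.score(extract(X_tgt), y_tgt)
```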

A CNN architecture comprises hierarchical layers with trainable filters (convolution operations) and pooling operations (which reduce the size of the image representation) that capture discriminative features from the raw input data to enhance classification performance. Lastly, a classification layer computes the probability / score of the learned classes from the extracted features [28]. An illustration of a deep CNN (DCNN, i.e., a CNN with many convolutional and pooling layers) applied to breast image classification (for breast cancer detection) is provided in Figure 12.
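The individual CNN building blocks just named (convolution with a trainable filter, pooling, and a softmax classification layer) can be sketched in plain NumPy; the input "patch", filter, and three-class head below are toy illustrations, not a working detector.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid cross-correlation of a 2-D image with a small kernel."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = (img[r:r + kh, c:c + kw] * kernel).sum()
    return out

def maxpool2x2(x):
    """2x2 max pooling: halves each spatial dimension."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def softmax(z):
    """Turns class scores into probabilities that sum to 1."""
    e = np.exp(z - z.max())
    return e / e.sum()

img = np.random.default_rng(0).random((28, 28))  # stand-in image patch
edge = np.array([[1.0, -1.0]])                   # simple edge filter
feat = maxpool2x2(np.maximum(conv2d(img, edge), 0))  # conv -> ReLU -> pool
scores = softmax(feat.ravel()[:3])  # toy 3-class head (normal/benign/malignant)
```

In a real DCNN the kernels are learned during training and many such conv/pool stages are stacked, as shown in Figure 12.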

Figure 12: The architecture of a deep CNN applied to breast cancer detection and classification [12]. Acronyms used in the figure: Conv. Layer = Convolutional layer, MaxPool = Maximum value pooling, and FC Layer = Fully connected layer. The dropout layer is used to reduce overfitting. Fully-connected layers are placed after multiple convolutional and pooling layers; every neuron of these layers is connected to every neuron of the next layer, which helps aggregate low-level features / information into larger patterns from the input data. The softmax function in the last layer calculates the probability of each class present in the training data, i.e., malignant tumor, benign tumor, or normal breast image (without any tumor).

As shown in Figure 12, for the analysis of breast imaging (or any other medical imaging), the input layer of the CNN architecture accepts an input image. The convolutional layers are the most important layers; they contain convolutional filters known as kernels and use convolution operations to capture features such as edges, colors, shapes, and blobs. The layers and convolutional filters are hierarchically arranged so that the level of extraction progresses from low-level to high-level features as the depth of the layers increases.

3.2.4 Popular CNN architectures for breast imaging analysis

Studies have revealed that CNN-based models enhance diagnostic accuracy and reduce the false detection rate when used for cancer detection on breast imaging data [148]. Using different combinations of parameters and hyper-parameters (in the context of model depth), various Transfer Learning (TL) based CNN architectures have been employed for breast imaging analysis, such as LeNet [149], AlexNet [142], GoogLeNet [150], VGG [151], CiFarNet [152], Inception [125], Inception-v4 [153], and ResNet [154]. All of these CNN architectures have reported robust results when applied to different breast imaging datasets.

Table 3 presents a summary of selected recent articles, published in the last 4-5 years, that have used deep learning methods (particularly CNN based frameworks) to detect breast cancer. One commonality of these articles is that almost all (except a few) present results on mammogram datasets to prove the robustness of the proposed framework.

Further, Table 3 provides detailed information on the CNN based architectures / frameworks in terms of training data, network architecture, and performance assessment using different evaluation metrics. Most of the existing schemes are proposed with different arrangements of input size, network depth, and number of filters. The training and testing datasets utilized by these models may be public or private.
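The evaluation metrics that recur in Table 3 (accuracy, AUC, sensitivity / SN, specificity / SP) can be computed from a model's predictions with scikit-learn; the labels and scores below are a small made-up example, not data from any cited study.

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

y_true  = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # 1 = malignant, 0 = benign
y_score = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1])  # model outputs
y_pred  = (y_score >= 0.5).astype(int)         # threshold at 0.5

acc = accuracy_score(y_true, y_pred)           # "Acc" in Table 3
auc = roc_auc_score(y_true, y_score)           # "AUC" (threshold-free)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sn = tp / (tp + fn)   # sensitivity ("SN"): recall on the malignant class
sp = tn / (tn + fp)   # specificity ("SP"): recall on the benign class
```

Note that AUC is computed from the raw scores, which is why papers often report it alongside threshold-dependent metrics such as accuracy.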

Research Team Year Task Performed Data Source # of Images ANN Type and Architecture Size of Input Evaluation Results
 Fonseca et al. [155]  2015  Classification: Lesion Density  Medical centres in Lima, Peru  1157 mammograms  CNN: 3 Conv Layer + SVM as classifier  200 x 200  AUC: 0.73
 Carneiro et al. [156]  2015  Classification: mass tissue  DDSM and INbreast  680 images (DDSM) and 410 (INbreast)  CNN: 4 Conv Layer + 2 FC + Softmax Classifier  264 x 264  AUC: 0.91
 Z. Jiao et al. [157]  2016  Classification: benign and malignant  DDSM  600 (300 each of benign and malignant)  DeepCNN: 1 input + 5 Conv Layer + 3 Fully Connected  227 x 227  Accuracy: 0.967
 Huynh et al. [158]  2016  Classification: benign and malignant  FFDM  219  CNN: 5 Conv Layers + 3 FC + SVM Classifier  256 x 256  AUC: 0.81
 Suzuki et al. [159]  2016  Classification: mass and normal region  DDSM  198  DCNN: 3 Conv Layer + 2 FC + Softmax Classifier  224 x 224  SN: 89.0
 Jiao et al. [157]  2016  Mass classification: benign and malignant  DDSM  600  CNN: 5 Conv Layers + 2 FC + SVM Classifier  227 x 227  Accuracy: 0.967
 Samala et al. [160]  2016  Mass detection  DBT volumes from University of South Florida  2282 digitised films and mammograms  CNN: 4 Conv Layer + 3 FC  128 x 128  AUC: 0.80
 J. Arevalo et al. [161]  2016  Classification: benign and malignant  BCDR-F03  736 (426 benign, 310 malignant lesions)  CNN: 2 Conv Layer + 1 FC + Softmax Classifier  150 x 150  AUC: 0.82
 K. K. Samala et al. [162]  2017  Classification: benign and malignant  SFM, DM  2242 (1057 malignant, 1397 benign)  DeepCNN: 5 Conv Layer + 5 Fully Connected  256 x 256  AUC: 0.82
 I. Kumar et al. [163]  2017  Classification: Breast Density  MIAS, CBIS-INbreast  480 MLO view digitized screen films  CNN  128 x 128  Accuracy: 0.57, AUC: 0.77
 Kooi et al. [164]  2017  Detection of lesions  FFDM  45000 images (44090 for training + 18182 for testing)  DCNN: 5 Conv + 2 FC + Softmax Classifier  250 x 250  Acc: 0.85
 Li et al. [165]  2017  Classification: benign and malignant  FFDM  456  CNN: 3 Conv Layer + 2 FC + SVM as classifier  224 x 224  AUC: 86.00
 Ahn et al. [166]  2017  Classification: non-dense, dense  FFDM  397  CNN: Transfer Learning based CNN model  256 x 256  Correlation coeff.: 0.96
 Dhungel et al. [167]  2017  Classification: benign and malignant  PROCAS  410  Cascaded deep learning model: LeNet + RNN  40 x 40  Acc: 85.0, SP: 70.0, SN: 98.0
 Mugahed et al. [168]  2018  Classification: benign, normal and malignant  INbreast  410  DCNN: AlexNet  227 x 227  Acc: 89.91, AUC: 94.78, SN: 95.64, F-score: 96.84
 Wu et al. [169]  2018  Classification: non-dense, dense  Private  200000  Transfer Learning based DNN Model  224 x 224  AUC: 93.40
 Xu et al. [170]  2018  Classification: low dense and high dense tissue  INbreast  410  ResNet: 36 weighted layers, 3 stages with 7 residual learning blocks  224 x 224  Acc: 96.80
 Zhu et al. [171]  2018  Classification: malignant, benign  DDSM-BRCP  316  Fully Convolutional Network + CRF: 4 FCN (each with 3 layers with multi-scale kernels)  40 x 40  Dice Score: 0.97
 Y. ShaoDe et al. [172]  2018  Classification: breast lesion diagnosis  BCDR-F03  736 (230 benign and 176 malignant)  SCNN: 1 Conv + 1 Pool + 1 FC  128 x 128  Acc: 73.0, AUC: 0.82
 D. Ribli et al. [173]  2018  Classification: malignant, benign  INbreast  410 (masses, calcifications, asymmetries, and distortions)  Faster R-CNN: VGG16  2100 x 1700  AUC: 0.95
 M. A. Al-Masni et al. [174]  2018  Classification and detection simultaneously: malignant, benign  DDSM  600 (benign and malignant)  CNN: Conv24 + Pool1 + FC2  7 x 7  Acc: 97.0
 H. Chougrad et al. [175]  2018  Mass lesion classification  DDSM  5316 (benign and malignant)  InceptionV3: Conv5 + Pool3  224 x 224  Acc: 97.35, AUC: 0.98
 F. F. Ting et al. [176]  2018  Classification: benign, malignant and normal  MIAS  221 (21 benign, 17 malignant, 183 normal)  CNN: 5 Conv + 2 FC  128 x 128  Acc: 90.5, SP: 90.71, SN: 89.47, AUC: 0.901
 K. Mendel et al. [177]  2019  Classification: benign and malignant  FFDM  78  CNN: VGG19  224 x 224  AUC: 0.81
 Basile et al. [37]  2019  Classification: benign and malignant  BCDR  364  Deep Learning method  200 x 200  SN: 92.89
 Ragab et al. [178]  2019  Classification: benign, normal and malignant  DDSM  5257 images  DCNN: (AlexNet [5 Conv + 2 FC] + SVM Classifier)  227 x 227  Acc: 87.20, AUC: 94.0
 Ionescu et al. [179]  2019  Classification: cancerous, non-cancerous  PROCAS  73128  DCNN: 4 Conv2D layers with 1 FC layer  640 x 512  AUC: 61.00
 Singh et al. [180]  2020  Shape classification: irregular, lobular, oval, round  INbreast, DDSM  410 (INbreast), 1168 (DDSM)  GAN + CNN: 3 Conv Layer + 2 FC + Softmax Classifier  64 x 64  Acc: 83.0
 Costa et al. [181]  2020  Classification: detection of architectural distortion  Private FFDM  280  VGG-16: 13 Conv2D, 4 pooling, 3 Dense  224 x 224  AUC: 0.89
 Abhijeet et al. [182]  2021  Classification: normal, benign, malignant  MIAS  322  DCNN + RPN: 4 Conv2D (Dense + ReLU + BatchNorm + Dropout) + Softmax  256 x 256  Dice Score: 0.97
 Altaf et al. [183]  2021  Classification: normal, abnormal benign, abnormal malignant  DDSM, INbreast, BCDR  900 (DDSM), 300 (INbreast), 450 (BCDR)  Transfer Learning based PCNN (Pulse-Coupled Neural Network) + DCNN  224 x 224  Acc: 98.72 (DDSM), 97.5 (INbreast), 96.30 (BCDR)
 A. Khamparia et al. [184]  2021  Classification: normal, abnormal benign, abnormal malignant  DDSM patch  10713  MVGG16 + ImageNet  224 x 224  Acc: 94.3, AUC: 0.933
Table 3: Performance analysis of different variants of deep learning based models for breast cancer detection

Analyzing Table 3, we can conclude that researchers have utilized deep convolutional neural networks (DCNNs) to perform many different tasks in breast cancer analysis, as DCNN architectures achieve state-of-the-art results. Some research studies focused on classifying the structure and geometry of breast tissue, such as dense, non-dense, irregular, lobular, oval, and round shapes; others classified abnormal tissue structures as malignant or benign. We can also notice from the table that the performance of a DNN mostly depends on the depth of the architecture, the availability of large datasets, high computing resources, and large training input dimensions. Although DNNs have achieved remarkable results in the domain of medical image analysis, the availability of large annotated medical image datasets and of computing resources remain the main requirements.

4 Conclusion & Future Directions

Breast cancer is a critical public health problem and one of the major causes of mortality in women. Early diagnosis and detection, proper control mechanisms, and cure are necessary to reduce the mortality caused by this deadly disease. Several popular imaging modalities, such as mammograms, ultrasound, magnetic resonance imaging, and histopathological images, among many others, are used for breast cancer diagnosis. Traditionally, pathologists / radiologists observe breast images manually and finalize their decisions in consensus with other medical experts. However, manually observing a large number of breast images for possible breast cancer diagnosis is a cumbersome and time-consuming process, which often leads to false positive or false negative outcomes. Hence, there is always a need for an automated system to speed up the image analysis process and to help radiologists diagnose breast cancer early. Such automated systems give radiologists the opportunity of a second opinion, allowing them to make stronger, more reliable, and more accurate decisions regarding breast cancer diagnosis.

Keeping in view the importance of automated systems for the diagnosis of breast cancer using breast imaging, in this research study we have provided a complete road map for readers to fully understand the working mechanisms of AI based automated breast cancer detection systems. We began by describing the basic and most important imaging modalities that are extensively used for breast cancer diagnosis. Along with a comprehensive description of each imaging modality, its strengths and limitations have been provided to give readers a broader idea of these modalities. Furthermore, some popular available datasets for each modality have been outlined, along with their links, so that readers pursuing further research can easily access the relevant databases.

A basic understanding of AI algorithms is essential for designing AI based automated systems for breast imaging analysis, and this research work provides it. AI algorithms are broadly divided into handcrafted-features based algorithms (conventional AI / ML algorithms) and those that learn feature representations from the raw input data automatically (Deep Learning (DL) algorithms); both categories have been described in detail to develop the reader's understanding. Since DL algorithms are the more popular among the research community for analyzing medical imaging data, an extensive insight into these algorithms has been provided, along with their strengths and limitations in connection with breast imaging analysis.

Finally, we turned our attention to the Convolutional Neural Network (CNN), one of the most popular and frequently used DL architectures for medical imaging analysis (particularly breast imaging analysis). Along with the theoretical details of the CNN, this research provides a comprehensive account of the most recently employed CNN based architectures for breast imaging analysis, together with the datasets used and the results obtained on them.

Although AI is playing a significant role in the development of reliable automated systems for medical image analysis and disease prediction, many issues remain to be addressed before AI can eventually influence clinical practice. One of the main issues is the limited availability of comprehensive, fully labeled datasets and the need for solid ethical regulations. In addition, AI algorithms (particularly DL algorithms) are "black box" in nature, i.e., there is no proper justification or explanation of the decisions / predictions they make. In medical diagnosis (such as breast cancer diagnosis), black box decisions (like the recommendations / decisions made by AI based automated systems) are usually not preferred, as radiologists / physicians are mainly interested in knowing and understanding how a decision was made and on what factors it was based [185]. Because of this lack of explainability, and several other factors such as the fear of losing control over autonomous decision making, automated AI based systems for disease prediction / diagnosis (decision support systems) are often regarded by physicians as a threat or a loss of control [186].

Explainability of AI algorithms was ranked highly by physicians many years ago [187], who considered it one of the most important features of AI based automated decision support systems. Today, many sources, such as the Villani report on artificial intelligence in France [188], recommend "opening the black box of artificial intelligence" and focusing on the use of interpretable models for making high-stakes decisions [189].

Hence, in order to gain physicians' confidence in the decisions made by AI algorithms and to justify the reliability of those decisions, a proper explanation of each decision is needed, especially when these algorithms are used to predict a disease. This also opens up the research area of "Ethical AI". It is therefore the responsibility of the research community to make AI algorithms fully explainable and interpretable, so that these systems can become strong candidates for informing decision making in disease prediction. This will help embed AI technology widely in clinical care applications.


  • Anastasiadi et al. [2017] Zoi Anastasiadi, Georgios D Lianos, Eleftheria Ignatiadou, Haralampos V Harissis, and Michail Mitsis. Breast cancer in young women: an overview. Updates in surgery, 69(3):313–317, 2017.
  • Network et al. [2012] Cancer Genome Atlas Network et al. Comprehensive molecular portraits of human breast tumours. Nature, 490(7418):61, 2012.
  • DeSantis et al. [2019] Carol E DeSantis, Jiemin Ma, Mia M Gaudet, Lisa A Newman, Kimberly D Miller, Ann Goding Sauer, Ahmedin Jemal, and Rebecca L Siegel. Breast cancer statistics, 2019. CA: a cancer journal for clinicians, 69(6):438–451, 2019.
  • Man et al. [2020] Rui Man, Ping Yang, and Bowen Xu. Classification of breast cancer histopathological images using discriminative patches screened by generative adversarial networks. IEEE Access, 8:155362–155377, 2020.
  • Mambou et al. [2018] Sebastien Jean Mambou, Petra Maresova, Ondrej Krejcar, Ali Selamat, and Kamil Kuca. Breast cancer detection using infrared thermal imaging and a deep learning model. Sensors, 18(9):2799, 2018.
  • Mahmood et al. [2020] Tariq Mahmood, Jianqiang Li, Yan Pei, Faheem Akhtar, Azhar Imran, and Khalil Ur Rehman. A brief survey on breast cancer diagnostic with deep learning schemes using multi-image modalities. IEEE Access, 8:165779–165809, 2020.
  • Chiao et al. [2019] Jui-Ying Chiao, Kuan-Yung Chen, Ken Ying-Kai Liao, Po-Hsin Hsieh, Geoffrey Zhang, and Tzung-Chi Huang. Detection and classification the breast tumors using mask r-cnn on sonograms. Medicine, 98(19), 2019.
  • Cruz-Roa et al. [2017] Angel Cruz-Roa, Hannah Gilmore, Ajay Basavanhally, Michael Feldman, Shridar Ganesan, Natalie NC Shih, John Tomaszewski, Fabio A González, and Anant Madabhushi. Accurate and reproducible invasive breast cancer detection in whole-slide images: A deep learning approach for quantifying tumor extent. Scientific reports, 7(1):1–14, 2017.
  • Richie and Swanson [2003] Rodney C Richie and John O Swanson. Breast cancer: a review of the literature. Journal of Insurance Medicine, 35(2):85–101, 2003.
  • Moghbel et al. [2019] Mehrdad Moghbel, Chia Yee Ooi, Nordinah Ismail, Yuan Wen Hau, and Nogol Memari. A review of breast boundary and pectoral muscle segmentation methods in computer-aided detection/diagnosis of breast mammography. Artificial Intelligence Review, pages 1–46, 2019.
  • Moghbel and Mashohor [2013] Mehrdad Moghbel and Syamsiah Mashohor. A review of computer assisted detection/diagnosis (cad) in breast thermography for breast cancer detection. Artificial Intelligence Review, 39(4):305–313, 2013.
  • Murtaza et al. [2019] Ghulam Murtaza, Liyana Shuib, Ainuddin Wahid Abdul Wahab, Ghulam Mujtaba, Henry Friday Nweke, Mohammed Ali Al-garadi, Fariha Zulfiqar, Ghulam Raza, and Nor Aniza Azmi. Deep learning-based breast cancer classification through medical imaging modalities: state of the art and research challenges. Artificial Intelligence Review, pages 1–66, 2019.
  • Domingues et al. [2020] Ines Domingues, Gisele Pereira, Pedro Martins, Hugo Duarte, Joao Santos, and Pedro Henriques Abreu. Using deep learning techniques in medical imaging: a systematic review of applications on ct and pet. Artificial Intelligence Review, 53(6):4093–4160, 2020.
  • Kozegar et al. [2019] Ehsan Kozegar, Mohsen Soryani, Hamid Behnam, Masoumeh Salamati, and Tao Tan. Computer aided detection in automated 3-d breast ultrasound images: a survey. Artificial Intelligence Review, pages 1–23, 2019.
  • Saha et al. [2018] Monjoy Saha, Chandan Chakraborty, and Daniel Racoceanu. Efficient deep learning model for mitosis detection using breast histopathology images. Computerized Medical Imaging and Graphics, 64:29–40, 2018.
  • Cheng et al. [2003] Heng-Da Cheng, Xiaopeng Cai, Xiaowei Chen, Liming Hu, and Xueling Lou. Computer-aided detection and classification of microcalcifications in mammograms: a survey. Pattern recognition, 36(12):2967–2991, 2003.
  • Cheng et al. [2006] Heng-Da Cheng, XJ Shi, Rui Min, LM Hu, XP Cai, and HN Du. Approaches for automated detection and classification of masses in mammograms. Pattern recognition, 39(4):646–668, 2006.
  • Suh et al. [2020] Yong Joon Suh, Jaewon Jung, and Bum-Joo Cho. Automated breast cancer detection in digital mammograms of various densities via deep learning. Journal of personalized medicine, 10(4):211, 2020.
  • Mohamed et al. [2018] Aly A Mohamed, Wendie A Berg, Hong Peng, Yahong Luo, Rachel C Jankowitz, and Shandong Wu. A deep learning method for classifying mammographic breast density categories. Medical physics, 45(1):314–321, 2018.
  • Mehmood et al. [2021] Mavra Mehmood, Ember Ayub, Fahad Ahmad, Madallah Alruwaili, Ziyad A Alrowaili, Saad Alanazi, Mamoona Humayun, Muhammad Rizwan, Shahid Naseem, and Tahir Alyas. Machine learning enabled early detection of breast cancer by structural analysis of mammograms. Computers, Materials and Continua, 67(1):641–657, 2021.
  • Van Ourti et al. [2020] Tom Van Ourti, Owen O’Donnell, Hale Koç, Jacques Fracheboud, and Harry J de Koning. Effect of screening mammography on breast cancer mortality: Quasi-experimental evidence from rollout of the dutch population-based program with 17-year follow-up of a cohort. International journal of cancer, 146(8):2201–2208, 2020.
  • Hong et al. [2020] Seri Hong, Soo Yeon Song, Boyoung Park, Mina Suh, Kui Son Choi, Seung Eun Jung, Min Jung Kim, Eun Hye Lee, Chan Wha Lee, and Jae Kwan Jun. Effect of digital mammography for breast cancer screening: a comparative study of more than 8 million korean women. Radiology, 294(2):247–255, 2020.
  • Motlagh et al. [2018] Mehdi Habibzadeh Motlagh, Mahboobeh Jannesari, HamidReza Aboulkheyr, Pegah Khosravi, Olivier Elemento, Mehdi Totonchi, and Iman Hajirasouliha. Breast cancer histopathological image classification: A deep learning approach. BioRxiv, page 242818, 2018.
  • Talo [2019] Muhammed Talo. Automated classification of histopathology images using transfer learning. Artificial Intelligence in Medicine, 101:101743, 2019.
  • George et al. [2020] Kalpana George, Shameer Faziludeen, Praveen Sankaran, et al. Breast cancer detection from biopsy images using nucleus guided transfer learning and belief based fusion. Computers in Biology and Medicine, 124:103954, 2020.
  • Rodriguez-Ruiz et al. [2019] Alejandro Rodriguez-Ruiz, Kristina Lång, Albert Gubern-Merida, Mireille Broeders, Gisella Gennaro, Paola Clauser, Thomas H Helbich, Margarita Chevalier, Tao Tan, Thomas Mertelmeier, et al. Stand-alone artificial intelligence for breast cancer detection in mammography: comparison with 101 radiologists. JNCI: Journal of the National Cancer Institute, 111(9):916–922, 2019.
  • Samuel [1959] Arthur L Samuel. Some studies in machine learning using the game of checkers. IBM Journal of research and development, 3(3):210–229, 1959.
  • LeCun et al. [2015] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
  • Burt et al. [2018] Jeremy R Burt, Neslisah Torosdagli, Naji Khosravan, Harish RaviPrakash, Aliasghar Mortazi, Fiona Tissavirasingham, Sarfaraz Hussein, and Ulas Bagci. Deep learning beyond cats and dogs: recent advances in diagnosing breast cancer with deep neural networks. The British journal of radiology, 91(1089):20170545, 2018.
  • Sharma and Mehra [2020] Shallu Sharma and Rajesh Mehra. Conventional machine learning and deep learning approach for multi-classification of breast cancer histopathology images—a comparative insight. Journal of digital imaging, 33(3):632–654, 2020.
  • Sree et al. [2011] Subbhuraam Vinitha Sree, Eddie Yin-Kwee Ng, Rajendra U Acharya, and Oliver Faust. Breast imaging: a survey. World journal of clinical oncology, 2(4):171, 2011.
  • Lång et al. [2021] Kristina Lång, Magnus Dustler, Victor Dahlblom, Anna Åkesson, Ingvar Andersson, and Sophia Zackrisson. Identifying normal mammograms in a large screening population using artificial intelligence. European Radiology, 31(3):1687–1692, 2021.
  • Arevalo et al. [2015] John Arevalo, Fabio A González, Raúl Ramos-Pollán, Jose L Oliveira, and Miguel Angel Guevara Lopez. Convolutional neural networks for mammography mass lesion classification. In 2015 37th Annual international conference of the IEEE engineering in medicine and biology society (EMBC), pages 797–800. IEEE, 2015.
  • Duraisamy and Emperumal [2017] Saraswathi Duraisamy and Srinivasan Emperumal. Computer-aided mammogram diagnosis system using deep learning convolutional fully complex-valued relaxation neural network classifier. IET Computer Vision, 11(8):656–662, 2017.
  • Khan [2017] Maleika Heenaye-Mamode Khan. Automated breast cancer diagnosis using artificial neural network (ann). In 2017 3rd Iranian Conference on Intelligent Systems and Signal Processing (ICSPIS), pages 54–58. IEEE, 2017.
  • Hadad et al. [2017] Omer Hadad, Ran Bakalo, Rami Ben-Ari, Sharbell Hashoul, and Guy Amit. Classification of breast lesions using cross-modal deep learning. In 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pages 109–112. IEEE, 2017.
  • Basile et al. [2019] TMA Basile, A Fanizzi, L Losurdo, R Bellotti, U Bottigli, R Dentamaro, V Didonna, A Fausto, R Massafra, M Moschetta, et al. Microcalcification detection in full-field digital mammograms: A fully automated computer-aided system. Physica Medica, 64:1–9, 2019.
  • Kim et al. [2016] Dae Hoe Kim, Seong Tae Kim, and Yong Man Ro. Latent feature representation with 3-d multi-view deep convolutional neural network for bilateral analysis in digital breast tomosynthesis. In 2016 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 927–931. IEEE, 2016.
  • Comstock et al. [2020] Christopher E Comstock, Constantine Gatsonis, Gillian M Newstead, Bradley S Snyder, Ilana F Gareen, Jennifer T Bergin, Habib Rahbar, Janice S Sung, Christina Jacobs, Jennifer A Harvey, et al. Comparison of abbreviated breast mri vs digital breast tomosynthesis for breast cancer detection among women with dense breasts undergoing screening. Jama, 323(8):746–756, 2020.
  • Vijayarajeswari et al. [2019] R Vijayarajeswari, P Parthasarathy, S Vivekanandan, and A Alavudeen Basha. Classification of mammogram for early detection of breast cancer using svm classifier and hough transform. Measurement, 146:800–805, 2019.
  • Chen et al. [2017a] Tony Hsiu-Hsi Chen, Amy Ming-Fang Yen, Jean Ching-Yuan Fann, Paula Gordon, Sam Li-Sheng Chen, Sherry Yueh-Hsia Chiu, Chen-Yang Hsu, King-Jen Chang, Won-Chul Lee, Khay Guan Yeoh, et al. Clarifying the debate on population-based screening for breast cancer with mammography: a systematic review of randomized controlled trials on mammography with bayesian meta-analysis and causal model. Medicine, 96(3), 2017a.
  • da Costa Vieira et al. [2017] René Aloísio da Costa Vieira, Gabriele Biller, Gilberto Uemura, Carlos Alberto Ruiz, and Maria Paula Curado. Breast cancer screening in developing countries. Clinics, 72(4):244–253, 2017.
  • Yip et al. [2018] CH Yip, NA Taib, CV Song, RK Pritam Singh, and G Agarwal. Early diagnosis of breast cancer in the absence of population-based mammographic screening in asia. Current Breast Cancer Reports, 10(3):148–156, 2018.
  • Cho et al. [2017] Nariya Cho, Wonshik Han, Boo-Kyung Han, Min Sun Bae, Eun Sook Ko, Seok Jin Nam, Eun Young Chae, Jong Won Lee, Sung Hun Kim, Bong Joo Kang, et al. Breast cancer screening with mammography plus ultrasonography or magnetic resonance imaging in women 50 years or younger at diagnosis and treated with breast conservation therapy. JAMA oncology, 3(11):1495–1502, 2017.
  • Fiorica [2016] James V Fiorica. Breast cancer screening, mammography, and other modalities. Clinical obstetrics and gynecology, 59(4):688–709, 2016.
  • Jesneck et al. [2007] Jonathan L Jesneck, Joseph Y Lo, and Jay A Baker. Breast mass lesions: computer-aided diagnosis models with mammographic and sonographic descriptors. Radiology, 244(2):390–398, 2007.
  • Cheng et al. [2010] Heng-Da Cheng, Juan Shan, Wen Ju, Yanhui Guo, and Ling Zhang. Automated breast cancer detection and classification using ultrasound images: A survey. Pattern Recognition, 43(1):299–317, 2010.
  • Maxim et al. [2014] L Daniel Maxim, Ron Niebo, and Mark J Utell. Screening tests: a review with examples. Inhalation toxicology, 26(13):811–828, 2014.
  • Zhi et al. [2007] Hui Zhi, Bing Ou, Bao-Ming Luo, Xia Feng, Yan-Ling Wen, and Hai-Yun Yang. Comparison of ultrasound elastography, mammography, and sonography in the diagnosis of solid breast lesions. Journal of ultrasound in medicine, 26(6):807–815, 2007.
  • Han et al. [2019] Jing Han, Fei Li, Chuan Peng, Yini Huang, Qingguang Lin, Yubo Liu, Longhui Cao, and Jianhua Zhou. Reducing unnecessary biopsy of breast lesions: Preliminary results with combination of strain and shear-wave elastography. Ultrasound in medicine & biology, 45(9):2317–2327, 2019.
  • Youk et al. [2017] Ji Hyun Youk, Hye Mi Gweon, and Eun Ju Son. Shear-wave elastography in breast ultrasonography: the state of the art. Ultrasonography, 36(4):300, 2017.
  • Tsui et al. [2008] Po-Hsiang Tsui, Chih-Kuang Yeh, Chien-Cheng Chang, and Yin-Yin Liao. Classification of breast masses by ultrasonic nakagami imaging: a feasibility study. Physics in Medicine & Biology, 53(21):6027, 2008.
  • Moustafa et al. [2020] Afaf F Moustafa, Theodore W Cary, Laith R Sultan, Susan M Schultz, Emily F Conant, Santosh S Venkatesh, and Chandra M Sehgal. Color doppler ultrasound improves machine learning diagnosis of breast cancer. Diagnostics, 10(9):631, 2020.
  • Lei et al. [2021] Yang Lei, Xiuxiu He, Jincao Yao, Tonghe Wang, Lijing Wang, Wei Li, Walter J Curran, Tian Liu, Dong Xu, and Xiaofeng Yang. Breast tumor segmentation in 3d automatic breast ultrasound using mask scoring r-cnn. Medical Physics, 48(1):204–214, 2021.
  • Brem et al. [2015] Rachel F Brem, Megan J Lenihan, Jennifer Lieberman, and Jessica Torrente. Screening breast ultrasound: past, present, and future. American Journal of Roentgenology, 204(2):234–240, 2015.
  • Thigpen et al. [2018] Denise Thigpen, Amanda Kappler, and Rachel Brem. The role of ultrasound in screening dense breasts—a review of the literature and practical solutions for implementation. Diagnostics, 8(1):20, 2018.
  • Stavros et al. [1995] A Thomas Stavros, David Thickman, Cynthia L Rapp, Mark A Dennis, Steve H Parker, and Gale A Sisney. Solid breast nodules: use of sonography to distinguish between benign and malignant lesions. Radiology, 196(1):123–134, 1995.
  • Yap et al. [2017] Moi Hoon Yap, Gerard Pons, Joan Marti, Sergi Ganau, Melcior Sentis, Reyer Zwiggelaar, Adrian K Davison, and Robert Marti. Automated breast ultrasound lesions detection using convolutional neural networks. IEEE journal of biomedical and health informatics, 22(4):1218–1226, 2017.
  • Teh and Wilson [1998] W Teh and ARM Wilson. The role of ultrasound in breast cancer screening. a consensus statement by the european group for breast cancer screening. European journal of cancer, 34(4):449–450, 1998.
  • Kelly et al. [2010] Kevin M Kelly, Judy Dean, W Scott Comulada, and Sung-Jae Lee. Breast cancer detection using automated whole breast ultrasound and mammography in radiographically dense breasts. European radiology, 20(3):734–742, 2010.
  • Sardanelli et al. [2004] Francesco Sardanelli, Gian M Giuseppetti, Pietro Panizza, Massimo Bazzocchi, Alfonso Fausto, Giovanni Simonetti, Vincenzo Lattanzio, and Alessandro Del Maschio. Sensitivity of mri versus mammography for detecting foci of multifocal, multicentric breast cancer in fatty and dense breasts using the whole-breast pathologic examination as a gold standard. American Journal of Roentgenology, 183(4):1149–1157, 2004.
  • Morris [2002] Elizabeth A Morris. Breast cancer imaging with mri. Radiologic Clinics, 40(3):443–466, 2002.
  • Sheth and Giger [2020] Deepa Sheth and Maryellen L Giger. Artificial intelligence in the interpretation of breast cancer on mri. Journal of Magnetic Resonance Imaging, 51(5):1310–1324, 2020.
  • Mann et al. [2015] Ritse M Mann, Corinne Balleyguier, Pascal A Baltzer, Ulrich Bick, Catherine Colin, Eleanor Cornford, Andrew Evans, Eva Fallenberg, Gabor Forrai, Michael H Fuchsjäger, et al. Breast mri: Eusobi recommendations for women’s information. European radiology, 25(12):3669–3678, 2015.
  • Rasti et al. [2017] Reza Rasti, Mohammad Teshnehlab, and Son Lam Phung. Breast cancer diagnosis in dce-mri using mixture ensemble of convolutional neural networks. Pattern Recognition, 72:381–390, 2017.
  • Mann et al. [2008] Ritse M Mann, Christiane K Kuhl, Karen Kinkel, and Carla Boetes. Breast mri: guidelines from the european society of breast imaging. European radiology, 18(7):1307–1318, 2008.
  • Houssami and Cho [2018] Nehmat Houssami and Nariya Cho. Screening women with a personal history of breast cancer: overview of the evidence on breast imaging surveillance. Ultrasonography, 37(4):277, 2018.
  • Greenwood [2019] Heather I Greenwood. Abbreviated protocol breast mri: the past, present, and future. Clinical imaging, 53:169–173, 2019.
  • van Zelst et al. [2018] Jan CM van Zelst, Suzan Vreemann, Hans-Joerg Witt, Albert Gubern-Merida, Monique D Dorrius, Katya Duvivier, Susanne Lardenoije-Broker, Marc BI Lobbes, Claudette Loo, Wouter Veldhuis, et al. Multireader study on the diagnostic accuracy of ultrafast breast magnetic resonance imaging for breast cancer screening. Investigative radiology, 53(10):579–586, 2018.
  • Heller and Moy [2019] Samantha L Heller and Linda Moy. Mri breast screening revisited. Journal of Magnetic Resonance Imaging, 49(5):1212–1221, 2019.
  • Aswathy and Jagannath [2017] MA Aswathy and M Jagannath. Detection of breast cancer on digital histopathology images: Present status and future possibilities. Informatics in Medicine Unlocked, 8:74–79, 2017.
  • Tellez et al. [2018] David Tellez, Maschenka Balkenhol, Nico Karssemeijer, Geert Litjens, Jeroen van der Laak, and Francesco Ciompi. H&E stain augmentation improves generalization of convolutional networks for histopathological mitosis detection. In Medical Imaging 2018: Digital Pathology, volume 10581, page 105810Z. International Society for Optics and Photonics, 2018.
  • Veta et al. [2014] Mitko Veta, Josien PW Pluim, Paul J Van Diest, and Max A Viergever. Breast cancer histopathology image analysis: A review. IEEE transactions on biomedical engineering, 61(5):1400–1411, 2014.
  • Nahid et al. [2017] Abdullah-Al Nahid, Ferdous Bin Ali, and Yinan Kong. Histopathological breast-image classification with image enhancement by convolutional neural network. In 2017 20th International Conference of Computer and Information Technology (ICCIT), pages 1–6. IEEE, 2017.
  • Araújo et al. [2017] Teresa Araújo, Guilherme Aresta, Eduardo Castro, José Rouco, Paulo Aguiar, Catarina Eloy, António Polónia, and Aurélio Campilho. Classification of breast cancer histology images using convolutional neural networks. PloS one, 12(6):e0177544, 2017.
  • Bardou et al. [2018] Dalal Bardou, Kun Zhang, and Sayed Mohammad Ahmad. Classification of breast cancer based on histology images using convolutional neural networks. IEEE Access, 6:24680–24693, 2018.
  • Jaglan et al. [2019] Poonam Jaglan, Rajeshwar Dass, and Manoj Duhan. Breast cancer detection techniques: issues and challenges. Journal of The Institution of Engineers (India): Series B, 100(4):379–386, 2019.
  • Posso et al. [2017] Margarita Posso, Teresa Puig, Misericòrdia Carles, Montserrat Rué, Carlos Canelo-Aybar, and Xavier Bonfill. Effectiveness and cost-effectiveness of double reading in digital mammography screening: a systematic review and meta-analysis. European journal of radiology, 96:40–49, 2017.
  • Wilkinson et al. [2017] Louise Wilkinson, Val Thomas, and Nisha Sharma. Microcalcification on mammography: approaches to interpretation and biopsy. The British journal of radiology, 90(1069):20160594, 2017.
  • Pisano et al. [2005] Etta D Pisano, Constantine Gatsonis, Edward Hendrick, Martin Yaffe, Janet K Baum, Suddhasatta Acharyya, Emily F Conant, Laurie L Fajardo, Lawrence Bassett, Carl D’Orsi, et al. Diagnostic performance of digital versus film mammography for breast-cancer screening. New England Journal of Medicine, 353(17):1773–1783, 2005.
  • Zhao et al. [2015] Hong Zhao, Liwei Zou, Xiaoping Geng, and Suisheng Zheng. Limitations of mammography in the diagnosis of breast diseases compared with ultrasonography: a single-center retrospective analysis of 274 cases. European journal of medical research, 20(1):1–7, 2015.
  • Rapelyea and Marks [2018] Jocelyn A Rapelyea and Christina G Marks. Breast ultrasound past, present, and future. Breast Imaging. Croatia: IntechOpen, pages 21–48, 2018.
  • Sood et al. [2019] Rupali Sood, Anne F Rositch, Delaram Shakoor, Emily Ambinder, Kara-Lee Pool, Erica Pollack, Daniel J Mollura, Lisa A Mullen, and Susan C Harvey. Ultrasound for breast cancer detection globally: a systematic review and meta-analysis. Journal of global oncology, 5:1–17, 2019.
  • [84] Radiological Society of North America. Ultrasound images.
  • Hodler et al. [2019] Juerg Hodler, Rahel A Kubik-Huch, and Gustav K von Schulthess. Diseases of the chest, breast, heart and vessels 2019-2022: Diagnostic and interventional imaging. 2019.
  • Reig et al. [2020] Beatriu Reig, Laura Heacock, Krzysztof J Geras, and Linda Moy. Machine learning in breast mri. Journal of Magnetic Resonance Imaging, 52(4):998–1018, 2020.
  • Kalantarova et al. [2021] Anastasia Kalantarova, Nicole Josephine Zembol, and Joanna Kufel-Grabowska. Pregnancy-associated breast cancer as a screening and diagnostic challenge: a case report. Nowotwory. Journal of Oncology, 71(3):162–164, 2021.
  • García et al. [2018] Eloy García, Yago Diez, Oliver Diaz, Xavier Lladó, Robert Martí, Joan Martí, and Arnau Oliver. A step-by-step review on patient-specific biomechanical finite element models for breast mri to x-ray mammography registration. Medical physics, 45(1):e6–e31, 2018.
  • Kumar et al. [2020] Abhinav Kumar, Sanjay Kumar Singh, Sonal Saxena, K Lakshmanan, Arun Kumar Sangaiah, Himanshu Chauhan, Sameer Shrivastava, and Raj Kumar Singh. Deep feature learning for histopathological image classification of canine mammary tumors and human breast cancer. Information Sciences, 508:405–421, 2020.
  • Yang et al. [2019] Heechan Yang, Ji-Ye Kim, Hyongsuk Kim, and Shyam P Adhikari. Guided soft attention network for classification of breast cancer histopathology images. IEEE transactions on medical imaging, 39(5):1306–1315, 2019.
  • Prevedello et al. [2019] Luciano M Prevedello, Safwan S Halabi, George Shih, Carol C Wu, Marc D Kohli, Falgun H Chokshi, Bradley J Erickson, Jayashree Kalpathy-Cramer, Katherine P Andriole, and Adam E Flanders. Challenges related to artificial intelligence research in medical imaging and the importance of image analysis competitions. Radiology: Artificial Intelligence, 1(1):e180031, 2019.
  • Nazir and Khan [2021] Anjum Nazir and Rizwan Ahmed Khan. A novel combinatorial optimization based feature selection method for network intrusion detection. Computers & Security, 102:102164, 2021.
  • Crenn et al. [2020] Arthur Crenn, Alexandre Meyer, Hubert Konik, Rizwan Ahmed Khan, and Saida Bouakaz. Generic body expression recognition based on synthesis of realistic neutral motion. IEEE Access, 8:207758–207767, 2020. doi: 10.1109/ACCESS.2020.3038473.
  • Memon et al. [2020] Jamshed Memon, Maira Sami, Rizwan Ahmed Khan, and Mueen Uddin. Handwritten optical character recognition (OCR): A comprehensive systematic literature review (SLR). IEEE Access, 8:142642–142668, 2020. doi: 10.1109/ACCESS.2020.3012542.
  • Khan et al. [2013] Rizwan Ahmed Khan, Alexandre Meyer, Hubert Konik, and Saïda Bouakaz. Framework for reliable, real-time facial expression recognition for low resolution images. Pattern Recognition Letters, 34(10):1159–1168, 2013. ISSN 0167-8655.
  • Jaliaawala and Khan [2020] Muhammad Shoaib Jaliaawala and Rizwan Ahmed Khan. Can autism be catered with artificial intelligence-assisted intervention technology? a comprehensive survey. Artificial Intelligence Review, 2020.
  • Yu et al. [2018] Kun-Hsing Yu, Andrew L Beam, and Isaac S Kohane. Artificial intelligence in healthcare. Nature biomedical engineering, 2(10):719–731, 2018.
  • Giger [2018] Maryellen L Giger. Machine learning in medical imaging. Journal of the American College of Radiology, 15(3):512–520, 2018.
  • Panayides et al. [2020] Andreas S Panayides, Amir Amini, Nenad D Filipovic, Ashish Sharma, Sotirios A Tsaftaris, Alistair Young, David Foran, Nhan Do, Spyretta Golemati, Tahsin Kurc, et al. Ai in medical imaging informatics: Current challenges and future directions. IEEE Journal of Biomedical and Health Informatics, 24(7):1837–1857, 2020.
  • Shah and Khan [2020] Shahid Munir Shah and Rizwan Ahmed Khan. Secondary use of electronic health record: Opportunities and challenges. IEEE Access, 8:136947–136965, 2020.
  • McDonald et al. [2015] Robert J McDonald, Kara M Schwartz, Laurence J Eckel, Felix E Diehn, Christopher H Hunt, Brian J Bartholmai, Bradley J Erickson, and David F Kallmes. The effects of changes in utilization and technological advancements of cross-sectional imaging on radiologist workload. Academic radiology, 22(9):1191–1198, 2015.
  • Fitzgerald [2001] Richard Fitzgerald. Error in radiology. Clinical radiology, 56(12):938–946, 2001.
  • Lee et al. [2017] June-Goo Lee, Sanghoon Jun, Young-Won Cho, Hyunna Lee, Guk Bae Kim, Joon Beom Seo, and Namkug Kim. Deep learning in medical imaging: general overview. Korean journal of radiology, 18(4):570, 2017.
  • Erickson et al. [2017] Bradley J Erickson, Panagiotis Korfiatis, Zeynettin Akkus, and Timothy L Kline. Machine learning for medical imaging. Radiographics, 37(2):505–515, 2017.
  • Müller and Guido [2016] Andreas C Müller and Sarah Guido. Introduction to machine learning with Python: a guide for data scientists. O'Reilly Media, Inc., 2016.
  • Tang [2019] Xiaoli Tang. The role of artificial intelligence in medical imaging research. BJR Open, 2(1):20190031, 2019.
  • Clark et al. [2013] Kenneth Clark, Bruce Vendt, Kirk Smith, John Freymann, Justin Kirby, Paul Koppel, Stephen Moore, Stanley Phillips, David Maffitt, Michael Pringle, et al. The cancer imaging archive (tcia): maintaining and operating a public information repository. Journal of digital imaging, 26(6):1045–1057, 2013.
  • Bazazeh and Shubair [2016] Dana Bazazeh and Raed Shubair. Comparative study of machine learning algorithms for breast cancer detection and diagnosis. In 2016 5th international conference on electronic devices, systems and applications (ICEDSA), pages 1–4. IEEE, 2016.
  • Nazir and Khan [2020] Anjum Nazir and Rizwan Ahmed Khan. Network Intrusion Detection: Taxonomy and Machine Learning Applications. 2020.
  • Yassin et al. [2018] Nisreen IR Yassin, Shaimaa Omran, Enas MF El Houby, and Hemat Allam. Machine learning techniques for breast cancer computer aided diagnosis using different image modalities: A systematic review. Computer methods and programs in biomedicine, 156:25–45, 2018.
  • Agarap [2018] Abien Fred M Agarap. On breast cancer detection: an application of machine learning algorithms on the wisconsin diagnostic dataset. In Proceedings of the 2nd international conference on machine learning and soft computing, pages 5–9, 2018.
  • Sharma et al. [2017] Ayush Sharma, Sudhanshu Kulshrestha, and Sibi Daniel. Machine learning approaches for breast cancer diagnosis and prognosis. In 2017 International conference on soft computing and its engineering applications (icSoftComp), pages 1–5. IEEE, 2017.
  • Azar and El-Metwally [2013] Ahmad Taher Azar and Shereen M El-Metwally. Decision tree classifiers for automated medical diagnosis. Neural Computing and Applications, 23(7):2387–2403, 2013.
  • Ribeiro et al. [2015] Patricia B Ribeiro, Leandro A Passos, Luis A Da Silva, Kelton AP da Costa, Joao P Papa, and Roseli AF Romero. Unsupervised breast masses classification through optimum-path forest. In 2015 IEEE 28th International Symposium on Computer-Based Medical Systems, pages 238–243. IEEE, 2015.
  • Jian et al. [2012] Wushuai Jian, Xueyan Sun, and Shuqian Luo. Computer-aided diagnosis of breast microcalcifications based on dual-tree complex wavelet transform. Biomedical engineering online, 11(1):1–12, 2012.
  • Kowal et al. [2013] Marek Kowal, Pawel Filipczuk, Andrzej Obuchowicz, Jozef Korbicz, and Roman Monczak. Computer-aided diagnosis of breast cancer based on fine needle biopsy microscopic images. Computers in biology and medicine, 43(10):1563–1572, 2013.
  • Raghavendra et al. [2016] U Raghavendra, U Rajendra Acharya, Hamido Fujita, Anjan Gudigar, Jen Hong Tan, and Shreesha Chokkadi. Application of gabor wavelet and locality sensitive discriminant analysis for automated identification of breast cancer using digitized mammogram images. Applied Soft Computing, 46:151–161, 2016.
  • Li [2012] Jun-Bao Li. Mammographic image based breast tissue classification with kernel self-optimized fisher discriminant for breast cancer diagnosis. Journal of medical systems, 36(4):2235–2244, 2012.
  • Lo et al. [2015] Chung-Ming Lo, Yi-Chen Lai, Yi-Hong Chou, and Ruey-Feng Chang. Quantitative breast lesion classification based on multichannel distributions in shear-wave imaging. Computer methods and programs in biomedicine, 122(3):354–361, 2015.
  • Sharif and Khan [2020] Hamza Sharif and Rizwan Ahmed Khan. A novel machine learning based framework for detection of autism spectrum disorder (ASD). arXiv preprint arXiv:1903.11323, 2020.
  • Sigirci et al. [2021] I Onur Sigirci, Abdulkadir Albayrak, and Gokhan Bilgin. Detection of mitotic cells in breast cancer histopathological images using deep versus handcrafted features. Multimedia Tools and Applications, pages 1–24, 2021.
  • Chan et al. [2015] Tsung-Han Chan, Kui Jia, Shenghua Gao, Jiwen Lu, Zinan Zeng, and Yi Ma. Pcanet: A simple deep learning baseline for image classification? IEEE transactions on image processing, 24(12):5017–5032, 2015.
  • Chen et al. [2014] Yushi Chen, Zhouhan Lin, Xing Zhao, Gang Wang, and Yanfeng Gu. Deep learning-based classification of hyperspectral data. IEEE Journal of Selected topics in applied earth observations and remote sensing, 7(6):2094–2107, 2014.
  • Chen et al. [2017b] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence, 40(4):834–848, 2017b.
  • He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • Ren et al. [2015] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28:91–99, 2015.
  • Cheng et al. [2016] Jie-Zhi Cheng, Dong Ni, Yi-Hong Chou, Jing Qin, Chui-Mei Tiu, Yeun-Chung Chang, Chiun-Sheng Huang, Dinggang Shen, and Chung-Ming Chen. Computer-aided diagnosis with deep learning architecture: applications to breast lesions in us images and pulmonary nodules in ct scans. Scientific reports, 6(1):1–13, 2016.
  • Litjens et al. [2016] Geert Litjens, Clara I Sánchez, Nadya Timofeeva, Meyke Hermsen, Iris Nagtegaal, Iringo Kovacs, Christina Hulsbergen-Van De Kaa, Peter Bult, Bram Van Ginneken, and Jeroen Van Der Laak. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Scientific reports, 6(1):1–11, 2016.
  • Todoroki et al. [2017] Yoshihiro Todoroki, Xian-Hua Han, Yutaro Iwamoto, Lanfen Lin, Hongjie Hu, and Yen-Wei Chen. Detection of liver tumor candidates from ct images using deep convolutional neural networks. In International Conference on Innovation in Medicine and Healthcare, pages 140–145. Springer, 2017.
  • King Jr [2017] Bernard F King Jr. Guest editorial: discovery and artificial intelligence, 2017.
  • Goodfellow et al. [2016] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
  • Hecht-Nielsen [1989] Robert Hecht-Nielsen. Theory of the backpropagation neural network. In International 1989 Joint Conference on Neural Networks, pages 593–605 vol. 1, 1989. doi: 10.1109/IJCNN.1989.118638.
  • Bebis and Georgiopoulos [1994] George Bebis and Michael Georgiopoulos. Feed-forward neural networks. IEEE Potentials, 13(4):27–31, 1994.
  • Svozil et al. [1997] Daniel Svozil, Vladimir Kvasnicka, and Jiri Pospichal. Introduction to multi-layer feed-forward neural networks. Chemometrics and intelligent laboratory systems, 39(1):43–62, 1997.
  • Bakator and Radosav [2018] Mihalj Bakator and Dragica Radosav. Deep learning and medical diagnosis: A review of literature. Multimodal Technologies and Interaction, 2(3):47, 2018.
  • Bengio [2007] Yoshua Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
  • Arefan et al. [2015] D Arefan, A Talebpour, N Ahmadinejhad, and A Kamali Asl. Automatic breast density classification using neural network. Journal of Instrumentation, 10(12):T12002, 2015.
  • Fischer and Igel [2012] Asja Fischer and Christian Igel. An introduction to restricted boltzmann machines. In Iberoamerican Congress on Pattern Recognition, pages 14–36. Springer, 2012.
  • Zhang et al. [2016] Qi Zhang, Yang Xiao, Wei Dai, Jingfeng Suo, Congzhi Wang, Jun Shi, and Hairong Zheng. Deep learning based classification of breast tumors with shear-wave elastography. Ultrasonics, 72:150–157, 2016.
  • Wu et al. [2016] Jinjie Wu, Jun Shi, Yan Li, Jingfeng Suo, and Qi Zhang. Histopathological image classification using random binary hashing based pcanet and bilinear classifier. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2050–2054. IEEE, 2016.
  • Fukushima and Miyake [1982] Kunihiko Fukushima and Sei Miyake. Neocognitron: A self-organizing neural network model for a mechanism of visual pattern recognition. In Competition and cooperation in neural nets, pages 267–285. Springer, 1982.
  • Krizhevsky et al. [2012] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25:1097–1105, 2012.
  • Wan et al. [2017] Tao Wan, Jiajia Cao, Jianhui Chen, and Zengchang Qin. Automated grading of breast cancer histopathology using cascaded ensemble with combination of multi-level image features. Neurocomputing, 229:34–44, 2017. ISSN 0925-2312. Special issue: Advances in computing techniques for big medical image data.
  • Han et al. [2017] Zhongyi Han, Benzheng Wei, Yuanjie Zheng, Yilong Yin, Kejian Li, and Shuo Li. Breast cancer multi-classification from histopathological images with structured deep learning model. Nature Scientific Reports, 7(1):4172, Jun 2017. ISSN 2045-2322. doi: 10.1038/s41598-017-04075-z.
  • Yari et al. [2020] Yasin Yari, Thuy V Nguyen, and Hieu T Nguyen. Deep learning applied for histological diagnosis of breast cancer. IEEE Access, 8:162432–162448, 2020.
  • Khan et al. [2019] Rizwan Ahmed Khan, Arthur Crenn, Alexandre Meyer, and Saida Bouakaz. A novel database of children's spontaneous facial expressions (LIRIS-CSE). Image and Vision Computing, 83-84:61–69, 2019. ISSN 0262-8856.
  • Liu and Yao [1999] Yong Liu and Xin Yao. Ensemble learning via negative correlation. Neural networks, 12(10):1399–1404, 1999.
  • Wang et al. [2019] Zhiqiong Wang, Mo Li, Huaxia Wang, Hanyu Jiang, Yudong Yao, Hao Zhang, and Junchang Xin. Breast cancer detection using extreme learning machine based on feature fusion with cnn deep features. IEEE Access, 7:105146–105158, 2019.
  • Chollet [2017] François Chollet. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1251–1258, 2017.
  • Szegedy et al. [2015] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–9, 2015.
  • Szegedy et al. [2017] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 2017.
  • Shin et al. [2016] Hoo-Chang Shin, Holger R Roth, Mingchen Gao, Le Lu, Ziyue Xu, Isabella Nogues, Jianhua Yao, Daniel Mollura, and Ronald M Summers. Deep convolutional neural networks for computer-aided detection: Cnn architectures, dataset characteristics and transfer learning. IEEE transactions on medical imaging, 35(5):1285–1298, 2016.
  • Simonyan and Zisserman [2014] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • Szegedy et al. [2016] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826, 2016.
  • Fonseca et al. [2015] Pablo Fonseca, Julio Mendoza, Jacques Wainer, Jose Ferrer, Joseph Pinto, Jorge Guerrero, and Benjamin Castaneda. Automatic breast density classification using a convolutional neural network architecture search procedure. In Medical Imaging 2015: Computer-Aided Diagnosis, volume 9414, page 941428. International Society for Optics and Photonics, 2015.
  • Carneiro et al. [2015] Gustavo Carneiro, Jacinto Nascimento, and Andrew P Bradley. Unregistered multiview mammogram analysis with pre-trained deep learning models. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 652–660. Springer, 2015.
  • Jiao et al. [2016] Zhicheng Jiao, Xinbo Gao, Ying Wang, and Jie Li. A deep feature based framework for breast masses classification. Neurocomputing, 197:221–231, 2016.
  • Huynh et al. [2016] Benjamin Q Huynh, Hui Li, and Maryellen L Giger. Digital mammographic tumor classification using transfer learning from deep convolutional neural networks. Journal of Medical Imaging, 3(3):034501, 2016.
  • Suzuki et al. [2016] Shintaro Suzuki, Xiaoyong Zhang, Noriyasu Homma, Kei Ichiji, Norihiro Sugita, Yusuke Kawasumi, Tadashi Ishibashi, and Makoto Yoshizawa. Mass detection using deep convolutional neural network for mammographic computer-aided diagnosis. In 2016 55th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), pages 1382–1386. IEEE, 2016.
  • Samala et al. [2016] Ravi K Samala, Heang-Ping Chan, Lubomir Hadjiiski, Mark A Helvie, Jun Wei, and Kenny Cha. Mass detection in digital breast tomosynthesis: Deep convolutional neural network with transfer learning from mammography. Medical physics, 43(12):6654–6666, 2016.
  • Arevalo et al. [2016] John Arevalo, Fabio A González, Raúl Ramos-Pollán, Jose L Oliveira, and Miguel Angel Guevara Lopez. Representation learning for mammography mass lesion classification with convolutional neural networks. Computer methods and programs in biomedicine, 127:248–257, 2016.
  • Samala et al. [2017] Ravi K Samala, Heang-Ping Chan, Lubomir M Hadjiiski, Mark A Helvie, Kenny H Cha, and Caleb D Richter. Multi-task transfer learning deep convolutional neural network: application to computer-aided diagnosis of breast cancer on mammograms. Physics in Medicine & Biology, 62(23):8894, 2017.
  • Kumar et al. [2017] Indrajeet Kumar, HS Bhadauria, Jitendra Virmani, and Shruti Thakur. A classification framework for prediction of breast density using an ensemble of neural network classifiers. Biocybernetics and Biomedical Engineering, 37(1):217–228, 2017.
  • Kooi et al. [2017] Thijs Kooi, Geert Litjens, Bram Van Ginneken, Albert Gubern-Mérida, Clara I Sánchez, Ritse Mann, Ard den Heeten, and Nico Karssemeijer. Large scale deep learning for computer aided detection of mammographic lesions. Medical image analysis, 35:303–312, 2017.
  • Li et al. [2017] Hui Li, Maryellen L Giger, Benjamin Q Huynh, and Natasha O Antropova. Deep learning in breast cancer risk assessment: evaluation of convolutional neural networks on a clinical dataset of full-field digital mammograms. Journal of medical imaging, 4(4):041304, 2017.
  • Ahn et al. [2017] Chul Kyun Ahn, Changyong Heo, Heongmin Jin, and Jong Hyo Kim. A novel deep learning-based approach to high accuracy breast density estimation in digital mammography. In Medical Imaging 2017: Computer-Aided Diagnosis, volume 10134, page 101342O. International Society for Optics and Photonics, 2017.
  • Dhungel et al. [2017] Neeraj Dhungel, Gustavo Carneiro, and Andrew P Bradley. A deep learning approach for the analysis of masses in mammograms with minimal user intervention. Medical image analysis, 37:114–128, 2017.
  • Al-Antari et al. [2018] Mugahed A Al-Antari, Mohammed A Al-Masni, Mun-Taek Choi, Seung-Moo Han, and Tae-Seong Kim. A fully integrated computer-aided diagnosis system for digital x-ray mammograms via deep learning detection, segmentation, and classification. International journal of medical informatics, 117:44–54, 2018.
  • Wu et al. [2018] Nan Wu, Krzysztof J Geras, Yiqiu Shen, Jingyi Su, S Gene Kim, Eric Kim, Stacey Wolfson, Linda Moy, and Kyunghyun Cho. Breast density classification with deep convolutional neural networks. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6682–6686. IEEE, 2018.
  • Xu et al. [2018] Jingxu Xu, Cheng Li, Yongjin Zhou, Lisha Mou, Hairong Zheng, and Shanshan Wang. Classifying mammographic breast density by residual learning. arXiv preprint arXiv:1809.10241, 2018.
  • Zhu et al. [2018] Wentao Zhu, Xiang Xiang, Trac D Tran, Gregory D Hager, and Xiaohui Xie. Adversarial deep structured nets for mass segmentation from mammograms. In 2018 IEEE 15th international symposium on biomedical imaging (ISBI 2018), pages 847–850. IEEE, 2018.
  • Yu et al. [2019] ShaoDe Yu, LingLing Liu, ZhaoYang Wang, GuangZhe Dai, and YaoQin Xie. Transferring deep neural networks for the differentiation of mammographic breast lesions. Science China Technological Sciences, 62(3):441–447, 2019.
  • Ribli et al. [2018] Dezső Ribli, Anna Horváth, Zsuzsa Unger, Péter Pollner, and István Csabai. Detecting and classifying lesions in mammograms with deep learning. Scientific reports, 8(1):1–7, 2018.
  • Al-antari et al. [2018] Mugahed A Al-antari, Mohammed A Al-masni, Sung-Un Park, JunHyeok Park, Mohamed K Metwally, Yasser M Kadah, Seung-Moo Han, and Tae-Seong Kim. An automatic computer-aided diagnosis system for breast cancer in digital mammograms via deep belief network. Journal of Medical and Biological Engineering, 38(3):443–456, 2018.
  • Chougrad et al. [2018] Hiba Chougrad, Hamid Zouaki, and Omar Alheyane. Deep convolutional neural networks for breast cancer screening. Computer methods and programs in biomedicine, 157:19–30, 2018.
  • Ting et al. [2019] Fung Fung Ting, Yen Jun Tan, and Kok Swee Sim. Convolutional neural network improvement for breast cancer classification. Expert Systems with Applications, 120:103–115, 2019.
  • Mendel et al. [2019] Kayla Mendel, Hui Li, Deepa Sheth, and Maryellen Giger. Transfer learning from convolutional neural networks for computer-aided diagnosis: a comparison of digital breast tomosynthesis and full-field digital mammography. Academic radiology, 26(6):735–743, 2019.
  • Ragab et al. [2019] Dina A Ragab, Maha Sharkas, Stephen Marshall, and Jinchang Ren. Breast cancer detection using deep convolutional neural networks and support vector machines. PeerJ, 7:e6201, 2019.
  • Ionescu et al. [2019] Georgia V Ionescu, Martin Fergie, Michael Berks, Elaine F Harkness, Johan Hulleman, Adam R Brentnall, Jack Cuzick, D Gareth Evans, and Susan M Astley. Prediction of reader estimates of mammographic density using convolutional neural networks. Journal of Medical Imaging, 6(3):031405, 2019.
  • Singh et al. [2020] Vivek Kumar Singh, Hatem A Rashwan, Santiago Romani, Farhan Akram, Nidhi Pandey, Md Mostafa Kamal Sarker, Adel Saleh, Meritxell Arenas, Miguel Arquez, Domenec Puig, et al. Breast tumor segmentation and shape classification in mammograms using generative adversarial and convolutional neural network. Expert Systems with Applications, 139:112855, 2020.
  • Junior et al. [2020] Osmando Pereira Junior, Helder Cesar Rodrigues Oliveira, Carolina Toledo Ferraz, José Hiroki Saito, Marcelo Andrade da Costa Vieira, and Adilson Gonzaga. A novel fusion-based texture descriptor to improve the detection of architectural distortion in digital mammography. Journal of Digital Imaging, pages 1–17, 2020.
  • Beeravolu et al. [2021] Abhijith Reddy Beeravolu, Sami Azam, Mirjam Jonkman, Bharanidharan Shanmugam, Krishnan Kannoorpatti, and Adnan Anwar. Preprocessing of breast cancer images to create datasets for deep-CNN. IEEE Access, 2021.
  • Altaf [2021] Meteb M Altaf. A hybrid deep learning model for breast cancer diagnosis based on transfer learning and pulse-coupled neural networks. Mathematical Biosciences and Engineering, 18(5):5029–5046, 2021.
  • Khamparia et al. [2021] Aditya Khamparia, Subrato Bharati, Prajoy Podder, Deepak Gupta, Ashish Khanna, Thai Kim Phung, and Dang NH Thanh. Diagnosis of breast cancer based on modern mammography using hybrid transfer learning. Multidimensional systems and signal processing, 32(2):747–765, 2021.
  • Moxey et al. [2010] Annette Moxey, Jane Robertson, David Newby, Isla Hains, Margaret Williamson, and Sallie-Anne Pearson. Computerized clinical decision support for prescribing: provision does not guarantee uptake. Journal of the American Medical Informatics Association, 17(1):25–33, 2010.
  • Liberati et al. [2017] Elisa G Liberati, Francesca Ruggiero, Laura Galuppo, Mara Gorli, Marien González-Lorenzo, Marco Maraldi, Pietro Ruggieri, Hernan Polo Friz, Giuseppe Scaratti, Koren H Kwag, et al. What hinders the uptake of computerized decision support systems in hospitals? a qualitative study and framework for implementation. Implementation Science, 12(1):1–13, 2017.
  • Teach and Shortliffe [1981] Randy L Teach and Edward H Shortliffe. An analysis of physician attitudes regarding computer-based clinical consultation systems. Computers and Biomedical Research, 14(6):542–558, 1981.
  • Villani et al. [2018] Cédric Villani, Yann Bonnet, Charly Berthet, François Levin, Marc Schoenauer, Anne Charlotte Cornut, and Bertrand Rondepierre. Donner un sens à l’intelligence artificielle: pour une stratégie nationale et européenne [Giving meaning to artificial intelligence: toward a national and European strategy]. Conseil national du numérique, 2018.
  • Rudin [2019] Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206–215, 2019.