Image Processing Techniques for identifying tumors in an MRI image

03/28/2021 · by Jacob John, et al.

Magnetic Resonance Imaging (MRI) is a medical imaging technique that uses radio waves and magnetic fields to scan the body. It is a tomographic imaging technique, used principally in the field of radiology. With the advantage of being a painless diagnostic procedure, MRI allows medical personnel to obtain clear pictures of the anatomy and the physiological processes occurring in the body, thus allowing early detection and treatment of diseases. These images, combined with image processing techniques, may be used to detect tumors that are difficult to identify with the naked eye. This digital assignment surveys the different image processing techniques used in Automated Tumor Detection (ATD). It initiates the discussion with a comparison of traditional techniques such as Morphological Tools (MT) and the Region Growing Technique (RGT).




I Introduction

MRIs provide us with the technology to detect tumors or neoplasms at an early stage, supplying essential information for early disease detection, i.e., identifying abnormal or diseased tissue. A neoplasm is an uncoordinated, abnormal and excessive growth of tissue occurring inside the body [1]. This growth is referred to as a tumor when it forms a mass. However, it should be noted that neoplasms do not always form a mass [2]; some, such as leukemia and forms of carcinoma in situ, do not form a tumor. Furthermore, the growth of a neoplasm is independent of its surrounding tissues: even after the original growth trigger is removed [3], the neoplasm or tumor continues to grow at an abnormal rate [4][5], thus presenting a threat to the human anatomy.

I-A Neoplasms or Tumors

Neoplasms can be grouped into five categories according to the ICD-O behavior codes [6] and ICD-10 [7]. These include:

  • Benign neoplasms – noncancerous;

  • Neoplasms of uncertain and unknown behavior;

  • Carcinoma in situ – these do not spread and grow in situ, though they could potentially become cancerous [8];

  • Malignant neoplasms stated or presumed to be primary, of lymphoid, hematopoietic and related tissue; and

  • Malignant neoplasms of ill-defined, secondary and unspecified sites.

A malignant neoplasm or malignant tumor is also known as cancer. These cells divide and grow excessively to form cancerous lumps [9], spreading to other parts of the body and invading healthy tissues. Treatments include chemotherapy and radiation therapy, which are used to kill cancer cells throughout the body and in specific parts of it, respectively.

I-B Magnetic Resonance Imaging

Rather than using X-rays or ionizing radiation as CAT or PET scans do, MRI scanners use radio waves and strong magnetic fields to produce cross-sectional images of the body's internal anatomy. An MRI system works on the principle of nuclear magnetic resonance (NMR) and consists of the following components, as depicted in Figure 1:

  • The main magnet, used to generate a strong, uniform static field, the B0 field. This partially polarizes the nuclear spins and causes the hydrogen nuclei to line up in the direction of the field. The strength of the magnetic field produced by this magnet is typically between 0.5 and 2.0 tesla [10].

  • The magnetic field gradient system consisting of a gradient controller and a gradient coil.

  • The radio frequency (RF) system, consisting of an RF coil, an RF amplifier, and an RF controller. The RF transmitter coil generates a rotating magnetic field, B1, for exciting the spin system of the unpaired protons. This specific resonance frequency depends on the tissue being imaged and is termed the Larmor frequency.

  • The receiver coil, connected to the computer system via an analog-to-digital converter (ADC). This coil converts the magnetization into an electric signal for imaging.

Fig. 1: A Magnetic Resonance Imaging System (MRI), simplified. [11]

I-C Fourier Transforms

The received magnetic signal is decomposed into the sum of a series of simple waves with varying amplitudes and frequencies using Fourier transforms (FTs) [12]. Figure 2 illustrates this decomposition from a complicated signal into simple waves.

Fig. 2: Generating a complicated signal by superimposing three simpler waves. [12]

FT isolates the critical components of an image by expressing the signal (i.e., a function of time) in terms of its underlying frequencies. The FT expands the signal over an orthogonal set of sinusoidal basis functions, and the result is known as the frequency domain representation of the original signal. Equation (1) defines the frequency domain representation, or Fourier transform, F(ω) of a continuous function of time f(t) [13], while equation (2) denotes the same equation expanded using Euler's formula, e^(jθ) = cos θ + j sin θ. Note that since t is integrated out, the result depends only on ω; we can therefore write it as F(ω).

F(ω) = ∫ f(t) e^(−jωt) dt        (1)

F(ω) = ∫ f(t) (cos ωt − j sin ωt) dt        (2)

where t and ω are continuous variables and the integrals run from −∞ to ∞.
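This decomposition can be illustrated with a discrete FT. The sketch below (Python with NumPy; the chosen frequencies, amplitudes and sampling rate are illustrative assumptions, not taken from the source) superimposes three simple waves as in Figure 2 and recovers their frequencies from the magnitude spectrum:

```python
import numpy as np

# Compose a "complicated" signal from three simple waves (cf. Figure 2).
fs = 1000                          # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)        # one second of samples
signal = (1.00 * np.sin(2 * np.pi * 5 * t)
          + 0.50 * np.sin(2 * np.pi * 20 * t)
          + 0.25 * np.sin(2 * np.pi * 50 * t))

# Discrete analogue of equation (1): express the signal in the frequency domain.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The three underlying components dominate the magnitude spectrum.
peaks = freqs[np.argsort(np.abs(spectrum))[-3:]]
print(sorted(peaks))  # [5.0, 20.0, 50.0]
```

Each bin of `spectrum` corresponds to one sinusoidal basis function; its magnitude is the amplitude of that component in the original signal.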

I-D Image Acquisition in MRI

Since all the spin systems of the protons precess at the same frequency and phase dictated by the magnetic field B0, a dynamically changing gradient field is applied to separate the spin systems [14]. The FT is then applied to the digitized signal, converting it into its Fourier k-space. The k-space is where the signal is organized into its spatial frequencies and amplitude information; Figure 3 depicts this process. An inverse Fourier transform (IFT) is then applied to transform the data back to image space, as shown in Figure 3. This entire step-by-step process is illustrated in a simplified manner in Figure 4, which gives an overview of the MR imaging process from a signal processing perspective.

Fig. 3: A small part of a coronal slice of a brain interrogated for all its spatial frequencies and amplitude information in Fourier k-spaces. The summation of relative frequencies and the IFT of all other points in k-space contributes to give the image space. [12]

Fig. 4: The MR imaging process used in image acquisition. A simplified overview. [11]
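From a signal processing perspective, reconstruction is a round trip between image space and k-space. A minimal sketch in Python with NumPy follows; the rectangular "slice" is a toy stand-in for real anatomy:

```python
import numpy as np

# Toy "slice": a bright rectangle on a dark background.
image = np.zeros((64, 64))
image[24:40, 20:44] = 1.0

# Acquisition fills k-space: each point stores the amplitude and phase of one
# spatial frequency for the whole slice (cf. Figure 3).
k_space = np.fft.fftshift(np.fft.fft2(image))

# The IFT transforms k-space back to image space.
reconstructed = np.fft.ifft2(np.fft.ifftshift(k_space)).real

print(np.allclose(reconstructed, image))  # True
```

The `fftshift` merely centers the low spatial frequencies in k-space, matching the conventional k-space display.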

II Image Processing Techniques

The previous section discussed the definition of a tumor and the process of MRI. Furthermore, it introduced the Fourier transform and discussed its purpose in imaging the digitized signal received from the MRI system. The following section discusses the step-by-step techniques for detecting tumors in images received from MRI scans. Following image acquisition, this section focuses on segmentation techniques such as MTs and RGTs in order to identify tumors at an early stage.

Patil and Bhalchandra present a step-by-step MATLAB implementation of brain tumor extraction in [15]. Their method incorporates filters for noise removal, filters for enhancement, segmentation, and morphological operations to detect the tumor.

II-A Preprocessing

II-A1 Acquiring a Grayscale MRI Scan

This is the first step of any image processing pipeline. The object of interest is captured by a sensor (e.g., a camera) and then digitized using an analog-to-digital converter.

The acquired magnetic resonance images are represented in grayscale. The intensity or amplitude of a grayscale image is represented as a function f(x, y), where x and y are the spatial coordinates of the image; note that x and y are finite, discrete quantities. Since a grayscale image is represented as an 8-bit image, the value of f ranges from 0 to 255, with 0 being the weakest intensity, represented as the color black due to the absence of light, and 255 being the strongest intensity, represented as the color white, caused by the “total transmission of light at all visible wavelengths” [16].

II-A2 High-Pass Filter for Image Sharpening

A high-pass or sharpening filter is used to preserve all the high-frequency information in an image while reducing the low frequencies. A fraction of the image, after passing it through a high-pass filter, can be added to the original image to obtain an enhanced version of the input image [17]. However, high-pass filters are very sensitive to noise, as they depend mainly on elevating high frequencies while attenuating lower ones.

However, Russo presents a new approach in [18] for the contrast enhancement of images based on a multiple-output system. The chief advantage of this technique, achieved by adopting fuzzy models, is its superior performance when the image is corrupted by Gaussian noise.
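As a minimal sketch of the high-pass sharpening idea above (not Russo's fuzzy approach), the snippet below builds the high-pass residue as the difference between the image and a low-pass box blur, then adds a fraction of it back; the kernel size and fraction are illustrative assumptions:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k-by-k mean filter (low-pass) using edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def sharpen(img, alpha=1.0):
    """Add a fraction of the high-pass residue back to the input [17]."""
    high_pass = img - box_blur(img)   # high frequencies = image minus low-pass
    return img + alpha * high_pass

# A horizontal intensity ramp gets steeper transitions after sharpening.
img = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
sharp = sharpen(img, alpha=1.5)
```

On a perfectly flat region the residue is zero, so sharpening leaves it untouched; this is also why the filter amplifies any noise that is present.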

II-A3 Median Filter for Image Quality Enhancement

Median filters are order-statistics filters. These are a form of nonlinear smoothing operator used to perform noise reduction on an image or signal. Median filters are typically used for salt-and-pepper noise, also known as impulse noise, which can occur due to random bit errors during image transmission or conversion [19].

The median filtering algorithm works by running a window of entries over the entire signal. Suppose the window is of size 2k + 1 centered at position n; the input samples are then x(n − k), …, x(n + k), and the output at n is their median.

Figure 5 illustrates this calculation of the median value.

Fig. 5: Calculation of the median value using the neighborhood values. [24]

However, median filters present the issue of slight image blurring, as they also tend to smooth image details. To overcome this issue, Sun and Neuvo present a detail-preserving median-based filter in [20]. Their approach outperforms the weighted median filter [21], stack filters [22] and the adaptive weighted mean filter [23]: it removes impulses with minimal signal distortion while preserving detail. Furthermore, unlike median filters, the detail-preserving median filter does not affect the image if impulse corruption is absent, making it an ideal prefilter for tumor extraction.
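The sliding-window median computation described above can be sketched directly; the impulse values (255 and 0) in the example signal are illustrative:

```python
import numpy as np

def median_filter_1d(x, k=1):
    """Slide a window of size 2k + 1 over x and output each window's median."""
    padded = np.pad(x, k, mode='edge')
    return np.array([np.median(padded[n:n + 2 * k + 1])
                     for n in range(len(x))])

# A step signal corrupted by two impulses (salt at 255, pepper at 0).
signal = np.array([10, 10, 10, 255, 10, 10, 80, 80, 0, 80], dtype=float)
filtered = median_filter_1d(signal, k=1)
# Both impulses are removed while the step edge at index 6 is preserved:
# [10, 10, 10, 10, 10, 10, 80, 80, 80, 80]
```

Note how the step edge survives untouched, which is exactly what distinguishes the median from a mean filter of the same window size.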

II-B Segmentation

II-B1 Thresholding

This is considered the most trivial method of image segmentation [25]. Equation (3) represents the thresholding process of converting a grayscale image into a binary image:

g(x, y) = 1 if f(x, y) > T, and g(x, y) = 0 otherwise        (3)

where T is the fixed threshold value, ranging between 0 and 255, and g(x, y) is the binary intensity value (since it can only be 0 or 1) of the pixel at the spatial coordinate (x, y).

Fig. 6: The effect of thresholding (right) on an image (left).[26]
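In code, equation (3) reduces to a single comparison per pixel; the 2×2 patch below is a hypothetical example:

```python
import numpy as np

def threshold(f, T=128):
    """Equation (3): g(x, y) = 1 where f(x, y) > T, and 0 otherwise."""
    return (f > T).astype(np.uint8)

# A hypothetical 2x2 grayscale patch.
f = np.array([[ 12, 200],
              [130,  90]], dtype=np.uint8)
g = threshold(f, T=128)
print(g)  # [[0 1]
          #  [1 0]]
```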

Some of the common thresholding techniques are explained in [27]:

II-B2 Global Thresholding (Single Threshold)

These are used when the differences between the foreground and background are very distinct. Jang et al. have also proposed a novel global thresholding algorithm that uses boundary blocks for extracting a bimodal histogram [28].

  • Traditional Thresholding (Otsu’s method) [29]

    – used when the image has two distinct peaks in its histogram representation. This method calculates the optimum threshold separating the two classes such that their inter-class variance is maximum.

  • Iterative Thresholding (A new iterative triclass thresholding technique) [30] – This method first uses Otsu’s method to obtain the threshold and the means of the two separated classes. The image is then separated into three classes using the means derived from the two classes. The first two classes will not be processed further; they are termed the foreground and the background. The third class is referred to as the “To Be Determined” (TBD) region and is involved in the next iteration of triclass separation using Otsu’s method. This method identifies weak objects and reveals fine structures of complex objects better than Otsu’s original approach.

  • Multistage Thresholding (Quadratic Ratio Technique for Handwritten Characters) – as the name suggests, QIR is designed to retain all the details of handwritten characters; hence it would not perform well for MRI images. Owing to the fuzzy stage in its iteration, it performs better than other approaches for segmenting handwritten characters.
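Otsu's method [29] can be sketched as an exhaustive scan over candidate thresholds, keeping the one that maximizes the inter-class variance; the synthetic bimodal intensity data below is an assumption for demonstration:

```python
import numpy as np

def otsu_threshold(img):
    """Return the threshold maximizing the between-class variance [29]."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()       # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0    # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2          # inter-class variance
        if between > best_var:
            best_t, best_var = t, between
    return best_t

# Synthetic bimodal intensities: dark background near 20, bright object near 200.
rng = np.random.default_rng(0)
img = np.clip(np.concatenate([rng.normal(20, 5, 500),
                              rng.normal(200, 5, 500)]),
              0, 255).astype(np.uint8)
T = otsu_threshold(img)   # lands between the two histogram peaks
```

The exhaustive scan is O(256²) at worst but trivially fast for 8-bit images; production implementations compute the class statistics incrementally.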

II-B3 Local Thresholding

  • Single Threshold – uses a single threshold value as described in equation (3)

  • Multiple Threshold [31] – segments an image into multiple levels using its mean and variance.

Fig. 7: Some popular methods for image thresholding.[27]

Global thresholding methods tend to work well for medical images when the object of interest differs significantly from the background with respect to some characteristic. Such methods, such as the one proposed by Bao and Zhang [32], can also be used for noise detection while preserving edges in MRI images, and tend to perform better than wavelet-thresholding denoising methods. Furthermore, a multilevel thresholding method suggested by Manikandan et al. in [33] segments medical images by maximizing entropy; this method uses a real-coded genetic algorithm with SBX crossover and performs more consistently on medical images.

II-B4 Watershed Segmentation

The watershed transformation treats the gray-level image as a topographic relief: the brightness or intensity of each point is treated as its altitude. By analogy with a geological watershed, a drop of water falls onto the surface, seeps along a path, and reaches a local minimum. This analogy is used to separate adjacent drainage basins and to find watershed lines. Furthermore, as proposed by Najman and Schmitt in [35], watershed algorithms can also be specified over a continuous domain. Some of the different watershed definitions are:

  • Watershed by flooding – This method was proposed by Beucher and Lantuéjoul in [36]. Their method extends the idea of drainage basins by continuously allowing “water” from sources to collect in the local minima until the complete relief is flooded, building a barrier wherever “water” from different sources meets. The arrangement of these barriers marks a watershed formed via flooding. One improvement of this method is the Priority-flood method [37].

  • Watershed by topographic distance – Under this definition, each point belongs to the catchment basin of the local minimum nearest to it in terms of topographic distance over the relief.

  • Watershed by the drop of water principle – This idea was formally proposed by Cousty et al. in [38]. Intuitively, the watershed of a relief corresponds to the points from which a “drop of water” can flow toward more than one distinct local minimum.

II-B5 Inter-Pixel Watershed Algorithm

This approach was proposed by Beucher and Meyer in [39]. The procedure is described in Algorithm 1.

Initialize a set S with a distinctly labeled node for each minimum;
while S is not empty do
       Extract a node x of minimum altitude from S;
       Attribute the label of x to each non-labeled node y neighboring x;
       Insert each such y into set S;
end while
Algorithm 1 Inter-pixel watershed algorithm

II-B6 Meyer’s Flooding Algorithm

Proposed by Meyer and Maragos in [40], this multiscale segmentation scheme works on grayscale images. A gradient image is used for the flooding process: since successive flooding leads to the formation of adjacent catchment basins, the basins emerge separated along the edges of the image. Noise in the gradient image therefore leads to over-segmentation, requiring that the data be preprocessed; another approach is to merge regions afterwards based on a similarity criterion.

The algorithm works as described in Algorithm 2.

Result: non-labeled pixels form the watershed lines
Initialize a set of seed markers for flooding, each with a distinct label;
for each neighboring pixel p of each marker do
       Enqueue p into priority queue P (the associated priority is the gradient magnitude of p);
end for
while P is not empty do
       Dequeue the pixel p with the least priority;
       if all labeled neighboring pixels of p have the same label then
             Label p with its neighbors’ label;
       end if
       Enqueue the unlabeled, unqueued neighboring pixels of p into P;
end while
Algorithm 2 Meyer’s flooding algorithm
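A minimal sketch of this flooding scheme using a binary heap as the priority queue follows. The one-row "gradient image" and marker placement are illustrative assumptions; where two basins meet, this sketch explicitly marks the pixel −1 as a watershed line:

```python
import heapq
import numpy as np

WSHED = -1  # label for watershed-line pixels

def meyer_flood(gradient, markers):
    """Flood `gradient` outward from labeled seed markers (0 = unlabeled)."""
    labels = markers.astype(int)
    h, w = gradient.shape
    heap, counter = [], 0
    queued = np.zeros((h, w), dtype=bool)

    def neighbors(y, x):
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                yield ny, nx

    def enqueue(y, x):
        nonlocal counter
        heapq.heappush(heap, (gradient[y, x], counter, y, x))
        queued[y, x] = True
        counter += 1

    # Seed the queue with the neighbors of every marker.
    for y, x in zip(*np.nonzero(markers)):
        for ny, nx in neighbors(y, x):
            if labels[ny, nx] == 0 and not queued[ny, nx]:
                enqueue(ny, nx)

    while heap:  # always expand the lowest-gradient pixel first
        _, _, y, x = heapq.heappop(heap)
        basin_labels = {labels[ny, nx] for ny, nx in neighbors(y, x)
                        if labels[ny, nx] > 0}
        # Unambiguous neighbors: join their basin; otherwise basins meet here.
        labels[y, x] = basin_labels.pop() if len(basin_labels) == 1 else WSHED
        for ny, nx in neighbors(y, x):
            if labels[ny, nx] == 0 and not queued[ny, nx]:
                enqueue(ny, nx)
    return labels

# A valley on each side of a ridge (the gradient maximum at value 3).
gradient = np.array([[0, 1, 2, 3, 2, 1, 0]])
markers = np.zeros_like(gradient)
markers[0, 0], markers[0, 6] = 1, 2
labels = meyer_flood(gradient, markers)
# The basins flood inward and meet at the ridge: [1, 1, 1, -1, 2, 2, 2]
```

The `counter` in each heap entry is only a tie-breaker so that equal-priority pixels never force a comparison between coordinate tuples.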

II-B7 Region Competition

A novel algorithm proposed by Zhu and Yuille in [41] unifies the following approaches:

  • snakes [42] and balloon methods [43][44][45][46]

  • region growing and merging techniques [47][48][49]

  • Bayesian [50][51] and Minimum Description Length (MDL) criteria [52][53]

This multiband image segmentation technique is derived by minimizing a generalized Bayesian and MDL criterion. Furthermore, it also combines the statistical features of region growing and geometrical features of snakes and balloon methods.

An implementation by Amo et al. [54] utilizes the region competition algorithm for road extraction from aerial images. The proposed implementation extracts roads, their centerlines, and their sides. The algorithm exploits the small changes in the curvature and radiometry of the road, and its light appearance, to extract it from the aerial image. Hence, the implementation finds the region of interest, i.e., the road margins, accurately and robustly. However, it requires a user to set the seeds and hence is susceptible to human error [55].

II-C Morphological Operations

The last step may be applying morphological operations to the binary image formed. These are a collection of non-linear operations used to extract morphological features such as the form and structure of an image. Furthermore, morphological operations can also be used to remove imperfections in the segmented image. A morphological operation is performed by applying a structuring element to an input image, and the output value at each pixel is based on one of two outcomes, illustrated in Figure 8:

Fig. 8: A is a fit, B is a hit, C is neither a fit nor a hit, hence we term it as a miss. [55]
  • Fit: All pixels of the structuring element match the corresponding pixels of the input image (A in Fig. 8)

  • Hit: At least one pixel of the structuring element matches a pixel of the input image (B in Fig. 8)

Some of the basic morphological operations, along with their equations, are listed below. Note that X is the reference image and B is the structuring element.

  • Erosion: Used for noise removal in the background and removal of holes in either the foreground or background. This process shrinks the foreground and enlarges the background. Given as: X ⊖ B = {z | (B)z ⊆ X}, where (B)z denotes B translated by z.

  • Dilation: Enlarges the foreground and shrinks the background. Helps in enlarging the region of interest if it resides in the foreground. Furthermore, it is used for bridging gaps in an image, since B expands the features of X. Given as: X ⊕ B = {z | (B̂)z ∩ X ≠ ∅}.

  • Opening: Used to remove noise and charge-coupled device (CCD) defects in images. It removes fine detail and simplifies images by rounding corners from inside the object where the kernel fits. It is erosion followed by dilation, X ∘ B = (X ⊖ B) ⊕ B.

  • Closing: Smoothens contours and maintains the shapes and sizes of objects. Closing protects coarse structures, closes small gaps and rounds off concave corners. It is dilation followed by erosion, X • B = (X ⊕ B) ⊖ B.
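The four operations can be sketched directly from the fit/hit definitions for binary images; the 3×3 structuring element and toy image below are assumptions:

```python
import numpy as np

def erode(X, B):
    """Erosion: keep a pixel only where B, centered there, fits entirely in X."""
    pad = B.shape[0] // 2
    P = np.pad(X, pad, mode='constant')
    out = np.zeros_like(X)
    for y in range(X.shape[0]):
        for x in range(X.shape[1]):
            region = P[y:y + B.shape[0], x:x + B.shape[1]]
            out[y, x] = np.all(region[B == 1] == 1)   # "fit"
    return out

def dilate(X, B):
    """Dilation: keep a pixel where B, centered there, hits (touches) X."""
    pad = B.shape[0] // 2
    P = np.pad(X, pad, mode='constant')
    out = np.zeros_like(X)
    for y in range(X.shape[0]):
        for x in range(X.shape[1]):
            region = P[y:y + B.shape[0], x:x + B.shape[1]]
            out[y, x] = np.any(region[B == 1] == 1)   # "hit"
    return out

def opening(X, B):
    return dilate(erode(X, B), B)   # erosion followed by dilation

def closing(X, B):
    return erode(dilate(X, B), B)   # dilation followed by erosion

B = np.ones((3, 3), dtype=np.uint8)
X = np.zeros((7, 7), dtype=np.uint8)
X[2:5, 2:5] = 1   # a 3x3 foreground square
X[0, 0] = 1       # an isolated noise pixel
clean = opening(X, B)   # the noise pixel is removed; the square survives
```

Note how opening deletes the isolated pixel (the structuring element never fits there) yet restores the square exactly, because the dilation re-grows what the erosion shrank.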

II-D Region Filling Method

Region filling methods utilize morphological operations and are also termed coloring. Region filling is defined by equation (4):

X_k = (X_{k−1} ⊕ B) ∩ A^c,   k = 1, 2, 3, …        (4)

where B denotes the structuring element, A denotes a set whose elements are the 8-connected boundary points of a region, A^c is its complement, and k denotes the iteration number. The iterations stop once the region is filled, i.e., when X_k = X_{k−1}. A user could also predefine the number of iterations used to fill the region.
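Equation (4) can be sketched as repeated dilation of a seed, clipped at each step by the complement of the boundary set A; the seed position and the 4-connected structuring element below are illustrative choices:

```python
import numpy as np

def dilate(X, B):
    """Binary dilation of X by a 3x3 structuring element B."""
    P = np.pad(X, 1, mode='constant')
    out = np.zeros_like(X)
    for y in range(X.shape[0]):
        for x in range(X.shape[1]):
            out[y, x] = np.any(P[y:y + 3, x:x + 3][B == 1] == 1)
    return out

def region_fill(A, seed):
    """Equation (4): X_k = (X_{k-1} dilated by B) intersected with A-complement,
    iterated until X_k = X_{k-1}; the result united with A is the filled region."""
    B = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]], dtype=np.uint8)   # 4-connected cross
    Ac = 1 - A
    X = np.zeros_like(A)
    X[seed] = 1
    while True:
        X_next = dilate(X, B) & Ac
        if np.array_equal(X_next, X):
            return X | A
        X = X_next

# A closed square boundary with a hollow interior pixel at (2, 2).
A = np.zeros((5, 5), dtype=np.uint8)
A[1, 1:4] = A[3, 1:4] = 1
A[1:4, 1] = A[1:4, 3] = 1
filled = region_fill(A, seed=(2, 2))   # boundary plus filled interior
```

The 4-connected cross is what keeps the fill from leaking diagonally through the 8-connected boundary, which is why equation (4) pairs that particular B with 8-connected boundary points.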

Deb, Dutta and Roy propose a novel method for noise removal from brain images in [34]. This method uses region filling to denoise the image: the fill values are interpolated from the pixel values on the boundaries of the region of interest. The method suggests the use of an interpolation based on Laplace’s equation to obtain the smoothest possible fills at the boundaries. However, this method requires user intervention to determine the region of interest, and the selection of the region must be accurate.


The author, Jacob John, would like to thank Dr. Prabu Sevugan for his continuous support throughout this paper. I would also like to thank the Vellore Institute of Technology for their aid, without which this paper would not have been completed.


  • [1] NCI Dictionary of Cancer Terms. Retrieved March 21, 2019, from
  • [2] What are Tumors. Retrieved March 21, 2019, from
  • [3] Stedman, T. L. (2006). Stedmans medical dictionary. Philadelphia: Lippincott Williams & Wilkins.
  • [4] Birbrair, A., Zhang, T., Wang, Z. M., Messi, M. L., Olson, J. D., Mintz, A., & Delbono, O. (2014). Type-2 pericytes participate in normal and tumoral angiogenesis. American Journal of Physiology-Cell Physiology.
  • [5] Cooper, G. M. (1992). Elements of human cancer. Boston: Jones and Bartlett.
  • [6] Differences between ICD-O & ICD-10. Retrieved March 21, 2019, from
  • [7] Retrieved March 21, 2019, from
  • [8] Chang, A. E. (2006). Oncology an evidence-based approach. New York, NY: Springer Science Business Media.
  • [9] A to Z: Neoplasm (Tumor), Malignant (for Parents) - Nemours. Retrieved March 21, 2019, from
  • [10] Edmonds, M. (2010, October 25). How MRI Works. Retrieved March 21, 2019, from
  • [11] Zhu, H. (2003). Medical image processing overview. University of Calgary, 1-27.
  • [12] Gallagher, T. A., Nemeth, A. J., & Hacein-Bey, L. (2008). An introduction to the Fourier transform: relationship to MRI. American journal of roentgenology, 190(5), 1396-1405.
  • [13] Gonzalez, R. C., & Wintz, P. (1977). Digital Image Processing. Reading, MA: Addison-Wesley Publishing Co.
  • [14] Liang, Z. P., & Lauterbur, P. C. (2000). Principles of magnetic resonance imaging: a signal processing perspective. SPIE Optical Engineering Press.
  • [15] Patil, R. C., & Bhalchandra, A. S. (2012). Brain tumor extraction from MRI images using MATLAB. International journal of electronics, communication & soft computing science and engineering, 2(1), 1-4.
  • [16] Johnson, S. (2006). Stephen Johnson on digital photography. Beijing: O’Reilly.
  • [17] G. Ramponi, “Polynomial and rational operators for image processing and analysis,” in Nonlinear Image Processing, S. K. Mitra and G. Sicuranza, Eds. New York: Academic, 2000, pp. 203–223.
  • [18] Russo, F. (2002). An image enhancement technique combining sharpening and noise reduction. IEEE Transactions on Instrumentation and Measurement, 51(4), 824-828.
  • [19] Module 5.6: Noise Smoothing. Retrieved March 23, 2019, from
  • [20] Sun, T., & Neuvo, Y. (1994). Detail-preserving median based filters in image processing. Pattern Recognition Letters, 15(4), 341-347.

  • [21] Brownrigg, D. R. K. (1984). The weighted median filter. Communications of the ACM, 27(8), 807-818.
  • [22] Coyle, E. J., & Lin, J. H. (1988). Stack filters and the mean absolute error criterion. IEEE Transactions on Acoustics, Speech, and Signal Processing, 36(8), 1244-1254.
  • [23] Yin, L., Astola, J., & Neuvo, Y. (1991, June). Adaptive weighted median filtering under the mean absolute error criterion. In Proc. IEEE Workshop on Visual Signal Processing and Communications (pp. 184-187).
  • [24] Fisher, R. Image Processing Learning Resources. Retrieved March 23, 2019, from
  • [25] Shapiro, L. G., & Stockman, G. C. (2001). Computer vision. Upper Saddle River, NJ: Prentice Hall.

  • [26] Thresholding (image processing). (2019, March 18). Retrieved March 24, 2019, from
  • [27] Senthilkumaran, N., & Vaithegi, S. (2016). Image segmentation by using thresholding techniques for medical images. Computer Science & Engineering: An International Journal, 6(1), 1-13.
  • [28] Jang, J. W., Lee, S., Hwang, H. J., & Baek, K. R. (2013, October). Global thresholding algorithm based on boundary selection. In 2013 13th International Conference on Control, Automation and Systems (ICCAS 2013) (pp. 704-706). IEEE.
  • [29] Otsu, N. (1979). A threshold selection method from gray-level histograms. IEEE transactions on systems, man, and cybernetics, 9(1), 62-66.
  • [30] Cai, H., Yang, Z., Cao, X., Xia, W., & Xu, X. (2014). A new iterative triclass thresholding technique in image segmentation. IEEE transactions on image processing, 23(3), 1038-1046.
  • [31] Arora, S., Acharya, J., Verma, A., & Panigrahi, P. K. (2008). Multilevel thresholding for image segmentation through a fast statistical recursive algorithm. Pattern Recognition Letters, 29(2), 119-125.
  • [32] Bao, P., & Zhang, L. (2003). Noise reduction for magnetic resonance images via adaptive multiscale products thresholding. IEEE transactions on medical imaging, 22(9), 1089-1099.
  • [33] Manikandan, S., Ramar, K., Iruthayarajan, M. W., & Srinivasagan, K. G. (2014). Multilevel thresholding for segmentation of medical brain images using real coded genetic algorithm. Measurement, 47, 558-568.
  • [34] Deb, D., Dutta, B., & Roy, S. (2014, May). A noble approach for noise removal from brain image using Region Filling. In 2014 IEEE International Conference on Advanced Communications, Control and Computing Technologies (pp. 1403-1406). IEEE.
  • [35] Najman, L., & Schmitt, M. (1994). Definitions and some properties of the watershed of a continuous function. In Image Processing: Theory and Applications (No. 1, pp. 151-153). Elsevier.
  • [36] Beucher, S., & Lantuéjoul, C. (1979). Use of watersheds in contour detection. Centre de Géostatistique et de Morphologie Mathématique, 2.1-2.12. Retrieved March 25, 2019, from
  • [37] Barnes, R., Lehman, C., & Mulla, D. (2014). Priority-flood: An optimal depression-filling and watershed-labeling algorithm for digital elevation models. Computers & Geosciences, 62, 117-127.
  • [38] Cousty, J., Bertrand, G., Najman, L., & Couprie, M. (2009). Watershed cuts: Minimum spanning forests and the drop of water principle. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(8), 1362-1374.
  • [39] Beucher, S., & Meyer, F. (1992). The morphological approach to segmentation: the watershed transformation. Optical Engineering-New York-Marcel Dekker Incorporated-, 34, 433-433.
  • [40] Meyer, F., & Maragos, P. (1999, September). Multiscale morphological segmentations based on watershed, flooding, and eikonal PDE. In International Conference on Scale-Space Theories in Computer Vision (pp. 351-362). Springer, Berlin, Heidelberg.
  • [41] Zhu, S. C., & Yuille, A. (1996). Region competition: Unifying snakes, region growing, and Bayes/MDL for multiband image segmentation. IEEE Transactions on Pattern Analysis & Machine Intelligence, (9), 884-900.
  • [42] Kass, M., Witkin, A., & Terzopoulos, D. (1988). Snakes: Active contour models. International journal of computer vision, 1(4), 321-331.
  • [43] Cohen, L. D., & Cohen, I. (1990, December). A finite element method applied to new active contour models and 3D reconstruction from cross sections.
  • [44] Cohen, L. D. (1991). On active contour models and balloons. CVGIP: Image understanding, 53(2), 211-218.
  • [45] Ronfard, R. (1994). Region-based strategies for active contour models. International journal of computer vision, 13(2), 229-251.
  • [46] Xu, G., Segawa, E., & Tsuji, S. (1994). Robust active contours with insensitive parameters. Pattern Recognition, 27(7), 879-884.
  • [47] Beveridge, J. R., Griffith, J., Kohler, R. R., Hanson, A. R., & Riseman, E. M. (1989). Segmenting images using localized histograms and region merging. International Journal of Computer Vision, 2(3), 311-347.
  • [48] Adams, R., & Bischof, L. (1994). Seeded region growing. IEEE Transactions on pattern analysis and machine intelligence, 16(6), 641-647.
  • [49] Leonardis, A., Gupta, A., & Bajcsy, R. (1995). Segmentation of range images as the search for geometric parametric models. International Journal of Computer Vision, 14(3), 253-277.

  • [50] Nadabar, S. G., & Jain, A. K. (1996). Parameter estimation in Markov random field contextual models using geometric models of objects. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(3), 326-329.

  • [51] Geman, D., Geman, S., Graffigne, C., & Dong, P. (1990). Boundary detection by constrained optimization. IEEE transactions on pattern analysis and machine intelligence, 12(7), 609-628.
  • [52] Keeler, K. (1991, June). Map representations and coding-based priors for segmentation. In Proceedings. 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. 420-425). IEEE.
  • [53] Leclerc, Y. G. (1989). Constructing simple stable descriptions for image partitioning. International journal of computer vision, 3(1), 73-102.
  • [54] Amo, M., Martínez, F., & Torre, M. (2006). Road extraction from aerial images using a region competition algorithm. IEEE transactions on image processing, 15(5), 1192-1201.
  • [55] Kapoor, L., & Thakur, S. (2017, January). A survey on brain tumor detection using image processing techniques. In 2017 7th International Conference on Cloud Computing, Data Science & Engineering-Confluence (pp. 582-585). IEEE.