I. Introduction
MRI provides the technology to detect tumors or neoplasms at an early stage, supplying essential information for early disease detection, i.e., identifying abnormal or diseased tissue. A neoplasm is an uncoordinated, abnormal, and excessive growth of tissue occurring inside the body [1]. This growth is referred to as a tumor when it forms a mass. However, it should be noted that neoplasms do not always form a mass [2]; some, such as leukemia and forms of carcinoma in situ, do not form a tumor. Furthermore, the growth of a neoplasm is independent of its surrounding tissues: even if the original growth trigger is removed [3], the neoplasm or tumor continues to grow at an abnormal rate [4][5], thus presenting a threat to the human anatomy.
I-A. Neoplasms or Tumors
Neoplasms can be grouped into five categories according to the ICD-O behavior codes [6] and ICD-10 [7]. These include:

Benign neoplasms – noncancerous,

Neoplasms of uncertain and unknown behavior,

Carcinoma in situ – these grow in place and do not spread, but could potentially become cancer [8],

Malignant neoplasms stated or presumed to be primary, of lymphoid, hematopoietic and related tissue and

Malignant neoplasms of ill-defined, secondary and unspecified sites.
A malignant neoplasm, or malignant tumor, is also known as cancer. Its cells divide and grow excessively to form cancerous lumps [9], spreading to other parts of the body and invading healthy tissues. Treatments include chemotherapy and radiation therapy, which are used to kill cancer cells throughout the body and in specific parts of it, respectively.
I-B. Magnetic Resonance Imaging
Rather than using X-rays or ionizing radiation as CAT or PET scans do, MRI scanners use radio waves and strong magnetic fields to produce cross-sectional images of the internal anatomy of the body. An MRI system works on the principle of nuclear magnetic resonance (NMR) and consists of the following components, as depicted in figure 1:

The main magnet, used to generate a strong, uniform static field, or the B0 field. This partially polarizes the nuclear spins and causes the hydrogen nuclei to line up in the direction of the field. The strength of the magnetic field produced by this magnet is typically between 0.5 and 2.0 tesla [10].

The magnetic field gradient system consisting of a gradient controller and a gradient coil.

The radio frequency (RF) system, consisting of an RF coil, an RF amplifier, and an RF controller. The RF transmitter coil generates a rotating magnetic field, B1, for exciting the spin system of the protons. This specific resonance frequency depends on the tissue being imaged and is termed the Larmor frequency.

The receiver coil, connected to the computer system via an analog-to-digital converter (ADC). This coil converts the magnetization into an electric signal for imaging.
I-C. Fourier Transforms
The magnetic signal received is decomposed into a sum of simple waves with varying amplitudes and frequencies using the Fourier transform (FT) [12]. Figure 2 illustrates this decomposition from a complicated signal into simple waves. The FT isolates the critical components of an image by expressing the signal (i.e., a function of time) in terms of its underlying frequencies. These frequencies are expressed over orthogonal sinusoidal basis functions, and the result is known as the frequency-domain representation of the original signal. Equation (1) defines the frequency-domain representation, or Fourier transform, F(u) of a continuous function of time f(t) [13], while equation (2) denotes the same equation using Euler's formula, e^{−jθ} = cos θ − j sin θ. Note that since t is integrated out, we can rewrite the result as a function of u alone, represented as F(u).

(1) F(u) = ∫_{−∞}^{∞} f(t) e^{−j2πut} dt

(2) F(u) = ∫_{−∞}^{∞} f(t) [cos(2πut) − j sin(2πut)] dt

where t and u are continuous variables.
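As a numerical sketch of this decomposition (not part of the original paper), the following NumPy snippet builds a signal from two sine waves and recovers their frequencies with the discrete Fourier transform, the sampled counterpart of equation (1); the sampling rate and frequencies are arbitrary illustrative choices:

```python
import numpy as np

# Sample a signal composed of 5 Hz and 12 Hz sine waves for 1 second.
fs = 256                      # sampling rate (Hz)
t = np.arange(fs) / fs        # time axis
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

# The DFT expresses the signal in the frequency domain, as in equation (1).
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(fs, d=1 / fs)

# The two dominant frequency bins correspond to the underlying sine waves.
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(peaks.tolist()))  # [5.0, 12.0]
```

Each bin of `spectrum` holds the amplitude and phase of one sinusoidal basis function, which is exactly the "sum of simple waves" view of the signal.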
I-D. Image Acquisition in MRI
Since all the spin systems of the protons precess at the same frequency and phase dictated by the magnetic field B0, a dynamically changing gradient field is applied to separate the spin systems [14]. This is followed by applying the FT to the digitized signal and converting it into its Fourier k-space. The k-space is where the signal is organized into its spatial frequencies and amplitude information. Figure 3 depicts this process. An inverse Fourier transform (IFT) is then applied to transform the k-space data to the image space, as shown in figure 3. This entire step-by-step process is illustrated in a simplified manner by figure 4, which gives an overview of the MR imaging process from a signal processing perspective.
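The k-space relationship can be sketched numerically: the 2-D FFT of an image gives its spatial-frequency (k-space) representation, and the inverse FFT recovers the image. This toy NumPy example (illustrative, not from the paper) demonstrates the roundtrip:

```python
import numpy as np

# A toy 8x8 "image" with a bright square in the middle.
image = np.zeros((8, 8))
image[3:5, 3:5] = 1.0

# Forward 2-D FFT: the k-space holds spatial frequencies and amplitudes.
kspace = np.fft.fft2(image)

# The inverse FFT transforms k-space back to image space.
reconstructed = np.fft.ifft2(kspace).real

print(np.allclose(reconstructed, image))  # True
```

In a real scanner the k-space samples arrive directly from the digitized signal rather than from an existing image, but the inverse-transform step is the same.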
II. Image Processing Techniques
The previous section discussed the definition of a tumor and the process of MRI. Furthermore, it introduced the Fourier transform and discussed its purpose in imaging the digitized signal received from the MRI system. The following section discusses step-by-step techniques for detecting tumors in the images received from MRI scans. Following image acquisition, this section focuses on segmentation techniques such as MTs and RGTs in order to identify tumors at an early stage.
Patil and Bhalchandra present a step-by-step MATLAB implementation of brain tumor extraction in [15]. This method incorporates filters for noise removal, filters for enhancement, segmentation, and morphological operations to detect the tumor.
II-A. Preprocessing
II-A.1. Acquiring a Grayscale MRI Scan
This is the first step of any image processing pipeline. The object of interest is captured by a sensor (e.g., a camera) and then digitized using an analog-to-digital converter.
The acquired magnetic resonance images are represented in grayscale. The intensity or amplitude of a grayscale image is represented as a function f(x, y), where x and y are the spatial coordinates of the image. It should be noted that x and y are finite and discrete quantities. Since a grayscale image is represented as an 8-bit image, the value of f ranges from 0 to 255, with 0 being the weakest intensity, represented as the color black due to the absence of light, and 255 being the strongest intensity, represented as the color white, which is caused by the “total transmission of light at all visible wavelengths” [16].
II-A.2. High-Pass Filter for Image Sharpening
High-pass filters, or sharpening filters, are used to preserve all the high-frequency information in an image while reducing the low frequencies. A fraction of the image, after passing it through a high-pass filter, can be added to the original image to obtain an enhanced version of the input image [17]. However, high-pass filters are very sensitive to noise, as they depend mainly on elevating high frequencies and attenuating lower ones.
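This sharpening scheme can be sketched in NumPy; the 3x3 Laplacian-style kernel and the weighting factor `alpha` below are illustrative assumptions, not taken from [17]:

```python
import numpy as np

def high_pass_sharpen(img, alpha=0.5):
    """Add a fraction of the high-pass-filtered image back to the original."""
    # 3x3 Laplacian-style high-pass kernel (sums to zero: flat regions -> 0).
    k = np.array([[0, -1, 0],
                  [-1, 4, -1],
                  [0, -1, 0]], dtype=float)
    padded = np.pad(img.astype(float), 1, mode="edge")
    high = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            high += k[dy, dx] * padded[dy:dy + img.shape[0],
                                       dx:dx + img.shape[1]]
    # Enhanced image = original + alpha * high-frequency content.
    return np.clip(img + alpha * high, 0, 255)

# A blurry ramp edge becomes steeper (with overshoot) after sharpening.
img = np.array([[10, 10, 50, 90, 90]] * 5, dtype=float)
print(high_pass_sharpen(img)[2])  # row becomes 10, 0, 50, 110, 90
```

Note how the overshoot on both sides of the edge also amplifies any noise sitting in the high frequencies, which is the sensitivity the text warns about.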
To address this, Russo presents a new approach in [18] for the contrast enhancement of images based on a multiple-output system. The chief advantage of this technique, achieved by adopting fuzzy models, is its superior performance when the image is corrupted by Gaussian noise.
II-A.3. Median Filter for Image Quality Enhancement
Median filters are order-statistics filters. These are a form of nonlinear smoothing operators used to perform noise reduction on an image or signal. Median filters are typically used for salt-and-pepper noise, also known as impulse noise, which can occur due to random bit errors during image transmission or conversion [19].
The median filtering algorithm works by running through a window of entries that slides over the entire signal. Suppose the window is of size 2k + 1 at position n; then the input samples would be defined as x(n − k), …, x(n), …, x(n + k), and the output at position n is the median of these samples.
Figure 5 illustrates this calculation of the median value.
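The sliding-window median can be sketched in a few lines of Python; this toy example (not the paper's implementation) removes a single impulse from a 1-D signal using a window of size 3 (k = 1):

```python
import numpy as np

def median_filter_1d(x, k=1):
    """Median filter with window size 2k+1; edge samples are replicated."""
    padded = np.pad(x, k, mode="edge")
    return np.array([np.median(padded[i:i + 2 * k + 1])
                     for i in range(len(x))])

# A ramp corrupted by one salt-and-pepper impulse (the 255).
signal = np.array([10, 12, 255, 14, 16])
print(median_filter_1d(signal))  # 10, 12, 14, 16, 16
```

The impulse is replaced by the median of its neighborhood; note that the ramp value 14 also shifts slightly, which is exactly the detail-smoothing side effect discussed below.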
However, median filters introduce slight image blurring, as they also tend to smooth the image details. To overcome this issue, Sun and Neuvo present a detail-preserving median-based filter in [20]. Their approach outperforms the weighted median filter [21], stack filters [22] and the adaptive weighted mean filter [23]. It removes impulses with minimal signal distortion while preserving detail. Furthermore, unlike median filters, the detail-preserving median filter does not affect the image if impulse corruption is absent. This makes it an ideal prefilter for tumor extraction.
II-B. Segmentation
II-B.1. Thresholding
This is considered to be the most trivial method of image segmentation [25]. Equation (3) represents the thresholding process of converting a grayscale image into a binary image:
(3) g(x, y) = 1 if f(x, y) > T; g(x, y) = 0 otherwise

where T is the fixed threshold value, ranging between 0 and 255, and g(x, y) is the binary intensity value (since it can only be 0 or 1) of the pixel at the spatial coordinate (x, y).
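Equation (3) maps directly to a one-line NumPy operation; the threshold T = 128 below is an arbitrary illustrative choice:

```python
import numpy as np

def threshold(img, T=128):
    """g(x, y) = 1 if f(x, y) > T else 0, per equation (3)."""
    return (img > T).astype(np.uint8)

img = np.array([[0, 100, 200],
                [50, 150, 255]])
print(threshold(img))
# [[0 0 1]
#  [0 1 1]]
```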
Some of the common thresholding techniques are explained in [27]:
II-B.2. Global Thresholding (Single Threshold)
These are used when the differences between the foreground and background are very distinct. Jang et al. have also proposed a novel global thresholding algorithm that uses boundary blocks for extracting a bimodal histogram [28].

Iterative Thresholding (a new iterative tri-class thresholding technique) [30] – This method first uses Otsu's method [29] to obtain the threshold and the means of the two separated classes. The image is then separated into three classes using the two class means. The first two classes, termed the foreground and the background, are not processed further. The third class, referred to as the “To Be Determined” (TBD) region, is involved in the next iteration of tri-class separation using Otsu's method. This method identifies weak objects and reveals fine structures of complex objects better than Otsu's original approach.
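The first tri-class split can be sketched as follows, with Otsu's threshold [29] implemented from scratch in NumPy; the stopping rule and later iterations of [30] are omitted, so this is an illustration of one iteration only:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method [29]: pick the threshold maximizing the
    between-class variance of the gray-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_w = np.cumsum(hist)                    # cumulative pixel counts
    cum_m = np.cumsum(hist * np.arange(256))   # cumulative intensity sums
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum_w[t - 1], total - cum_w[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_m[t - 1] / w0                 # mean of class below t
        m1 = (cum_m[255] - cum_m[t - 1]) / w1  # mean of class above t
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def triclass_split(img):
    """One tri-class iteration: pixels below the lower class mean are
    background, above the upper class mean are foreground, and the rest
    form the "to be determined" (TBD) region for the next iteration."""
    t = otsu_threshold(img)
    mu0 = img[img < t].mean()
    mu1 = img[img >= t].mean()
    background = img <= mu0
    foreground = img >= mu1
    tbd = ~background & ~foreground
    return background, foreground, tbd

img = np.array([[20, 25, 120], [130, 220, 230]], dtype=np.uint8)
bg, fg, tbd = triclass_split(img)
# here: one background pixel, two foreground pixels, three TBD pixels
```

The TBD mask is what the full algorithm would re-threshold in the next pass.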

Multistage Thresholding (Quadratic Ratio Technique for Handwritten Characters) – As the name suggests, QIR is used for retaining all the details of handwritten characters; hence it would not perform well for MRI images. Due to the use of a fuzzy stage in the iteration, it performs better than other approaches for segmenting handwritten characters.
II-B.3. Local Thresholding
Global thresholding methods tend to work well for medical images when the object of interest is significantly different from the background with respect to some characteristic. Methods such as the one proposed by Bao and Zhang [32] can also be used for noise detection while preserving edges in MRI images, and tend to perform better than wavelet-thresholding denoising methods. Furthermore, a multilevel thresholding method suggested by Manikandan et al. in [33] segments medical images by maximizing entropy. This method uses a real-coded genetic algorithm with SBX crossover and performs more consistently for medical images.
II-B.4. Watershed Segmentation
The watershed transformation treats the gray-level image as a topographic relief, where the brightness or intensity of each point is treated as its altitude. By analogy with a geological watershed, a drop of water falls onto the surface, seeps along a path, and finally reaches a local minimum. This is used to separate adjacent drainage basins and find the watershed lines. Furthermore, as proposed by Najman and Schmitt in [35], watershed algorithms can also be specified over a continuous domain. Some of the different watershed definitions are:

Watershed by flooding – This method was proposed by Beucher and Lantuejoul in [36]. Their method extends the idea of drainage basins by continuously allowing “water” from sources to collect in the local minima until the complete relief is flooded, with a barrier built wherever the “water” sources meet. The arrangement of these barriers marks the watershed formed via flooding. One improvement of this method is the priority-flood method [37].

Watershed by topographic distance – Under this definition, each catchment basin consists of the points whose topographic distance to its local minimum is smaller than to any other minimum of the relief.

Watershed by the drop of water principle – This idea was formally proposed by Cousty et al. in [38]. Intuitively, the watershed of a relief corresponds to the lines separating the distinct local minima into which a “drop of water” placed on the relief can flow.
II-B.5. Interpixel Watershed Algorithm
II-B.6. Meyer’s Flooding Algorithm
Proposed by Meyer and Maragos in [40], this multiscale segmentation scheme works on grayscale images. A gradient image is used for the flooding process. Since successive flooding leads to the formation of adjacent catchment basins, basins emerge along the edges in the image. Hence, noise would lead to over-segmentation of the image, requiring the data to be preprocessed; another approach is to merge regions afterwards based on a similarity criterion. The algorithm works as described in 2.
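A compact sketch of the flooding idea (a simplified version, assuming 4-connectivity and integer gray levels; pixels reached by two different basins are left unlabeled as watershed lines):

```python
import heapq
import numpy as np

def meyer_flood(img, markers):
    """Flood labeled markers outward in order of increasing gray level.
    img: 2-D integer array (e.g. a gradient image); markers: same shape,
    positive labels at chosen minima, 0 elsewhere. Returns a label image
    in which 0 marks the watershed lines."""
    labels = markers.copy()
    h, w = img.shape
    heap, offsets = [], ((1, 0), (-1, 0), (0, 1), (0, -1))

    def push_neighbors(y, x):
        for dy, dx in offsets:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                heapq.heappush(heap, (int(img[ny, nx]), ny, nx))

    # Seed the priority queue with the neighbors of every marker pixel.
    for y in range(h):
        for x in range(w):
            if labels[y, x] > 0:
                push_neighbors(y, x)

    while heap:
        _, y, x = heapq.heappop(heap)
        if labels[y, x] != 0:
            continue
        neigh = {labels[y + dy, x + dx] for dy, dx in offsets
                 if 0 <= y + dy < h and 0 <= x + dx < w} - {0}
        if len(neigh) == 1:          # touched by exactly one basin: join it
            labels[y, x] = neigh.pop()
            push_neighbors(y, x)
        # len(neigh) > 1: barrier between basins; leave the pixel as 0

    return labels

# Two valleys separated by a ridge (the value 3); one marker per minimum.
img = np.array([[0, 1, 2, 3, 2, 1, 0]])
markers = np.array([[1, 0, 0, 0, 0, 0, 2]])
print(meyer_flood(img, markers))  # [[1 1 1 0 2 2 2]]
```

The ridge pixel stays unlabeled, playing the role of the barrier built where the two "water" fronts meet.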
II-B.7. Region Competition
Region competition, a novel algorithm proposed by Zhu and Yuille in [41], unifies snakes, region growing, and Bayes/MDL approaches. This multiband image segmentation technique is derived by minimizing a generalized Bayes/MDL criterion. Furthermore, it combines the statistical features of region growing with the geometrical features of the snake and balloon methods.
An implementation by Amo et al. [54] utilizes the region competition algorithm for road extraction from aerial images. The proposed implementation extracts roads, their centerlines, and their sides. The algorithm exploits the small changes in the curvature and radiometry of the road, and its light appearance, to extract it from the aerial image. Hence, the implementation finds the region of interest, i.e., the road margins, accurately and robustly. However, it requires a user to set seeds and is hence susceptible to human error [55].
II-C. Morphological Operations
The last step may be applying morphological operations to the binary image formed. These are a collection of nonlinear operations used to extract morphological features such as the form and structure of an image. Furthermore, morphological operations can also be used to remove imperfections in the segmented image. Morphological operations are performed by applying a structuring element to an input image; the output value at each pixel depends on how the structuring element fits or hits the neighborhood of that pixel. These are again illustrated in Figure 8.
Some of the basic morphological operations along with their equations can be found below. Note that X is the reference image and B is the structuring element.

Erosion: used for noise removal in the background and removal of holes in either the foreground or background. This process shrinks the foreground and enlarges the background. Given as: X ⊖ B = {z | (B)z ⊆ X}.

Dilation: enlarges the foreground and shrinks the background. It helps enlarge the region of interest if it resides in the foreground. Furthermore, it is used for bridging gaps in an image, since B expands the features of X. Given as: X ⊕ B = {z | (B̂)z ∩ X ≠ ∅}.

Opening: used to remove noise and charge-coupled device (CCD) defects in images. It smooths detail and simplifies images by rounding the corners from inside the object where the kernel fits. It is erosion followed by dilation: X ∘ B = (X ⊖ B) ⊕ B.

Closing: smooths contours and maintains the shapes and sizes of objects. Closing protects coarse structures, closes small gaps, and rounds off concave corners. It is dilation followed by erosion: X • B = (X ⊕ B) ⊖ B.
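These four operations can be sketched in NumPy for binary images; the 3x3 square structuring element is an illustrative assumption, and opening/closing then follow by composition:

```python
import numpy as np

def dilate(X, k=1):
    """Binary dilation: a pixel is set if any pixel in its (2k+1)^2
    neighborhood is set -- the foreground grows."""
    p = np.pad(X, k, mode="constant")
    out = np.zeros_like(X)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out |= p[dy:dy + X.shape[0], dx:dx + X.shape[1]]
    return out

def erode(X, k=1):
    """Binary erosion: a pixel survives only if its whole neighborhood
    is set -- the foreground shrinks."""
    p = np.pad(X, k, mode="constant")
    out = np.ones_like(X)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out &= p[dy:dy + X.shape[0], dx:dx + X.shape[1]]
    return out

def opening(X):   # erosion followed by dilation: removes small specks
    return dilate(erode(X))

def closing(X):   # dilation followed by erosion: closes small gaps
    return erode(dilate(X))

# A 5x5 foreground square with a one-pixel hole: closing fills the hole.
X = np.zeros((7, 7), dtype=np.uint8)
X[1:6, 1:6] = 1
X[3, 3] = 0
print(closing(X)[3, 3])  # 1
```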
II-D. Region Filling Method
Region filling methods utilize morphological operations and are also termed coloring. Region filling is defined by equation (4):

(4) X_k = (X_{k−1} ⊕ B) ∩ A^c,  k = 1, 2, 3, …

where B denotes the structuring element, A denotes a set containing a subset whose elements are the 8-connected boundary points of a region, A^c is its complement, and k denotes the number of iterations, starting from a seed point X_0 inside the region. Once the region is filled (X_k = X_{k−1}), the iterations stop. A user could also predefine the number of iterations required to fill the region.
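Equation (4) can be sketched directly: starting from a seed pixel inside the boundary, dilate and intersect with the complement of the boundary until convergence. A minimal NumPy illustration, assuming a 3x3 square structuring element B:

```python
import numpy as np

def region_fill(A, seed):
    """Iterate X_k = dilate(X_{k-1}) ∩ A^c until X_k stops changing,
    then return the filled region X_k ∪ A (equation (4)).
    A: binary boundary image; seed: (row, col) inside the region."""
    def dilate(X):  # 3x3 square structuring element B
        p = np.pad(X, 1)
        out = np.zeros_like(X)
        for dy in range(3):
            for dx in range(3):
                out |= p[dy:dy + X.shape[0], dx:dx + X.shape[1]]
        return out

    X = np.zeros_like(A)
    X[seed] = 1
    while True:
        X_next = dilate(X) & ~A       # grow, but never cross the boundary A
        if (X_next == X).all():
            return X | A              # filled interior plus its boundary
        X = X_next

# A 5x5 square boundary; the fill floods its interior from the center.
A = np.zeros((7, 7), dtype=bool)
A[1, 1:6] = A[5, 1:6] = A[1:6, 1] = A[1:6, 5] = True
filled = region_fill(A, (3, 3))
print(int(filled.sum()))  # 25 pixels: 9 interior + 16 boundary
```

The intersection with A^c is what confines the growing set to the interior, so the loop terminates as soon as the dilation can no longer add interior pixels.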
Deb, Dutta and Roy propose a novel method for noise removal from brain images in [34]. This method uses region filling to denoise the image, interpolating pixel values from the boundaries of the region of interest. The method suggests the use of an interpolation based on Laplace's equation to obtain the smoothest possible fills at the boundaries. However, this method requires user intervention to determine the region of interest, and the selection of the region must be accurate.
Acknowledgment
The author, Jacob John, would like to thank Dr. Prabu Sevugan for his continuous support throughout this paper. I would also like to thank Vellore Institute of Technology for their aid, without which this paper would not have been completed.
References
 [1] NCI Dictionary of Cancer Terms. Retrieved March 21, 2019, from https://www.cancer.gov/publications/dictionaries/cancerterms/def/neoplasm.
 [2] What are Tumors. Retrieved March 21, 2019, from http://pathology.jhu.edu/pc/BasicTypes1.php.
 [3] Stedman, T. L. (2006). Stedmans medical dictionary. Philadelphia: Lippincott Williams & Wilkins.
 [4] Birbrair, A., Zhang, T., Wang, Z. M., Messi, M. L., Olson, J. D., Mintz, A., & Delbono, O. (2014). Type2 pericytes participate in normal and tumoral angiogenesis. American Journal of PhysiologyCell Physiology.
 [5] Cooper, G. M. (1992). Elements of human cancer. Boston: Jones and Bartlett.
 [6] Differences between ICDO & ICD10. Retrieved March 21, 2019, from https://training.seer.cancer.gov/coding/differences/.
 [7] Retrieved March 21, 2019, from https://icd.who.int/browse10/2010/en#/II.
 [8] Chang, A. E. (2006). Oncology an evidencebased approach. New York, NY: Springer Science Business Media.
 [9] A to Z: Neoplasm (Tumor), Malignant (for Parents)  Nemours. Retrieved March 21, 2019, from https://kidshealth.org/Nemours/en/parents/azneoplasmmalignant.html.
 [10] Edmonds, M. (2010, October 25). How MRI Works. Retrieved March 21, 2019, from https://science.howstuffworks.com/mri.htm#pt3.
 [11] Zhu, H. (2003). Medical image processing overview. University of Calgary, 127.
 [12] Gallagher, T. A., Nemeth, A. J., & HaceinBey, L. (2008). An introduction to the Fourier transform: relationship to MRI. American journal of roentgenology, 190(5), 13961405.
 [13] Gonzalez, R. C., & Wintz, P. (1977). Digital image processing(Book). Reading, Mass., AddisonWesley Publishing Co., Inc.(Applied Mathematics and Computation, (13), 451.
 [14] Liang, Z. P., & Lauterbur, P. C. (2000). Principles of magnetic resonance imaging: a signal processing perspective. SPIE Optical Engineering Press.
 [15] Patil, R. C., & Bhalchandra, A. S. (2012). Brain tumor extraction from MRI images using MATLAB. International journal of electronics, communication & soft computing science and engineering, 2(1), 14.
 [16] Johnson, S. (2006). Stephen Johnson on digital photography. Beijing: OReilly.
 [17] G. Ramponi, “Polynomial and rational operators for image processing and analysis,” in Nonlinear Image Processing, S. K. Mitra and G. Sicuranza, Eds. New York: Academic, 2000, pp. 203–223.
 [18] Russo, F. (2002). An image enhancement technique combining sharpening and noise reduction. IEEE Transactions on Instrumentation and Measurement, 51(4), 824828.
 [19] Module 5.6: Noise Smoothing. Retrieved March 23, 2019, from https://nptel.ac.in/courses/117104069/chapter_8/8_16.html.

 [20] Sun, T., & Neuvo, Y. (1994). Detail-preserving median based filters in image processing. Pattern Recognition Letters, 15(4), 341-347.
 [21] Brownrigg, D. R. K. (1984). The weighted median filter. Communications of the ACM, 27(8), 807818.
 [22] Coyle, E. J., & Lin, J. H. (1988). Stack filters and the mean absolute error criterion. IEEE Transactions on Acoustics, Speech, and Signal Processing, 36(8), 12441254.
 [23] Yin, L., Astola, J., & Neuvo, Y. (1991, June). Adaptive weighted median filtering under the mean absolute error criterion. In Proc. IEEE Workshop on Visual Signal Processing and Communications (pp. 184187).
 [24] Fisher, R. Image Processing Learning Resources. Retrieved March 23, 2019, from https://homepages.inf.ed.ac.uk/rbf/HIPR2/.

 [25] Shapiro, L. G., & Stockman, G. C. (2001). Computer vision. Upper Saddle River, NJ: Prentice Hall.
 [26] Thresholding (image processing). (2019, March 18). Retrieved March 24, 2019, from https://en.wikipedia.org/wiki/Thresholding_(image_processing)#Shapiro2001.
 [27] Senthilkumaran, N., & Vaithegi, S. (2016). Image segmentation by using thresholding techniques for medical images. Computer Science & Engineering: An International Journal, 6(1), 113.
 [28] Jang, J. W., Lee, S., Hwang, H. J., & Baek, K. R. (2013, October). Global thresholding algorithm based on boundary selection. In 2013 13th International Conference on Control, Automation and Systems (ICCAS 2013) (pp. 704706). IEEE.
 [29] Otsu, N. (1979). A threshold selection method from graylevel histograms. IEEE transactions on systems, man, and cybernetics, 9(1), 6266.
 [30] Cai, H., Yang, Z., Cao, X., Xia, W., & Xu, X. (2014). A new iterative triclass thresholding technique in image segmentation. IEEE transactions on image processing, 23(3), 10381046.
 [31] Arora, S., Acharya, J., Verma, A., & Panigrahi, P. K. (2008). Multilevel thresholding for image segmentation through a fast statistical recursive algorithm. Pattern Recognition Letters, 29(2), 119125.
 [32] Bao, P., & Zhang, L. (2003). Noise reduction for magnetic resonance images via adaptive multiscale products thresholding. IEEE transactions on medical imaging, 22(9), 10891099.
 [33] Manikandan, S., Ramar, K., Iruthayarajan, M. W., & Srinivasagan, K. G. (2014). Multilevel thresholding for segmentation of medical brain images using real coded genetic algorithm. Measurement, 47, 558568.
 [34] Deb, D., Dutta, B., & Roy, S. (2014, May). A noble approach for noise removal from brain image using Region Filling. In 2014 IEEE International Conference on Advanced Communications, Control and Computing Technologies (pp. 14031406). IEEE.
 [35] Najman, L., & Schmitt, M. (1994). Definitions and some properties of the watershed of a continuous function. In Image Processing: Theory and Applications (No. 1, pp. 151153). Elsevier.
 [36] Beucher, S., & Lantuejoul, C. (1979). Use of watersheds in contour detection. Centre de Géostatistique et de Morphologie Mathématique. Retrieved March 25, 2019, from http://www.cmm.minesparistech.fr/~beucher/publi/watershed.pdf.
 [37] Barnes, R., Lehman, C., & Mulla, D. (2014). Priorityflood: An optimal depressionfilling and watershedlabeling algorithm for digital elevation models. Computers & Geosciences, 62, 117127.
 [38] Cousty, J., Bertrand, G., Najman, L., & Couprie, M. (2009). Watershed cuts: Minimum spanning forests and the drop of water principle. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(8), 13621374.
 [39] Beucher, S., & Meyer, F. (1992). The morphological approach to segmentation: the watershed transformation. Optical EngineeringNew YorkMarcel Dekker Incorporated, 34, 433433.
 [40] Meyer, F., & Maragos, P. (1999, September). Multiscale morphological segmentations based on watershed, flooding, and eikonal PDE. In International Conference on ScaleSpace Theories in Computer Vision (pp. 351362). Springer, Berlin, Heidelberg.
 [41] Zhu, S. C., & Yuille, A. (1996). Region competition: Unifying snakes, region growing, and Bayes/MDL for multiband image segmentation. IEEE Transactions on Pattern Analysis & Machine Intelligence, (9), 884900.
 [42] Kass, M., Witkin, A., & Terzopoulos, D. (1988). Snakes: Active contour models. International journal of computer vision, 1(4), 321331.
 [43] Cohen, L. D., & Cohen, I. (1990, December). A finite element method applied to new active contour models and 3D reconstruction from cross sections.
 [44] Cohen, L. D. (1991). On active contour models and balloons. CVGIP: Image understanding, 53(2), 211218.
 [45] Ronfard, R. (1994). Regionbased strategies for active contour models. International journal of computer vision, 13(2), 229251.
 [46] Xu, G., Segawa, E., & Tsuji, S. (1994). Robust active contours with insensitive parameters. Pattern Recognition, 27(7), 879884.
 [47] Beveridge, J. R., Griffith, J., Kohler, R. R., Hanson, A. R., & Riseman, E. M. (1989). Segmenting images using localized histograms and region merging. International Journal of Computer Vision, 2(3), 311347.
 [48] Adams, R., & Bischof, L. (1994). Seeded region growing. IEEE Transactions on pattern analysis and machine intelligence, 16(6), 641647.

 [49] Leonardis, A., Gupta, A., & Bajcsy, R. (1995). Segmentation of range images as the search for geometric parametric models. International Journal of Computer Vision, 14(3), 253-277.

 [50] Nadabar, S. G., & Jain, A. K. (1996). Parameter estimation in Markov random field contextual models using geometric models of objects. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(3), 326-329.
 [51] Geman, D., Geman, S., Graffigne, C., & Dong, P. (1990). Boundary detection by constrained optimization. IEEE transactions on pattern analysis and machine intelligence, 12(7), 609628.
 [52] Keeler, K. (1991, June). Map representations and codingbased priors for segmentation. In Proceedings. 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. 420425). IEEE.
 [53] Leclerc, Y. G. (1989). Constructing simple stable descriptions for image partitioning. International journal of computer vision, 3(1), 73102.
 [54] Amo, M., Martínez, F., & Torre, M. (2006). Road extraction from aerial images using a region competition algorithm. IEEE transactions on image processing, 15(5), 11921201.

 [55] Kapoor, L., & Thakur, S. (2017, January). A survey on brain tumor detection using image processing techniques. In 2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence (pp. 582-585). IEEE.