Paleness, or pallor, is a manifestation of anemia, defined as an abnormally low hemoglobin concentration in the blood, which can be caused by blood loss, malnutrition, or other pathologies. Hemoglobin serves a vital role in the blood, carrying oxygen to the tissues. While anemia affects around 1.6 billion people worldwide, it is known to affect women and preschool children 5-8 times more than men. The Centers for Disease Control and Prevention estimate that the number of visits to emergency departments in North America with anemia as the primary hospital discharge diagnosis increased steadily from 1990 to 2011. Chronic anemia can contribute to problems such as chronic fatigue, or to more severe problems such as heart failure, limb ischemia, and pregnancy complications. With such a large fraction of the present-day population at risk of the detrimental impacts of anemia, the design of computer-aided diagnostic (CAD) systems that can screen anemic patients from normal ones by detecting pallor non-invasively becomes necessary. CAD and point-of-care (POC) applications aim to provide quick "expert" diagnostics for screening and for resource-efficient treatment and care-delivery protocols. Such systems are especially useful in a telemedicine paradigm, where the patient and the care provider may not be geographically co-located. Facial images have been used extensively for security, authentication, identification and expression detection purposes. This work aims to utilize facial pallor site images for anemia-like medical screening applications.
This paper makes three key contributions. First, spatial, color-based and gradient-based features are analyzed to detect the optimal combination for prediction of patient pallor. We observe that Frangi filtering and gradient filtering enhance the image separability for pallor severity in the eye and tongue pallor site images, respectively. Second, a hierarchical classification strategy is proposed using an optimal set of color-based and intensity-based features for screening normal, anemic and abnormal pallor site images. We observe 72-86% separability of normal images from abnormal ones for the eye and tongue pallor site images, respectively. Third, the discriminating contribution of each pallor site image for anemia-like pallor is analyzed. Here, we observe that the eye pallor site images have a higher area under the Receiver Operating Characteristic curve (AUC) than the tongue pallor site images.
II Materials and Methods
Based on prior works that analyze pallor site images for anemia-like diagnosis, this work focuses on eye images with visible conjunctiva and on tongue images for anemia-like pallor detection. There are two primary objectives of the analysis presented in this work. First, the color and region-based features in each pallor site image are analyzed to determine the most discriminating features for pallor classification. Second, the importance of the eye and tongue pallor sites is assessed to identify the most significant pallor site for anemia-like pathology classification. A description of the image data sets under analysis and the proposed methods is given below.
A set of 27 eye images and 56 tongue images is collected and manually annotated for subjective pallor indices. These images represent uncontrolled imaging conditions and a wide variety of patient demographics. Each pallor site image has dimensions ranging from [155x240] to [960x1280] pixels, with a storage size of 8 kB to 252 kB per image. Every pallor site image, gathered from public-domain sources, is manually annotated for pallor severity grade: grade 0 refers to normal patients, grade 1 refers to patients with anemia-like pathologies, and grade 2 is indicative of pathologies/abnormalities associated with the specific pallor site that are not anemia-like. Sample images corresponding to each severity grade from the eye and tongue pallor sites are shown in Fig. 1. The goal of the overall automated system is to classify each pallor site image with an output class label in [0, 1, 2], representative of the patient's anemia-like pallor.
For the eye and tongue pallor site images, the numbers of images belonging to the severity grades [0, 1, 2] are [6, 7, 14] and [18, 3, 35], respectively. For homogeneous processing, each image is resized to [125x125] pixels. For automated pattern recognition from the pallor site images, the eye and tongue data sets are each partitioned into training and test data sets. Due to the limited data sizes, feature learning and data modeling are performed using 5-fold cross-validation for the eye images and 3-fold cross-validation for the tongue images, which ensures similar proportions of class samples in each of the folded data sets.
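The class-balanced fold partitioning described above can be sketched as follows; this is a minimal stand-in for the paper's cross-validation setup, using only the class counts stated in the text.

```python
from collections import defaultdict

def stratified_folds(labels, k):
    """Partition sample indices into k folds while preserving per-class
    proportions, mirroring the 5-fold (eye) / 3-fold (tongue) setup."""
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    folds = [[] for _ in range(k)]
    for lab, indices in by_class.items():
        # Deal each class's samples round-robin across the folds.
        for i, idx in enumerate(indices):
            folds[i % k].append(idx)
    return folds

# Tongue data set: 18 normal (0), 3 anemic (1), 35 abnormal (2); 3 folds.
labels = [0] * 18 + [1] * 3 + [2] * 35
folds = stratified_folds(labels, 3)
for f in folds:
    print([sum(labels[i] == c for i in f) for c in (0, 1, 2)])
# → [6, 1, 12], [6, 1, 12], [6, 1, 11]
```

Each fold thus retains roughly one third of every class, which matters here because the anemic class has only 3 tongue samples.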
II-B Proposed Methods
Since analysis of pallor site images for anemia-like pathologies is a novel idea, there are no existing methods in the literature. However, based on the variabilities introduced by the data sets, two data models are analyzed for the pallor severity grade classification task. The first model is designed to detect specific spatial regions of interest (ROIs) that are indicative of patient pallor. In this model, the pallor site images are segmented to extract several ROIs, followed by extraction of pixel-intensity features corresponding to the color plane and gradient images within the ROIs. Next, the intensity-based features are ranked to detect the most discriminating set of features on the training data set that ensures maximum accuracy on the validation data set. Finally, the optimal feature set is utilized for pallor severity grade classification.
The second model is designed to detect the most significant color planes and gradient images for pallor classification. For each pre-processed image, a mask of the eye or tongue region is detected. Color plane transformations are then applied to each pallor site image, resulting in the following 12 image planes: red, green, blue; hue, saturation, intensity (from the RGB to HSV transformation); lightness, a-color plane, b-color plane (from the RGB to Lab transformation); and luminance and 2 chrominance planes (from the RGB to YCbCr transformation). Next, the first-order gradient-filtered image in the horizontal and vertical directions is extracted from each color image plane and superimposed on the plane itself, resulting in 12 additional images. Finally, each color image plane is Frangi-filtered to extract the second-order edges, which are superimposed on the plane itself, generating 12 additional images per pallor site image. Using this process, 36 color and edge-enhanced images are generated per pallor site image. Using cross-validation, the 36 generated images are ranked to identify the most discriminating color and edge-enhancement procedure for pallor classification.
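The gradient-superimposition step can be sketched as follows. This is an illustrative NumPy version that uses `np.gradient` as the first-order derivative (the paper's exact filter kernels are not specified); the Frangi-filtered variants would substitute a vesselness response, e.g. `skimage.filters.frangi`, for the gradient magnitude.

```python
import numpy as np

def superimpose_gradient(plane):
    """Superimpose first-order gradient edges on a color plane.

    `plane` is a 2-D array scaled to [0, 1]. np.gradient approximates the
    vertical (gy) and horizontal (gx) first-order derivatives; the edge
    magnitude is rescaled and added back onto the plane, yielding an
    edge-enhanced image as in the second model.
    """
    gy, gx = np.gradient(plane)
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag = mag / mag.max()       # rescale edge energy to [0, 1]
    return np.clip(plane + mag, 0.0, 1.0)

# A vertical step edge: edge energy concentrates at the boundary columns.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
enh = superimpose_gradient(img)
```

Applying this to each of the 12 color planes produces the 12 gradient-enhanced images described above.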
II-B1 Image Segmentation
The first step for the first model involves spatial segmentation of the pallor site image into several ROIs. For the eye pallor site images, the sclera and conjunctiva regions constitute the ROIs, while for the tongue pallor site images, the inner and outer tongue regions do. For segmentation of the eye images, the first step is detection of the iris, followed by detection of the surrounding sclera, and then by conjunctiva detection. The steps for detecting the iris, sclera and conjunctiva regions are shown in (1)-(5). First, the scaled red plane image in the [0,1] pixel range is thresholded to detect dark regions with area greater than 100 pixels in (1). The iris is then selected as the dark region with the most elliptical shape, i.e., the highest ratio of major to minor axis length, in (2). Next, the green plane image within the masked region is subjected to watershed transformation using a circular structuring element of radius 5 in (3), resulting in an image segmented into several sub-regions. The iris region is removed from this image, followed by detection of the remaining sub-regions that intersect with the edge of the iris in (4); since the sclera lies directly outside the iris, the edges of the sclera and iris regions intersect. Finally, the conjunctiva region is detected as the regions remaining in the mask after removing the iris and sclera regions in (5). For the tongue pallor site images, the masked green plane image is subjected to watershed transformation, resulting in an image with several sub-regions. Next, the outer edge of the tongue is detected using the 'Sobel' filter. The sub-regions that intersect with the outer tongue edge represent the outer tongue regions, and the regions remaining after their removal represent the inner tongue regions.
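Steps (1)-(2), thresholding for dark regions and selecting the most elliptical candidate, can be sketched as below. The 100-pixel area floor comes from the text; the darkness threshold, 4-connectivity, and the covariance-eigenvalue estimate of the axis ratio are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from collections import deque

def most_elliptical_dark_region(red, thresh=0.3, min_area=100):
    """Threshold the scaled red plane for dark regions above min_area
    pixels, then return the mask of the region with the highest
    major-to-minor axis ratio (estimated from the eigenvalues of each
    region's pixel-coordinate covariance matrix)."""
    dark = red < thresh
    labels = np.zeros(red.shape, dtype=int)
    cur = 0
    for i, j in zip(*np.nonzero(dark)):          # 4-connected labeling (BFS)
        if labels[i, j]:
            continue
        cur += 1
        q = deque([(i, j)])
        labels[i, j] = cur
        while q:
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < red.shape[0] and 0 <= nx < red.shape[1]
                        and dark[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = cur
                    q.append((ny, nx))
    best, best_ratio = 0, -1.0
    for lab in range(1, cur + 1):
        ys, xs = np.nonzero(labels == lab)
        if ys.size < min_area:                   # area filter from step (1)
            continue
        evals = np.linalg.eigvalsh(np.cov(np.vstack([ys, xs])))
        ratio = np.sqrt(max(evals[1], 1e-12) / max(evals[0], 1e-12))
        if ratio > best_ratio:
            best, best_ratio = lab, ratio
    return labels == best
```

The remaining steps (3)-(5) would apply a watershed transformation to the green plane within this mask and peel off the sclera and conjunctiva sub-regions as described above.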
II-B2 Color Planes and Gradient Feature Extraction
For the first model, 54 features are extracted per image using pixel-intensity features from the color and gradient-transformed images within the segmented sub-regions of each image, as shown in Fig. 2. For the eye images, 27 features are extracted for each of the sclera and conjunctiva regions, corresponding to the max, mean and variance of the pixels in the following image planes: red, blue, green, hue, saturation, intensity, and the gradient-filtered and Frangi-filtered planes. For the tongue images, 27 similar features are extracted for each of the inner and outer tongue regions.
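The per-ROI feature extraction can be sketched as follows; the helper simply collects the three statistics named above over each supplied plane within an ROI mask (the plane list and mask here are synthetic placeholders).

```python
import numpy as np

def roi_intensity_features(planes, mask):
    """Extract the first model's per-ROI intensity features: the max,
    mean and variance of the pixels inside `mask` for each image plane
    (color, gradient-enhanced and Frangi-enhanced). With 9 planes this
    yields the 27 features per ROI described in the text."""
    feats = []
    for plane in planes:
        vals = plane[mask]
        feats.extend([vals.max(), vals.mean(), vals.var()])
    return np.array(feats)

rng = np.random.default_rng(0)
planes = [rng.random((125, 125)) for _ in range(9)]   # placeholder planes
mask = np.zeros((125, 125), dtype=bool)
mask[40:80, 40:80] = True                             # placeholder ROI
feats = roi_intensity_features(planes, mask)          # shape (27,)
```

Concatenating the two ROIs per image (sclera/conjunctiva or outer/inner tongue) gives the full 54-feature vector.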
The final step in both data models involves classification using a family of data models implemented on the Microsoft Azure Machine Learning Studio (MAMLS) platform for scalability. These classifiers, called the Azure-based Generalized flow for Medical Image Classification (AGMIC), involve grid-search-based hyper-parameterization of several data models and selection of the optimal data model with the highest classification accuracy on the validation data set. Here, automated parametrization of 8 families of data models is performed, including support vector machines, logistic regression, boosted decision trees, decision forests, decision jungles, neural networks, Poisson regression and k-nearest neighbors. At the end of the training process, the single data model with the lowest classification error is selected and used for classification of the test data samples.
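The selection loop can be sketched generically as below. The `fit_fn`/`validate` interface is a hypothetical stand-in for the MAMLS data models, not their actual API; the point is only the exhaustive grid search followed by selection of the lowest-validation-error model.

```python
from itertools import product

def grid_search(models, train, validate):
    """Try every model family with every hyper-parameter combination from
    its grid; keep the single fitted model with the lowest validation
    error. `models` maps a name to (fit_fn, param_grid); fit_fn(params,
    train) returns a fitted predictor, validate(predictor) its error."""
    best = (None, float("inf"))
    for name, (fit_fn, grid) in models.items():
        keys = list(grid)
        for combo in product(*(grid[k] for k in keys)):
            params = dict(zip(keys, combo))
            predictor = fit_fn(params, train)
            err = validate(predictor)
            if err < best[1]:
                best = ((name, params, predictor), err)
    return best

# Toy demo: each "fit" just returns its parameter; lower |pred - 3| wins.
models = {"knn": (lambda p, tr: p["k"], {"k": [1, 3, 5]}),
          "svm": (lambda p, tr: p["c"], {"c": [0.1, 1.0]})}
(best_name, best_params, _), err = grid_search(models, None, lambda m: abs(m - 3))
```

In the actual flow, the eight model families listed above would each contribute a parameter grid, and the winner is carried forward to the test data.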
III Experiments and Results
The images from the tongue and eye data sets are analyzed separately. Since the data sets contain 3 classes of samples, a 2-step hierarchical classification is performed for data separability analysis. In the first hierarchical step, the normal images are separated from the abnormal ones (class 0 vs. classes 1, 2), or the anemic images are separated from the non-anemic ones (class 1 vs. classes 0, 2). In the second hierarchical step, the remaining class samples are separated, i.e., (class 1 vs. class 2) or (class 0 vs. class 2), respectively.
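The two-step hierarchy can be sketched as a composition of two binary classifiers; the scalar "pallor score" and thresholds below are purely illustrative.

```python
def hierarchical_predict(x, step1, step2):
    """Two-step hierarchical classification: step1 separates normal
    (class 0) from abnormal (classes 1, 2); step2 then separates anemic
    (class 1) from other abnormalities (class 2). step1 and step2 are
    any binary classifiers returning True for the first-listed class."""
    if step1(x):                   # normal vs. {anemic, abnormal}
        return 0
    return 1 if step2(x) else 2    # anemic vs. abnormal

# Toy thresholds on a scalar "pallor score" (illustrative only):
step1 = lambda x: x < 0.3          # low score  -> normal
step2 = lambda x: x < 0.6          # mid score  -> anemic
print([hierarchical_predict(v, step1, step2) for v in (0.1, 0.5, 0.9)])
# → [0, 1, 2]
```

In the experiments, each step is a trained AGMIC classifier operating on the optimal feature set for that step.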
Three categories of experiments are performed to identify the most discriminating sets of intensity-based, spatial, and color-based features for classification of pallor severity grade. First, the intensity-based features extracted per image using the first model are subjected to feature ranking followed by double cross-validation to identify the optimal set of intensity-based features. Second, the color plane transformations applied in the second model are analyzed to identify the most significant spatial and color planes. Third, the optimal intensity-based features are used for classification with the first model, and the optimal color planes are used to classify the images with the second model.
III-A Intensity-based Feature Learning
The 54 intensity-based features extracted per pallor site image in the first model are ranked using the F-score, Mutual Information and Chi-squared scoring methods with multi-class classification. We observe that for both the eye and tongue data sets, the 27 intensity-based features corresponding to the color plane, gradient and Frangi-filtered images from the conjunctiva and inner tongue regions, respectively, are optimal for classification of normal patients from abnormal ones. However, all 54 intensity-based features, corresponding to the conjunctiva and sclera regions in the eye and the inner and outer regions in the tongue, are significant for classification of anemic images from abnormal ones. This observation is in line with the domain knowledge that the appearance of the conjunctiva and inner tongue regions identifies normal patients, while all regions of the eye and tongue must be analyzed for further detection of abnormalities.
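As an example of one ranking criterion, a simplified two-class Fisher/F-score can be computed as below: the squared between-class mean difference over the sum of within-class variances, with higher scores marking more discriminating features. The data here are synthetic.

```python
import numpy as np

def fisher_scores(X, y):
    """Two-class F-score per feature: (mean_0 - mean_1)^2 divided by
    (var_0 + var_1). A small epsilon guards against zero variance."""
    a, b = X[y == 0], X[y == 1]
    num = (a.mean(axis=0) - b.mean(axis=0)) ** 2
    den = a.var(axis=0) + b.var(axis=0) + 1e-12
    return num / den

rng = np.random.default_rng(1)
# Feature 0 separates the classes; features 1-2 are pure noise.
X = rng.normal(size=(100, 3))
y = np.repeat([0, 1], 50)
X[y == 1, 0] += 5.0
ranking = np.argsort(fisher_scores(X, y))[::-1]   # best feature first
```

Mutual-information and Chi-squared scores would be computed analogously per feature, and the top-ranked subset is then validated by cross-validation.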
III-B Color-plane based Feature Learning
The 36 color and gradient planes generated per pallor site image using the second model are analyzed for multi-class classification performance. For the eye data set with 27 images, [36x27=972] images, and for the tongue data set with 56 images, [36x56=2016] images are subjected to classification. The rate of correct classification for each color and gradient plane image is analyzed to identify the most discriminating planes. We observe that for the eye data set, the Frangi-filtered hue and saturation plane images result in a maximum classification accuracy of 56%. For the tongue images, the lightness and a-channel planes achieve a maximum classification accuracy of 65%.
III-C Classification Performance Analysis
For the eye data set, the first model with the AGMIC flow is implemented with the 27 intensity-based features from the conjunctiva region for step 1 of the hierarchical classification, followed by the 54 intensity-based features from the sclera and conjunctiva regions for step 2. The classification performance of both models on the eye images is shown in Table I. Here, we observe that the first model, implemented with the decision forest data model, has the best image classification performance.
For the tongue data set, the first model with the AGMIC flow is implemented with the 27 intensity-based features from the inner tongue region for step 1 of the hierarchical classification, followed by the 54 intensity-based features from the inner and outer tongue regions for step 2. The classification performance of both models on the tongue images is shown in Table II. Here, we observe that the second model, implemented with the boosted decision trees data model, has the best screening performance.
IV Conclusions and Discussion
In this work, we present a variety of image-based feature extraction, segmentation and data-modeling approaches for classification and screening of anemia-like pallor using focused facial pallor site images. We perform three categories of experiments on eye and tongue pallor site images acquired from the public domain. The first category of experiments demonstrates that image intensity-based features corresponding to specific spatial ROIs are significant for separating normal images from abnormal ones, which must then be further analyzed by specialists. Here, we find that the conjunctiva region in the eye and the inner tongue regions are significant for identification of normal and abnormal images from eye and tongue pallor site images, respectively. The second category of experiments detects the most discriminating color and gradient plane-transformed images for classification of image-based pallor. This experiment demonstrates that the Frangi-filtered hue and saturation color planes and the first-order gradient-filtered luminance channel planes are most significant for pallor classification using eye and tongue images, respectively. Our analysis leads to detection of intensity-based features from the conjunctiva region in the hue and saturation color planes superimposed with Frangi-filtered edges for optimal separation of normal images from anemic or abnormal images using the eye pallor site images. Likewise, intensity-based features from the inner tongue regions in the luminance color planes superimposed with gradient-filtered edges are significant for classification of abnormal images from normal and anemic ones using the tongue pallor site images. In the third category of experiments, we observe that image segmentation and classification result in 86% screening accuracy for the eye images, while color transformations and gradient filtering lead to 98% screening accuracy for the tongue images. Thus, the proposed system is capable of severity screening for anemia using facial pallor site images in under 20 seconds of computation time per image.
Future work will be directed towards analysis of additional data sets acquired under controlled imaging conditions. Since the data sets analyzed in this work represent a wide variety of imaging conditions, the observations from the experimental analysis are more generalizable, yet limited in classification capability. Future efforts will also be directed towards correlating the automated pallor severity grade obtained from the facial pallor site images with the actual patient hemoglobin count for pre-clinical evaluation.
-  G. A. Stevens, M. M. Finucane, L. M. De-Regil, C. J. Paciorek, S. R. Flaxman, F. Branca, J. P. Peña-Rosas, Z. A. Bhutta, M. Ezzati, N. I. M. S. Group et al., “Global, regional, and national trends in haemoglobin concentration and prevalence of total and severe anaemia in children and pregnant and non-pregnant women for 1995-2011: a systematic analysis of population-representative data,” The Lancet Global Health, vol. 1, no. 1, pp. e16–e25, 2013.
-  Centers for Disease Control and Prevention, “Emergency department visits.” [Online]. Available: http://www.cdc.gov/nchs/fastats/emergency-department.htm
-  S. Roychowdhury and M. Emmons, “A survey of the trends in facial and expression recognition databases and methods,” arXiv preprint arXiv:1511.02407, 2015.
-  S. S. Yalçin, S. Ünal, F. Gümrük, and K. Yurdakök, “The validity of pallor as a clinical sign of anemia in cases with beta-thalassemia,” The Turkish journal of pediatrics, vol. 49, no. 4, p. 408, 2007.
-  V. Cherkassky and F. Mulier, Learning from Data. New York: John Wiley and Sons, 1998.
-  S. Roychowdhury and M. Bihis, “AG-MIC: Azure-based generalized flow for medical image classification,” IEEE Access, vol. PP, no. 99, pp. 1–14, 2016.
-  S. Roychowdhury, D. D. Koozekanani, and K. K. Parhi, “Automated detection of neovascularization for proliferative diabetic retinopathy screening,” in 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Aug 2016, pp. 1300–1303.
-  S. Roychowdhury, D. Koozekanani, and K. Parhi, “DREAM: Diabetic retinopathy analysis using machine learning,” IEEE Journal of Biomedical and Health Informatics, vol. 18, no. 5, pp. 1717–1728, 2014.