Cancer is currently the second leading cause of death in the U.S. according to IARC reports, and it is projected to replace heart disease as the leading cause of death. Breast cancer is one of the deadliest diseases, with a high fatality rate among women worldwide. Early prognosis is the most effective way to minimize the physical and psychological side effects of prolonged treatments, and machine learning methods are effective tools for early diagnosis and survival prediction. Over the past decades, large amounts of cancer data have been collected and made available to the machine learning and data mining community. In parallel, a wide range of supervised learning methods have been applied to the collected datasets [16, 13, 7, 2, 12]. However, because a correct diagnosis is vital to cancer treatment, accurate diagnosis and efficient survival prediction remain among the most challenging tasks for researchers. In this paper, we propose a novel type of perceptron called the L-Perceptron, which outperforms previous supervised learning methods by reaching 97.42 % accuracy and 98.73 % sensitivity on the Wisconsin Breast Cancer dataset. Experimental results on the Haberman’s Cancer Survival dataset show the superiority of the proposed method, which reaches 75.18 % accuracy and 83.86 % F1 score. The L-Perceptron combines ideas from least squares classifiers and the traditional perceptron. In the L-Perceptron, given p1 and p2 as target values, a mathematical function is fitted per feature, as in least squares classification. What is trained during the update rule is a characteristic of each per-feature function; for example, in the case of polynomials, the best possible polynomial order is trained per feature. This procedure makes the L-Perceptron not only non-linear but also flexible enough to handle the overfitting problem. The rest of this paper is organized as follows. Section 2 reviews some of the most important research on breast cancer diagnosis and survival prediction. In Section 3, we propose the L-Perceptron. Section 4 reports experimental results on the Wisconsin Breast Cancer and Haberman’s Breast Cancer Survival datasets. Finally, Section 5 concludes the paper.
2 Related Work
In this section, we review several studies on breast cancer diagnosis and survival prediction. These studies have taken different approaches to the problem and achieved high classification accuracies. An early study proposed an interactive evaluation-diagnosis computer system based on cytologic features. Vikas Chaurasia and Saurabh Pal
compared the performance of supervised learning classifiers, such as Naive Bayes, SVM with RBF kernel, RBF neural networks, Decision Tree (J48), and simple classification and regression tree (CART), to find the best classifier on breast cancer datasets. Their experimental results show that the SVM with RBF kernel is the most accurate, reaching an accuracy of 96.84 % on the Wisconsin Breast Cancer (original) dataset. Asri et al.
compared the performance of C4.5, Naive Bayes, Support Vector Machine (SVM) and K-Nearest Neighbor (K-NN) on the Wisconsin Breast Cancer (original) dataset, showing that SVM is the most accurate classifier, with an accuracy of 97.13 %. Vikas Chaurasia and Saurabh Pal used three popular data mining algorithms (Naive Bayes, RBF Network, J48) to develop prediction models on the Wisconsin Breast Cancer (original) dataset. The results indicated that Naive Bayes performed best with a classification accuracy of 97.36 %, RBF Network came second with 96.77 %, and J48 came third with 93.41 %. Abbass used an evolutionary artificial neural network (EANN) approach based on the Pareto-differential evolution (PDE) algorithm, augmented with local search, for the prediction of breast cancer; the approach is named Memetic Pareto Artificial Neural Network (MPANN). Since the early days of research in computational biomedicine, cancer survivability prediction has been a challenging problem for many researchers [9, 3]. Delen et al. used artificial neural networks and decision trees, along with logistic regression, to develop prediction models from 202,932 breast cancer patient records, pre-classified into two groups of “survived” (93,273) and “not survived” (109,659). The resulting survivability predictions reached accuracies in the range of 93 %.
Sivachitra and Vijayachitra used a fully complex-valued fast learning classifier (FC-FLC) with a GD activation function in the hidden layer. The comparison results showed that FC-FLC provides better classification performance than the SRAN, MCFIS and ELM classifiers. Liu et al. used the under-sampling C5 technique and a bagging algorithm to deal with the class imbalance problem in predictive models for breast cancer survivability.
3 Proposed Method
In this section, we propose a novel type of perceptron called the L-Perceptron which, despite its simplicity, can outperform traditional supervised learning methods in breast cancer diagnosis and survival prediction.
Given a set of training data X = {x_1, ..., x_m} in n-dimensional space, a set of corresponding labels Y = {y_1, ..., y_m}, and p1 and p2 as two hyperparameters, a per-feature fitting set S_j is created as follows:

S_j = { (x_ij, t_i) : t_i = p1 if y_i is the first class, t_i = p2 otherwise },  j = 1, ..., n

where p1 and p2 can be manipulated by the user to get the best possible result. The training phase of an L-Perceptron starts by fitting a function f_j per feature by minimizing the squared fitting error E_j over S_j:

E_j = Σ_i ( f_j(x_ij) - t_i )²
After finding the best fitting function f_j for each feature, the test phase is formulated as follows:

ŷ = step( Σ_j f_j(x_j) )

where f_j is the fitting function of the j-th feature. In other words, instead of taking the dot product of weights and features, each feature is passed to its corresponding function, and the summation of the function outputs is passed to a step function or another activation function to predict the label of the input test data. The diagram of the L-Perceptron is shown in Figure 1. Before the training phase starts, the type of fitting function must be defined; it can be selected from any family of mathematical functions, such as logarithmic, exponential, or polynomial. What is trained during the update rule is a characteristic of the fitting function; for example, in the case of polynomials, the best possible polynomial order is trained per feature. This lets the model match the complexity of each feature set separately.
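As a concrete illustration, this forward pass can be sketched in Python. The sketch below assumes polynomial fitting via numpy.polyfit, class labels mapped to the hyperparameters p1 and p2, and a step activation at the midpoint of the summed targets; the helper names (fit_per_feature, predict) are ours, not from the released implementation.

```python
import numpy as np

def fit_per_feature(X, y, p1, p2, degree):
    """Fit one polynomial per feature, regressing the feature values
    against targets p1/p2 assigned by class label."""
    t = np.where(y == 0, p1, p2)  # map class labels to the fitting targets
    return [np.polyfit(X[:, j], t, degree) for j in range(X.shape[1])]

def predict(X, coeffs, threshold):
    """Sum the per-feature function outputs and apply a step activation."""
    total = sum(np.polyval(c, X[:, j]) for j, c in enumerate(coeffs))
    return (total >= threshold).astype(int)

# toy usage on synthetic two-class data with 3 features
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.5, (50, 3)), rng.normal(1.0, 0.5, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
coeffs = fit_per_feature(X, y, p1=-2.0, p2=3.0, degree=3)
preds = predict(X, coeffs, threshold=3 * (-2.0 + 3.0) / 2)
print((preds == y).mean())
```

On this well-separated toy data, the summed per-feature outputs cluster around 3·p1 for one class and 3·p2 for the other, so the midpoint threshold separates them cleanly.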
3.2 Update Rule
Suppose polynomials are used as the fitting functions. In this case, the order of each fitting polynomial is trained during the update rule. The update rule assigns initial polynomial orders, fits a polynomial per feature given p1 and p2, calculates the error, and repeats this procedure until the best possible orders are found. To make the update rule faster, the polynomial orders can be limited to a specific range; the upper and lower bounds of this range are hyperparameters, as described in the implementation section. This restriction also gives the L-Perceptron more flexibility to avoid overfitting. The update rule of the L-Perceptron is as follows.
1. Initialize all degrees d_j to 1 and compute the initial fitting error E_j for each feature.
2. Repeat until no significant change is seen in the errors E_j:
3. For all features j:
4. Fit a polynomial of degree d_j + 1 to the points of feature j.
5. Compute the new error E'_j.
6. If E'_j < E_j and d_j + 1 is within the degree bounds, set d_j = d_j + 1 and E_j = E'_j.
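The steps above can be sketched as a simple per-feature degree search. This is a hedged sketch, assuming polynomials fitted with numpy.polyfit and mean squared error as the criterion; the bounds dlb/dub and iteration count ite mirror the hyperparameters described in the implementation section, and the function names here are ours.

```python
import numpy as np

def search_degrees(X, t, dlb=1, dub=6, ite=10, tol=1e-6):
    """Per-feature polynomial degree search: start every degree at dlb,
    then repeatedly try raising each degree and keep the change only
    when it lowers the fitting error by more than tol."""
    n_features = X.shape[1]
    degrees = [dlb] * n_features

    def feature_error(j, d):
        coeffs = np.polyfit(X[:, j], t, d)
        residual = np.polyval(coeffs, X[:, j]) - t
        return float(np.mean(residual ** 2))

    errors = [feature_error(j, d) for j, d in enumerate(degrees)]
    for _ in range(ite):                                # step 2: repeat
        changed = False
        for j in range(n_features):                     # step 3: for all features
            if degrees[j] < dub:
                e_new = feature_error(j, degrees[j] + 1)  # steps 4-5: fit, compute error
                if e_new < errors[j] - tol:               # step 6: keep only if error drops
                    degrees[j] += 1
                    errors[j] = e_new
                    changed = True
        if not changed:                                 # no significant change: stop
            break
    return degrees, errors

# toy usage: feature 0 relates quadratically to the targets, feature 1 is noise
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (200, 2))
t = X[:, 0] ** 2
degrees, errors = search_degrees(X, t, dlb=1, dub=5)
print(degrees)  # degrees[0] is expected to settle at 2 for the quadratic feature
```

Raising the degree of feature 0 from 1 to 2 removes essentially all of the fitting error, while a further increase gains less than tol and is rejected, so the search stops at the true order.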
Figure 2 shows the schematic of the L-Perceptron in the case of using polynomials as the fitting functions.
4 Experimental Results

In this section, we compare the accuracy of the proposed method against a set of traditional supervised learning methods on WBCD and HSD.
4.1 Datasets

In this section, we provide a brief introduction to the datasets used in the experiments: the Wisconsin Breast Cancer dataset (original) and the Haberman’s Breast Cancer Survival dataset.
4.1.1 Wisconsin Breast Cancer dataset
The breast cancer dataset is a classic binary classification dataset, accessible from the UC Irvine Machine Learning repository. It has 699 instances, two classes (malignant and benign), and 9 integer-valued attributes. It contains 16 instances with missing values, which we did not remove from the dataset. It consists of 458 benign (65.5 %) and 241 malignant (34.5 %) instances.
|1||Sample code number|
|2||Clump thickness|
|3||Uniformity of cell size|
|4||Uniformity of cell shape|
|5||Marginal adhesion|
|6||Single epithelial cell size|
|7||Bare nuclei|
|8||Bland chromatin|
|9||Normal nucleoli|
|10||Mitoses|
|11||Class|
4.1.2 Haberman’s Breast Cancer Survival dataset
The Haberman’s Breast Cancer Survival dataset contains cases from a study conducted between 1958 and 1970 at the University of Chicago’s Billings Hospital on the survival of patients who had undergone surgery for breast cancer. This dataset has 306 instances, 2 classes and 3 integer-valued attributes. Each data point contains the following features.
|1||Age of patient at time of operation|
|2||Patient’s year of operation (year - 1900)|
|3||Number of positive axillary nodes detected|
The output (class) attribute is the survival status: 1 = the patient survived for 5 years or longer; 2 = the patient died within 5 years.
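When loading this dataset, the 1/2 survival coding is commonly remapped to 0/1 for binary classifiers. A small sketch with hypothetical rows (the values below are illustrative, not quoted from the dataset):

```python
import numpy as np

# hypothetical rows: age, year of operation (year - 1900),
# positive axillary nodes, survival status (1 = survived 5+, 2 = died within 5)
rows = np.array([
    [30, 64, 1, 1],
    [30, 62, 3, 1],
    [34, 59, 0, 2],
])
X = rows[:, :3]                       # the three input features
y = (rows[:, 3] == 2).astype(int)     # 1 = died within 5 years, 0 = survived
print(y)
```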
4.1.3 Implementation

We have developed a Python function called “lp.py”, which is publicly available for testing on classification problems. The function was originally devised for numerical datasets; however, it can be used on categorical datasets if the features are treated as sets of discrete integer values. The input parameters of this function are as follows.
lp(trainx, trainy, testx, testy, p1, p2, dlb, dub, ite, threshold)
Table 1 shows the implemented parameters and their descriptions.
|trainx, trainy||A set of training data and their corresponding labels|
|testx, testy||A set of test data and their corresponding labels|
|p1, p2||Target y values for the fitting functions, as described in Section 3|
|dlb, dub||Degree lower bound and degree upper bound, respectively, to limit the degree range|
|ite||Number of iterations|
|threshold||Threshold of activation function to discriminate between the instances.|
4.1.4 Results and discussion
We used 10-fold cross-validation to obtain unbiased performance estimates for comparison purposes. The comparison results of the different methods on the Wisconsin Breast Cancer Dataset (WBCD) are presented in Table 2, and experimental results on the Haberman’s Survival Dataset (HSD) are tabulated in Table 3. Table 2 shows that the L-Perceptron outperforms the other methods in terms of accuracy and sensitivity on WBCD. Naive Bayes ranks second, with better results than the remaining methods.
|Methods||Accuracy (%)||Sensitivity (%)||Specificity (%)||F1 Score (%)|
|Methods||Accuracy (%)||Sensitivity (%)||Specificity (%)||F1 Score (%)|
|Linear Discriminant Analysis||73.78||95.42||19.67||82.71|
Table 3 shows that the L-Perceptron outperforms the other methods in terms of accuracy and F1 score on HSD. When comparing the methods on specificity, the L-Perceptron ranks second after MLP, which shows the best specificity among all tested methods. Table 4 shows the initialized parameters, including p1, p2 and the fitting degree bounds, for each dataset.
|Parameter||WBCD||HSD|
|p1, p2||-2, 3||-1.3, 2.9|
|dlb, dub||4, 4||1, 1|
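The 10-fold cross-validation protocol used for these comparisons can be sketched with scikit-learn. This is a generic illustration on synthetic stand-in data with WBCD-like dimensions, using Gaussian Naive Bayes as an example classifier; it is not the paper's exact experimental script.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB

# synthetic stand-in for a 9-feature, two-class dataset such as WBCD
X, y = make_classification(n_samples=699, n_features=9, n_informative=6,
                           random_state=0)

# 10-fold cross-validation: every instance is used for testing exactly once
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(GaussianNB(), X, y, cv=cv, scoring="accuracy")
print(f"mean accuracy: {scores.mean():.4f} (+/- {scores.std():.4f})")
```

Stratified folds keep the class ratio of each fold close to that of the full dataset, which matters for imbalanced data such as HSD.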
5 Conclusion

In this paper, we proposed a novel type of perceptron called the L-Perceptron. The proposed method was successfully tested on the Wisconsin Breast Cancer Dataset and the Haberman’s Breast Cancer Survival Dataset, using 10-fold cross-validation to obtain unbiased performance estimates. The experimental results showed that the L-Perceptron has the best performance among the compared methods in terms of accuracy and sensitivity on WBCD, reaching 97.42 % accuracy and 98.73 % sensitivity, the best results reported in the literature without any preprocessing or feature selection. On the Haberman’s Breast Cancer Survival Dataset, the L-Perceptron has the best performance in terms of accuracy and F1 score, reaching 75.18 % accuracy and 83.86 % F1 score, the best reported performance results.
-  Asri H, Mousannif H, Al Moatassime H, Noel T. Big data in healthcare: Challenges and opportunities. 2015 Int Conf Cloud Technol Appl. 2015:1-7. doi:10.1109/CloudTech.2015.7337020.
-  Ayer T, Alagoz O, Chhatwal J, Shavlik JW, Kahn CE, Burnside ES. Breast cancer risk estimation with artificial neural networks revisited. Cancer 2010;116:3310–21.
-  Bellaachia A and Guven E. Predicting breast cancer survivability using data mining techniques. In: Scientific data mining workshop (in conjunction with the 2006 SIAM conference on data mining), April 20-22, 2006, Bethesda, Maryland.
-  C. Blake and C. Merz, “UCI repository of machine learning databases”, Department of Information and Computer Sciences, University of California, Irvine, [URL: http://archive.ics.uci.edu/ml/], 1998.
-  Chaurasia V and Pal S. Data mining techniques: to predict and resolve breast cancer survivability. Int J Comput Sci Mobile Comput 2014; 3: 10–22.
-  Chaurasia, Vikas, Saurabh Pal, and B. B. Tiwari. "Prediction of benign and malignant breast cancer using data mining techniques." Journal of Algorithms and Computational Technology 12, no. 2 (2018): 119-126.
-  Delen D, Walker G and Kadam A. Predicting breast cancer survivability: a comparison of three data mining methods. Artif Intell Med 2005; 34: 113–127.
-  Dumitrescu, R. G., and I. Cotarla. "Understanding breast cancer risk‐where do we stand in 2005." Journal of cellular and molecular medicine 9.1 (2005): 208-221.
-  Delen D, Walker G and Kadam A. Predicting breast cancer survivability: a comparison of three data mining methods. Artif Intell Med 2005; 34(2): 113–127.
-  Hussein A. Abbass, An Evolutionary Artificial Neural Networks Approach for Breast Cancer Diagnosis, School of Computer Science, University of New South Wales, Australian Defence Force Academy Campus.
-  Lavanya D and Usha Rani K. Analysis of feature selection with classification: breast cancer datasets. Indian Journal of Computer Science and Engineering (IJCSE), October 2011.
-  Li J, Liu H, Ng S-K, et al. Discovery of significant rules for classifying cancer diagnosis data. Bioinformatics 2003; 19: ii93–ii102.
-  Liu Y-Q, Wang C and Zhang L. Decision tree based predictive models for breast cancer survivability on imbalanced data. In: 3rd international conference on bioinformatics and biomedical engineering, 11-13 June 2009, Beijing, China, 2009.
-  Sivachitra, M., and S. Vijayachitra. "Classification of post operative breast cancer patient information using complex valued neural classifiers." In Cognitive Computing and Information Processing (CCIP), 2015 International Conference on, pp. 1-4. IEEE, 2015.
-  Tan AC and Gilbert D. Ensemble machine learning on gene expression data for cancer classification. Appl Bioinformatics 2003; 2: S75–S83.
-  “UCI Machine Learning Repository: Breast Cancer Wisconsin (Original) Data Set.” [Online]. Available: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+
-  Wolberg, William H., W. Nick Street, and O. L. Mangasarian. "Machine learning techniques to diagnose breast cancer from image-processed nuclear features of fine needle aspirates." Cancer letters 77.2-3 (1994): 163-171.
-  www.breastcancer.org/risk/factors
-  https://github.com/hadimansouorifar/L-Perceptron