Deep convolutional neural networks (CNNs) trained with logistic or softmax losses (LGL and SML, respectively, for brevity), i.e., a logistic or softmax layer followed by the cross-entropy loss, have achieved remarkable success in various visual recognition tasks [17, 16, 12, 25, 27]. This success is mainly credited to the CNN's capacity for high-level feature learning and to the loss function's differentiability and simplicity for optimization. When the training data exhibit class imbalance, training CNNs by gradient descent with the conventional (unweighted) loss is biased towards learning the majority classes, resulting in performance degradation for the minority classes. To remedy this issue, a class-wise reweighted loss is often used to emphasize the minority classes, which can boost predictive performance without introducing much additional difficulty in model training [6, 14, 20, 28]. A typical choice of weight for each class is the inverse class frequency.
A natural question to ask, then, is what role those class-wise weights play in CNN training with LGL or SML that leads to the performance gain. Intuitively, the weights trade off predictive performance among the different classes. In this paper, we answer this question quantitatively with a system of equations: the tradeoffs are made on the probabilities that the CNN model predicts. Surprisingly, the reweighting mechanism takes effect rather differently for LGL than for SML. Here, we view the conventional (i.e., unreweighted) LGL or SML as the special case where all classes are weighted equally.
As these tradeoffs concern the logistic and softmax losses themselves, answering the above question actually leads us to a more fundamental question about their learning behavior: what property must the decision boundary satisfy once the model is trained? To the best of our knowledge, this question has not been investigated systematically, even though the logistic and softmax losses are extensively used in the deep learning community.
While SML can be viewed as a multi-class extension of LGL for binary classification, LGL becomes a different learning objective when used in multi-class classification. From the perspective of learning the structure of the data manifold, as pointed out in [1, 2, 7], SML treats all class labels equally and poses a competition between the true label and the other labels for each training sample, which may distort the data manifold; LGL, with its one-vs.-all approach, avoids this limitation by modeling each target class independently, which may better capture the in-class structure of the data. Though LGL enjoys these merits, it is rarely adopted in existing CNN models. The property that the LGL and SML decision boundaries must satisfy further reveals the difference between the two (see Eq. (9), (10) with the accompanying analysis). When used for multi-class classification, LGL exhibits two issues. First, compared with SML, LGL may introduce data imbalance, which can degrade model performance since sample size plays an important role in determining decision boundaries. More importantly, since the one-vs.-all approach in LGL treats all other classes as a single negative class, which has a multi-modal distribution [19, 18], the averaging effect on LGL's predicted probabilities can hinder the learning of feature representations that discriminate the target class from other classes that share some similarity with it.
Our contributions can be summarized as follows:
We provide a theoretical derivation of the relation among a sample's predicted probability (once the CNN is trained), the class weights in the loss function, and the sample sizes, in a system of equations. These equations explain the reweighting mechanism, whose effect differs between LGL and SML.
We depict the learning property of LGL and SML for classification problems based on those probability equations. Under mild conditions, the expectations of the model predicted probabilities must maintain the relation specified in Eq. (9).
We identify the multi-modality neglect problem in LGL as the main obstacle for LGL in multi-class classification. To remedy this problem, we propose a novel learning objective, in-negative-class reweighted LGL, as a competitive alternative to LGL and SML.
We conduct experiments on several benchmark datasets to demonstrate the effectiveness of our method.
With the recent explosion in computational power and the availability of large-scale image datasets, deep learning models have repeatedly made breakthroughs in a wide spectrum of computer vision tasks [17, 9]. These advancements include new CNN architectures for image classification [16, 12, 25, 27], object detection and segmentation [23, 24], new loss functions [7, 30], and effective training techniques for improving CNN performance [26, 15].
In those supervised learning problems, CNNs are mostly trained with loss functions such as LGL and SML. In practice, class imbalance naturally emerges in real-world data, and training CNN models directly on such datasets may lead to poor performance. This phenomenon is referred to as the imbalanced learning problem. To tackle this problem, cost-sensitive methods [8, 31] are the widely adopted approach in current training practice, as they don't introduce any obstacles into the backpropagation algorithm. One of the most popular methods is the class-wise reweighted loss function based on LGL or SML. For example, [14, 28] reweight each class by its inverse class frequency. On some long-tailed datasets, a smoothed version of the weights that emphasizes the minority classes less strongly, such as the square root of the inverse class frequency, is adopted [20, 21]. More recently,  proposed a weighting strategy based on the calculation of an effective sample size. In the context of learning from noisy data,  provides an analysis of the weighted SML showing a close connection to the mean absolute error (MAE) loss. However, what role the class-wise weights play in LGL and SML has not been explained in previous works. In this paper, we provide a theoretical explication of how the weights control the tradeoffs among the model's predictions.
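As a concrete illustration of the two weighting schemes mentioned above, the snippet below computes raw inverse-frequency weights and their square-root smoothed variant for a hypothetical imbalanced dataset (the class counts and the normalization convention are illustrative assumptions, not from the paper):

```python
import numpy as np

# Hypothetical class counts for an imbalanced 3-class dataset.
counts = np.array([900, 90, 10])

# Inverse-class-frequency weights, normalized to sum to the number of
# classes (one common convention; other normalizations also appear).
w_inv = 1.0 / counts
w_inv *= len(counts) / w_inv.sum()

# Smoothed variant: square root of the inverse frequency, which
# emphasizes minority classes less aggressively.
w_sqrt = 1.0 / np.sqrt(counts)
w_sqrt *= len(counts) / w_sqrt.sum()

print(w_inv.round(3))   # minority class gets the largest weight
print(w_sqrt.round(3))  # same ordering, but less extreme
```

Both schemes give the minority class the largest weight; the square-root version simply flattens the ratio between the extremes.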
If we decompose multi-class classification into multiple binary classification sub-tasks, LGL can also be used as the objective function via the one-vs.-all approach [10, 2], which is, however, rarely adopted in existing deep learning works. Motivated to understand class-wise reweighted LGL and SML, our analysis further leads us to a more profound discovery about the properties of the decision boundaries of LGL and SML. Previous work in  showed that the learning objective using LGL is quite different from SML, as each class is learned independently. They identified the negative class distraction (NCD) phenomenon, which can be detrimental to model performance when using LGL in multi-class classification. From our analysis, the NCD problem can be partially explained by the fact that LGL treats the negative class (i.e., all non-target classes) as a single class and ignores its multi-modality. If there exists a non-target class that shares some similarity with the target class, a CNN trained with LGL may make less confident predictions for that non-target class (i.e., its probability of belonging to the negative class is small), as its predicted probabilities are averaged out by the other non-target classes with confident predictions. Consequently, samples from that specific non-target class can be misclassified into the target class, resulting in a large predictive error.
Analysis on LGL and SML
In this section, we provide a theoretical explanation for the class-wise weighting mechanism and depict the learning property of LGL and SML losses.
Notation Let $\{(x_i, y_i)\}_{i=1}^{N}$ be the set of training samples of size $N$, where $x_i \in \mathbb{R}^d$ is the $d$-dimensional feature vector and $y_i \in \{0, 1, \ldots, K-1\}$ is the true class label, and let $S_k = \{i : y_i = k\}$ be the index set of samples of the $k$-th class. The bold $\mathbf{y}_i$ is used to represent the one-hot encoding of $y_i$: $y_{ik} = 1$ if $y_i = k$, and $y_{ik} = 0$ otherwise. $n_k = |S_k|$ represents the sample size of the $k$-th class, and hence $\sum_k n_k = N$. The maximum class size is denoted $n_{\max} = \max_k n_k$.
For a classification problem, the probability of a sample $x$ belonging to one class is modeled by the logistic (i.e., sigmoid) function for binary classification,
$$p(x) = \frac{1}{1 + e^{-f(x)}}, \qquad (1)$$
and by the softmax function for multi-class classification,
$$p_k(x) = \frac{e^{f_k(x)}}{\sum_{c=0}^{K-1} e^{f_c(x)}}, \quad k = 0, \ldots, K-1, \qquad (2)$$
where the $f_k(x)$'s are the logits for $x$ modeled by the CNN with parameter vector $\theta$. It is worth noting that softmax is equivalent to logistic in binary classification, as can be seen from
$$\frac{e^{f_1(x)}}{e^{f_0(x)} + e^{f_1(x)}} = \frac{1}{1 + e^{-(f_1(x) - f_0(x))}}.$$
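This equivalence is easy to verify numerically: for two classes, the softmax probability depends only on the difference of the two logits, which a sigmoid of that difference reproduces exactly (the logit values below are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

# Two arbitrary logits for a binary problem.
z = np.array([1.3, -0.4])
p_softmax = softmax(z)[0]
p_sigmoid = sigmoid(z[0] - z[1])  # softmax reduces to a sigmoid of the logit gap
print(p_softmax, p_sigmoid)
```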
Hence, without loss of generality, we write the class-wise reweighted LGL ($K = 2$) and SML ($K \ge 3$) in the unified form
$$L(\theta) = -\sum_{k=0}^{K-1} w_k \sum_{i \in S_k} \log p_k(x_i),$$
where each $p_k(x_i)$ is the CNN-predicted probability of sample $x_i$ belonging to the $k$-th class, and the $w_k$'s are weight parameters that control each class's contribution to the loss. When all $w_k$'s are equal, $L$ is the conventional cross-entropy loss, and minimizing it is equivalent to maximizing the likelihood. If the training data are imbalanced, a different setup of the $w_k$'s is used; usually, classes with smaller sizes are assigned higher weights. Generally, the $w_k$'s are treated as hyperparameters and selected by cross-validation.
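A minimal NumPy sketch of this reweighted cross-entropy may help fix ideas; the probabilities and labels below are toy values, and with equal weights the function reduces to the ordinary cross-entropy:

```python
import numpy as np

def reweighted_ce(probs, labels, class_weights):
    """Class-wise reweighted cross-entropy: sum over samples of
    -w_{y_i} * log p_{y_i}(x_i)."""
    p_true = probs[np.arange(len(labels)), labels]
    return float(np.sum(class_weights[labels] * -np.log(p_true)))

# Toy predicted probabilities for 4 samples and 3 classes (rows sum to 1).
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.3, 0.5],
                  [0.6, 0.3, 0.1]])
labels = np.array([0, 1, 2, 0])

equal = reweighted_ce(probs, labels, np.ones(3))          # conventional CE
upweighted = reweighted_ce(probs, labels, np.array([1.0, 1.0, 5.0]))
print(equal, upweighted)  # upweighting class 2 enlarges its contribution
```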
We emphasize here that using the logistic function for multi-class classification ($K \ge 3$) is a different learning objective from softmax, as the classification problem is then reformulated as $K$ binary classification sub-problems.
Key Equations for the Weights $w_k$
Assume that the CNN's output layer, after the convolutional layers, is a fully connected layer of $K$ neurons with bias terms; the predicted probability for sample $x_i$ is then given by the softmax activation
$$p_k(x_i) = \frac{e^{a_k^{\top} h_i + b_k}}{\sum_{c=0}^{K-1} e^{a_c^{\top} h_i + b_c}},$$
where $h_i$ is the feature representation of $x_i$ extracted by the convolutional layers, and $a_k$ and $b_k$ are the parameters of the $k$-th neuron in the output layer. For notational simplicity, we have dropped the dependence on $\theta$ in $p_k(x_i)$.
After the CNN is trained, we assume that the reweighted SML loss $L(\theta)$ has been minimized to a local optimum $\theta^*$. By optimization theory, a necessary condition is that the gradient of $L$ is zero at $\theta^*$ (more strictly, zero is in the subgradient of $L$ at $\theta^*$, but this doesn't affect the following analysis):
$$\nabla_{\theta} L(\theta^*) = 0.$$
We specifically consider the partial derivative of $L$ for the 1st class with respect to one component of $\theta$, the bias $b_1$. With the chain rule, the necessary condition above gives
$$\frac{\partial L}{\partial b_1} = -\sum_{k=0}^{K-1} w_k \sum_{i \in S_k} \frac{1}{p_k(x_i)} \frac{\partial p_k(x_i)}{\partial b_1} = 0,$$
where we use $p_k$ as given by Eq. (2). Let $\sigma$ be the softmax function of the logit vector $z$ with components $\sigma_k(z) = e^{z_k} / \sum_c e^{z_c}$; its derivative is
$$\frac{\partial \sigma_k}{\partial z_j} = \sigma_k \left(\mathbb{1}\{k = j\} - \sigma_j\right).$$
Substituting this derivative into the condition above yields
$$w_1 \sum_{i \in S_1} \big(1 - p_1(x_i)\big) = \sum_{k \neq 1} w_k \sum_{i \in S_k} p_1(x_i). \qquad (8)$$
With the same calculations for the other classes, we can obtain similar equations, one for each class. Recall that $p_k(x_i)$ for $i \in S_c$ is the probability of a sample from the $c$-th class being predicted into the $k$-th class; Eq. (8) thus reveals the quantitative relation between the weights $w_k$, the model predicted probabilities, and the training samples. Notice that CNNs are often trained with regularization to prevent overfitting. If the bias terms $b_k$ are not penalized, Eq. (8) still holds. Another possible caveat is that the calculation relies on the use of bias terms in the output layer. As using biases increases the CNN's flexibility and is not harmful to its performance, our analysis remains applicable to a wide range of CNN models trained with the cross-entropy loss.
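The stationarity relation can be sanity-checked numerically. The self-contained experiment below trains a small softmax regression with gradient descent (a stand-in for the CNN's output layer; the data, class sizes, and weights are all hypothetical), penalizes only the weight matrix and never the biases, and then checks the per-class balance equation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy imbalanced 3-class data in 2-D (hypothetical means and sizes).
sizes = [60, 20, 10]
means = [(-1.0, 0.0), (1.0, 0.0), (0.0, 1.5)]
X = np.vstack([rng.normal(m, 1.0, size=(n, 2)) for m, n in zip(means, sizes)])
y = np.repeat(np.arange(3), sizes)
N = len(y)
w = np.array([1.0, 3.0, 6.0])          # class weights in the loss
onehot = np.eye(3)[y]

W = np.zeros((2, 3))
b = np.zeros(3)

def predict():
    z = X @ W + b
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Full-batch gradient descent on the reweighted cross-entropy.  The L2
# penalty is applied to W only, never to b, so dL/db = 0 at the optimum.
for _ in range(50000):
    P = predict()
    G = w[y][:, None] * (P - onehot) / N   # gradient of the loss wrt logits
    W -= 0.1 * (X.T @ G + 1e-3 * W)
    b -= 0.1 * G.sum(axis=0)

# Stationarity in b gives, for each class k:
#   w_k * sum_{y_i=k} (1 - p_k(x_i)) = sum_{c != k} w_c * sum_{y_i=c} p_k(x_i)
P = predict()
for k in range(3):
    lhs = w[k] * np.sum(1.0 - P[y == k, k])
    rhs = sum(w[c] * np.sum(P[y == c, k]) for c in range(3) if c != k)
    print(k, round(lhs, 4), round(rhs, 4))
```

The gap between the two sides equals (up to sign) the remaining gradient with respect to the bias, so it shrinks to zero as training converges.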
We observe that in Eq. (8), $\frac{1}{n_1}\sum_{i \in S_1}\big(1 - p_1(x_i)\big)$ (approximately) represents the expected probability of the CNN incorrectly predicting a sample of class 1, and $\frac{1}{n_k}\sum_{i \in S_k} p_1(x_i)$ the expected probability of the CNN misclassifying a sample of class $k$ into class 1. If we assume that the training data represent well the true data distribution, which the testing data also follow, the learning property of the trained CNN shown in Eq. (8) generalizes to the testing data.
More specifically, since the CNN model is a continuous mapping and the softmax output is bounded between 0 and 1, by the uniform law of large numbers we have the following system of equations once the CNN is trained:
$$w_k n_k \big(1 - E_{kk}\big) = \sum_{c \neq k} w_c n_c E_{ck}, \quad k = 0, \ldots, K-1, \qquad (9)$$
where, for indices $c$ and $k$, $E_{ck}$ represents the expected probability of the CNN predicting a sample from class $c$ into class $k$:
$$E_{ck} = \int p_k(x)\, dP_c(x),$$
where $P_c$ is the true data distribution of the $c$-th class.
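In practice these expectations are estimated empirically as class-conditional means of the predicted probabilities. A small helper (the function name and toy numbers are mine) might look like:

```python
import numpy as np

def expected_confusion(P, y, num_classes):
    """Estimate E_{ck}: the mean predicted probability of class k over
    samples whose true class is c.  Rows index the true class c."""
    return np.vstack([P[y == c].mean(axis=0) for c in range(num_classes)])

# Toy example: 4 samples, 2 classes.
P = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.3, 0.7],
              [0.1, 0.9]])
y = np.array([0, 0, 1, 1])
E = expected_confusion(P, y, 2)
print(E)  # E[0,0] = 0.85, E[1,1] = 0.8
```

Each row of the resulting matrix sums to one, mirroring the fact that the softmax probabilities of each sample sum to one.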
Binary Case with LGL For the binary classification problem ($K = 2$), writing $p = p_1$ for the predicted probability of class 1 and $E_c[p] = E_{c1}$, Eq. (9) gives the following relation on the CNN predicted probabilities:
$$w_1 n_1 \big(1 - E_1[p]\big) = w_0 n_0 E_0[p]. \qquad (10)$$
In the conventional LGL where each class is weighted equally ($w_0 = w_1$), Eq. (10) becomes $n_1(1 - E_1[p]) = n_0 E_0[p]$. If the data exhibit severe imbalance, say $n_0 = 10\,n_1$, then we must have $10\,E_0[p] = 1 - E_1[p] \le 1$, i.e., $E_0[p] \le 0.1$.
If 0.5 is the decision-making threshold, this implies that the trained neural network can, on average, correctly predict a majority-class (e.g., class 0) sample confidently, with probability at least 0.9. For the minority class, however, the predictive performance is more complex and depends on the trained model and the data distribution. For example, if the two classes can be well separated and the model makes very confident predictions, say $E_0[p] \approx 0$, then we must have $E_1[p] \approx 1$, implying good predictive performance on class 1. If instead $E_0[p] = 0.08$, then $E_1[p] = 1 - 10 \times 0.08 = 0.2$. This means the predicted probability of a minority sample being minority is 0.2 on average. Hence, the classifier must misclassify most minority samples ($E_1[p] < 0.5$), resulting in very poor predictive accuracy for the minority class.
If LGL is reweighted using inverse class frequencies, $w_0 = 1/n_0$ and $w_1 = 1/n_1$, the equation above is equivalent to $1 - E_1[p] = E_0[p]$. Since predictions are made by thresholding $p$ at 0.5, and $E_0[p] < 0.5$ then implies $E_1[p] > 0.5$, we have a deterministic relation: if either class 0 or class 1 can be well predicted on average, reweighting by inverse class frequencies can guarantee a performance improvement for the minority class. However, the extent of this “goodness” depends on the separability of the underlying data distributions of the two classes.
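Assuming the balanced-weight form of the relation at imbalance ratio 10 discussed above, namely $10\,E_0[p] = 1 - E_1[p]$, the two scenarios can be reproduced with simple arithmetic:

```python
ratio = 10.0  # n0 / n1

def minority_expectation(e0, ratio):
    """Solve 1 - E1[p] = ratio * E0[p] for E1[p]."""
    return 1.0 - ratio * e0

# The two scenarios from the text: a well-separated case and a harder one.
for e0 in (0.0, 0.08):
    print(f"E0[p] = {e0:.2f}  ->  E1[p] = {minority_expectation(e0, ratio):.2f}")
```

A small increase in the majority class's average error (0 to 0.08) is amplified tenfold into a collapse of the minority class's average confidence (1.0 to 0.2).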
Simulations for Eq. (10) We conduct simulations under two settings to check Eq. (10). The imbalance ratio in the training data is set to 10; both the training and testing data follow the same data distribution. As the property only relies on the last fully connected hidden layer, we use the following setup:
Sim2: a one-hidden-layer feedforward neural network with sigmoid activation.
Simulation results (with standard deviations) for Eq. (10) over 100 runs. RHS denotes the theoretical value of the right-hand side of Eq. (10); LHS the simulated value of the left-hand side.
Multi-class Case with SML Because Eq. (9) is a system of only $K$ equations in the $K^2$ variables $E_{ck}$, we cannot solve it exactly for a quantitative relation among the $E_{ck}$'s when $K \ge 3$. For the special case when the weights are chosen as the inverse class frequencies $w_k = 1/n_k$, considering Eq. (9) for class 1 we have $1 - E_{11} = \sum_{c \neq 1} E_{c1}$. Multi-class classification thus does not have a deterministic relation as in the binary case, since predictions are made by $\arg\max_k p_k$ and there is no decisive threshold for decision making (like the 0.5 in the binary case). Our findings match the results in  in the sense that class-wise reweighting for multi-class classification is indeterministic. However, our results are solely based on the mathematical properties of the backpropagation algorithm from optimization theory, whereas  is based on decision theory.
Learning property of LGL and SML While Eq. (9) explains the class-wise reweighting mechanism, the same equations also reveal the property of the decision boundaries for LGL and SML. For comparison, the decision boundary of a support vector machine (SVM) is determined by the support vectors that maximize the margin, and samples with larger margin have no effect on its position. By contrast, in LGL and SML all samples contribute to the decision boundary, so that the averaged probabilities the model produces must satisfy Eq. (9). In particular, in the binary case, if the classes are balanced, the model must make correct predictions with equal confidence for the positive and negative classes, on average; whereas for imbalanced data, the decision boundary will be pushed towards the minority class into a position where Eq. (10) is always maintained. Another observation is that if the expectation of the model predicted probabilities does not match its mode (e.g., a skewed distribution), the magnitude of the tradeoff between the performance of the majority and minority classes depends on the direction of skewness. If the distribution of the majority class skews away from the decision boundary, upweighting the minority class will boost model performance at a smaller cost in majority-class performance than if it skews towards the decision boundary. This implies that estimating the shape of the data distribution in the latent feature space and choosing the weights accordingly would be very helpful for improving overall model performance.
In-negative Class Reweighted LGL
In this section, we focus on LGL for multi-class classification via the one-vs.-all approach. In addition to the theoretical merit of LGL mentioned in the introduction, namely that LGL can better capture the structure of the data manifold than SML, the guarantee of good performance after proper reweighting (e.g., Eq. (10)) is also desirable, since the one-vs.-all approach naturally introduces a data imbalance issue.
Multi-modality Neglect Problem In spite of these merits, LGL also introduces the multi-modality neglect problem in multi-class classification. Since the expectation of the model predicted probability must satisfy Eq. (10) for LGL, the averaging effect can be harmful for model performance. In the one-vs.-all approach, the negative class consists of all remaining non-target classes and therefore follows a multi-modal distribution (one modality for each non-target class). LGL treats all non-target classes equally in the learning process. If there is a hard non-target class that shares non-trivial similarity with the target class, its contribution to LGL may be averaged out by the other, easy non-target classes. In other words, the easy non-target classes (i.e., those correctly predicted into the negative class with high probability) compensate for the predicted probability of the hard non-target class, so that the probabilistic relation in Eq. (10) is maintained. Consequently, the model may incorrectly predict samples from the hard non-target class into the target class, inducing a large predictive error for that class. This is undesirable, as we want LGL to pay more attention to separating the target class from the hard class, while maintaining the separation from the remaining easy non-target classes.
To this end, we propose an improved version of LGL that reweights each non-target class's contribution within the negative class. Specifically, for the target class (i.e., the positive class) and all non-target classes (i.e., the negative class), a two-level reweighting mechanism is applied in LGL, which we term in-negative-class reweighted LGL (LGL-INR):
where $p_i$ is the predicted probability of sample $x_i$ belonging to the positive class and $\lambda_k$ is the weight for class $k$ as a sub-class of the negative class.
The first level of reweighting is between the positive and the negative class: using inverse frequencies maintains the balance between them, as one-vs.-all is likely to introduce class imbalance. The second level is within the negative class: we upweight the contribution of a hard sub-class by assigning it a larger $\lambda_k$, making LGL-INR focus more on learning that class.
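Since the exact form of the LGL-INR objective is not reproduced here, the following NumPy sketch only illustrates the two-level scheme as described: inverse-frequency balancing between the positive and negative classes, then softmax-over-hardness weights within the negative class. The function name, variable names, and toy probabilities are all my own illustrative choices:

```python
import numpy as np

def lgl_inr_loss(p, y, target, tau=1.0, eps=1e-12):
    """Sketch of a two-level reweighted one-vs.-all logistic loss.

    p: predicted probability of belonging to `target`, one value per sample
    y: integer class labels; tau: temperature of the hardness softmax.
    """
    pos = (y == target)
    neg_classes = [c for c in np.unique(y) if c != target]

    # Level 1: balance positive vs. negative class by inverse frequency.
    alpha_pos = 1.0 / max(pos.sum(), 1)
    alpha_neg = 1.0 / max((~pos).sum(), 1)

    # Level 2: per-class hardness = mean predicted target-probability,
    # turned into weights lambda_k by a temperature-scaled softmax.
    hardness = np.array([p[y == c].mean() for c in neg_classes])
    lam = np.exp(tau * hardness)
    lam /= lam.sum()

    loss_pos = -alpha_pos * np.sum(np.log(p[pos] + eps))
    loss_neg = -alpha_neg * sum(
        l * np.sum(np.log(1.0 - p[y == c] + eps))
        for l, c in zip(lam, neg_classes))
    return loss_pos + loss_neg

# Toy example: target class 0 with two negative sub-classes.
y = np.array([0, 0, 1, 1, 2, 2])
p_easy = np.array([0.9, 0.8, 0.2, 0.1, 0.1, 0.2])
p_hard = np.array([0.9, 0.8, 0.6, 0.5, 0.1, 0.2])  # class 1 is now hard
print(lgl_inr_loss(p_easy, y, 0), lgl_inr_loss(p_hard, y, 0))
```

When class 1 becomes hard (its samples receive high target-class probability), both its per-class loss and its weight grow, so the total loss rises sharply, which is exactly the focusing behavior described above.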
Choice of the $\lambda_k$'s When there are many classes, treating all $\lambda_k$'s as hyperparameters and selecting their optimal values is not feasible in practice, as we generally don't have prior knowledge of which classes are hard. Instead, we adopt a strategy that assigns the weights during the training process. For each non-target class $k$, let $B_k$ be the subset of samples from class $k$ in the mini-batch; we use the mean predicted probability
$$q_k = \frac{1}{|B_k|} \sum_{i \in B_k} p_i$$
as the class-level hardness measure. A larger $q_k$ implies that class $k$ is harder to separate from the target class. We then transform the $q_k$'s with a softmax to obtain the $\lambda_k$'s:
$$\lambda_k = \frac{e^{\tau q_k}}{\sum_{c} e^{\tau q_c}},$$
where $\tau$ is a temperature that can smooth ($\tau < 1$) or sharpen ($\tau > 1$) each non-target class's contribution $\lambda_k$. LGL-INR adaptively shifts its learning focus to the hard classes while remaining attentive to the easy ones. Note that this strategy introduces only one extra parameter, $\tau$, in LGL-INR.
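Under the convention assumed here (multiplying the hardness scores by the temperature inside the softmax, which matches the limiting behavior described in the text), the effect of the temperature can be seen directly:

```python
import numpy as np

def hardness_weights(hardness, tau):
    """Softmax over hardness scores, sharpened or smoothed by tau."""
    e = np.exp(tau * np.asarray(hardness))
    return e / e.sum()

h = [0.6, 0.2, 0.1]   # one hard and two easy non-target classes
for tau in (0.0, 1.0, 5.0, 50.0):
    print(tau, hardness_weights(h, tau).round(3))
# tau -> 0 gives uniform weights; a large tau concentrates
# nearly all weight on the hardest class.
```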
With the competition mechanism imposed by the $\lambda_k$'s, LGL-INR can be viewed as a smoothed learning objective between the one-vs.-one and one-vs.-all approaches: when $\tau \to 0$, $\lambda_k \to 1/(K-1)$ and all non-target classes are weighted equally, which is the in-negative-class balanced LGL using inverse class frequencies; when $\tau$ is very large, $\lambda$ concentrates on the hardest class and LGL-INR approximately performs one-vs.-one classification. We don't specifically fine-tune the optimal value of $\tau$; a single moderate value works well in our experiments.
We evaluate LGL-INR on several benchmark datasets for image classification. Note that in our experiments, applying LGL to multi-class classification naturally introduces data imbalance, which is handled in our LGL-INR formulation. Our primary goal here is to demonstrate that LGL-INR can be used as a drop-in replacement for LGL and SML with competitive or even better performance, rather than to outperform the existing best models that use extra training techniques. For fair comparison, all loss functions are evaluated in the same test setting. Code is publicly available at https://github.com/Dichoto/LGL-INR.
Dataset We perform experiments on four benchmark datasets: MNIST, Fashion-MNIST (FMNIST), Kuzushiji-MNIST (KMNIST), and CIFAR10. FMNIST and KMNIST are intended as harder drop-in replacements for MNIST. Both are gray-scale image datasets consisting of 10 classes of clothing items and Japanese characters, respectively. CIFAR10 consists of 32×32 colored images from 10 object classes.
Model setup We test the three loss functions on each dataset with different CNN architectures. For the MNIST-type datasets, two CNNs with simple configurations are used: the first (CNN2C) has two convolution layers, and the other (CNN5C) has five convolution layers with batch normalization. For CIFAR10, we use MobileNetV2 and ResNet-18 with publicly available implementations.
All models are trained with the standard stochastic gradient descent (SGD) algorithm. The training setups are as follows. For the MNIST-type datasets, the learning rate is 0.01, the momentum is 0.5, the batch size is 64, and the number of epochs is 20; we don't perform any data augmentation. For CIFAR10, we train the models for 100 epochs with batch size 64. The initial learning rate is set to 0.1, and we divide it by 10 at the 50th and 75th epochs. The momentum in SGD is 0.9 and weight decay is applied. Data augmentation includes random crops and horizontal flips. We train all models without pretraining on large-scale image data. Model performance is evaluated by the top-1 accuracy rate, reported on the testing data from the standard train/test split of each dataset for fair evaluation. For LGL-INR, we report results obtained with a single fixed temperature.
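The CIFAR10 learning-rate schedule described above (base rate 0.1, divided by 10 at the 50th and 75th epochs) can be written as a small helper; the function name and the exact epoch-boundary convention are illustrative assumptions:

```python
def cifar_lr(epoch, base_lr=0.1):
    """Step schedule: divide the base rate by 10 at the 50th and 75th epochs."""
    lr = base_lr
    if epoch >= 50:
        lr /= 10.0
    if epoch >= 75:
        lr /= 10.0
    return lr

print([cifar_lr(e) for e in (0, 49, 50, 74, 75, 99)])
```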
Table 3 and Table 4 show the classification accuracy using LGL, SML, and LGL-INR on the MNIST-type datasets and CIFAR10, respectively. From the tables, we observe that for all three loss functions, models with larger capacity yield higher accuracy. On the MNIST-type data, LGL yields overall poorer performance than SML. This is because some classes in those datasets are very similar to each other (like shirt vs. coat in FMNIST) and the negative class consists of 9 different sub-classes; the learning focus of LGL may thus get distracted from the hard sub-classes due to LGL's averaging behavior shown in Eq. (9). SML does not suffer from this problem, as all negative sub-classes are treated equally. On CIFAR10, LGL achieves better accuracy than SML, possibly due to the absence of classes as similar to each other as those in the MNIST-type data. This observation demonstrates LGL's potential as a competitive alternative to SML in some classification tasks.
On the other hand, LGL-INR adaptively pays more attention to the hard classes while keeping its separation from the easy classes. This enables LGL-INR to notably outperform both LGL and SML. Comparing LGL-INR with LGL, we see that the multi-modality neglect problem deteriorates LGL's ability to learn discriminative feature representations, which can be relieved by the in-negative-class reweighting mechanism; comparing LGL-INR with SML, focusing on learning hard classes (not restricted to classes similar to the target class) is beneficial. Moreover, the adaptive weight assignment during training requires no extra effort in weight selection, making our method widely applicable.
We examine the predictive behavior of LGL-INR in detail through the confusion matrix on the testing data, using CNN2C on KMNIST as an example. Fig. 1 shows the results. We observe that for LGL, classes 1 and 2 have the lowest accuracy among the 10 classes. By shifting LGL's learning focus onto hard classes, LGL-INR significantly improves model performance on classes 1 and 2. This is within our expectation, backed by the theoretical depiction of LGL's learning property. SML does not have the multi-modality neglect problem, as each class is treated equally in the learning process, yet it also does not pay extra attention to the hard classes. This makes LGL-INR advantageous: it outperforms SML on 9 classes out of 10. For example, under SML, class 0 has 18 samples misclassified into class 4, whereas only 6 are misclassified under LGL-INR.
Figure 2 displays the training accuracy curves for LGL, SML, and LGL-INR on FMNIST and KMNIST. Under the same training protocol, LGL-INR converges slightly faster than SML and LGL, with comparable (FMNIST) or better (KMNIST) performance, implying that focusing on learning hard classes may facilitate the training process.
We also check the sensitivity of the temperature parameter $\tau$ in the LGL-INR weighting mechanism. Mathematically, a very large or very small value of $\tau$ is not desirable, as LGL-INR then reduces to an approximate one-vs.-one or a class-balanced learning objective, respectively. We test a range of values on KMNIST. As shown in Table 5 and Fig. 2, model performance is not sensitive to $\tau$ in this range, making LGL-INR a competitive alternative to LGL and SML without introducing much hyperparameter tuning.
In this paper, motivated by explaining the class-wise reweighting mechanism in LGL and SML, we theoretically derived a system of probability equations that depicts the learning property of LGL and SML and explains the role of the class-wise weights in the loss function. By examining the difference in the effects of the weighting mechanism on LGL and SML, we identified the multi-modality neglect problem as the major obstacle that can negatively affect LGL's performance in multi-class classification. We remedy this shortcoming of LGL with an in-negative-class reweighting mechanism. The proposed method shows its effectiveness on several benchmark image datasets. For future work, we plan to incorporate the estimation of the data distribution and apply the reweighting mechanism of LGL-INR at the sample level during training to further improve its efficacy.
This work is supported by the National Science Foundation under grant no. IIS-1724227.
- Manifold regularization: a geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research 7 (Nov), pp. 2399–2434.
- (2006) Pattern Recognition and Machine Learning. Springer.
- (2015) Attention-based models for speech recognition. In Advances in Neural Information Processing Systems, pp. 577–585.
- (2018) Deep learning for classical Japanese literature. arXiv preprint arXiv:1812.01718.
- (1995) Support-vector networks. Machine Learning 20 (3), pp. 273–297.
- (2019) Class-balanced loss based on effective number of samples. arXiv preprint arXiv:1901.05555.
- Single-label multi-class image classification by deep logistic regression. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 3486–3493.
- (2001) The foundations of cost-sensitive learning. In International Joint Conference on Artificial Intelligence, Vol. 17, pp. 973–978.
- (2016) Deep Learning. MIT Press.
- (2005) The elements of statistical learning: data mining, inference and prediction. The Mathematical Intelligencer 27 (2), pp. 83–85.
- (2008) Learning from imbalanced data. IEEE Transactions on Knowledge & Data Engineering 9, pp. 1263–1284.
- (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
- (2017) MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
- (2016) Learning deep representation for imbalanced classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5375–5384.
- (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
- (2012) ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105.
- (2015) Deep learning. Nature 521 (7553), pp. 436.
- (2018) Multinomial classification with class-conditional overlapping sparse feature groups. Pattern Recognition Letters 101, pp. 37–43.
- Robust feature selection via l2,1-norm in finite mixture of regression. Pattern Recognition Letters 108, pp. 15–22.
- (2018) Exploring the limits of weakly supervised pretraining. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 181–196.
- (2013) Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pp. 3111–3119.
- (1994) Large sample estimation and hypothesis testing. Handbook of Econometrics 4, pp. 2111–2245.
- (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pp. 91–99.
- (2015) U-Net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241.
- (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
- (2014) Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15 (1), pp. 1929–1958.
- (2015) Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9.
- (2017) Learning to model the tail. In Advances in Neural Information Processing Systems, pp. 7029–7039.
- (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747.
- (2018) Generalized cross entropy loss for training deep neural networks with noisy labels. In Advances in Neural Information Processing Systems, pp. 8778–8788.
- (2010) On multi-class cost-sensitive learning. Computational Intelligence 26 (3), pp. 232–257.