RoBoSS: A Robust, Bounded, Sparse, and Smooth Loss Function for Supervised Learning

09/05/2023
by   Mushir Akhtar, et al.

In machine learning, the loss function is of paramount importance, especially in supervised learning tasks: it fundamentally shapes the behavior and efficacy of the learning algorithm. Traditional loss functions, while widely used, often struggle with noisy and high-dimensional data, impede model interpretability, and lead to slow convergence during training. In this paper, we address these constraints by proposing a novel robust, bounded, sparse, and smooth (RoBoSS) loss function for supervised learning. We then incorporate the RoBoSS loss function within the framework of the support vector machine (SVM) and introduce a new robust algorithm named ℒ_rbss-SVM. For the theoretical analysis, we also establish the classification-calibrated property and the generalization ability of the proposed loss. These investigations give deeper insight into the performance of the RoBoSS loss function in classification tasks and its potential to generalize to unseen data. To empirically demonstrate the effectiveness of the proposed ℒ_rbss-SVM, we evaluate it on 88 real-world UCI and KEEL datasets from diverse domains. Additionally, to illustrate its effectiveness in the biomedical realm, we evaluate it on two medical datasets: an electroencephalogram (EEG) signal dataset and the breast cancer (BreaKHis) dataset. The numerical results substantiate the superiority of the proposed ℒ_rbss-SVM model, both in generalization performance and in training-time efficiency.
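The four properties in the name (robust, bounded, sparse, smooth) can be illustrated with a simple candidate loss of this family. The sketch below is a hedged illustration, not necessarily the paper's exact RoBoSS formula: it uses a hypothetical loss L(u) = λ(1 − (1 + a·u)e^{−a·u}) on the margin error u = 1 − y f(x) for u > 0, and 0 otherwise. Such a loss is zero for correctly classified points beyond the margin (sparsity), differentiable everywhere including at u = 0 (smoothness), and saturates at λ for large errors (boundedness, hence robustness to outliers).

```python
import numpy as np

def bounded_smooth_loss(u, lam=1.0, a=1.0):
    """Illustrative bounded, sparse, smooth loss (NOT the paper's exact
    RoBoSS definition).  u = 1 - y*f(x) is the margin error; lam is the
    upper bound of the loss; a controls how fast it saturates."""
    u = np.asarray(u, dtype=float)
    # Zero for u <= 0 (sparsity); rises smoothly and saturates at lam.
    return np.where(u > 0.0,
                    lam * (1.0 - (1.0 + a * u) * np.exp(-a * u)),
                    0.0)

# Sanity checks on the four properties:
print(bounded_smooth_loss(-0.5))   # correctly classified -> exactly 0
print(bounded_smooth_loss(50.0))   # large error -> saturates near lam
```

For comparison, the hinge loss max(0, u) shares the sparsity property but is unbounded (so a single outlier can dominate the objective) and non-smooth at u = 0, which is precisely the kind of limitation the abstract attributes to traditional losses.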
