SVM-based Deep Stacking Networks

02/15/2019
by Jingyuan Wang, et al.

Deep network models, the majority of which are built on neural networks, have proven to be a powerful framework for representing complex data in high-performance machine learning. In recent years, a growing number of studies have turned to non-neural-network approaches to build diverse deep structures; the Deep Stacking Network (DSN) is one such approach, using stacked easy-to-learn blocks to build a deep network whose parameter training can be parallelized. In this paper, we propose a novel SVM-based Deep Stacking Network (SVM-DSN), which uses the DSN architecture to organize linear SVM classifiers for deep learning. A BP-like layer tuning scheme is also proposed to ensure that the stacked SVMs are optimized both holistically and locally. Our model brings some desirable mathematical properties of SVMs, such as convex optimization, into the DSN framework. From a global view, SVM-DSN can iteratively extract data representations layer by layer, as a deep neural network does, but with parallelizable training; from a local view, each stacked SVM can converge to its optimal solution and yield support vectors, which, compared with neural networks, can lead to interesting improvements in anti-saturation behavior and interpretability. Experimental results on both image and text data sets demonstrate the excellent performance of SVM-DSN compared with competitive benchmark models.
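For intuition, here is a minimal sketch of the DSN-style stacking idea with linear SVM blocks: each block is a linear SVM trained on the raw input concatenated with the decision values of the blocks below it. This is not the paper's full method (the BP-like layer tuning scheme is omitted), and the use of scikit-learn's LinearSVC, the function names, and the hyperparameters are illustrative assumptions.

```python
# Sketch of DSN-style stacking with linear SVM blocks (illustrative only).
import numpy as np
from sklearn.svm import LinearSVC


def train_svm_dsn(X, y, n_blocks=3, C=1.0):
    """Train a stack of linear SVM blocks; block i sees the raw input X
    concatenated with the decision values of all lower blocks."""
    blocks, stacked = [], []  # stacked: decision values of trained blocks
    for _ in range(n_blocks):
        features = np.hstack([X] + stacked) if stacked else X
        svm = LinearSVC(C=C).fit(features, y)
        blocks.append(svm)
        # Feed this block's decision values forward to the next block.
        stacked.append(svm.decision_function(features).reshape(len(X), -1))
    return blocks


def predict_svm_dsn(blocks, X):
    """Run the stack forward and predict with the top block."""
    stacked = []
    for svm in blocks[:-1]:
        features = np.hstack([X] + stacked) if stacked else X
        stacked.append(svm.decision_function(features).reshape(len(X), -1))
    features = np.hstack([X] + stacked) if stacked else X
    return blocks[-1].predict(features)
```

In the paper, the stacked blocks are additionally fine-tuned with the proposed BP-like scheme, so that each SVM remains a convex subproblem while the whole stack is adjusted toward the global objective; the sketch above only shows the feed-forward stacking.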
