SEGEN: Sample-Ensemble Genetic Evolutional Network Model

03/23/2018
by Jiawei Zhang, et al.

Deep learning, a rebranding of deep neural network research, has achieved remarkable success in recent years. With multiple hidden layers, deep learning models compute hierarchical features or representations of the observational data. At the same time, deep learning has drawn considerable criticism for its heavy data consumption, demanding computational resources, laborious parameter tuning, and lack of result explainability. In this paper, we introduce a new representation learning model, the "Sample-Ensemble Genetic Evolutional Network" (SEGEN), which can serve as an alternative to deep learning models. Instead of building one single deep model, SEGEN adopts a genetic-evolutional learning strategy to build a group of unit models, generation by generation, based on a set of sampled sub-instances. The unit models incorporated in SEGEN can be either traditional machine learning models or recent deep learning models with much "smaller" and "shallower" architectures. The learning results for each instance at the final generation are effectively combined across the unit models via diffusive propagation and ensemble learning strategies. From a computational perspective, SEGEN requires far less data, fewer computational resources, and less parameter tuning effort, while offering sound theoretical interpretability of the learning process and results. Extensive experiments have been conducted on real-world network-structured datasets, and the results obtained by SEGEN demonstrate its advantages over other state-of-the-art representation learning models.
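The workflow the abstract describes — sample sub-instances, fit a population of small unit models, evolve them over generations, then ensemble the final generation — can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the unit model (a one-parameter mean estimator standing in for any small/shallow learner), the fitness function, and the crossover/mutation operators below are all assumptions made for the sketch.

```python
import random

def segen(data, generations=5, pop_size=10, subset_size=20, seed=0):
    """Toy SEGEN-style pipeline: sampled sub-instances -> population of
    unit models -> genetic evolution -> ensemble of the final generation."""
    rng = random.Random(seed)

    def fit_unit(subset):
        # A deliberately "small and shallow" unit model: a single parameter
        # (the subset mean), fitted on one sampled sub-instance.
        return sum(subset) / len(subset)

    def fitness(param, val):
        # Lower squared error on a held-out sample -> higher fitness.
        return -sum((x - param) ** 2 for x in val) / len(val)

    # Initial generation: one unit model per sampled sub-instance.
    population = [fit_unit(rng.sample(data, subset_size)) for _ in range(pop_size)]

    for _ in range(generations):
        val = rng.sample(data, subset_size)
        ranked = sorted(population, key=lambda p: fitness(p, val), reverse=True)
        parents = ranked[: pop_size // 2]            # selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                      # crossover
            child += rng.gauss(0, 0.05)              # mutation
            children.append(child)
        population = parents + children

    # Ensemble: combine the final generation's unit models by averaging.
    return sum(population) / len(population)
```

For example, on data scattered around 3.0, the ensembled estimate lands near 3.0 even though every unit model only ever saw a small sampled subset. The diffusive-propagation step for combining per-node results on network data is omitted here, since it depends on the graph structure.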

