Combining datasets to increase the number of samples and improve model fitting

by Thu Nguyen, et al.

In many use cases, combining information from different datasets can improve a machine learning model's performance, especially when the number of samples in at least one of the datasets is small. A potential challenge, however, is that the features of these datasets are not identical, even though some features are shared among them. To tackle this challenge, we propose a novel framework called Combine datasets based on Imputation (ComImp). In addition, we propose a variant of ComImp, PCA-ComImp, that uses Principal Component Analysis (PCA) to reduce the dimension before combining the datasets. This is useful when the datasets have a large number of features that are not shared between them. Furthermore, our framework can also be used for data preprocessing: it fills in missing entries while combining different datasets. To illustrate the power of the proposed methods and their potential uses, we conduct experiments on various tasks (regression and classification) and data types (tabular and time series data), including cases where the datasets to be combined have missing data. We also investigate how the devised methods can be used with transfer learning to further improve model training. Our results indicate that the proposed methods resemble transfer learning in that the merge can significantly improve the accuracy of a prediction model on smaller datasets. In addition, the methods can boost performance by a significant margin when combining small datasets and can provide extra improvement when used with transfer learning.




