Sample Summary with Generative Encoding

01/15/2022
by David Banh, et al.

With increasing sample sizes, all algorithms require longer run times that scale, at best, logarithmically. This work introduces a concept that summarises the sample space, reducing the total number of samples to a core set that can be used for regression tasks. This idea of summarisation is called folding: the technique of projecting data into a lower-dimensional subspace, whereas unfolding projects it back into the original space. Results on a prediction task show that information is retained during folding, as accuracy after unfolding remains comparable to prediction without summarisation.
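The abstract describes folding as a projection into a lower-dimensional subspace and unfolding as the projection back into the original space. The sketch below is a minimal illustration of that idea only, not the paper's generative encoding: it uses a PCA projection as a hypothetical stand-in for the fold/unfold maps, synthetic data, and ridge regression as the downstream task, and compares accuracy with and without summarisation.

# Minimal sketch (assumptions: PCA as the fold/unfold maps, synthetic data,
# ridge regression as the prediction task; none of these are from the paper).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 50))                      # original high-dimensional samples
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: regression without any summarisation.
baseline = Ridge().fit(X_tr, y_tr)

# "Fold": project the data into a lower-dimensional subspace.
pca = PCA(n_components=10).fit(X_tr)
Z_tr = pca.transform(X_tr)                           # folded (summarised) representation
folded_model = Ridge().fit(Z_tr, y_tr)

# "Unfold": project the folded data back into the original space and check
# how much predictive information survives the round trip.
X_tr_unfolded = pca.inverse_transform(Z_tr)
unfolded_model = Ridge().fit(X_tr_unfolded, y_tr)

print("baseline R^2:", r2_score(y_te, baseline.predict(X_te)))
print("folded   R^2:", r2_score(y_te, folded_model.predict(pca.transform(X_te))))
print("unfolded R^2:", r2_score(y_te, unfolded_model.predict(X_te)))

If the claim in the abstract holds for a given encoding, the unfolded accuracy should stay close to the baseline, which is what this toy comparison prints out.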

