Assessing, testing and estimating the amount of fine-tuning by means of active information

A general framework is introduced to estimate how much external information has been infused into a search algorithm, the so-called active information. This is rephrased as a test of fine-tuning, where tuning corresponds to the amount of pre-specified knowledge that the algorithm makes use of in order to reach a certain target. A function f quantifies specificity for each possible outcome x of a search, so that the target of the algorithm is a set of highly specified states, whereas fine-tuning occurs if it is much more likely for the algorithm to reach the target than by chance. The distribution of a random outcome X of the algorithm involves a parameter θ that quantifies how much background information has been infused. A simple choice of this parameter is to use θf in order to exponentially tilt the distribution of the outcome of the search algorithm under the null distribution of no tuning, so that an exponential family of distributions is obtained. Such algorithms are obtained by iterating a Metropolis-Hastings type of Markov chain, which makes it possible to compute their active information under equilibrium and non-equilibrium of the Markov chain, with or without stopping when the targeted set of fine-tuned states has been reached. Other choices of the tuning parameter θ are discussed as well. Nonparametric and parametric estimators of active information and tests of fine-tuning are developed when repeated and independent outcomes of the algorithm are available. The theory is illustrated with examples from cosmology, student learning, reinforcement learning, a Moran-type model of population genetics, and evolutionary programming.
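As a rough illustration of the exponential-tilting idea described in the abstract (a minimal sketch, not the paper's own implementation), the following code tilts a hypothetical null distribution p0 over a finite state space by exp(θf(x)), and compares the probability of a target set of highly specified states under tuning and under chance, giving the active information as their log ratio. The state space, the specificity function f, the threshold defining the target, and the value of θ are all made-up assumptions for illustration; it also shows a simple nonparametric estimate of active information from repeated independent outcomes of the tuned algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite state space; f(x) scores the specificity of each outcome x.
n_states = 1000
f = rng.uniform(0.0, 1.0, size=n_states)
p0 = np.full(n_states, 1.0 / n_states)   # null distribution: no tuning
target = f >= 0.9                        # target = set of highly specified states

# Exponential tilting: p_theta(x) proportional to p0(x) * exp(theta * f(x)).
theta = 5.0
w = p0 * np.exp(theta * f)
p_theta = w / w.sum()

# Active information: log ratio of reaching the target under tuning vs. by chance.
p_target_null = p0[target].sum()
p_target_tuned = p_theta[target].sum()
print("exact active information:", np.log2(p_target_tuned / p_target_null), "bits")

# Nonparametric estimate from repeated independent outcomes of the algorithm:
# replace the exact tuned target probability by the observed fraction of hits.
samples = rng.choice(n_states, size=500, p=p_theta)
p_target_hat = target[samples].mean()
print("estimated active information:", np.log2(p_target_hat / p_target_null), "bits")
```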
