Information, learning and falsification

10/17/2011 ∙ by David Balduzzi, et al. ∙ Max Planck Society

There are (at least) three approaches to quantifying information. The first, algorithmic information or Kolmogorov complexity, takes events as strings and, given a universal Turing machine, quantifies the information content of a string as the length of the shortest program producing it. The second, Shannon information, takes events as belonging to ensembles and quantifies the information resulting from observing the given event in terms of the number of alternate events that have been ruled out. The third, statistical learning theory, has introduced measures of capacity that control (in part) the expected risk of classifiers. These capacities quantify the expectations regarding future data that learning algorithms embed into classifiers. This note describes a new method of quantifying information, effective information, that links algorithmic information to Shannon information, and also links both to capacities arising in statistical learning theory. After introducing the measure, we show that it provides a non-universal analog of Kolmogorov complexity. We then apply it to derive basic capacities in statistical learning theory: empirical VC-entropy and empirical Rademacher complexity. A nice byproduct of our approach is an interpretation of the explanatory power of a learning algorithm in terms of the number of hypotheses it falsifies, counted in two different ways for the two capacities. We also discuss how effective information relates to information gain, Shannon and mutual information.
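
For orientation, here is a brief sketch, in LaTeX, of the standard quantities the abstract refers to. The notation is my own rather than the paper's, and effective information itself is defined in the paper and not reproduced here.

% Kolmogorov complexity of a string x relative to a universal Turing machine U:
% the length of the shortest program p for which U outputs x.
K_U(x) = \min \{\, |p| : U(p) = x \,\}

% Shannon information (surprisal) of observing event x drawn from an ensemble with
% distribution p; for N equally likely alternatives this equals \log_2 N, the
% log-count of alternatives ruled out by the observation.
I(x) = -\log_2 p(x)

% Empirical VC-entropy of a hypothesis class \mathcal{F} on a sample x_1, \dots, x_n:
% the log of the number of distinct labelings the class can realize on that sample.
\hat{H}_n(\mathcal{F}) = \log \bigl| \{ (f(x_1), \dots, f(x_n)) : f \in \mathcal{F} \} \bigr|

% Empirical Rademacher complexity of \mathcal{F} on the same sample, with
% independent uniform random signs \sigma_i \in \{-1, +1\} (one common convention):
\hat{\mathcal{R}}_n(\mathcal{F}) = \mathbb{E}_{\sigma} \Bigl[ \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(x_i) \Bigr]

The abstract's claim is that effective information recovers the last two capacities and, in doing so, counts the hypotheses a learning algorithm falsifies in two different ways.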
