Open Problems in Universal Induction & Intelligence

by Marcus Hutter

Specialized intelligent systems can be found everywhere: fingerprint, handwriting, speech, and face recognition, spam filtering, chess and other game programs, robots, etc. This decade the first presumably complete mathematical theory of artificial intelligence based on universal induction-prediction-decision-action has been proposed. This information-theoretic approach solidifies the foundations of inductive inference and artificial intelligence. Getting the foundations right usually marks significant progress and maturing of a field. The theory provides a gold standard and guidance for researchers working on intelligent algorithms. The roots of universal induction were laid exactly half a century ago and the roots of universal intelligence exactly one decade ago. So it is timely to take stock of what has been achieved and what remains to be done. Since there are already good recent surveys, I describe the state-of-the-art only in passing and refer the reader to the literature. This article concentrates on the open problems in universal induction and its extension to universal intelligence.








1 Introduction

What is a good model of the weather changes? Are there useful models of the world economy? What is the true regularity behind the number sequence 1,4,9,16,…? What is the correct relationship between mass, force, and acceleration of a physical object? Is there a causal relation between interest rates and inflation? Are models of the stock market purely descriptive or do they have any predictive power?

Induction. The questions above look like a set of unrelated inquiries. What they have in common is that they seem to be amenable to scientific investigation. They all ask about a model for or relation between observations. The purpose seems to be to explain or understand the data. Generalizing from data to general rules is called inductive inference, a core problem in philosophy [Hum39, Pop34, How03] and a key task of science [Lev74, Ear93, Wal05].

But why do or should we care about modeling the world? Because this is what science is about [Sal06]? As indicated above, models should be good, useful, true, correct, causal, predictive, or descriptive [FH06]. Digging deeper, we see that models are mostly used for prediction in related but new situations, especially for predicting future events [Wik08].

Predictions. Consider the apparently only slight variation of the questions above: What is the correct answer in an IQ test asking to continue the sequence 1,4,9,16,…? Given historic stock-charts, can one predict the quotes of tomorrow? Or questions like: Assuming the sun rose every day for 5000 years, how likely is doomsday (that the sun will not rise) tomorrow? What is my risk of dying from cancer next year?
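The sunrise question has a classical quantitative answer. Under the Bayes-Laplace model (a uniform prior over the unknown success probability), Laplace's rule of succession gives (n+1)/(n+2) as the probability of one more success after n successes in n trials. A minimal sketch:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Posterior probability of one more success under a uniform prior
    on the unknown success rate (Bayes-Laplace model)."""
    return Fraction(successes + 1, trials + 2)

n = 5000 * 365                       # days of observed sunrises
p_doomsday = 1 - rule_of_succession(n, n)
print(p_doomsday)                    # 1/1825002
```

After 5000 years of daily sunrises the predicted doomsday probability is 1/1825002: tiny, but not zero.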

These questions are instances of the important problem of time-series forecasting, also called sequence prediction [BD02, CBL06]. While inductive inference is about finding models or hypotheses that explain the data (whatever "explain" actually means), prediction is concerned with forecasting the future. Finding models is interesting and useful, since they usually help us to (partially) answer such predictive questions [Gei93, Cha03b]. While the usefulness of predictions is clearer to the layman than the purpose of the scientific inquiry for models, one may again ask why we do or should care about making predictions.

Decisions. Consider the following questions: Shall I take my umbrella or wear sunglasses today? Shall I invest my assets in stocks or bonds? Shall I skip work today because it might be my last day on earth? Shall I irradiate or remove the tumor of my patient? These questions ask for decisions that have some (minor to drastic) consequences. We usually want to make "good" decisions, where the quality is measured in terms of some reward (money, life expectancy) or loss [Fer67, DeG70, Jef83]. In order to compute this reward as a function of our decision, we need to predict the environment: whether there will be rain or sunshine today, whether the market will go up or down, whether doomsday is tomorrow, or which type of cancer the patient has. Often forecasts are uncertain [Par95], but this is still better than no prediction. Once we have arrived at a (hopefully good) decision, what do we do next?
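The reward-as-a-function-of-decision computation in the umbrella example can be made concrete. The rain probability and the loss table below are illustrative assumptions, not values from the text:

```python
def best_action(p_rain, loss):
    """Choose the action minimizing expected loss under predicted weather."""
    def expected_loss(a):
        return p_rain * loss[a]["rain"] + (1 - p_rain) * loss[a]["sun"]
    return min(loss, key=expected_loss)

# Hypothetical loss table: getting soaked (5) hurts more than
# needlessly carrying an umbrella (1).
loss = {"umbrella":   {"rain": 0, "sun": 1},
        "sunglasses": {"rain": 5, "sun": 0}}
print(best_action(0.3, loss))  # umbrella:   0.3*0 + 0.7*1 = 0.7 < 1.5
print(best_action(0.1, loss))  # sunglasses: 0.1*5 = 0.5 < 0.9
```

Note how the optimal action flips as the predicted rain probability changes: the decision is only as good as the prediction feeding it.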

Actions. The obvious thing is to execute the decision, i.e. to perform some action consistent with the decision arrived at. The action may not influence the environment, like taking the umbrella versus wearing sunglasses does not influence the future weather (ignoring the butterfly effect), and neither do small stock trades. These settings are called passive [Hut03d], and the action part is of marginal importance and usually not discussed. On the other hand, a patient might die from a wrong treatment, or a chess player might lose a piece and possibly the whole game by making one mistake. These settings are called (re)active [Hut07c], and their analysis is immensely more involved than the passive case [Ber06].

And now? There are many theories and algorithms and whole research fields and communities dealing with some aspects of induction, prediction, decision, or action. Some of them will be detailed below. Finding solutions for every particular (new) problem is possible and useful for many specific applications. The trouble is that this approach is cumbersome and prone to disagreement or contradiction [Kem03]. Some researchers feel that this is the nature of their discipline and one can do little about it [KLW06]. But in science (in particular math, physics, and computer science) previously separate approaches are constantly being unified towards more and more powerful theories and algorithms [GSW00, Gre00]. There is at least one field, where we must put everything (induction+prediction+decision+action) together in a completely formal (preferably elegant) way, namely Artificial Intelligence [RN03]. Such a general and formal theory of AI was invented about a decade ago [Hut00].

Contents. In Section 2 I will give a brief introduction to this universal theory of AI. It is based on an unexpected unification of algorithmic information theory and sequential decision theory. The corresponding AIXI agent is the first sound, complete, general, rational agent in any relevant but unknown environment with reinforcement feedback [Hut05, OC06]. It is likely the best possible such agent in a sense to be explained below.

Section 3 describes the historic origin of the AIXI model. One root is Solomonoff’s theory [Sol60] of universal induction, which is closely connected to algorithmic complexity. The other root is Bellman’s adaptive control theory [Bel57] for optimal sequential decision making. Both theories are now half-a-century old. From an algorithmic information theory perspective, AIXI generalizes optimal passive universal induction to the case of active agents. From a decision-theoretic perspective, AIXI is a universal Bayes-optimal learning algorithm.

Sections 4–7 constitute the core of this article describing the open problems around universal induction & intelligence. Most of them are taken from the book [Hut05] and paper [Hut07b]. I focus on questions whose solution has a realistic chance of advancing the field. I avoid technical open problems whose global significance is questionable.

Solomonoff’s half-a-century-old theory of universal induction is already well developed. Naturally, most remaining open problems are either philosophically or technically deep.

Its generalization to Universal Artificial Intelligence seems to be quite intricate. While the AIXI model itself is very elegant, its analysis is much more cumbersome. Although AIXI has been shown to be optimal in some senses, a convincing notion of optimality is still lacking. Convergence results also exist, but are much weaker than in the passive case.

Its construction makes it plausible that AIXI is the optimal rational general learning agent, but unlike the induction case, victory cannot be claimed yet. It would be natural, hence, to compare AIXI to alternatives, if there were any. Since there are no competitors yet, one could try to create some. Finally, AIXI is only “essentially” unique, which gives rise to some more open questions.

Given that AI is about designing intelligent systems, a serious attempt should be made to formally define intelligence in the first place. Astonishingly, there have been rather few attempts. There is one definition that is closely related to AIXI, but its properties have yet to be explored.

The final Section 8 briefly discusses the flavor, feasibility, difficulty, and interestingness of the raised questions, and takes a step back and briefly compares the information-theoretic approach to AI discussed in this article to others.

2 Universal Artificial Intelligence

Artificial Intelligence. The science of artificial intelligence (AI) may be defined as the construction of intelligent systems (artificial agents) and their analysis [RN03]. A natural definition of a system is anything that has an input and an output stream, or equivalently an agent that acts and observes. Intelligence is more complicated. It can have many faces like creativity, solving problems, pattern recognition, classification, learning, induction, deduction, building analogies, optimization, surviving in an environment, language processing, planning, and knowledge acquisition and processing. Informally, AI is concerned with developing agents that perform well in a large range of environments [LH07c]. A formal definition incorporating every aspect of intelligence, however, seems difficult. In order to solve this problem we need to solve the induction, prediction, decision, and action problem, which seems like a daunting (some even claim impossible) task: Intelligent actions are based on informed decisions. Attaining good decisions requires predictions which are typically based on models of the environments. Models are constructed or learned from past observations via induction. Fortunately, based on the deep philosophical insights and powerful mathematical developments listed in Section 3, these problems have been overcome, at least in theory.

Universal Artificial Intelligence (UAI). Most, if not all, known facets of intelligence can be formulated as goal driven or, more precisely, as maximizing some reward or utility function. It is, therefore, sufficient to study goal-driven AI; e.g. the (biological) goal of animals and humans is to survive and spread. The goal of AI systems should be to be useful to humans. The problem is that, except for special cases, we know neither the utility function nor the environment in which the agent will operate in advance. What do we need (from a mathematical point of view) to construct a universal optimal learning agent interacting with an arbitrary unknown environment? The theory, coined AIXI, developed in this decade and explained in [Hut05] says: All you need is Occam [Fra02], Epicurus [Asm84], Turing [Tur36], Bayes [Bay63], Solomonoff [Sol64], Kolmogorov [Kol65], and Bellman [Bel57]: Sequential decision theory [Ber06] (Bellman's equation) formally solves the problem of rational agents in uncertain worlds if the true environmental probability distribution is known. If the environment is unknown, Bayesians [Ber93] replace the true distribution by a weighted mixture of distributions from some (hypothesis) class. Using the large class of all (semi)measures that are (semi)computable on a Turing machine bears in mind Epicurus, who teaches not to discard any (consistent) hypothesis. In order not to ignore Occam, who would select the simplest hypothesis, Solomonoff defined a universal prior that assigns high/low prior weight to simple/complex environments, where Kolmogorov quantifies complexity [Hut07a, LV08]. All other concepts and phenomena attributed to intelligence are emergent. All together, this solves all conceptual problems [Hut05], and "only" computational problems remain.

Kolmogorov complexity. Kolmogorov [Kol65] defined the complexity K(x) of a string x over some finite alphabet as the length of a shortest description p on a universal Turing machine U:

    K(x) := min{ l(p) : U(p) = x }

A string is simple if it can be described by a short program, like "the string of one million ones", and is complex if there is no such short description, like for a random string whose shortest description is specifying it bit-by-bit. For non-string objects x one defines K(x) := K(⟨x⟩), where ⟨x⟩ is some standard code for x. Kolmogorov complexity [Kol65, Hut08] is a key concept in (algorithmic) information theory [LV08]. An important property of K is that it is nearly independent of the choice of U, i.e. different choices of U change K "only" by an additive constant (see Section 4h). Furthermore it leads to shorter codes than any other effective code. K shares many properties with Shannon's entropy H (information measure) [Mac03, CT06], but K is superior to H in many respects. Foremost, K measures the information of individual outcomes, while H can only measure the expected information of random variables. To be brief, K is an excellent universal complexity measure, suitable for quantifying Occam's razor. The major drawback of K as a complexity measure is its incomputability. So in practical applications it always has to be approximated, e.g. by Lempel-Ziv compression [LZ76, CV05], by CTW [WST97] compression, by two-part codes as in MDL and MML, or by others.
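Since K is incomputable, any practical use goes through a compressor. A small sketch using zlib as a stand-in for the Lempel-Ziv/CTW/MDL approximations mentioned above (compressed length only upper-bounds K):

```python
import zlib
import random

def compressed_length(s: bytes) -> int:
    """Crude computable upper bound on the Kolmogorov complexity K(s)."""
    return len(zlib.compress(s, 9))

rng = random.Random(0)
simple = b"1" * 1_000_000                                        # "one million ones"
complex_ = bytes(rng.getrandbits(8) for _ in range(1_000_000))   # random string

print(compressed_length(simple))    # tiny: a short description exists
print(compressed_length(complex_))  # ~1,000,000: essentially incompressible
```

The "one million ones" string compresses to a tiny fraction of its length, while the random string barely compresses at all, exactly the simple/complex distinction K formalizes.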

Solomonoff induction. Solomonoff [Sol64] defined (earlier) the closely related universal a-priori probability M(x) as the probability that the output of a universal (monotone) Turing machine U starts with x when provided with fair coin flips on the input tape [HLV07]. Formally,

    M(x) := Σ_{p : U(p) = x*} 2^(-l(p))

where the sum is over all (possibly non-halting) so-called minimal programs p which output a string starting with x. Since the sum is dominated by short programs, we have M(x) ≈ 2^(-K(x)), i.e. simple/complex strings are assigned a high/low a-priori probability. A different representation is as follows [ZL70]: Let M be a countable class of probability measures (environments) on infinite sequences, μ ∈ M be the true sampling distribution, i.e. μ(x) is the true probability that an infinite sequence starts with x, and ξ(x) := Σ_{ν∈M} w_ν ν(x) be the w-weighted average, called the Bayesian mixture distribution. One can show that M(x) coincides with ξ(x) (within a multiplicative constant) if M includes all computable probability measures and w_ν = 2^(-K(ν)). More precisely, M consists of an effective enumeration of all so-called lower semi-computable semi-measures ν, and w_ν = 2^(-K(ν)) [LV08].

M can be used as a universal sequence predictor, which outperforms in a strong sense all other predictors. Consider the classical online sequence prediction task: Given x_1...x_{t-1}, predict x_t; then observe the true x_t; increment t; repeat. For sequences generated by the unknown "true" distribution μ, one can show [Sol78] that the universal predictor M(x_t | x_1...x_{t-1}) rapidly converges to the true probability μ(x_t | x_1...x_{t-1}) of the next observation x_t given history x_1...x_{t-1}. That is, M serves as an excellent predictor of any sequence sampled from any computable probability distribution.
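The Bayesian-mixture view can be illustrated on a drastically reduced hypothesis class: three Bernoulli environments instead of all semi-computable semi-measures, with arbitrary prior weights standing in for w_ν = 2^(-K(ν)). The posterior-weighted prediction converges to the true next-symbol probability, mirroring Solomonoff's convergence result:

```python
import random

def mixture_predictor(bits, weights, thetas):
    """Posterior-weighted probability that the next bit is 1, for a
    hypothesis class of Bernoulli(theta) environments with prior `weights`."""
    post = list(weights)
    for b in bits:
        post = [w * (t if b == 1 else 1 - t) for w, t in zip(post, thetas)]
        z = sum(post)
        post = [w / z for w in post]          # Bayes update after each bit
    return sum(w * t for w, t in zip(post, thetas))

rng = random.Random(1)
thetas = [0.2, 0.5, 0.8]                      # tiny hypothesis class
weights = [1 / 3, 1 / 3, 1 / 3]               # prior, stand-in for 2^-K(nu)
data = [1 if rng.random() < 0.8 else 0 for _ in range(500)]  # true env: 0.8
print(round(mixture_predictor(data, weights, thetas), 3))    # close to 0.8
```

After a few hundred observations the posterior concentrates on the true environment, so the mixture prediction is essentially the true probability 0.8.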

The AIXI model. It is possible to write down the AIXI model explicitly in one line [Hut07c], although one should not expect to be able to grasp the full meaning and power from this compact representation.

AIXI is an agent that interacts with an environment in cycles k = 1, 2, ..., m. In cycle k, AIXI takes action a_k (e.g. a limb movement) based on past perceptions o_1 r_1 ... o_{k-1} r_{k-1} as defined below. Thereafter, the environment provides a (regular) observation o_k (e.g. a camera image) to AIXI and a real-valued reward r_k. The reward can be very scarce, e.g. just +1 (-1) for winning (losing) a chess game, and 0 at all other times. Then the next cycle k+1 starts. Given the above, AIXI is defined by:

    a_k := arg max_{a_k} Σ_{o_k r_k} ... max_{a_m} Σ_{o_m r_m} [r_k + ... + r_m] Σ_{q : U(q, a_1...a_m) = o_1 r_1...o_m r_m} 2^(-l(q))

The expression shows that AIXI tries to maximize its total future reward r_k + ... + r_m. If the environment is modeled by a deterministic program q, then the future perceptions ... o_k r_k ... o_m r_m = U(q, a_1...a_m) can be computed, where U is a universal (monotone Turing) machine executing q given a_1...a_m. Since q is unknown, AIXI has to maximize its expected reward, i.e. average over all possible perceptions created by all possible environments q. The simpler an environment, the higher is its a-priori contribution 2^(-l(q)), where simplicity is measured by the length l(q) of program q. The inner sum over q generalizes Solomonoff's a-priori distribution by including actions. Since noisy environments are just mixtures of deterministic environments, they are automatically included. The sums in the formula constitute the averaging process. Averaging and maximization have to be performed in chronological order, hence the interleaving of max and Σ (similarly to minimax for games). The value of AIXI (or any other agent) is its expected reward sum.

One can fix any finite action and perception space, any reasonable U, and any large finite lifetime m. This completely and uniquely defines AIXI's actions a_k, which are limit-computable via the expression above (all quantities are known).
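The chronological interleaving of max over actions and average over percepts can be sketched for a toy known environment class; the two hand-made Bernoulli-reward environments below replace the incomputable mixture over programs q, but the expectimax recursion has the same shape:

```python
def expectimax(envs, prior, horizon):
    """Bayes-optimal value over `horizon` cycles. Each environment maps
    an action to P(reward = 1); rewards are 0/1 and double as percepts.
    Maximization over actions is interleaved with averaging over percepts,
    and beliefs are updated after every percept."""
    def step(post, steps):
        if steps == 0:
            return 0.0
        best = 0.0
        for a in envs[0]:                               # max over actions
            v = 0.0
            for r in (1, 0):                            # average over percepts
                like = [e[a] if r == 1 else 1 - e[a] for e in envs]
                pr = sum(w * l for w, l in zip(post, like))
                if pr == 0.0:
                    continue
                new_post = [w * l / pr for w, l in zip(post, like)]
                v += pr * (r + step(new_post, steps - 1))
            best = max(best, v)
        return best
    return step(list(prior), horizon)

# Two hypothetical environments: "left" mostly pays in env 0, "right" in env 1.
envs = [{"left": 0.9, "right": 0.1},
        {"left": 0.1, "right": 0.9}]
print(expectimax(envs, [0.5, 0.5], horizon=3))
```

With horizon 1 the value is 0.5 (the two environments cancel), but with a longer horizon the first percept identifies the environment and later cycles exploit it, so the per-cycle value rises toward 0.9.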

That’s it!

Ok, not really. It takes a whole book and more to explain why AIXI is likely the most intelligent general-purpose agent and how it incorporates all aspects of rational intelligence. In practice, AIXI needs to be approximated. AIXI can also be regarded as the gold standard which other practical general-purpose AI programs should aim at (analogous to minimax approximations/heuristics).

The role of AIXI for AI. The AIXI model can be regarded as the first complete theory of AI. Most if not all AI problems can easily be formulated within this theory, which reduces the conceptual problems to pure computational questions. Solving the conceptual part of a problem often causes a quantum leap forward in a field. Two analogies may help: QED is a complete theory of all chemical processes. ZFC solved the conceptual problems of sets (e.g. Russell’s paradox).

From an algorithmic information theory (AIT) perspective, the AIXI model generalizes optimal passive universal induction to the case of active agents. From a decision-theoretic perspective, AIXI is a suggestion of a new (implicit) “learning” algorithm, which may overcome all (except computational) problems of previous reinforcement learning algorithms. If the optimality theorems of universal induction and decision theory generalize to the unified AIXI model, we would have, for the first time, a universal (parameterless) model of an optimal rational agent in any computable but unknown environment with reinforcement feedback.

Although deeply rooted in algorithm theory, AIT mainly neglects computation time and so does AIXI. It is important to note that this does not make the AI problem trivial. Playing chess optimally or solving NP-complete problems become trivial, but driving a car or surviving in nature do not. This is because it is a challenge itself to well-define the latter problems, not to mention presenting an algorithm. In other words: The AI problem has not yet been well defined (cf. the quote after the abstract). One may view AIXI as a suggestion of such a mathematical definition.

Although Kolmogorov complexity is incomputable in general, Solomonoff's theory triggered an entire field of research on computable approximations. This led to numerous practical applications [LV07]. If the AIXI model should lead to a universal "active" decision maker with properties analogous to those of universal "passive" predictors, then we could expect a similar stimulation of research on resource-bounded, practically feasible variants. First attempts have been made to test the power and limitations of AIXI and downscaled versions like AIXItl and AIξ [PH06b, Pan08], as well as related models derived from basic concepts of algorithmic information theory.

So far, some remarkable and surprising results have already been obtained (see Section 3). A 2, 12, 60, 300 page introduction to the AIXI model can be found in [Hut01e, Hut01d, Hut07c, Hut05], respectively, and a gentle introduction to UAI in [Leg08].

3 History and State-of-the-Art

The theory of UAI and AIXI builds on the theories of universal induction, universal prediction, universal decision making, and universal agents. From a historical and research-field perspective, the AIXI model is based on two otherwise unconnected fundamental theories:

  • The major basis is Algorithmic information theory [LV08], initiated by [Sol64, Kol65, Cha66], which builds the foundation of complexity and randomness of individual objects. It can be used to quantify Occam’s razor principle (use the simplest theory consistent with the data). This in turn allowed Solomonoff to come up with a universal theory of induction [Sol64, Sol78].

  • The other basis is the theory of optimal sequential decisions, initiated by Von Neumann [NM44] and Bellman [Bel57]. This theory builds the basis of modern reinforcement learning [SB98].

This section outlines the history and state-of-the-art of the theories and research fields involved in the AIXI model.

Algorithmic information theory (AIT). In the 1960s, [Kol65, Sol64, Cha66] introduced a new, machine-independent complexity measure for arbitrary computable data. The Kolmogorov complexity K(x) is defined as the length of the shortest program on a universal Turing machine that computes x. It is closely related to Solomonoff's universal a-priori probability (see above), Martin-Löf randomness of individual sequences [ML66], time-bounded complexity [Lev84], universal optimal search [Lev73], the speed prior [Sch02b], the halting probability [Cha87], strong mathematical undecidability [Cha03a], generalized probability and complexity [Sch02a], algorithmic statistics [GTV01, VV02, Vit02], and others.

Despite its incomputability, AIT has found many applications in philosophy, practice, and science: The minimum message/description length (MML/MDL) principles [WB68, Ris78, Ris89] can be regarded as practical approximations of Kolmogorov complexity. MML & MDL are widely used in machine learning applications [QR89, GL89, MJ93, Ped89, Wal05, Grü07]. The latest, most direct and impressive applications are via the universal similarity metric [CV05, CV06]. Schmidhuber produced another range of impressive applications, to neural networks [Sch97a, SZW97], to search problems [Sch04], and even to the fine arts [Sch97b]. By carefully approximating Kolmogorov complexity, AIT sometimes leads to results unmatched by other approaches. Besides these practical applications, AIT is used to simplify proofs via the incompressibility method, improves on Shannon information, and plays a role in reversible computing, physical entropy and Maxwell demon issues, artificial intelligence, and the asymptotically fastest algorithm for all well-defined problems [Cal02, Hut05, Hut02a, Hut07a, LV08].

Universal Solomonoff induction. How and in which sense induction is possible at all has been subject to long philosophical controversies [Hum39, Sto01, Hut05]. Highlights are Epicurus' principle of multiple explanations [Asm84], Occam's razor (simplicity) principle [Fra02], and Bayes' rule for conditional probabilities [Bay63, Ear93]. Solomonoff [Sol64] elegantly unified these aspects with the concept of universal Turing machines [Tur36] into one formal theory of inductive inference based on a universal probability distribution M, which is closely related to Kolmogorov complexity (M(x) ≈ 2^(-K(x))). The theory allows for optimally predicting sequences without knowing their true generating distribution [Sol78], and presumably solves the induction problem. The theory remained at this stage for more than 20 years, until the work on AIXI started, which resulted in a beautiful elaboration and extension of Solomonoff's theory.

Meanwhile, the (non)existence of universal priors for several generalized computability concepts has been classified [Sch02a, Hut03b, Hut06b], rapid convergence of ξ to the unknown true environmental distribution μ [Hut01a], tight error bounds [Hut01c], and loss bounds for arbitrary bounded loss functions and finite alphabet [Hut01b, Hut03a] have been proven, and (Pareto) optimality of ξ has been shown [Hut03d, Hut03b], exemplified on games of chance and compared to predictions with expert advice [Hut03d, Hut04b]. The bounds have been further improved by introducing a version of Kolmogorov complexity that is monotone in the condition [CH05, CHS07]. Similar but necessarily weaker non-asymptotic bounds for universal deterministic/one-part MDL [Hut03e, Hut06d] and discrete two-part MDL [PH04a, PH05a, PH04b, PH06a] have also been proven. Quite unexpectedly, M [Hut03c] does not converge to μ on all Martin-Löf random sequences [HM04], but there is a sophisticated remedy [HM07].

All together this shows that Solomonoff's induction scheme represents a universal (formal, but incomputable) solution to all passive prediction problems. The most recent studies [Hut06c] suggest that this theory could solve the induction problem as a whole, or at least constitute significant progress on this fundamental problem [Hut07b].

Sequential decision theory. Sequential decision theory provides a framework for finding optimal reward-maximizing strategies in reactive environments (e.g. chess playing as opposed to weather forecasting), assuming the environmental probability distribution is known. The Bellman equations [Bel57] are at the heart of sequential decision theory [NM44, Mic66, RN03]. The book [Ber06] summarizes open problems and progress in infinite horizon problems. Sequential decision theory can deal with actions and observations depending on arbitrary past events. This general setup has been called the AIμ model in [Hut05, Hut07c]. Optimality of AIμ is obvious by construction. This model reduces in special cases to a range of known models.
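For a known finite environment, the Bellman equations can be solved by plain value iteration; the two-state MDP below is a made-up example, not one from the text:

```python
def value_iteration(P, R, gamma=0.9, eps=1e-8):
    """Iterate V(s) <- max_a sum_{s'} P[s][a][s'] * (R[s][a] + gamma * V(s'))
    to a fixed point: the Bellman optimality equations for a finite MDP."""
    V = {s: 0.0 for s in P}
    while True:
        V_new = {s: max(sum(p * (R[s][a] + gamma * V[s2])
                            for s2, p in P[s][a].items())
                        for a in P[s])
                 for s in P}
        if max(abs(V_new[s] - V[s]) for s in P) < eps:
            return V_new
        V = V_new

# Hypothetical MDP: working while "fresh" pays 1 but tires the agent;
# resting restores freshness but pays nothing.
P = {"fresh": {"work": {"tired": 1.0}, "rest": {"fresh": 1.0}},
     "tired": {"work": {"tired": 1.0}, "rest": {"fresh": 1.0}}}
R = {"fresh": {"work": 1.0, "rest": 0.0},
     "tired": {"work": 0.0, "rest": 0.0}}
V = value_iteration(P, R)
print({s: round(v, 3) for s, v in V.items()})
```

The optimal policy alternates work and rest, and the computed values satisfy V(fresh) = 1 + γ² V(fresh), i.e. V(fresh) = 1/(1 − 0.81).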

Reinforcement learning. If the true environmental probability distribution or the reward function are unknown, they need to be learned [SB98]. This dramatically complicates the problem due to the exploration-exploitation dilemma [BF85, Duf02, Hut05, SL08]. In order to attack this intrinsically difficult problem, control theorists typically confine themselves to linear systems with quadratic loss function, relevant in the control of (simple) machines, but irrelevant for AI. There are notable exceptions to this confinement, e.g. the book [KV86] on stochastic adaptive control and [ATA89a, ATA89b], and an increasing number of more recent works. Reinforcement learning (RL) (sometimes associated with temporal difference learning or neural nets) is the instantiation of stochastic adaptive control theory [KV86] in the machine learning community. Current research on RL is vast; the most important conferences are ICML, COLT, ECML, ALT, and NIPS; the most important journals are JMLR and MLJ. Some highlights and surveys are [Sam59, BSA83, Sut88, Wat89, WD92, MA93, Tes94, WS98, KK99, WSS99, Bau99, KP00, SLJ03, GKPV03, RH08a, SDL07, SL08, RPPCd08, Hut09b, Hut09a] and [KLM96, KLC98, SB98, BDH99, Ber06] respectively. RL has been applied to a variety of real-world problems, occasionally with stunning success: Backgammon and Checkers [SB98, Chp.11], helicopter control [NCD04], and others. Nevertheless, existing learning algorithms are very limited (typically to Markov domains), and non-optimal: from the very outset they are only approximate or asymptotic. Indeed, AIXI is currently the only general and rigorous mathematical formulation of the addressed problems.
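The exploration-exploitation dilemma appears already in the simplest reactive setting, the multi-armed bandit. An epsilon-greedy sketch (the arm payoff probabilities are invented for illustration):

```python
import random

def epsilon_greedy(means, steps=10_000, eps=0.1, seed=0):
    """Play a Bernoulli bandit: explore a random arm with probability eps,
    otherwise exploit the arm with the best running estimate."""
    rng = random.Random(seed)
    counts = [0] * len(means)
    est = [0.0] * len(means)
    total = 0.0
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(means))                     # explore
        else:
            a = max(range(len(means)), key=lambda i: est[i])  # exploit
        r = 1.0 if rng.random() < means[a] else 0.0
        counts[a] += 1
        est[a] += (r - est[a]) / counts[a]                    # incremental mean
        total += r
    return est, total / steps

est, avg = epsilon_greedy([0.2, 0.5, 0.8])
print([round(e, 2) for e in est], round(avg, 2))
```

The fixed exploration rate is exactly the kind of ad hoc, non-optimal trade-off the text criticizes: the agent keeps wasting a fraction eps of its pulls on inferior arms forever, whereas a Bayes-optimal agent would let exploration decay as uncertainty shrinks.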

The universal algorithmic agent AIXI. Reinforcement learning algorithms [KLM96, BT96, SB98] are usually used in the case of unknown μ. They can succeed if the state space is either small or has effectively been made small by generalization techniques. The algorithms work only in restricted (e.g. Markov) domains, have problems with optimally trading off exploration versus exploitation, have non-optimal learning rates, are prone to diverge, or are otherwise ad hoc.

The formal solution proposed in [Hut01d, Hut05] is to generalize the universal probability M to include actions as conditions and to replace the true environment μ by ξ in the AIμ model, resulting in the AIXI model, which is presumably universally optimal. It is quite non-trivial to determine what can be expected from a universally optimal agent and to properly interpret or define universal, optimal, etc. [Hut07c]. It is known that ξ converges to μ also in the case of multi-step lookahead as occurs in the AIXI model [Hut04a], and that a variant of AIXI is asymptotically self-optimizing and Pareto optimal [Hut02b, LH04a].

The book [Hut05] gives a comprehensive introduction and discussion of previous achievements on or related to AIXI, including a critical review, more open problems, comparison to other approaches to AI, and philosophical issues.

Important environmental classes. In practice, one is often interested in specific classes of problems rather than the fully universal setting; for example we might be interested in evaluating the performance of an algorithm designed solely for function maximization. A taxonomy of abstract environmental classes from the mathematical perspective of interacting chronological systems [LH04b, Leg08] has been established. The relationships between Bandit problems, MDP problems, ergodic MDPs, higher order MDPs, sequence prediction problems, function optimization problems, strategic games, classification, and many others are formally defined and explored therein. The work also suggests new abstract environmental classes that could be useful from an analytic perspective. In [Hut05], each problem class is formulated in its natural way for known μ, then a formulation within the AIμ model is constructed, and their equivalence is shown. Then the consequences of replacing μ by ξ are considered, and it is discussed in which sense the problems are formally solved by AIXI.

Computational aspects. The major drawback of AIXI is that it is incomputable, or more precisely, only asymptotically computable, which makes a direct implementation impossible. To overcome this problem, the AIXI model can be scaled down to a model coined AIXItl, which is still superior to any other time t and length l bounded agent [Hut01d, Hut05]. The computation time of AIXItl is of the order t·2^l. A way of overcoming the large multiplicative constant is possible at the expense of an (unfortunately even larger) additive constant. The constructed algorithm builds upon Levin search [Lev73, Gag07]. The algorithm is capable of solving all well-defined problems p as quickly as the fastest algorithm computing a solution to p, save for a factor of 5 and lower-order additive terms [Hut02a]. The solution requires an implementation of first-order logic, the definition of a universal Turing machine within it, and a proof theory system. The algorithm as it is, is only of theoretical interest, but there are more practical variations [Sch04, Sch05]. A different, more limited but more practical scaled-down version (coined AIξ) has been implemented and applied successfully to 2×2 matrix games like the notoriously difficult repeated prisoner's dilemma and generalized variants thereof [PH06b].
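Levin search itself is easy to sketch on a toy machine. The "machine" below, which just cycles a program's bits, is a stand-in for a universal monotone machine, an assumption for illustration only. It is run in phases: in phase i, every program p with l(p) ≤ i receives a time budget of 2^(i − l(p)) steps, so short programs get exponentially more time, mirroring the 2^(-l(p)) allocation of Levin search:

```python
from itertools import product

def run(program, steps):
    """Toy machine: emit the program's bits cyclically, one bit per step."""
    return "".join(program[t % len(program)] for t in range(steps))

def levin_search(target, max_phase=20):
    """Return the first program found whose output starts with `target`,
    trying programs in phases with time budget proportional to 2**(-len(p))."""
    for phase in range(1, max_phase + 1):
        for length in range(1, phase + 1):
            budget = 2 ** (phase - length)
            for bits in product("01", repeat=length):
                p = "".join(bits)
                if run(p, budget).startswith(target):
                    return p
    return None

print(levin_search("10101010"))  # "10", the shortest cyclic generator
```

Because each phase roughly doubles every program's budget, the total time spent is within a constant factor of the time the successful program alone would have needed, which is the source of the multiplicative-constant optimality mentioned above.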

4 Open Problems in Universal Induction

The induction problem is a fundamental problem in philosophy [Hum39, Ear93] and science [Jay03]. Solomonoff’s model is a promising universal solution of the induction problem. In [Hut07b], an attempt has been made to collect the most important fundamental philosophical and statistical problems, regarded as open, and to present arguments and proofs that Solomonoff’s theory overcomes them. Despite the force of the arguments, they are likely not yet sufficient to convince the (scientific) world that the induction problem is solved. The discussion needs to be rolled out much further, say, with at least one generally accessible article per allegedly open problem. Indeed, this endeavor might even uncover some catch in Solomonoff’s theory. Some problems identified and outlined in [Hut07b] that are worth investigating in more detail are:

  1. The zero prior problem. The problem is how to confirm universal hypotheses like “all balls in some urn (or all ravens) are black”. A natural model is to assume that balls (or ravens) are drawn randomly from an infinite population with fraction θ of black balls (or ravens) and to assume some prior density over θ (a uniform density gives the Bayes-Laplace model). Now we draw n objects and observe that they are all black. The problem is that the posterior probability of the universal hypothesis remains zero, P(θ=1 | n black) = 0, since the prior probability P(θ=1) = 0 under any prior density. Maher’s [Mah04] approach does not solve the problem [Hut07b].

  2. The black raven paradox by Carl Gustav Hempel goes as follows [Res01, Ch.11.4]: (i) Observing black ravens confirms the hypothesis that all ravens are black. In general, a hypothesis “all R are B” is confirmed by R-instances with property B. (ii) Formally substituting R → non-B and B → non-R leads to: the hypothesis “all non-B are non-R” is confirmed by non-B-instances with property non-R. (iii) But since “all R are B” and “all non-B are non-R” are logically equivalent, the former must also be confirmed by non-B-instances with property non-R. Hence, by (i), observing black ravens confirms the hypothesis that all ravens are black, and by (iii), observing white socks also confirms it, since white socks are non-ravens which are non-black. But this conclusion is absurd. Again, neither Maher’s nor any other approach solves this problem.

  3. The Grue problem [Goo83]. Consider the following two hypotheses: H1 = “All emeralds are green” and H2 = “All emeralds found till year 2020 are green, thereafter all emeralds are blue”. Both hypotheses are equally well supported by empirical evidence. Occam’s razor seems to favor the more plausible hypothesis H1, but by using the new predicates grue := “green till y2020 and blue thereafter” and bleen := “blue till y2020 and green thereafter”, H2 becomes syntactically simpler than H1.

  4. Reparametrization invariance [KW96]. The question is how to extend the symmetry principle from finite hypothesis classes (all hypotheses are equally likely) to infinite hypothesis classes. For “compact” classes, Jeffreys’ prior [Jef46] is a solution, but for non-compact parameter spaces like ℕ or ℝ, classical statistical principles lead to improper distributions, which are often not acceptable.

  5. Old-evidence/updating problem and ad-hoc hypotheses [Gly80]. How shall a Bayesian treat the case when some evidence E (e.g. Mercury’s perihelion advance) is known well before the correct hypothesis/theory/model H (Einstein’s general relativity theory) is found? How shall H be added to the Bayesian machinery a posteriori? What is the prior of H? Should it be the belief in H in a hypothetical counterfactual world in which E is not known? Can old evidence E confirm H? After all, H could simply be constructed/biased/fitted towards “explaining” E. Strictly speaking, a Bayesian needs to choose the hypothesis/model class before seeing the data, which seldom reflects scientific practice [Ear93].

  6. Other issues/problems. Comparison to Carnap’s confirmation theory [Car52] and Laplace’s rule [Lap12], allowing for continuous model classes, how to incorporate prior knowledge [Pre02, Gol06], and others.
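The zero prior problem (item 1) can be checked with a small exact computation. The sketch below (illustrative only) implements the Bayes-Laplace model with a uniform density over θ: Laplace's rule gives P(next ball black | n black) = (n+1)/(n+2) → 1, yet the posterior of the universal hypothesis "θ = 1" is exactly 0 for every n, because a continuous prior puts no mass on the single point θ = 1.

```python
from fractions import Fraction

# Bayes-Laplace model for the zero-prior problem (illustrative sketch):
# uniform prior over theta = fraction of black balls.  After n black
# draws the posterior predictive is
#   int theta^(n+1) dtheta / int theta^n dtheta = (n+1)/(n+2),
# yet the posterior of the universal hypothesis "theta = 1" stays 0,
# since a continuous prior density has no atom at theta = 1.

def laplace_next_black(n):
    """P(next ball black | n black observed) under the uniform prior."""
    return Fraction(n + 1, n + 2)

def posterior_mass_theta_one(n):
    """P(theta = 1 | n black) = prior atom * likelihood / evidence = 0."""
    prior_point_mass = 0   # no atom at theta = 1 under a density prior
    return prior_point_mass

for n in (1, 10, 100):
    print(n, laplace_next_black(n), posterior_mass_theta_one(n))
```

So the predictive probability of the next black ball tends to 1 while the universal hypothesis itself is never confirmed, which is precisely the tension the item describes.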

Solomonoff’s theory has already been intensively studied in the predictive setting [Sol78, Hut01c, Hut03d, Hut03a, CHS07] mostly confirming its power, with the occasional unexpected exception [HM07]. Important open questions are:

  1. Prediction of selected bits. Consider a very simple and special case of problem 5i: a binary sequence that coincides at even times with the preceding (odd) bit, but is otherwise incomputable. Every child will quickly realize that the even bits coincide with the preceding odd bit, and after a while perfectly predict the even bits, given the past bits. The incomputability of the sequence is no hindrance. It is unknown whether Solomonoff prediction works or fails in this situation. I expect that a solution of this special case will lead to generally useful insights and advance the theory.

  2. Identification of “natural” Turing machines. In order to pin down the additive/multiplicative constants that plague most results in AIT, it would be highly desirable to identify a class of “natural” UTMs/USMs which have a variety of favorable properties. A more moderate approach may be to consider classes of universal Turing machines (UTMs) or universal semimeasures (USMs) satisfying certain properties and to show that their intersection is not empty. Indeed, very occasionally, results in AIT only hold for particular (subclasses of) UTMs [MP02]. A grander vision is to find the single “best” UTM or USM; [Mül06] is a remarkable approach in this direction.

  3. Martin-Löf convergence. Quite unexpectedly, a loophole in the proof of Martin-Löf (M.L.) convergence of M to μ in the literature has been found [Hut03c]. In [HM04] it has been shown that this loophole cannot be fixed, since M.L.-convergence actually can fail. The construction of non-universal (semi)measures W that do M.L.-converge to μ [HM07] partially rescued the situation. The major problem left open is the convergence rate for W. The current bound for W is double exponentially worse than for M. It is also unknown whether convergence in ratio holds. Finally, there could still exist universal semimeasures (dominating all enumerable semimeasures) for which M.L.-convergence holds. In case they exist, they probably have particularly interesting additional structure and properties.

  4. Generalized mixtures and convergence concepts. Another interesting and potentially fruitful approach to the above convergence problem is to consider other classes M of semimeasures [Sch02b, Sch02a, Hut03b], define mixtures ξ over M, and (possibly) generalized randomness concepts by using this ξ to define a generalized notion of randomness. Using this approach, it has been shown in [Hut06b] that convergence holds for a subclass of Bernoulli distributions if the class is dense, but fails if the class is gappy, suggesting that a denseness characterization of M could be promising in general. See also [RH07, RH08b].

  5. Lower convergence bounds and defect of M. One can show that the probability of making a wrong prediction converges to zero slower than any computable summable function, i.e. although M converges rapidly to μ in a cumulative sense, occasionally, namely for simply describable sequences, the prediction quality is poor. An easy way to show this lower bound is to exploit the semimeasure defect of M. Do similar lower bounds hold for a properly (Solomonoff) normalized measure? I conjecture the answer is yes, i.e. the lower bound is not a semimeasure artifact, but “real”.

  6. Using AIXI for prediction. Since AIXI is a unification of sequential decision theory with the idea of universal probability, one may think that the AIXI model for a sequence prediction problem exactly reduces to Solomonoff’s universal sequence prediction scheme. Unfortunately this is not the case. For one reason, the universal prior is only a probability distribution on the inputs but not on the outputs. This is also one of the origins of the difficulty of proving general value bounds for AIXI. The question is whether, nevertheless, AIXI predicts sequences as well as Solomonoff’s scheme. A first weak bound in a very restricted setting has been proven in [Hut05, Sec.6.2], showing that progress on this question is possible.
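The "prediction of selected bits" setting (item 1) can be illustrated with a toy Bayes mixture over an assumed two-hypothesis class, far smaller than Solomonoff's: odd bits are generated lawlessly (here, by coin flips) and each even bit copies its predecessor. The mixture concentrates exponentially fast on the copy hypothesis and predicts the even bits almost perfectly; whether the full universal prior M behaves this way is exactly the open question.

```python
import random

# Toy version of the selected-bits problem (assumed setup, not M itself):
# h_coin says every bit is a fair coin; h_copy says even bits (1-indexed)
# copy their predecessor and odd bits are fair coins.  A Bayes mixture
# over {h_coin, h_copy} learns to predict the even bits with prob -> 1.

def h_coin(history, bit):
    return 0.5

def h_copy(history, bit):
    if len(history) % 2 == 1:                  # next position is even
        return 1.0 if bit == history[-1] else 0.0
    return 0.5

def mixture_prediction(seq, hyps, weights):
    """Return the probability the mixture assigned to each actual bit."""
    history, preds, w = [], [], list(weights)
    for bit in seq:
        norm = sum(w)
        p1 = sum(wi * h(history, 1) for wi, h in zip(w, hyps)) / norm
        preds.append(p1 if bit == 1 else 1 - p1)
        w = [wi * h(history, bit) for wi, h in zip(w, hyps)]  # Bayes update
        history.append(bit)
    return preds

random.seed(0)
odd = [random.randint(0, 1) for _ in range(50)]
seq = [b for o in odd for b in (o, o)]          # odd bit, then its copy
preds = mixture_prediction(seq, [h_coin, h_copy], [0.5, 0.5])
even_preds = preds[1::2]                        # predictions at even positions
print(min(even_preds[10:]))                     # close to 1 after a burn-in
```

The odd bits remain unpredictable (probability 0.5 forever), but the posterior odds of h_copy double with every observed pair, so the even-bit predictions approach 1, mirroring what "every child" does in the text.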

The most important open, but unfortunately likely also the hardest, problem is the formal identification of natural universal (Turing) machines (h). A proper solution would eliminate one of the two most important critiques of the whole field of AIT. Item (l) is an important question for universal AI.

5 Open Problems regarding Optimality of AIXI

AIXI has been shown to be Pareto-optimal and a variant of AIXI to be self-optimizing [Hut02b]. These are important results supporting the claim that AIXI is universally optimal. More results can be found in [Hut05]. Unlike in the induction case, the results are not strong enough to allay all doubts. Indeed, the major problem is not to prove optimality but to come up with a sufficiently strong but still satisfiable optimality notion in the reinforcement learning case. The following items list four potential approaches towards a solution:

  1. What is meant by universal optimality? A “learner” (like AIXI) may converge to the optimal informed decision maker (like AIμ) in several senses. Possibly relevant concepts from statistics are consistency, self-tuningness, self-optimizingness, efficiency, unbiasedness, asymptotic or finite convergence [KV86], Pareto-optimality, and some more defined in [Hut05]. Some concepts are stronger than necessary, others are weaker than desirable but suitable to start with. It is necessary to investigate in more breadth which of these properties the AIXI model satisfies.

  2. Limited environmental classes. The problem of defining and proving general value bounds becomes more feasible by considering, in a first step, restricted concept classes. One could analyze AIXI for known classes (like Markov or factorizable environments) and especially for the new classes (forgetful, relevant, asymptotically learnable, farsighted, uniform, and (pseudo-)passive) defined in [Hut05].

  3. Generalization of AIXI to general Bayes mixtures. Alternatively, one can generalize AIXI to AIξ, where ξ is a general Bayes mixture of distributions ν in some class M with prior weights w_ν. If M is the multi-set of all enumerable semimeasures, then AIξ coincides with AIXI. If M is the (multi)set of passive semi-computable environments, then AIξ reduces to Solomonoff’s optimal predictor [Hut03d]. The key is not to prove absolute results for specific problem classes, but to prove relative results of the form “if there exists a policy with certain desirable properties, then AIξ also possesses these desirable properties”. If there are tasks which cannot be solved by any policy, AIξ should not be blamed for failing.

  4. Intelligence aspects of AIXI. Intelligence can have many faces. As argued in [Hut05], it is plausible that AIXI possesses all or at least most of the properties an intelligent rational agent should possess. Some of the following properties could and should be investigated mathematically: creativity, problem solving, pattern recognition, classification, learning, induction, deduction, building analogies, optimization, surviving in an environment, language processing, and planning.

Sources of inspiration can be the previously proven loss bounds for Solomonoff sequence prediction generalized to unbounded horizon, optimality results from the adaptive control literature, and the asymptotic self-optimizingness results for the related AIξ model. Value bounds for AIXI are expected to be, in a sense, weaker than the loss bounds for Solomonoff induction, because the problem class covered by AIXI is much larger than the class of sequence prediction problems.

In the same sense as Gittins’ solution to the bandit problem and Laplace’s rule for Bernoulli sequences, AIXI may simply be regarded as (Bayes-)optimal by construction. Even when accepting this “easy way out”, the above questions remain significant: Theorems relating AIXI to AIμ would no longer be regarded as optimality proofs of AIXI, but rather as quantifying how much harder it becomes to operate when μ is unknown, i.e. progress on the items above is simply reinterpreted.

A weaker goal than to prove optimality of AIXI is to ask for reasonable convergence properties:

  1. Posterior convergence for unbounded horizon. Convergence of ξ to μ holds somewhat surprisingly even for unbounded horizon, which is good news for AIXI. Unfortunately convergence can be slow, but I expect that convergence is “reasonably” fast for “slowly” growing horizon, which is important in AIXI. It would be useful to quantify and prove such a result.

  2. Reinforcement learning. Although there is no explicit learning algorithm built into the AIXI model, AIXI is a reinforcement learning system capable of receiving and exploiting rewards. The system learns by eliminating Turing machines in the definition of ξ once they become inconsistent with the progressing history. This is similar to Gold-style learning [Gol67]. For Markov environments (but not for partially observable environments) there are efficient general reinforcement learning algorithms, like TD and Q-learning. One could compare the performance (learning speed and quality) of the scaled-down AIXI variants to e.g. TD and Q-learning, extending [PH06b].

  3. Posterization. Many properties of Kolmogorov complexity, Solomonoff’s prior, and reinforcement learning algorithms remain valid after “posterization”. By posterization I mean replacing the total value V, the weights w_ν, the complexity K, the environment μ, etc. by their “posteriors” conditioned on the history up to the current cycle k, where m is the lifespan of AIXI. Strangely enough, for the weights chosen as w_ν = 2^−K(ν), it is not true that the posterior weight of ν is approximately 2^−K(ν|history). If this property were true, weak bounds like the one proven in [Hut05, Sec.6.2] (which is too weak to be of practical importance) could be boosted to practical bounds of order 1. Hence, it is highly important to rescue the posterization property in some way. It may be valid when grouping together essentially equal distributions ν.

  4. Relevant and non-computable environments. Assume that the observations of AIXI contain irrelevant information, like noise. Irrelevance can formally be defined as being statistically independent of future observations and rewards, i.e. neither affecting rewards nor containing information about future observations. It is easy to see that Solomonoff prediction does not degrade under such noise if it is sampled from a computable distribution. This likely transfers to AIXI. More interesting is the case where the irrelevant input is complex. If it is easily separable from the useful input, it should not affect AIXI. On the other hand, even in prediction this problem is non-trivial; see problem 4g. How robustly does AIXI deal with complex but irrelevant inputs? A model that explicitly deals with this situation has been developed in [Hut09b, Hut09a].

  5. Grain of truth problem [KL93]. Assume AIXI is used in a multi-agent setup [Wei00] interacting with other agents. For simplicity I only discuss the case of a single other agent in a competitive setup, i.e. a two-person zero-sum game situation. We can entangle agents A and B by letting A observe B’s actions and vice versa. The rewards are provided externally by the rules of the game. The situation where A is AIXI and B is a perfect minimax player was analyzed in [Hut05, Sec.6.3]. In multi-agent systems one is mostly interested in a symmetric setup, i.e. B is also an AIXI. Whereas both AIXIs may be able to learn the game and improve their strategies (towards optimal minimax or, more generally, a Nash equilibrium), this setup violates one of the basic assumptions: since AIXI is incomputable, AIXI(B) does not constitute a computable environment for AIXI(A). More generally, starting with any class of environments M, the agent AIξ based on M seems not to belong to class M for most (all?) choices of M. Various results can no longer be applied, since the true environment lies outside M when coupling two AIξ agents. Many questions arise: Are there interesting environmental classes M for which AIξ belongs to M? Do two coupled AIXIs converge to optimal minimax players? Do AIXIs perform well in general multi-agent setups?
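The grain-of-truth issue can be previewed in a much simpler setting than coupled AIXIs. In the sketch below (standard fictitious play, not AIXI), two best-response learners play matching pennies; each models its opponent only by empirical action frequencies, a model class that does not contain the opponent's actual deterministic best-response policy, yet by Robinson's classical result for zero-sum games the empirical play still converges to the mixed Nash equilibrium (1/2, 1/2). For coupled AIXIs no analogous guarantee is known.

```python
# Grain-of-truth in miniature (illustrative, not AIXI): fictitious play
# in matching pennies.  A wants to match B's coin, B wants to mismatch.
# Each player best-responds to the opponent's empirical frequencies --
# a model that is wrong about the opponent's actual policy -- yet
# empirical frequencies converge to the mixed equilibrium (1/2, 1/2).

def best_response_matcher(opp_heads_freq):
    return 'H' if opp_heads_freq >= 0.5 else 'T'

def best_response_mismatcher(opp_heads_freq):
    return 'T' if opp_heads_freq >= 0.5 else 'H'

def fictitious_play(rounds=20000):
    counts = {'A': {'H': 1, 'T': 1}, 'B': {'H': 1, 'T': 1}}  # Laplace init
    for _ in range(rounds):
        fa = counts['B']['H'] / sum(counts['B'].values())    # A's model of B
        fb = counts['A']['H'] / sum(counts['A'].values())    # B's model of A
        counts['A'][best_response_matcher(fa)] += 1
        counts['B'][best_response_mismatcher(fb)] += 1
    return (counts['A']['H'] / sum(counts['A'].values()),
            counts['B']['H'] / sum(counts['B'].values()))

fa, fb = fictitious_play()
print(round(fa, 2), round(fb, 2))   # both near 0.50
```

The point of the toy is the contrast: here convergence survives even though each model class lacks a "grain of truth", whereas for two AIXIs even this weak kind of guarantee is an open problem.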

From the optimality questions above, the first one (a) is the most important, least defined, and likely hardest one: In which sense can a rational agent in general and AIXI in particular be optimal? The multi-agent setting adds another layer of difficulty: The grain of truth problem (j) is in my opinion the most important fundamental problem in game theory and multi-agent systems. Its satisfactory solution should be worth a Nobel prize or Turing award.

6 Open Problems regarding Uniqueness of AIXI

As a unification of two optimal theories, it is plausible that AIXI is optimal in the “union” of their domains, which has been affirmed but not finally settled by the positive results derived so far. In the absence of a definite answer, one should be open to alternative models, but no convincing competitor exists to date. Most of the following items describe ideas which, if worked out, might result in alternative models:

  1. Action with expert advice. Expected performance bounds for predictions based on Solomonoff’s prior exist. Inspired by Solomonoff induction, a dual and currently very popular approach is “prediction with expert advice” (PEA) [LW89, Vov92, CBL06]. Whereas PEA performs well in any environment, but only relative to a given set of experts, Solomonoff’s predictor competes with any other predictor, but only in expectation for environments with computable distribution. It seems philosophically less compromising to make assumptions on prediction strategies than on the environment, however weak. PEA has been generalized to active learning [PH05b, CBL06], but the full reinforcement learning case is still open [PH06b]. If successful, it could result in a model dual to AIXI, but I expect the answer to be negative, which on the positive side would show the distinguishedness of AIXI. Other ad-hoc approaches like [RH06, RH08a] are also unlikely to be competitive.

  2. Actions as random variables. There may be more than one way to choose the generalized prior ξ in the AIXI model. For instance, instead of defining ξ as in [Hut05], one could treat the agent’s actions also as universally distributed random variables and then conditionalize ξ on them.

  3. Structure of AIXI. The algebraic properties and the structure of AIXI have barely been investigated. It is known that the value of AIμ is a linear function in μ and the value of AIXI is a convex function in the mixture ξ, but this is neither very deep nor very specific to AIXI. It should be possible to extract all essentials from AIXI, which should finally lead to an axiomatic characterization of AIXI. The benefit is as in any axiomatic approach: It would clearly exhibit the assumptions, separate the essentials from technicalities, simplify understanding and, most importantly, guide in finding proofs.

  4. Parameter dependence. The AIXI model depends on a few parameters: the choice of observation and action spaces X and Y, the horizon m, and the universal machine U. So strictly speaking, AIXI is only (essentially) unique if it is (essentially) independent of these parameters. I expect this to be true, but it has not been proven yet. The U-dependence has been discussed in problem 4h. Countably infinite X and Y would provide a rich enough interface for all problems, but even binary X and Y are sufficient by sequentializing complex observations and actions. For special classes one could choose the horizon in the standard ways of the control literature [Ber06]; unfortunately, the universal environment does not belong to any of these special classes. See [Hut05, Hut06a, LH07c] for some preliminary considerations.
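To make the PEA contrast in item 1 concrete, here is the standard exponentially weighted forecaster from the expert-advice literature [CBL06] in minimal form (the two experts and the squared loss are illustrative choices): its cumulative loss stays close to that of the best expert in hindsight on any sequence, with no computability or stochasticity assumption on the environment.

```python
import math

# Prediction with expert advice (exponentially weighted forecaster):
# loss close to the best expert for ANY outcome sequence -- the
# philosophical contrast with Solomonoff's predictor drawn above.
# Experts and the outcome sequence below are toy examples.

def exp_weights(expert_preds, outcomes, eta=0.5):
    n = len(expert_preds[0])
    w = [1.0] * n
    total_loss, expert_loss = 0.0, [0.0] * n
    for preds, y in zip(expert_preds, outcomes):
        p = sum(wi * pi for wi, pi in zip(w, preds)) / sum(w)
        total_loss += (p - y) ** 2                    # forecaster's sq. loss
        for i, pi in enumerate(preds):
            expert_loss[i] += (pi - y) ** 2
            w[i] *= math.exp(-eta * (pi - y) ** 2)    # multiplicative update
    return total_loss, min(expert_loss)

# outcomes alternate 0,1; expert 0 always says 0.5, expert 1 tracks the alternation
outcomes = [t % 2 for t in range(200)]
expert_preds = [(0.5, float(t % 2)) for t in range(200)]
ours, best = exp_weights(expert_preds, outcomes)
print(ours - best)   # small regret relative to the best expert
```

Note the dual guarantee structure: the bound holds per sequence rather than in expectation, but only relative to the fixed expert pool, exactly as the item describes.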

7 Open Problems in Defining Intelligence

A fundamental and long-standing difficulty in the field of artificial intelligence is that (generic) intelligence itself is not well defined. It is an anomaly that nowadays most AI researchers avoid discussing intelligence, which is caused by several factors: it is a difficult old subject, it is politically charged, it is not necessary for narrow AI which focuses on specific applications, AI research is done mainly by computer scientists who care more about algorithms than about philosophical foundations, and there is a popular belief that general intelligence is in principle not amenable to a mathematical definition. These reasons explain, but only partially justify, the low effort invested in trying to define intelligence.

Assume we had a definition, ideally a formal, objective, non-anthropocentric, and direct method of measuring intelligence, or at least a very general intelligence-like performance measure that could serve as an adequate substitute. This would bring the higher goals of the field into tight focus and allow us to objectively compare different approaches and judge the overall progress. Indeed, formalizing and rigorously defining a previously vague concept usually constitutes a quantum leap forward in the field: Cf. set theory, logical reasoning, infinitesimal calculus, energy, temperature, etc. Of course there is (some) work on defining [LH07a] and testing [LH07b] intelligence (see [LH07c] for a comprehensive list of references):

The famous Turing test [Tur50, SCA00, Loe90] involves human interaction and is therefore unfortunately informal and anthropocentric. Others are large “messy” collections of existing intelligence tests [BS03, AABL02] (“shotgun” approaches), which are subjective, lack a clear theoretical grounding, and are potentially too narrow.

There are some more elegant solutions based on classical [Hor02] and algorithmic [Cha82] information theory (“C-Test” [HOMC98, HO00a, HO00b]), the latter closely related to Solomonoff’s [Sol64] “perfect” inductive inference model. The simple program in [SD03] reached good IQ scores on some of the more mathematical tests.

One limitation of the C-Test however is that it only deals with compression and (passive) sequence prediction, while humans or machines face reactive environments where they are able to change the state of the environment through their actions. AIXI generalizes Solomonoff to reactive environments, which suggested an extremely general, objective, fundamental, and formal performance measure [LH06, Leg08]. This so-called Intelligence Order Relation (IOR) [Hut05] even attracted the popular scientific press [GR05, Fié05], but the theory surrounding it has not yet been adequately explored. Here I only describe three non-technical open problems in defining intelligence.

  1. General and specific performance measures. Currently it is only partially understood how the IOR theoretically compares to the myriad of other tests of intelligence, such as conventional IQ tests or other performance tests proposed by AI researchers. Another open question is whether the IOR might in some sense be too general. One may narrow the IOR to specific classes of problems [LH04b] and examine how the resulting IOR measures compare to standard performance measures for each problem class. This could shed light on aspects of the IOR and possibly also establish connections between seemingly unrelated performance metrics for different classes of problems.

  2. Practical performance measures. A more practically oriented line of investigation would be to produce a resource-bounded version of the IOR, like the one in [Hut05, Sec.7], or perhaps some of its special cases. This would allow one to define a practically implementable performance test, similar to the way in which the C-Test has been derived from incomputable definitions of compression using complexity [HO00a]. As there are many subtle kinds of resource-bounded complexity [LV08], the advantages and disadvantages of each in this context would need to be carefully examined. Another possibility is the recent Speed Prior [Sch02b] or variants of this approach.

  3. Experimental evaluation. Once a computable version of the IOR has been defined, one could write a computer program that implements it and experimentally explore its characteristics in a range of different problem spaces. For example, it might be possible to find correlations with IQ test scores when applied to humans, as has been done with the C-Test [HOMC98]. Another possibility would be to consider more limited domains, like classification or sequence prediction problems, and to see whether the relative performance of algorithms according to the IOR agrees with standard performance measures and real-world performance.
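As a flavor of what such a resource-bounded test might look like, the following crude sketch averages an agent's reward over a hand-picked set of toy environments, each weighted by 2^(−description length). It mimics only the structure of the universal intelligence measure (a complexity-weighted sum of values); the environments, "description lengths", and agents are all my own illustrative assumptions, not Legg and Hutter's actual construction.

```python
# Crude, computable stand-in for a universal-intelligence-style score:
# enumerate a small fixed set of toy environments, weight each by
# 2**(-description length) as a proxy for 2**(-K(mu)), and average the
# agent's per-step reward.  Everything here is illustrative.

def env_constant(action):            # rewards action 0; "length" 1
    return 1.0 if action == 0 else 0.0

def env_alternate_factory():
    state = {'t': 0}
    def env(action):                 # rewards matching an alternation; "length" 2
        r = 1.0 if action == state['t'] % 2 else 0.0
        state['t'] += 1
        return r
    return env

def score(agent_factory, episodes=100):
    environments = [(env_constant, 1), (env_alternate_factory(), 2)]
    total, norm = 0.0, 0.0
    for env, length in environments:
        agent = agent_factory()
        w = 2.0 ** (-length)         # simplicity weight
        reward = sum(env(agent(t)) for t in range(episodes)) / episodes
        total += w * reward
        norm += w
    return total / norm

always_zero = lambda: (lambda t: 0)
alternator  = lambda: (lambda t: t % 2)
print(score(always_zero), score(alternator))
```

Even this toy exhibits the measure's key design choice: simple environments dominate the score, so an agent is rewarded more for mastering easy regularities than exotic ones.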

A comprehensive collection, discussion and comparison of verbal and formal intelligence tests, definitions, and measures can be found in [LH07c].

8 Conclusions

The flavor of the open questions. While most of the key questions about universal sequence prediction have been solved, many key questions about universal AI remain open to date. The questions in Sections 4–7 are centered around the AIT approach to induction and AI, but many require interdisciplinary work. A more detailed account with technical details can be found in the book [Hut05] and the paper [Hut07b]. Most questions are amenable to a rigorous mathematical treatment, including the more philosophically or vaguely sounding ones. Progress on the latter can be achieved in the usual way by cycling through the following steps: craft or improve mathematical definitions that resemble the intuitive concepts to be studied (e.g. “natural”, “generalization”, “optimal”); formulate or adapt a mathematical conjecture resembling the informal question; (dis)prove the conjecture. Some questions are about approximating, implementing, and testing various ideas and concepts. Technically, many questions lie on the interface between, and exploit techniques used in, (algorithmic) information theory, machine learning, Bayesian statistics, (adaptive) control theory, and reinforcement learning.

Feasibility, difficulty, and interestingness of the open questions. I concentrated on questions whose answers probably help to develop the foundations of universal induction and UAI. Some problems are very hard, and their satisfactory solution worth a Nobel prize or Turing award, e.g. problem 5j. I included those questions that looked promising and interesting at the time of writing this article. In the following I try to estimate their relative feasibility, difficulty, and interestingness:

These rankings hopefully do not mislead but give the interested reader some guidance where (not) to start. The final paragraphs of this article are devoted to the role UAI plays in the grand goal of AI.

Other approaches to AI. There are many fields that try to understand the phenomenon of intelligence and whose insights help in creating intelligent systems: cognitive psychology and behaviorism [SMM07], philosophy of mind [Cha02, Sea05], neuroscience [HB04], linguistics [Hau01, Cho06], anthropology [Par07], machine learning [SB98, Bis06], logic [Tur84, Llo87], computer science [TTJ01, RN03], biological evolution [Kar07], and others. In computer science, most AI research is bottom-up: extending and improving existing algorithms or developing new ones and increasing their range of applicability; an interplay between experimentation on toy problems and theory, with occasional real-world applications. The agent perspective of AI [RN03] brings some order and unification to the large variety of problems the field wants to address, but it is only a framework rather than a complete theory. In the absence of a perfect (stochastic) model of the environment, machine learning techniques are needed and employed. Apart from AIXI, there is no general theory for learning agents. This has resulted in an ever-increasing number of limited models and algorithms in the past.

The information-theoretic approach to AI. Solomonoff induction and AIXI are mathematical top-down approaches. The price for this generality is that the full models are computationally intractable, and investigations have to be mostly theoretical at this stage. From a different perspective, UAI strictly separates the conceptual from the algorithmic AI questions. Two analogies may help: Von Neumann’s optimal minimax strategy [NM44] is a conceptual solution of zero-sum games, but is infeasible for most interesting zero-sum games; nevertheless, most algorithms are based on approximations of this ideal. In physics, the quest for a “theory of everything” (TOE) led to extremely successful unified theories, despite their computational intractability [GSW00, Gre00]. The role of UAI in AI should be understood as analogous to the role of minimax in zero-sum games or of the TOE in physics.

Epilogue. As we have seen, algorithmic information theory offers answers to the following two key scientific questions: (1) The problem of induction, which is what science itself is mostly about: induction ⇒ finding regularities in data ⇒ understanding the world ⇒ science. (2) Understanding intelligence, the key property that distinguishes humans from animals and inanimate things.

This modern mathematical approach to both questions (1) and (2) is quite different to the more traditional philosophical, logic-based, engineering, psychological, or neurological approaches. Among the few other mathematical approaches, none captures rational intelligence as completely as the AIXI model does. Still, a lot of questions remain open. Raising and discussing them was the primary focus of this article.

Imagine a complete practical solution of the AI problem (by the next generation or so), i.e. systems that surpass human intelligence. This would transform society more than the industrial revolution two centuries ago, the computer last century, and the internet this century. Although individually, some questions I raised seem quite technical and narrow, they derive their significance from their role in a truly outstanding scientific endeavor. As with most innovations, the social benefit of course depends on its benevolent use.


  • [AABL02] N. Alvarado, S. Adams, S. Burbeck, and C. Latta. Beyond the Turing test: Performance metrics for evaluating a computer simulation of the human mind. In Performance Metrics for Intelligent Systems Workshop, Gaithersburg, MD, USA, 2002. North-Holland.
  • [Asm84] E. Asmis. Epicurus’ Scientific Method. Cornell Univ. Press, 1984.
  • [ATA89a] R. Agrawal, D. Teneketzis, and V. Anantharam. Asymptotically efficient adaptive allocation schemes for controlled i.i.d. processes: Finite parameter space. IEEE Trans. Automatic Control, 34(3):258–266, 1989.
  • [ATA89b] R. Agrawal, D. Teneketzis, and V. Anantharam. Asymptotically efficient adaptive allocation schemes for controlled Markov chains: Finite parameter space. IEEE Trans. Automatic Control, 34(12):1249–1259, 1989.
  • [Bau99] E. B. Baum. Toward a model of intelligence as an economy of agents. Machine Learning, 35(2):155–185, 1999.
  • [Bay63] T. Bayes. An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society, 53:376–398, 1763. [Reprinted in Biometrika, 45, 296–315, 1958].
  • [BD02] P. J. Brockwell and R. A. Davis. Introduction to Time Series and Forecasting. Springer, 2nd edition, 2002.
  • [BDH99] C. Boutilier, T. Dean, and S. Hanks. Decision-theoretic planning: Structural assumptions and computational leverage. Journal of Artificial Intelligence Research, 11:1–94, 1999.
  • [Bel57] R. E. Bellman. Dynamic Programming. Princeton University Press, Princeton, NJ, 1957.
  • [Ber93] J.O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer, Berlin, 3rd edition, 1993.
  • [Ber06] D. P. Bertsekas. Dynamic Programming and Optimal Control, volumes I and II. Athena Scientific, Belmont, MA, 3rd edition, 2006.
  • [BF85] D. A. Berry and B. Fristedt. Bandit Problems: Sequential Allocation of Experiments. Chapman and Hall, London, 1985.
  • [Bis06] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
  • [BS03] S. Bringsjord and B. Schimanski. What is artificial intelligence? Psychometric AI as an answer. Proc. 18th International Joint Conf. on Artificial Intelligence, 18:887–893, 2003.
  • [BSA83] A. G. Barto, R. S. Sutton, and C. W. Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13:834–846, 1983.
  • [BT96] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, MA, 1996.
  • [Cal02] C. S. Calude. Information and Randomness: An Algorithmic Perspective. Springer, Berlin, 2nd edition, 2002.
  • [Car52] R. Carnap. The Continuum of Inductive Methods. University of Chicago Press, Chicago, 1952.
  • [CBL06] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
  • [CH05] A. Chernov and M. Hutter. Monotone conditional complexity bounds on future prediction errors. In Proc. 16th International Conf. on Algorithmic Learning Theory (ALT’05), volume 3734 of LNAI, pages 414–428, Singapore, 2005. Springer, Berlin.
  • [Cha66] G. J. Chaitin. On the length of programs for computing finite binary sequences. Journal of the ACM, 13(4):547–569, 1966.
  • [Cha82] G. J. Chaitin. Gödel’s theorem and information. International Journal of Theoretical Physics, 22:941–954, 1982.
  • [Cha87] G. J. Chaitin. Algorithmic Information Theory. Cambridge University Press, Cambridge, 1987.
  • [Cha02] D. J. Chalmers, editor. Philosophy of Mind: Classical and Contemporary Readings. Oxford University Press, USA, 2002.
  • [Cha03a] G. J. Chaitin. The Limits of Mathematics: A Course on Information Theory and the Limits of Formal Reasoning. Springer, Berlin, 2003.
  • [Cha03b] C. Chatfield. The Analysis of Time Series: An Introduction. Chapman & Hall / CRC, 6th edition, 2003.
  • [Cho06] N. Chomsky. Language and Mind. Cambridge University Press, 3rd edition, 2006.
  • [CHS07] A. Chernov, M. Hutter, and J. Schmidhuber. Algorithmic complexity bounds on future prediction errors. Information and Computation, 205(2):242–261, 2007.
  • [CT06] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley-Interscience, 2nd edition, 2006.
  • [CV05] R. Cilibrasi and P. M. B. Vitányi. Clustering by compression. IEEE Trans. Information Theory, 51(4):1523–1545, 2005.
  • [CV06] R. Cilibrasi and P. M. B. Vitányi. Similarity of objects and the meaning of words. In Proc. 3rd Annual Conference on Theory and Applications of Models of Computation (TAMC’06), volume 3959 of LNCS, pages 21–45. Springer, 2006.
  • [DeG70] M. H. DeGroot. Optimal Statistical Decisions. McGraw-Hill, New York, 1970.
  • [Duf02] M. Duff. Optimal Learning: Computational Procedures for Bayes-Adaptive Markov Decision Processes. PhD thesis, Department of Computer Science, University of Massachusetts Amherst, 2002.
  • [Ear93] J. Earman. Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory. MIT Press, Cambridge, MA, 1993.
  • [Fer67] T. S. Ferguson. Mathematical Statistics: A Decision Theoretic Approach. Academic Press, New York, 3rd edition, 1967.
  • [FH06] R. Frigg and S. Hartmann. Models in science. Stanford Encyclopedia of Philosophy, 2006.
  • [Fié05] C. Fiévet. Mesurer l’intelligence d’une machine. In Le Monde de l’intelligence, volume 1, pages 42–45, Paris, November 2005. Mondeo publishing.
  • [Fra02] J. Franklin. The Science of Conjecture: Evidence and Probability before Pascal. Johns Hopkins University Press, 2002.
  • [Gag07] M. Gagliolo. Universal search. Scholarpedia, 2(11):2575, 2007.
  • [Gei93] S. Geisser. Predictive Inference. Chapman & Hall/CRC, 1993.
  • [GKPV03] C. Guestrin, D. Koller, R. Parr, and S. Venkataraman. Efficient solution algorithms for factored MDPs. Journal of Artificial Intelligence Research (JAIR), 19:399–468, 2003.
  • [GL89] Q. Gao and M. Li. The minimum description length principle and its application to online learning of handprinted characters. In Proc. 11th International Joint Conf. on Artificial Intelligence, pages 843–848, Detroit, MI, 1989.
  • [Gly80] C. Glymour. Theory and Evidence. Princeton Univ. Press, 1980.
  • [Gol67] E. M. Gold. Language identification in the limit. Information and Control, 10:447–474, 1967.
  • [Gol06] M. Goldstein. Subjective Bayesian analysis: Principles and practice. Bayesian Analysis, 1(3):403–420, 2006.
  • [Goo83] N. Goodman. Fact, Fiction, and Forecast. Harvard University Press, Cambridge, MA, 4th edition, 1983.
  • [GR05] D. Graham-Rowe. Spotting the bots with brains. In New Scientist magazine, volume 2512, page 27, 13 August 2005.
  • [Gre00] B. Greene. The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory. Vintage Press, 2000.
  • [Grü07] P. D. Grünwald. The Minimum Description Length Principle. The MIT Press, Cambridge, 2007.
  • [GSW00] M. B. Green, J. H. Schwarz, and E. Witten. Superstring Theory: Volumes 1 and 2. Cambridge University Press, 2000.
  • [GTV01] P. Gács, J. Tromp, and P. M. B. Vitányi. Algorithmic statistics. IEEE Transactions on Information Theory, 47(6):2443–2463, 2001.
  • [Hau01] R. Hausser. Foundations of Computational Linguistics: Human-Computer Communication in Natural Language. Springer, 2nd edition, 2001.
  • [HB04] J. Hawkins and S. Blakeslee. On Intelligence. Times Books, 2004.
  • [HLV07] M. Hutter, S. Legg, and P. M. B. Vitányi. Algorithmic probability. Scholarpedia, 2(8):2572, 2007.
  • [HM04] M. Hutter and An. A. Muchnik. Universal convergence of semimeasures on individual random sequences. In Proc. 15th International Conf. on Algorithmic Learning Theory (ALT’04), volume 3244 of LNAI, pages 234–248, Padova, 2004. Springer, Berlin.
  • [HM07] M. Hutter and An. A. Muchnik. On semimeasures predicting Martin-Löf random sequences. Theoretical Computer Science, 382(3):247–261, 2007.
  • [HO00a] J. Hernández-Orallo. Beyond the turing test. Journal of Logic, Language and Information, 9(4):447–466, 2000.
  • [HO00b] J. Hernández-Orallo. On the computational measurement of intelligence factors. In Performance Metrics for Intelligent Systems Workshop, pages 1–8, Gaithersburg, MD, USA, 2000.
  • [HOMC98] J. Hernández-Orallo and N. Minaya-Collado. A formal definition of intelligence based on an intensional variant of Kolmogorov complexity. In International Symposium of Engineering of Intelligent Systems, pages 146–163, 1998.
  • [Hor02] J. Horst. A native intelligence metric for artificial systems. In Performance Metrics for Intelligent Systems Workshop, Gaithersburg, MD, USA, 2002.
  • [How03] C. Howson. Hume’s Problem: Induction and the Justification of Belief. Oxford University Press, 2nd edition, 2003.
  • [Hum39] D. Hume. A Treatise of Human Nature, Book I. [Edited version by L. A. Selby-Bigge and P. H. Nidditch, Oxford University Press, 1978], 1739.
  • [Hut00] M. Hutter. A theory of universal artificial intelligence based on algorithmic complexity. Technical Report cs.AI/0004001, München, 62 pages, 2000.
  • [Hut01a] M. Hutter. Convergence and error bounds for universal prediction of nonbinary sequences. In Proc. 12th European Conf. on Machine Learning (ECML’01), volume 2167 of LNAI, pages 239–250, Freiburg, 2001. Springer, Berlin.
  • [Hut01b] M. Hutter. General loss bounds for universal sequence prediction. In Proc. 18th International Conf. on Machine Learning (ICML’01), pages 210–217, Williamstown, MA, 2001. Morgan Kaufmann.
  • [Hut01c] M. Hutter. New error bounds for Solomonoff prediction. Journal of Computer and System Sciences, 62(4):653–667, 2001.
  • [Hut01d] M. Hutter. Towards a universal theory of artificial intelligence based on algorithmic probability and sequential decisions. In Proc. 12th European Conf. on Machine Learning (ECML’01), volume 2167 of LNAI, pages 226–238, Freiburg, 2001. Springer, Berlin.
  • [Hut01e] M. Hutter. Universal sequential decisions in unknown environments. In Proc. 5th European Workshop on Reinforcement Learning (EWRL-5), volume 27, pages 25–26. Onderwijsinsituut CKI, Utrecht Univ., 2001.
  • [Hut02a] M. Hutter. The fastest and shortest algorithm for all well-defined problems. International Journal of Foundations of Computer Science, 13(3):431–443, 2002.
  • [Hut02b] M. Hutter. Self-optimizing and Pareto-optimal policies in general environments based on Bayes-mixtures. In Proc. 15th Annual Conf. on Computational Learning Theory (COLT’02), volume 2375 of LNAI, pages 364–379, Sydney, 2002. Springer, Berlin.
  • [Hut03a] M. Hutter. Convergence and loss bounds for Bayesian sequence prediction. IEEE Transactions on Information Theory, 49(8):2061–2067, 2003.
  • [Hut03b] M. Hutter. On the existence and convergence of computable universal priors. In Proc. 14th International Conf. on Algorithmic Learning Theory (ALT’03), volume 2842 of LNAI, pages 298–312, Sapporo, 2003. Springer, Berlin.
  • [Hut03c] M. Hutter. An open problem regarding the convergence of universal a priori probability. In Proc. 16th Annual Conf. on Learning Theory (COLT’03), volume 2777 of LNAI, pages 738–740, Washington, DC, 2003. Springer, Berlin.
  • [Hut03d] M. Hutter. Optimality of universal Bayesian prediction for general loss and alphabet. Journal of Machine Learning Research, 4:971–1000, 2003.
  • [Hut03e] M. Hutter. Sequence prediction based on monotone complexity. In Proc. 16th Annual Conf. on Learning Theory (COLT’03), volume 2777 of LNAI, pages 506–521, Washington, DC, 2003. Springer, Berlin.
  • [Hut04a] M. Hutter. Bayes optimal agents in general environments. Technical report, Istituto Dalle Molle di Studi sull’Intelligenza Artificiale (IDSIA), 2004. Unpublished manuscript.
  • [Hut04b] M. Hutter. Online prediction – Bayes versus experts. Technical report, July 2004. Presented at the EU PASCAL Workshop on Learning Theoretic and Bayesian Inductive Principles (LTBIP’04).
  • [Hut05] M. Hutter. Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability. Springer, Berlin, 2005. 300 pages.
  • [Hut06a] M. Hutter. General discounting versus average reward. In Proc. 17th International Conf. on Algorithmic Learning Theory (ALT’06), volume 4264 of LNAI, pages 244–258, Barcelona, 2006. Springer, Berlin.
  • [Hut06b] M. Hutter. On generalized computable universal priors and their convergence. Theoretical Computer Science, 364(1):27–41, 2006.
  • [Hut06c] M. Hutter. On the foundations of universal sequence prediction. In Proc. 3rd Annual Conference on Theory and Applications of Models of Computation (TAMC’06), volume 3959 of LNCS, pages 408–420. Springer, 2006.
  • [Hut06d] M. Hutter. Sequential predictions based on algorithmic complexity. Journal of Computer and System Sciences, 72(1):95–117, 2006.
  • [Hut07a] M. Hutter. Algorithmic information theory: a brief non-technical guide to the field. Scholarpedia, 2(3):2519, 2007.
  • [Hut07b] M. Hutter. On universal prediction and Bayesian confirmation. Theoretical Computer Science, 384(1):33–48, 2007.
  • [Hut07c] M. Hutter. Universal algorithmic intelligence: A mathematical topdown approach. In Artificial General Intelligence, pages 227–290. Springer, Berlin, 2007.
  • [Hut08] M. Hutter. Algorithmic complexity. Scholarpedia, 3(1):2573, 2008.
  • [Hut09a] M. Hutter. Feature dynamic Bayesian networks. In Proc. 2nd Conf. on Artificial General Intelligence (AGI’09), volume 8, pages 67–73. Atlantis Press, 2009.
  • [Hut09b] M. Hutter. Feature Markov decision processes. In Proc. 2nd Conf. on Artificial General Intelligence (AGI’09), volume 8, pages 61–66. Atlantis Press, 2009.
  • [Jay03] E. T. Jaynes. Probability Theory: The Logic of Science. Cambridge University Press, Cambridge, MA, 2003.
  • [Jef46] H. Jeffreys. An invariant form for the prior probability in estimation problems. Proc. Royal Society of London, Series A, 186:453–461, 1946.
  • [Jef83] R. C. Jeffrey. The Logic of Decision. University of Chicago Press, Chicago, IL, 2nd edition, 1983.
  • [Kar07] K. V. Kardong. An Introduction to Biological Evolution. McGraw-Hill Science/Engineering/Math, 2nd edition, 2007.
  • [Kem03] S. Kemp. Toward a monistic theory of science: The ‘strong programme’ reconsidered. Philosophy of the Social Sciences, 33(3):311–338, 2003.
  • [KK99] M. Kearns and D. Koller. Efficient reinforcement learning in factored MDPs. In Proc. 16th International Joint Conference on Artificial Intelligence (IJCAI-99), pages 740–747, San Francisco, 1999. Morgan Kaufmann.
  • [KL93] E. Kalai and E. Lehrer. Rational learning leads to Nash equilibrium. Econometrica, 61(5):1019–1045, 1993.
  • [KLC98] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101:99–134, 1998.
  • [KLM96] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: a survey. Journal of Artificial Intelligence Research, 4:237–285, 1996.
  • [KLW06] S. H. Kellert, H. E. Longino, and C. K. Waters, editors. Scientific Pluralism. Univ. of Minnesota Press, 2006.
  • [Kol65] A. N. Kolmogorov. Three approaches to the quantitative definition of information. Problems of Information and Transmission, 1(1):1–7, 1965.
  • [KP00] D. Koller and R. Parr. Policy iteration for factored MDPs. In Proc. 16th Conference on Uncertainty in Artificial Intelligence (UAI-00), pages 326–334, San Francisco, CA, 2000. Morgan Kaufmann.
  • [KV86] P. R. Kumar and P. P. Varaiya. Stochastic Systems: Estimation, Identification, and Adaptive Control. Prentice Hall, Englewood Cliffs, NJ, 1986.
  • [KW96] R. E. Kass and L. Wasserman. The selection of prior distributions by formal rules. Journal of the American Statistical Association, 91(435):1343–1370, 1996.
  • [Lap12] P. Laplace. Théorie analytique des probabilités. Courcier, Paris, 1812. [English translation by F. W. Truscott and F. L. Emory: A Philosophical Essay on Probabilities. Dover, 1952].
  • [Leg08] S. Legg. Machine Super Intelligence. PhD thesis, IDSIA, Lugano, 2008.
  • [Lev73] L. A. Levin. Universal sequential search problems. Problems of Information Transmission, 9:265–266, 1973.
  • [Lev74] I. Levi. Gambling with Truth: An Essay on Induction and the Aims of Science. MIT Press, 1974.
  • [Lev84] L. A. Levin. Randomness conservation inequalities: Information and independence in mathematical theories. Information and Control, 61:15–37, 1984.
  • [LH04a] S. Legg and M. Hutter. Ergodic MDPs admit self-optimising policies. Technical Report IDSIA-21-04, IDSIA, 2004.
  • [LH04b] S. Legg and M. Hutter. A taxonomy for abstract environments. Technical Report IDSIA-20-04, IDSIA, 2004.
  • [LH06] S. Legg and M. Hutter. A formal measure of machine intelligence. In Proc. 15th Annual Machine Learning Conference of Belgium and The Netherlands (Benelearn’06), pages 73–80, Ghent, 2006.
  • [LH07a] S. Legg and M. Hutter. A collection of definitions of intelligence. In B. Goertzel and P. Wang, editors, Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms, volume 157 of Frontiers in Artificial Intelligence and Applications, pages 17–24, Amsterdam, NL, 2007. IOS Press.
  • [LH07b] S. Legg and M. Hutter. Tests of machine intelligence. In 50 Years of Artificial Intelligence, volume 4850 of LNAI, pages 232–242, Monte Verita, Switzerland, 2007.
  • [LH07c] S. Legg and M. Hutter. Universal intelligence: A definition of machine intelligence. Minds & Machines, 17(4):391–444, 2007.
  • [Llo87] J. W. Lloyd. Foundations of Logic Programming. Springer, 2nd edition, 1987.
  • [Loe90] H. Loebner. The Loebner Prize – the first Turing test, 1990.
  • [LV07] M. Li and P. M. B. Vitányi. Applications of algorithmic information theory. Scholarpedia, 2(5):2658, 2007.
  • [LV08] M. Li and P. M. B. Vitányi. An Introduction to Kolmogorov Complexity and its Applications. Springer, Berlin, 3rd edition, 2008.
  • [LW89] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. In 30th Annual Symposium on Foundations of Computer Science, pages 256–261, Research Triangle Park, NC, 1989. IEEE.
  • [LZ76] A. Lempel and J. Ziv. On the complexity of finite sequences. IEEE Transactions on Information Theory, 22:75–81, 1976.
  • [MA93] A. W. Moore and C. G. Atkeson. Prioritized sweeping: Reinforcement learning with less data and less time. Machine Learning, 13:103–130, 1993.
  • [Mac03] D. J. C. MacKay. Information theory, inference and learning algorithms. Cambridge University Press, Cambridge, MA, 2003.
  • [Mah04] P. Maher. Probability captures the logic of scientific confirmation. In C. Hitchcock, editor, Contemporary Debates in Philosophy of Science, chapter 3, pages 69–93. Blackwell Publishing, 2004.
  • [Mic66] D. Michie. Game-playing and game-learning automata. In Advances in Programming and Non-Numerical Computation, pages 183–200. Pergamon, New York, 1966.
  • [MJ93] A. Milosavljević and J. Jurka. Discovery by minimal length encoding: A case study in molecular evolution. Machine Learning, 12:69–87, 1993.
  • [ML66] P. Martin-Löf. The definition of random sequences. Information and Control, 9(6):602–619, 1966.
  • [MP02] An. A. Muchnik and S. Y. Positselsky. Kolmogorov entropy in the context of computability theory. Theoretical Computer Science, 271(1–2):15–35, 2002.
  • [Mül06] M. Müller. Stationary algorithmic probability. Technical Report, TU Berlin, Berlin, 2006.
  • [NCD04] A. Y. Ng, A. Coates, M. Diel, V. Ganapathi, J. Schulte, B. Tse, E. Berger, and E. Liang. Autonomous inverted helicopter flight via reinforcement learning. In ISER, volume 21 of Springer Tracts in Advanced Robotics, pages 363–372. Springer, 2004.
  • [NM44] J. von Neumann and O. Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, Princeton, NJ, 1944.
  • [OC06] T. Oates and W. Chong. Book review: Marcus Hutter, universal artificial intelligence, Springer (2004). Artificial Intelligence, 170(18):1222–1226, 2006.
  • [Pan08] S. Pankov. A computational approximation to the AIXI model. In Proc. 1st Conference on Artificial General Intelligence, volume 171, pages 256–267, 2008.
  • [Par95] J. B. Paris. The Uncertain Reasoner’s Companion: A Mathematical Perspective. Cambridge University Press, Cambridge, 1995.
  • [Par07] M. A. Park. Introducing Anthropology: An Integrated Approach. McGraw-Hill, 4th edition, 2007.
  • [Ped89] E. P. D. Pednault. Some experiments in applying inductive inference principles to surface reconstruction. In Proc. 11th International Joint Conf. on Artificial Intelligence, pages 1603–1609. San Mateo, CA, Morgan Kaufmann, 1989.
  • [PH04a] J. Poland and M. Hutter. Convergence of discrete MDL for sequential prediction. In Proc. 17th Annual Conf. on Learning Theory (COLT’04), volume 3120 of LNAI, pages 300–314, Banff, 2004. Springer, Berlin.
  • [PH04b] J. Poland and M. Hutter. On the convergence speed of MDL predictions for Bernoulli sequences. In Proc. 15th International Conf. on Algorithmic Learning Theory (ALT’04), volume 3244 of LNAI, pages 294–308, Padova, 2004. Springer, Berlin.
  • [PH05a] J. Poland and M. Hutter. Asymptotics of discrete MDL for online prediction. IEEE Transactions on Information Theory, 51(11):3780–3795, 2005.
  • [PH05b] J. Poland and M. Hutter. Defensive universal learning with experts. In Proc. 16th International Conf. on Algorithmic Learning Theory (ALT’05), volume 3734 of LNAI, pages 356–370, Singapore, 2005. Springer, Berlin.
  • [PH06a] J. Poland and M. Hutter. MDL convergence speed for Bernoulli sequences. Statistics and Computing, 16(2):161–175, 2006.
  • [PH06b] J. Poland and M. Hutter. Universal learning of repeated matrix games. In Proc. 15th Annual Machine Learning Conf. of Belgium and The Netherlands (Benelearn’06), pages 7–14, Ghent, 2006.
  • [Pop34] K. R. Popper. Logik der Forschung. Springer, Berlin, 1934. [English translation: The Logic of Scientific Discovery Basic Books, New York, 1959, and Hutchinson, London, revised edition, 1968].
  • [Pre02] S. J. Press. Subjective and Objective Bayesian Statistics: Principles, Models, and Applications. Wiley, 2nd edition, 2002.
  • [QR89] J. R. Quinlan and R. L. Rivest. Inferring decision trees using the minimum description length principle. Information and Computation, 80:227–248, 1989.
  • [Res01] N. Rescher. Paradoxes: Their Roots, Range, and Resolution. Open Court, Lanham, MD, 2001.
  • [RH06] D. Ryabko and M. Hutter. Asymptotic learnability of reinforcement problems with arbitrary dependence. In Proc. 17th International Conf. on Algorithmic Learning Theory (ALT’06), volume 4264 of LNAI, pages 334–347, Barcelona, 2006. Springer, Berlin.
  • [RH07] D. Ryabko and M. Hutter. On sequence prediction for arbitrary measures. In Proc. IEEE International Symposium on Information Theory (ISIT’07), pages 2346–2350, Nice, France, 2007. IEEE.
  • [RH08a] D. Ryabko and M. Hutter. On the possibility of learning in reactive environments with arbitrary dependence. Theoretical Computer Science, 405(3):274–284, 2008.
  • [RH08b] D. Ryabko and M. Hutter. Predicting non-stationary processes. Applied Mathematics Letters, 21(5):477–482, 2008.
  • [Ris78] J. J. Rissanen. Modeling by shortest data description. Automatica, 14(5):465–471, 1978.
  • [Ris89] J. J. Rissanen. Stochastic Complexity in Statistical Inquiry. World Scientific, Singapore, 1989.
  • [RN03] S. J. Russell and P. Norvig. Artificial Intelligence. A Modern Approach. Prentice-Hall, Englewood Cliffs, NJ, 2nd edition, 2003.
  • [RPPCd08] S. Ross, J. Pineau, S. Paquet, and B. Chaib-draa. Online planning algorithms for POMDPs. Journal of Artificial Intelligence Research, 32:663–704, 2008.
  • [Sal06] W. C. Salmon. Four Decades of Scientific Explanation. University of Pittsburgh Press, 2006.
  • [Sam59] A. L. Samuel. Some studies in machine learning using the game of checkers. IBM Journal on Research and Development, 3:210–229, 1959.
  • [SB98] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
  • [SCA00] A. Saygin, I. Cicekli, and V. Akman. Turing test: 50 years later. Minds and Machines, 10, 2000.
  • [Sch97a] J. Schmidhuber. Discovering neural nets with low Kolmogorov complexity and high generalization capability. Neural Networks, 10(5):857–873, 1997.
  • [Sch97b] J. Schmidhuber. Low-complexity art. Leonardo, Journal of the International Society for the Arts, Sciences, and Technology, 30(2):97–103, 1997.
  • [Sch02a] J. Schmidhuber. Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit. International Journal of Foundations of Computer Science, 13(4):587–612, 2002.
  • [Sch02b] J. Schmidhuber. The speed prior: A new simplicity measure yielding near-optimal computable predictions. In Proc. 15th Conf. on Computational Learning Theory (COLT’02), volume 2375 of LNAI, pages 216–228, Sydney, 2002. Springer, Berlin.
  • [Sch04] J. Schmidhuber. Optimal ordered problem solver. Machine Learning, 54(3):211–254, 2004.
  • [Sch05] J. Schmidhuber. Gödel machines: Self-referential universal problem solvers making provably optimal self-improvements. In Artificial General Intelligence. Springer, in press, 2005.
  • [SD03] P. Sanghi and D. L. Dowe. A computer program capable of passing I.Q. tests. In Proc. 4th ICCS International Conf. on Cognitive Science (ICCS’03), pages 570–575, Sydney, NSW, Australia, 2003.
  • [SDL07] A. L. Strehl, C. Diuk, and M. L. Littman. Efficient structure learning in factored-state MDPs. In Proc. 22nd AAAI Conference on Artificial Intelligence, pages 645–650, Vancouver, BC, 2007. AAAI Press.
  • [Sea05] J. R. Searle. Mind: A Brief Introduction. Oxford University Press, USA, 2005.
  • [SL08] I. Szita and A. Lőrincz. The many faces of optimism: a unifying approach. In Proc. 25th International Conference on Machine Learning (ICML 2008), volume 307, Helsinki, Finland, June 2008.
  • [SLJ03] S. Singh, M. Littman, N. Jong, D. Pardoe, and P. Stone. Learning predictive state representations. In Proc. 20th International Conference on Machine Learning (ICML’03), pages 712–719, 2003.
  • [SMM07] R. L. Solso, O. H. MacLin, and M. K. MacLin. Cognitive Psychology. Allyn & Bacon, 8th edition, 2007.
  • [Sol60] R. J. Solomonoff. A preliminary report on a general theory of inductive inference. Technical Report V-131, Zator Co., Cambridge, MA, 1960. Distributed at the Conference on Cerebral Systems and Computers, 8–11 Feb. 1960.
  • [Sol64] R. J. Solomonoff. A formal theory of inductive inference: Parts 1 and 2. Information and Control, 7:1–22 and 224–254, 1964.
  • [Sol78] R. J. Solomonoff. Complexity-based induction systems: Comparisons and convergence theorems. IEEE Transactions on Information Theory, IT-24:422–432, 1978.
  • [Sto01] D. Stork. Foundations of Occam’s razor and parsimony in learning. NIPS 2001 Workshop, 2001.
  • [Sut88] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3:9–44, 1988.
  • [SZW97] J. Schmidhuber, J. Zhao, and M. A. Wiering. Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement. Machine Learning, 28:105–130, 1997.
  • [Tes94] G. Tesauro. TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural Computation, 6(2):215–219, 1994.
  • [TTJ01] A. Tettamanzi, M. Tomassini, and J. Janßen. Soft Computing: Integrating Evolutionary, Neural, and Fuzzy Systems. Springer, 2001.
  • [Tur36] A. M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proc. London Mathematical Society, Series 2, 42:230–265, 1936.
  • [Tur50] A. M. Turing. Computing machinery and intelligence. Mind, 59:433–460, 1950.
  • [Tur84] R. Turner. Logics for Artificial Intelligence. Ellis Horwood Series in Artificial Intelligence, 1984.
  • [Vit02] P. M. B. Vitányi. Meaningful information. Proc. 13th International Symposium on Algorithms and Computation (ISAAC’02), 2518:588–599, 2002.
  • [Vov92] V. G. Vovk. Universal forecasting algorithms. Information and Computation, 96(2):245–277, 1992.
  • [VV02] N. Vereshchagin and P. M. B. Vitányi. Kolmogorov’s structure functions with an application to the foundations of model selection. In Proc. 43rd Symposium on Foundations of Computer Science, pages 751–760, Vancouver, 2002.
  • [Wal05] C. S. Wallace. Statistical and Inductive Inference by Minimum Message Length. Springer, Berlin, 2005.
  • [Wat89] C. Watkins. Learning from Delayed Rewards. PhD thesis, King’s College, Cambridge, 1989.
  • [WB68] C. S. Wallace and D. M. Boulton. An information measure for classification. Computer Journal, 11(2):185–194, 1968.
  • [WD92] C. Watkins and P. Dayan. Q-learning. Machine Learning, 8:279–292, 1992.
  • [Wei00] G. Weiss, editor. Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. MIT Press, 2000.
  • [Wik08] Wikipedia. Predictive modelling, 2008.
  • [WS98] M. A. Wiering and J. Schmidhuber. Fast online Q(λ). Machine Learning, 33(1):105–116, 1998.
  • [WSS99] M. A. Wiering, R. P. Salustowicz, and J. Schmidhuber. Reinforcement learning soccer teams with incomplete world models. Artificial Neural Networks for Robot Learning. Special issue of Autonomous Robots, 7(1):77–88, 1999.
  • [WST97] F. M. J. Willems, Y. M. Shtarkov, and T. J. Tjalkens. Reflections on the prize paper: The context-tree weighting method: Basic properties. IEEE Information Theory Society Newsletter, pages 20–27, 1997.
  • [ZL70] A. K. Zvonkin and L. A. Levin. The complexity of finite objects and the development of the concepts of information and randomness by means of the theory of algorithms. Russian Mathematical Surveys, 25(6):83–124, 1970.