
Theoretical Computer Science

Volume 384, Issue 1, 24 September 2007, Pages 33-48

On universal prediction and Bayesian confirmation

https://doi.org/10.1016/j.tcs.2007.05.016

Abstract

The Bayesian framework is a well-studied and successful framework for inductive reasoning, which includes hypothesis testing and confirmation, parameter estimation, sequence prediction, classification, and regression. But standard statistical guidelines for choosing the model class and prior are not always available or can fail, particularly in complex situations. Solomonoff completed the Bayesian framework by providing a rigorous, unique, formal, and universal choice for the model class and the prior. I discuss in breadth how, and in which sense, universal (non-i.i.d.) sequence prediction solves various (philosophical) problems of traditional Bayesian sequence prediction. I show that Solomonoff’s model possesses many desirable properties: it satisfies strong total and future bounds and weak instantaneous bounds; in contrast to most classical continuous prior densities, it has no zero p(oste)rior problem, i.e. it can confirm universal hypotheses; it is reparametrization and regrouping invariant; and it avoids the old-evidence and updating problem. It even performs well (actually better) in non-computable environments.
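
For concreteness, the universal choice of model class and prior referred to above can be sketched in the standard notation of algorithmic information theory, which the abstract assumes rather than defines: U is a universal monotone Turing machine, \ell(p) is the length of a binary program p, and K(\nu) is the prefix Kolmogorov complexity of an enumerable semimeasure \nu. Solomonoff's universal a priori probability and the induced Bayesian mixture then read

  M(x) \;:=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}, \qquad
  \xi(x) \;:=\; \sum_{\nu \in \mathcal{M}} 2^{-K(\nu)}\,\nu(x),

where the first sum ranges over minimal programs p on which U outputs a string starting with x, and \mathcal{M} is the class of all enumerable semimeasures. Sequence prediction proceeds by conditioning, e.g. M(x_{t+1} \mid x_1 \ldots x_t) = M(x_1 \ldots x_{t+1}) / M(x_1 \ldots x_t).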

Keywords

Sequence prediction
Bayes
Solomonoff prior
Kolmogorov complexity
Occam’s razor
Prediction bounds
Model classes
Philosophical issues
Symmetry principle
Confirmation theory
Black raven paradox
Reparametrization invariance
Old-evidence/updating problem
(Non)computable environments
