## Regression and classification using Gaussian process priors

**Radford M. Neal,
Dept. of Statistics and Dept. of Computer Science, University of Toronto**

Gaussian processes are a natural way of specifying prior
distributions over functions of one or more input variables. When
such a function defines the mean response in a regression model with
Gaussian errors, inference can be done using matrix computations,
which are feasible for datasets of up to about a thousand cases. The
covariance function of the Gaussian process can be given a
hierarchical prior, which allows the model to discover high-level
properties of the data, such as which inputs are relevant to
predicting the response. Inference for these covariance hyperparameters
can be done using Markov chain sampling. Classification models can be
defined using Gaussian processes for underlying latent values, which
can also be sampled within the Markov chain. Gaussian processes are
in my view the simplest and most obvious way of defining flexible
Bayesian regression and classification models, but despite some past
usage, they appear to have been rather neglected as a general-purpose
technique. This may be partly due to a confusion between the
properties of the function being modeled and the properties of the
best predictor for this unknown function.
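The matrix computations mentioned above can be illustrated with a minimal sketch of Gaussian process regression: the posterior predictive mean at test inputs is k(x*, X)(K + σ²I)⁻¹y, where K is the covariance matrix over training inputs. This sketch fixes the covariance hyperparameters (length scale, signal variance) to hypothetical values rather than sampling them from a hierarchical prior by Markov chain methods as the paper describes.

```python
import numpy as np

def cov(x1, x2, ell=1.0, tau=1.0):
    # Squared-exponential covariance with length scale ell and signal
    # standard deviation tau (illustrative fixed values; in the paper's
    # models these hyperparameters get priors and are sampled by MCMC).
    d = x1[:, None] - x2[None, :]
    return tau**2 * np.exp(-0.5 * (d / ell)**2)

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 40)                      # training inputs
y = np.sin(x) + 0.1 * rng.standard_normal(40)   # noisy responses
sigma = 0.1                                     # noise standard deviation

# Posterior predictive mean: m(x*) = k(x*, X) (K + sigma^2 I)^{-1} y.
# The O(n^3) solve is what limits direct use to datasets of roughly
# a thousand cases, as noted in the abstract.
K = cov(x, x) + sigma**2 * np.eye(len(x))
xs = np.linspace(-2.5, 2.5, 5)                  # test inputs
mean = cov(xs, x) @ np.linalg.solve(K, y)
```

Relevance of each input dimension can be handled the same way by giving each input its own length scale, which is the mechanism the abstract refers to for discovering which inputs matter.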

Presented at the 6th Valencia international meeting on Bayesian Statistics,
published (with discussion) in J. M. Bernardo *et al.*
(editors) *Bayesian Statistics 6*, Oxford University Press, pp. 475-501:
postscript (without discussion),
pdf (without discussion),
associated software.

**Associated reference:**
Similar material appears in the following technical report:
Neal, R. M. (1997) "Monte Carlo implementation of Gaussian process
models for Bayesian regression and classification", Technical Report
No. 9702, Dept. of Statistics, University of Toronto, 24 pages:
abstract,
postscript, pdf,
associated software.