## Compressing parameters in Bayesian high-order models with application to logistic sequence models

**Longhai Li and Radford M. Neal**
Bayesian classification and regression with high-order interactions
is largely infeasible because Markov chain Monte Carlo (MCMC) would
need to be applied with a great many parameters, whose number
increases rapidly with the order. In this paper we show how to make it
feasible by effectively reducing the number of parameters, exploiting
the fact that many interactions have the same values for all training
cases. Our method uses a single "compressed" parameter to represent
the sum of all parameters associated with a set of patterns that have
the same value for all training cases. Using symmetric stable
distributions as the priors of the original parameters, we can easily
find the priors of these compressed parameters. We therefore need to
deal only with a much smaller number of compressed parameters when
training the model with MCMC. After training the model, we can split
these compressed parameters into the original ones as needed to make
predictions for test cases. We show in detail how to compress
parameters for logistic sequence prediction models. Experiments on
both simulated and real data demonstrate that the number of
parameters can indeed be greatly reduced by our compression method.
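The key prior property behind the compression can be sketched numerically. The following is a minimal illustration (not the paper's software) using the Gaussian distribution, which is the symmetric stable distribution with index alpha = 2; the parameter count `k`, prior scale `sigma`, and sampled value `s` are all hypothetical numbers chosen for the example. If `k` original parameters each have prior N(0, sigma^2) and always enter the likelihood only through their sum, that sum has prior scale `sigma * k**(1/alpha)`, so MCMC can work with the single compressed parameter and the originals can be split out afterwards.

```python
import numpy as np

# Hedged sketch: Gaussian case (symmetric stable with alpha = 2).
# Assumed setup: k original parameters, each with prior N(0, sigma^2),
# that have the same value in every training case, so the likelihood
# depends on them only through their sum.
rng = np.random.default_rng(0)
k, sigma = 50, 0.3  # hypothetical group size and prior scale

# The "compressed" parameter is the sum of the k originals; its prior is
# again Gaussian, with scale sigma * k**(1/2) when alpha = 2.
originals = rng.normal(0.0, sigma, size=(100_000, k))
sums = originals.sum(axis=1)
compressed_scale = sigma * np.sqrt(k)
# Monte Carlo check of the scale of the compressed parameter's prior:
scale_matches = abs(sums.std() - compressed_scale) < 0.05

# Splitting for prediction (Gaussian case): conditional on a sampled
# compressed value s, the k exchangeable originals each have
# conditional mean s / k, so they can be recovered as needed for test
# cases after MCMC has explored only the compressed parameter.
s = 1.7  # a hypothetical sampled value of the compressed parameter
cond_mean = s / k
```

For general symmetric stable priors the same scaling rule, scale proportional to `k**(1/alpha)`, lets the compressed parameter's prior be written down directly, which is what makes the MCMC over compressed parameters tractable.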

*Bayesian Analysis*, vol. 3, pp. 793-822, posted online 2008-09-29.

The software used for the tests in this paper is also available.

**Associated reference:**
This paper is a revision of part of Longhai Li's PhD thesis.