## Inference for Belief Networks Using Coupling From the Past

**Michael Harvey, Dept. of Computer Science, University of Toronto**

**Radford M. Neal, Dept. of Statistics and Dept. of Computer Science, University of Toronto**

Inference for belief networks using Gibbs sampling
produces a distribution for unobserved variables that differs from the
correct distribution by a (usually) unknown error, since convergence
to the right distribution occurs only asymptotically. The method of
"coupling from the past" samples from exactly the correct
distribution by (conceptually) running dependent Gibbs sampling
simulations from every possible starting state from a time far enough
in the past that all runs reach the same state at time *t*=0.
Explicitly considering every possible state is intractable for large
networks, however. We propose a method for layered noisy-or networks
that uses a compact, but often imprecise, summary of a set of states.
This method samples from exactly the correct distribution, and
requires only about twice as much time per step as ordinary Gibbs
sampling, but it may require more simulation steps than would be
needed if chains were tracked exactly.
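The coupling-from-the-past idea described above can be illustrated on a chain small enough that every state really can be tracked explicitly. The sketch below is not the paper's layered noisy-or method; it is a minimal, hypothetical example using a 3-state random walk, with all names invented for illustration. The key ingredients are the ones the abstract mentions: runs are started from every possible state at a time in the past, all runs share the same random numbers, and the start time is pushed further back (reusing the old randomness) until every run reaches the same state at time *t* = 0.

```python
import random

def update(state, u):
    """Deterministic update rule: given the current state and a shared
    uniform random number u, return the next state.  Here: a lazy
    random walk on {0, 1, 2} (an invented toy chain)."""
    if u < 0.5:
        return max(state - 1, 0)
    return min(state + 1, 2)

def cftp(n_states=3, seed=0):
    """Exact sampling by coupling from the past, tracking all states.

    Runs coupled chains from every starting state, beginning at time
    -T, with the SAME random number used by all chains at each time
    step.  If the chains have not coalesced by time 0, T is doubled
    and the randomness for times -T..-1 is REUSED (this reuse is what
    makes the output an exact sample)."""
    rng = random.Random(seed)
    us = []          # us[k] is the random number used at time -(k+1)
    T = 1
    while True:
        while len(us) < T:      # extend randomness further into the past
            us.append(rng.random())
        states = list(range(n_states))   # one chain per possible state
        for t in range(T, 0, -1):        # simulate from time -T up to 0
            u = us[t - 1]                # same u for every chain at time -t
            states = [update(s, u) for s in states]
        if len(set(states)) == 1:        # all chains coalesced at time 0
            return states[0]             # exact draw from the stationary dist.
        T *= 2                           # not coalesced: start further back
```

For this toy chain the stationary distribution is uniform on {0, 1, 2}, and because every chain moves in the same direction at each step, coalescence happens quickly. The intractability the abstract points to is visible in the `states` list: for a belief network with *n* binary variables it would need 2^*n* entries, which is why the paper replaces it with a compact but imprecise summary of the set of states.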

In C. Boutilier and M. Goldszmidt (editors),
*Uncertainty in Artificial Intelligence: Proceedings
of the Sixteenth Conference (2000)*, pp. 256-263.

**Associated reference:**
This is a condensation of Mike Harvey's MSc thesis, *Monte Carlo
Inference for Belief Networks Using Coupling From the Past*.