## Improving Asymptotic Variance of MCMC Estimators:
Non-reversible Chains are Better

**Radford M. Neal,
Dept. of Statistics and Dept. of Computer Science, University of Toronto**

I show how any reversible Markov chain
on a finite state space that is irreducible, and hence suitable for
estimating expectations with respect to its invariant distribution,
can be used to construct a non-reversible Markov chain on a related
state space that can also be used to estimate these expectations, with
asymptotic variance at least as small as that using the reversible
chain (typically smaller). The non-reversible chain achieves this
improvement by avoiding (to the extent possible) transitions that
backtrack to the state from which the chain just came. The proof that
this modification cannot increase the asymptotic variance of an MCMC
estimator uses a new technique that can also be used to prove Peskun's
(1973) theorem that modifying a reversible chain to reduce the
probability of staying in the same state cannot increase asymptotic
variance. A non-reversible chain that avoids backtracking will often
take little or no more computation time per transition than the
original reversible chain, and can sometimes produce a large reduction
in asymptotic variance, though for other chains the improvement is
slight. In addition to being of some practical interest, this
construction demonstrates that non-reversible chains have a
fundamental advantage over reversible chains for MCMC estimation.
Research into better MCMC methods may therefore best be focused on
non-reversible chains.
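The flavour of the construction can be seen in a toy example (this is only an illustrative sketch, not the report's full construction): take the reversible random walk on a cycle of n states, and build a non-reversible analogue on the augmented space of (position, direction) pairs, in which the walker backtracks only with a small probability eps. The values of n and eps below are arbitrary illustrative choices. Both chains leave the uniform distribution invariant, but the augmented chain violates detailed balance:

```python
import numpy as np

n, eps = 5, 0.1  # cycle size and backtracking probability (illustrative choices)

# Reversible chain: simple random walk on a cycle of n states.
P = np.zeros((n, n))
for i in range(n):
    P[i, (i - 1) % n] = 0.5
    P[i, (i + 1) % n] = 0.5

# Non-reversible analogue on the augmented space (position, direction):
# index i encodes (i, +) and n + i encodes (i, -).  The walker keeps its
# direction with probability 1 - eps and backtracks (reversing direction)
# with probability eps.
Q = np.zeros((2 * n, 2 * n))
for i in range(n):
    Q[i, (i + 1) % n] = 1 - eps          # (i, +) -> (i+1, +): keep going
    Q[i, n + (i - 1) % n] = eps          # (i, +) -> (i-1, -): backtrack
    Q[n + i, n + (i - 1) % n] = 1 - eps  # (i, -) -> (i-1, -): keep going
    Q[n + i, (i + 1) % n] = eps          # (i, -) -> (i+1, +): backtrack

pi_P = np.full(n, 1 / n)          # uniform distribution on the cycle
pi_Q = np.full(2 * n, 1 / (2 * n))  # uniform on the augmented space

print(np.allclose(pi_P @ P, pi_P))  # True: uniform is invariant for P
print(np.allclose(pi_Q @ Q, pi_Q))  # True: uniform is invariant for Q too,
                                    # so the position marginal is unchanged
print(np.allclose(P, P.T))  # True: P satisfies detailed balance (reversible)
print(np.allclose(Q, Q.T))  # False: Q violates detailed balance (non-reversible)
```

Because the position marginal of the augmented chain's invariant distribution equals the original invariant distribution, averages of a function of position over the augmented chain still estimate the same expectations; the point of the report is that suppressing backtracking in this way cannot increase, and typically decreases, the asymptotic variance of such estimates.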

Technical Report No. 0406, Dept. of Statistics, University of Toronto
(July 2004), 25 pages.

Also available from arXiv.org.