
On Unbiased MCMC with couplings – Robin Ryder’s blog


Pierre Jacob, John O’Leary and Yves Atchadé’s excellent paper on Unbiased MCMC with couplings will be read at the Royal Statistical Society tomorrow; Pierre has already presented the paper on the Statisfaction blog.
Although we won’t be present tomorrow, we have read it at length in our local reading group with Xian Robert and PhD students Grégoire Clarté, Adrien Hairault and Caroline Lawless, and have submitted the following discussion.

We congratulate the authors on this excellent paper.

In “traditional” MCMC, it is common practice to check that stationarity has been attained by running a small number of parallel chains, initialized at different starting points, and verifying that the final distribution is independent of the initialization — although the one versus several chain(s) debate erupted from the start with Gelman and Rubin (1992) versus Geyer (1992).

As noted by the authors, a poor choice of initial distribution can lead to poor properties. In essence, this occurs and remains undetected in the present proposal because the coupling of the chains happens long before the chains reach stationarity. We would like to make two suggestions to alleviate this issue, and hence add a stationarity check as a byproduct of the run.

  1. The chains X and Y need to have the same initial distribution, but different pairs of chains on different parallel cores can afford different initial distributions. The resulting estimator remains unbiased. We would therefore suggest that parallel chains be initialized from distributions which put weight on different parts of the parameter space. Ideas from the Quasi-Monte Carlo literature (see Gerber & Chopin 2015) could be used here.
  2. We also note that although the marginal distributions of X and Y need to be identical, any joint distribution on (X,Y) produces an unbiased algorithm. We would suggest that it is preferable that X and Y meet (shortly) after the chains have reached stationarity. Here is one possible strategy to this end (a small sketch is given below the list): let $p$ and $p'$ be two distributions which put weight on different parts of the space, and $Z \sim \mathrm{Bernoulli}(1/2)$. If $Z=0$, take $X_0 \sim p$ and $Y_0 \sim p'$; otherwise take $X_0 \sim p'$ and $Y_0 \sim p$. The marginal distribution of both $X_0$ and $Y_0$ is $\frac{1}{2}(p+p')$, but the two chains will start in different parts of the parameter space and are likely to meet only after they have both reached stationarity.
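
As an illustration of the second suggestion, here is a minimal sketch in Python of the swapped initialization. The distributions `sample_p` and `sample_p_prime` are placeholders of our own choosing (two Gaussians centred in different regions); any coupled MCMC kernel could then be run from the resulting pair.

```python
# Minimal sketch of the swapped-initialization idea from suggestion 2.
# p and p' below are hypothetical initial distributions, chosen only to
# put weight on different parts of the parameter space.
import numpy as np

rng = np.random.default_rng(1)

def sample_p():
    # placeholder for p, concentrated around -10
    return rng.normal(loc=-10.0, scale=1.0)

def sample_p_prime():
    # placeholder for p', concentrated around +10
    return rng.normal(loc=+10.0, scale=1.0)

def coupled_initialization():
    """Draw (X_0, Y_0) so that each marginal is (1/2)(p + p'),
    while the two chains start in different parts of the space."""
    z = rng.integers(2)  # Z ~ Bernoulli(1/2)
    if z == 0:
        x0, y0 = sample_p(), sample_p_prime()
    else:
        x0, y0 = sample_p_prime(), sample_p()
    return x0, y0

# The two starting points end up far apart, so the coupled chains should
# only meet after both have moved towards the stationary distribution.
x0, y0 = coupled_initialization()
print(x0, y0)
```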

The ideal algorithm is one which gives a correct answer when it has converged, and a warning or error when it has not. MCMC chains which have not yet reached stationarity (for example because they have not found all the modes of a multimodal distribution) can be hard to detect. Here, this issue is more likely to be detected since it would lead to the coupling not occurring: $\mathbb{E}[\tau]$ is large, and this is a feature, as it warns the practitioner that their kernel is ill-fitted to the target density.
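
To make this diagnostic concrete, here is a small sketch, under the assumption that meeting times $\tau$ from several independent coupled pairs are available; the warning threshold is arbitrary and would need to be tuned to the problem at hand.

```python
# Rough stationarity check based on observed meeting times from
# independent coupled pairs; the threshold below is an arbitrary choice.
import warnings
import numpy as np

def check_meeting_times(meeting_times, threshold=1000):
    """Return the average meeting time and warn if it is suspiciously
    large, suggesting the kernel or the initialization is ill-fitted
    to the target density."""
    tau_hat = np.mean(meeting_times)
    if tau_hat > threshold:
        warnings.warn(
            f"Average meeting time {tau_hat:.0f} exceeds {threshold}; "
            "the chains may not have reached stationarity."
        )
    return tau_hat

# Example with hypothetical meeting times from five coupled pairs.
print(check_meeting_times([1200, 950, 1500, 1100, 4000]))
```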
