Markov Chain
In the setup of MCMC algorithms, we construct Markov chains from a transition kernel $K$, a conditional probability density such that $X_{n+1} \sim K(X_n, \cdot)$.
The chains encountered in MCMC settings enjoy a very strong stability property, namely a stationary probability distribution; that is, a distribution $f$ such that if $X_n \sim f$, then $X_{n+1} \sim f$, provided the kernel $K$ allows for free moves all over the state space. This freedom is called irreducibility in the theory of Markov chains and is formalized as the existence of $n \in \mathbb{N}$ such that $P(X_n \in A \mid X_0) > 0$ for every set $A$ with $f(A) > 0$. This property also ensures that most of the chains involved in MCMC algorithms are recurrent (that is, the average number of visits to an arbitrary set $A$ is infinite), or even Harris recurrent (that is, the probability of an infinite number of returns to $A$ is 1).
Harris recurrence ensures that the chain has the same limiting behavior for every starting value, rather than for almost every starting value.
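Written out explicitly (a standard formulation matching the notation above, with $f$ the stationary density and $K$ the kernel density):

$$
\int_{\mathcal{X}} K(x, y)\, f(x)\,\mathrm{d}x = f(y) \quad \text{for all } y \qquad \text{(stationarity)},
$$

$$
\forall A \text{ with } \int_A f(x)\,\mathrm{d}x > 0, \ \exists\, n \in \mathbb{N}: \ P(X_n \in A \mid X_0) > 0 \qquad \text{(irreducibility)}.
$$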
The stationary distribution is also a limiting distribution, in the sense that the limiting distribution of $X_{n+1}$ is $f$ under the total variation norm, notwithstanding the initial value of $X_0$.
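In total variation terms (a standard statement, with $K^{n}(x, \cdot)$ denoting the $n$-step kernel):

$$
\lim_{n \to \infty} \big\| K^{n}(x, \cdot) - f \big\|_{TV} = 0
$$

for every (or, without Harris recurrence, for almost every) starting value $x$.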
Strong forms of convergence are also encountered in MCMC settings, such as geometric and uniform convergence.
If the marginals are proper, convergence only requires the chain to be aperiodic. A sufficient condition is that $K(x_n, \cdot) > 0$ (or, equivalently, that $f > 0$) in a neighborhood of $x_n$.
If the marginals are not proper, or if they do not exist, then the chain is not positive recurrent; it is either null recurrent or transient, and both cases are bad for MCMC purposes.
The detailed balance condition is not necessary for $f$ to be a stationary measure associated with the transition kernel $K$, but it provides a sufficient condition that is often easy to check and that can be used for most MCMC algorithms.
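For reference, the detailed balance condition reads, in the kernel-density notation used above,

$$
f(x)\,K(x, y) = f(y)\,K(y, x) \quad \text{for all } x, y ;
$$

integrating both sides in $x$ gives $\int f(x)\,K(x, y)\,\mathrm{d}x = f(y)$, so $f$ is indeed stationary for $K$.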
Ergodicity: independence of initial conditions
- uniform ergodicity: stronger than geometric ergodicity in the sense that the rate of geometric convergence must be uniform over the whole space.
Irreducibility + Aperiodicity = Ergodicity? For a positive Harris recurrent chain, irreducibility and aperiodicity are indeed sufficient for ergodicity.
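In terms of total variation bounds (standard definitions, stated here for reference rather than quoted from the source):

$$
\big\| K^{n}(x, \cdot) - f \big\|_{TV} \le C(x)\,\rho^{\,n} \ \ \text{(geometric ergodicity)},
\qquad
\big\| K^{n}(x, \cdot) - f \big\|_{TV} \le C\,\rho^{\,n} \ \ \text{(uniform ergodicity)},
$$

with $\rho < 1$; in the uniform case, the constant $C$ does not depend on the starting value $x$.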



The finite Bernoulli-Laplace chain (two urns each containing $K$ balls, $K$ of the $2K$ balls being white, with $X_n$ the number of white balls in the first urn) is indeed irreducible, since it is possible to connect any two states $x$ and $y$ in $|x - y|$ steps with positive probability. The Bernoulli-Laplace chain is aperiodic, and even strongly aperiodic, since the diagonal terms of the transition matrix satisfy $P_{x,x} > 0$ for every $x$.
Given the quasi-diagonal shape of the transition matrix, it is possible to directly determine the invariant distribution $\pi$. From the stationarity equation $\pi P = \pi$, which for this birth-and-death structure reduces to $\pi(x)\,P_{x,x+1} = \pi(x+1)\,P_{x+1,x}$ with $P_{x,x+1} = \big((K-x)/K\big)^2$ and $P_{x+1,x} = \big((x+1)/K\big)^2$, we obtain

$$
\pi(x+1) = \left(\frac{K-x}{x+1}\right)^{2} \pi(x).
$$

Thus, $\pi(x) = \binom{K}{x}^{2}\,\pi(0)$ and, through normalization, $\pi(x) = \binom{K}{x}^{2} \Big/ \binom{2K}{K}$, with $\sum_{x=0}^{K} \binom{K}{x}^{2} = \binom{2K}{K}$. It turns out that the hypergeometric distribution $\mathcal{H}(2K, K, 1/2)$ is the invariant distribution for the Bernoulli-Laplace model.
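As a quick numerical check (a minimal sketch assuming the standard Bernoulli-Laplace transition probabilities stated above, with an illustrative value of $K$), the empirical occupation frequencies of a long simulated path can be compared with the hypergeometric weights $\binom{K}{x}^2 / \binom{2K}{K}$:

```python
import numpy as np
from math import comb

K = 10                      # balls per urn (illustrative value)
n_steps = 200_000
rng = np.random.default_rng(0)

x = K // 2                  # start with half of the white balls in urn 1
counts = np.zeros(K + 1)

for _ in range(n_steps):
    u = rng.random()
    p_down = (x / K) ** 2            # swap moves a white ball out of urn 1
    p_up = ((K - x) / K) ** 2        # swap moves a white ball into urn 1
    if u < p_down:
        x -= 1
    elif u < p_down + p_up:
        x += 1
    # with the remaining probability 2*x*(K - x)/K**2 the state is unchanged
    counts[x] += 1

empirical = counts / n_steps
hypergeometric = np.array([comb(K, j) ** 2 for j in range(K + 1)]) / comb(2 * K, K)
print(np.round(empirical, 4))
print(np.round(hypergeometric, 4))
```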
A simple illustration of a Markov chain on a continuous state space is the AR(1) model

$$
X_n = \theta X_{n-1} + \varepsilon_n ,
$$

with $\theta \in \mathbb{R}$, and if the $\varepsilon_n$'s are independent, $X_n$ is indeed independent of $X_{n-2}, X_{n-3}, \dots$ conditionally on $X_{n-1}$.
- The Markovian properties of an AR(q) process can be derived from the vector of its $q$ most recent values (see the companion-form sketch after this list).
- ARMA(p, q) doesn't fit in the Markovian framework.
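One standard way to see the first point above (the usual companion-form construction, sketched here rather than quoted from the source): stack the $q$ most recent values,

$$
Z_n = \begin{pmatrix} X_n \\ X_{n-1} \\ \vdots \\ X_{n-q+1} \end{pmatrix},
\qquad
Z_n =
\begin{pmatrix}
\theta_1 & \theta_2 & \cdots & \theta_{q-1} & \theta_q \\
1 & 0 & \cdots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & 1 & 0
\end{pmatrix}
Z_{n-1}
+
\begin{pmatrix} \varepsilon_n \\ 0 \\ \vdots \\ 0 \end{pmatrix},
$$

so that $(Z_n)$ is a Markov chain on $\mathbb{R}^q$ even though $(X_n)$ alone is not.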
Since the $\varepsilon_n$'s are i.i.d. $\mathcal{N}(0, \sigma^2)$, the transition kernel of the AR(1) chain is the $\mathcal{N}(\theta x_{n-1}, \sigma^2)$ density, which is strictly positive on all of $\mathbb{R}$; in particular, its values for $x_{n-1}$ in a compact set $[-\omega, \omega]$ are bounded from below by a positive quantity, so the chain is irreducible (with respect to Lebesgue measure) and aperiodic.
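One way to make this lower bound explicit (an illustrative computation, with $\omega > 0$ an arbitrary cutoff): for every $x \in [-\omega, \omega]$,

$$
K(x, y) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(y - \theta x)^2}{2\sigma^2}\right)
\;\ge\;
\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(|y| + |\theta|\,\omega)^2}{2\sigma^2}\right) > 0,
$$

so every set of positive Lebesgue measure can be reached in one step from anywhere in $[-\omega, \omega]$, and such compact intervals act as small sets for the chain.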
Given that the transition kernel corresponds to the $\mathcal{N}(\theta x, \sigma^2)$ distribution, a normal distribution $\mathcal{N}(\mu, \tau^2)$ is stationary for the AR(1) chain only if

$$
\mu = \theta \mu \qquad \text{and} \qquad \tau^2 = \theta^2 \tau^2 + \sigma^2 .
$$

These conditions imply that $\mu = 0$ and $\tau^2 = \sigma^2 / (1 - \theta^2)$, which can only occur for $|\theta| < 1$. In this case, $\mathcal{N}\!\big(0, \sigma^2 / (1 - \theta^2)\big)$ is indeed the unique stationary distribution of the AR(1) chain.
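A quick simulation check of this fixed-point argument (a minimal sketch with illustrative values of $\theta$ and $\sigma$):

```python
import numpy as np

theta, sigma = 0.8, 1.0              # illustrative values with |theta| < 1
n_steps, burn_in = 500_000, 1_000
rng = np.random.default_rng(1)

x = np.empty(n_steps)
x[0] = 0.0
for n in range(1, n_steps):
    x[n] = theta * x[n - 1] + sigma * rng.standard_normal()

print(x[burn_in:].mean())            # close to the stationary mean 0
print(x[burn_in:].var())             # close to sigma**2 / (1 - theta**2)
print(sigma**2 / (1 - theta**2))     # ~ 2.78 for these values
```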

For a branching process $X_{n+1} = \sum_{i=1}^{X_n} \xi_i$, where the $\xi_i$ are i.i.d. offspring counts with $p_k = P(\xi = k)$ and probability generating function $\phi(s) = E\big[s^{\xi}\big]$:

- If the offspring distribution doesn't have a constant term, i.e., if $p_0 = 0$, then the chain $(X_n)$ is necessarily transient since it is nondecreasing.
- If $p_0 > 0$, the probability of a return to 0 by time $n$ (starting from $X_0 = 1$) is $u_n = P(X_n = 0 \mid X_0 = 1)$, which thus satisfies the recurrence equation $u_{n+1} = \phi(u_n)$. There exists a limit $\rho$ different from 1, such that $\rho = \phi(\rho)$, iff $\phi'(1) > 1$; namely, iff $E[\xi] > 1$. The chain is thus transient when the average number of offspring per individual is larger than 1. If there exists a restarting mechanism at 0, say $X_{n+1} = 1$ whenever $X_n = 0$, it is easily shown that, when $\phi'(1) > 1$, the number of returns to 0 follows a geometric distribution with parameter $\rho$ (each new excursion from 1 returning to 0 with probability $\rho$); a small simulation sketch follows this list.

- If $\phi'(1) \le 1$, the chain is recurrent, since the return to 0 is then certain ($\rho = 1$).
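A small simulation sketch of the supercritical case (an illustrative Poisson offspring distribution, not one specified in the original): the fraction of lineages that die out should match the smallest fixed point $\rho$ of the offspring pgf, and with a restart at 0 each new excursion would again return to 0 with probability $\rho$.

```python
import numpy as np

m = 1.5                      # mean offspring number (supercritical since m > 1)
n_chains, max_gen = 20_000, 200
rng = np.random.default_rng(2)

extinct = 0
for _ in range(n_chains):
    x = 1                    # start each lineage from a single individual
    for _ in range(max_gen):
        if x == 0:
            extinct += 1
            break
        if x > 1_000_000:    # a population this large essentially never dies out
            break
        x = rng.poisson(m * x)   # sum of x i.i.d. Poisson(m) offspring counts

print("simulated extinction probability:", extinct / n_chains)

# smallest fixed point of the Poisson pgf  phi(s) = exp(m * (s - 1))
s = 0.0
for _ in range(1_000):
    s = np.exp(m * (s - 1.0))
print("fixed point rho of the pgf:      ", s)
```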