
RJMCMC


Bayesian Model Choice

Green's algorithm
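The change-point moves below all instantiate one generic acceptance rule; as a reminder of its standard form from Green (1995) (the notation in this paragraph is introduced only for the reminder): a move of type $m$ maps the current state $x$ and random numbers $u\sim q_m$ to a new state $x'$, with $u'$ the random numbers needed for the reverse move, and is accepted with probability

$$\alpha(x,x') = \min\left\{1,\ \frac{\pi(x'\mid y)\,j_m(x')\,q_m'(u')}{\pi(x\mid y)\,j_m(x)\,q_m(u)}\,\left|\frac{\partial(x',u')}{\partial(x,u)}\right|\right\}\,,$$

where $j_m(x)$ is the probability of choosing move type $m$ in state $x$ and the last factor is the Jacobian of the deterministic mapping $(x,u)\mapsto(x',u')$.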

Fixed Dimension Reassessment

RJMCMC for change points

Consider a point process on $[0,L]$ whose intensity $x(t)$ is a step function with change points $0=s_0<s_1<\cdots<s_k<s_{k+1}=L$ and heights $h_0,h_1,\ldots,h_k$, so that $x(t)=h_j$ for $t\in[s_j,s_{j+1})$. The log-likelihood function is

$$\begin{aligned} \log p(y\mid x) &= \sum_{i=1}^n\log\{x(y_i)\}-\int_0^Lx(t)\,dt\\ &= \sum_{j=0}^km_j\log h_j-\sum_{j=0}^kh_j(s_{j+1}-s_j)\,, \end{aligned}$$

where $m_j=\#\{y_i\in[s_j,s_{j+1})\}$.
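As a concrete companion to the notation, here is a minimal Python/NumPy sketch of this log-likelihood; the function name and argument layout are choices made for this illustration only.

```python
import numpy as np

def log_likelihood(y, s, h, L):
    """Log-likelihood of a step-function intensity on [0, L].

    y -- array of observed event times in [0, L]
    s -- array of change points s_1 < ... < s_k (may be empty)
    h -- array of heights h_0, ..., h_k (len(s) + 1 entries)
    """
    edges = np.concatenate(([0.0], s, [L]))   # s_0 = 0, s_{k+1} = L
    m, _ = np.histogram(y, bins=edges)        # m_j = #{y_i in [s_j, s_{j+1})}
    return np.sum(m * np.log(h)) - np.sum(h * np.diff(edges))
```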

H Move

  1. Choose one of $h_0,h_1,\ldots,h_k$ at random, obtaining $h_j$.

  2. Propose a change to $h_j'$ such that $\log(h_j'/h_j)\sim U[-\frac 12,\frac 12]$.

  3. The acceptance probability is

     $$\begin{aligned} \alpha(h_j, h_j') &=\frac{p(h_j'\mid y)}{p(h_j\mid y)}\times \frac{J(h_j\mid h_j')}{J(h_j'\mid h_j)}\\ &= \frac{p(y\mid h_j')}{p(y\mid h_j)}\times\frac{\pi(h_j')}{\pi(h_j)}\times\frac{J(h_j\mid h_j')}{J(h_j'\mid h_j)}\,. \end{aligned}$$

     Note that

     $$\log\frac{h_j'}{h_j}= u\,,\qquad u\sim U[-\tfrac 12,\tfrac 12]\,,$$

     so the CDF of $h_j'$ is

     $$F(x) = P(h_j'\le x) = P(e^uh_j\le x) = P(u\le \log(x/h_j)) = \log(x/h_j) + \tfrac 12$$

     and

     $$f(x) = F'(x) = \frac{1}{x}\,,$$

     so

     $$J(h_j'\mid h_j) = \frac{1}{h_j'}\quad\text{and, by the same argument,}\quad J(h_j\mid h_j') = \frac{1}{h_j}\,.$$

     With independent $\Gamma(\alpha,\beta)$ priors on the heights, $\pi(h)\propto h^{\alpha-1}e^{-\beta h}$, the prior ratio and the proposal ratio $h_j'/h_j$ combine to give

     $$\begin{aligned} \alpha(h_j,h_j') &= \frac{p(y\mid h_j')}{p(y\mid h_j)}\times \frac{(h_j')^{\alpha}\exp(-\beta h_j')}{h_j^\alpha\exp(-\beta h_j)}\\ &=\text{likelihood ratio}\times (h_j'/h_j)^\alpha\exp\{-\beta(h_j'-h_j)\}\,. \end{aligned}$$
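A minimal sketch of one H move under the acceptance ratio above, assuming the hypothetical `log_likelihood` helper from the previous sketch and a `numpy.random.Generator` named `rng`:

```python
def h_move(y, s, h, L, alpha, beta, rng):
    """Propose and accept/reject one height update; returns the new h array."""
    j = rng.integers(len(h))                  # pick one of h_0, ..., h_k at random
    u = rng.uniform(-0.5, 0.5)                # log(h_j'/h_j) = u ~ U[-1/2, 1/2]
    h_new = h.copy()
    h_new[j] = h[j] * np.exp(u)
    # log of: likelihood ratio * (h_j'/h_j)^alpha * exp{-beta (h_j' - h_j)}
    log_acc = (log_likelihood(y, s, h_new, L) - log_likelihood(y, s, h, L)
               + alpha * u - beta * (h_new[j] - h[j]))
    return h_new if np.log(rng.uniform()) < log_acc else h
```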

P Move

  1. Draw one of $s_1,s_2,\ldots,s_k$ at random, obtaining say $s_j$.

  2. Propose $s_j'\sim U[s_{j-1}, s_{j+1}]$.

  3. The acceptance probability is

     $$\begin{aligned} \alpha(s_j,s_j') &=\frac{p(y\mid s_j')}{p(y\mid s_j)}\times \frac{\pi(s_j')}{\pi(s_j)}\times \frac{J(s_j\mid s_j')}{J(s_j'\mid s_j)}\\ &=\text{likelihood ratio}\times \frac{(s_{j+1}-s_j')(s_j'-s_{j-1})}{(s_{j+1}-s_j)(s_j-s_{j-1})}\,, \end{aligned}$$

     since the proposal is symmetric ($s_j$ and $s_j'$ are both uniform on $[s_{j-1},s_{j+1}]$, so the $J$ ratio is 1) and the prior on the positions is

     $$\pi(s_1,s_2,\ldots,s_k)=\frac{(2k+1)!}{L^{2k+1}}\prod_{j=0}^{k}(s_{j+1}-s_j)\,.$$
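And a corresponding sketch of one P move, under the same assumptions as the previous sketches:

```python
def p_move(y, s, h, L, rng):
    """Propose and accept/reject one position update; returns the new s array."""
    k = len(s)
    if k == 0:
        return s                              # no change point to move
    j = rng.integers(k)                       # pick one of s_1, ..., s_k at random
    left = s[j - 1] if j > 0 else 0.0         # s_{j-1}, with s_0 = 0
    right = s[j + 1] if j < k - 1 else L      # s_{j+1}, with s_{k+1} = L
    s_new = s.copy()
    s_new[j] = rng.uniform(left, right)       # s_j' ~ U[s_{j-1}, s_{j+1}]
    # log of: likelihood ratio * (s_{j+1}-s_j')(s_j'-s_{j-1}) / ((s_{j+1}-s_j)(s_j-s_{j-1}))
    log_acc = (log_likelihood(y, s_new, h, L) - log_likelihood(y, s, h, L)
               + np.log((right - s_new[j]) * (s_new[j] - left))
               - np.log((right - s[j]) * (s[j] - left)))
    return s_new if np.log(rng.uniform()) < log_acc else s
```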

Birth Move

Choose a position $s^*$ uniformly distributed on $[0,L]$; with probability 1 it lies within an existing interval $(s_j,s_{j+1})$. Propose new heights $h_j', h_{j+1}'$ for the step function on the subintervals $(s_j,s^*)$ and $(s^*,s_{j+1})$, using a weighted geometric mean for this compromise,

$$(s^*-s_j)\log(h_j') + (s_{j+1}-s^*)\log(h_{j+1}')=(s_{j+1}-s_j)\log(h_j)\,,$$

and define the perturbation to be such that

$$\frac{h_{j+1}'}{h_j'}=\frac{1-u}{u}\,,$$

with $u$ drawn uniformly from $[0,1]$.

The prior ratio becomes

$$\frac{p(k+1)}{p(k)}\,\frac{2(k+1)(2k+3)}{L^2}\,\frac{(s^*-s_j)(s_{j+1}-s^*)}{s_{j+1}-s_j}\times \frac{\beta^\alpha}{\Gamma(\alpha)}\Big(\frac{h_j'h_{j+1}'}{h_j}\Big)^{\alpha-1}\exp\{-\beta(h_j'+h_{j+1}'-h_j)\}\,,$$

the proposal ratio becomes

$$\frac{d_{k+1}L}{b_k(k+1)}\,,$$

where $b_k$ and $d_k$ denote the probabilities of proposing a birth and a death move when there are $k$ change points, and the Jacobian is

$$\frac{(h_j'+h_{j+1}')^2}{h_j}\,.$$
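Combining these three factors with the likelihood ratio (this is simply the product of the terms above, written out in one line), the birth proposal is accepted with probability

$$\alpha_{\text{birth}} = \min\Bigg\{1,\ \text{likelihood ratio}\times\frac{p(k+1)}{p(k)}\,\frac{2(k+1)(2k+3)}{L^2}\,\frac{(s^*-s_j)(s_{j+1}-s^*)}{s_{j+1}-s_j}\times\frac{\beta^\alpha}{\Gamma(\alpha)}\Big(\frac{h_j'h_{j+1}'}{h_j}\Big)^{\alpha-1}e^{-\beta(h_j'+h_{j+1}'-h_j)}\times\frac{d_{k+1}L}{b_k(k+1)}\times\frac{(h_j'+h_{j+1}')^2}{h_j}\Bigg\}\,.$$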

Death Move

The acceptance probability for the corresponding death step has the same form, with the appropriate relabelling of the variables and the ratio terms inverted. If $s_{j+1}$ is removed, the new height over the merged interval $(s_j',s_{j+1}')=(s_j,s_{j+2})$ is $h_j'$, the weighted geometric mean satisfying

$$(s_{j+1}-s_j)\log(h_j) + (s_{j+2}-s_{j+1})\log(h_{j+1}) = (s_{j+1}'-s_j')\log(h_j')\,.$$
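Solving this constraint for the replacement height gives it explicitly as the length-weighted geometric mean of the two heights being merged:

$$h_j' = \Big(h_j^{\,s_{j+1}-s_j}\; h_{j+1}^{\,s_{j+2}-s_{j+1}}\Big)^{1/(s_{j+2}-s_j)}\,.$$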

Results

Figure: histogram of the number of change points.

Figure: density of the change-point positions conditional on the number of change points $k$, shown for $k=1,2,3$ (and similarly for the heights).

References


Green, P. J. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82(4), 711–732.
Sisson, S. A. (2005). Transdimensional Markov Chains: A Decade of Progress and Future Perspectives. Journal of the American Statistical Association, 100(471), 1077–1089.
Peter Green's Fortran program AutoRJ
David Hastie's C program AutoMix
Ai Jialin, Reversible Jump Markov Chain Monte Carlo Methods. MSc thesis, University of Leeds, Department of Statistics, 2011/12.