Population-Based MCMC (PBMCMC)
This note is mainly based on Chapter 12 of Liu (2008).

Adaptive Direction Sampling (ADS): Snooker Algorithm

Conjugate Gradient Monte Carlo (CGMC)

where MTM refers to the multiple-try Metropolis algorithm, and its simplified version is called orientational bias Monte Carlo (OBMC) when $w(\mathbf x, \mathbf y)=\pi(\mathbf x)$.
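Concretely, in MTM with weight function $w(\mathbf x, \mathbf y)$ one draws $k$ trials $\mathbf y_1,\dots,\mathbf y_k$ from the proposal at $\mathbf x$, selects one $\mathbf y$ among them with probability proportional to $w(\mathbf y_j, \mathbf x)$, draws $k-1$ reference points $\mathbf x_1^*,\dots,\mathbf x_{k-1}^*$ from the proposal at $\mathbf y$ (and sets $\mathbf x_k^* = \mathbf x$), then accepts $\mathbf y$ with probability

$$
\rho = \min\left\{1,\; \frac{w(\mathbf y_1, \mathbf x) + \cdots + w(\mathbf y_k, \mathbf x)}{w(\mathbf x_1^*, \mathbf y) + \cdots + w(\mathbf x_k^*, \mathbf y)}\right\}.
$$

This is the standard MTM recipe from Liu (2008); the `MTM` function later on this page computes exactly this ratio with $w(\mathbf x, \mathbf y)=\pi(\mathbf x)$, i.e. the OBMC special case.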

Bimodal Example

The target density is a mixture of bivariate normals whose contour plot shows widely separated modes.
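The definition of `targetpdf` lives in bimodal-example.jl and is not shown on this page. Below is a plausible reconstruction inferred from the constants in `dm`; the mixture weights `w` and the common covariance $\Sigma = \begin{pmatrix}1 & 0.9\\ 0.9 & 1\end{pmatrix}$ (whose determinant $0.19$ matches the normalizing constants in `dm`) are assumptions, not taken from the note:

```julia
using LinearAlgebra

# Assumed parameters (not shown in the note): equal weights and a common
# covariance with determinant 0.19 for the two shifted components.
w = [1/3, 1/3, 1/3]
invΣ2 = inv([1.0 0.9; 0.9 1.0])
invΣ3 = invΣ2

# mixture of three bivariate normals with modes at (0,0), (-6,-6), (4,4);
# the 2π*0.19 factor mirrors the constant used in dm
function targetpdf(x)
    t1 = w[1] / 2π * exp(-0.5 * sum(x .^ 2))
    t2 = w[2] / (2π * 0.19) * exp(-0.5 * transpose(x .+ 6) * invΣ2 * (x .+ 6))
    t3 = w[3] / (2π * 0.19) * exp(-0.5 * transpose(x .- 4) * invΣ3 * (x .- 4))
    return t1 + t2 + t3
end
```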
For the general MH procedure:
```julia
function propose(x; a = 4)
    # symmetric random-walk proposal: jump a random distance (up to a)
    # in a uniformly random direction
    θ = rand() * 2π
    r = rand() * a
    return [x[1] + r*cos(θ), x[2] + r*sin(θ)]
end

function iMH(M = 200000)
    # start point
    x = [rand() - 0.5, rand() - 0.5]
    X = ones(M, 2)
    num = 0
    for m = 1:M
        xs = propose(x)
        ρ = targetpdf(xs) / targetpdf(x)
        if rand() < ρ
            x = xs
            num += 1
        end
        X[m, :] = x
    end
    return X, num / M
end
```
For CGMC, there is more machinery to build. First, compute the gradient of the target pdf and use it to locate the anchor point (a local mode) by gradient ascent:
```julia
# gradient ascent on targetpdf towards a local mode (the anchor point);
# w, invΣ2, invΣ3 and targetpdf are defined in bimodal-example.jl
function dm(x; step = 0.05, nmaxstep = 100)
    term1 = w[1] / 2π * exp(-0.5 * sum(x .^ 2))
    term2 = w[2] / (2π * 0.19) * exp(-0.5 * transpose(x .+ 6) * invΣ2 * (x .+ 6))
    term3 = w[3] / (2π * 0.19) * exp(-0.5 * transpose(x .- 4) * invΣ3 * (x .- 4))
    # negative gradient of targetpdf (the minus sign is already removed,
    # so x .- step * dx moves uphill)
    dx = term1 * x .+ term2 * (invΣ2 * (x .+ 6)) .+ term3 * (invΣ3 * (x .- 4))
    # step along this fixed direction while the density keeps increasing
    bst = targetpdf(x)
    i = 0
    while i < nmaxstep
        i += 1
        cur = targetpdf(x .- step * dx)
        if cur > bst
            bst = cur
            x = x .- step * dx
        else
            break
        end
    end
    return x
end
```
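As a sanity check on the anchor-finding step, the same scheme can be run on a standalone density with a known mode; the names `pdf2` and `ascend` below are illustrative, not from the note, and a standard bivariate normal (unique mode at the origin) stands in for the target:

```julia
# toy density with a unique mode at the origin (illustrative stand-in)
pdf2(x) = exp(-0.5 * sum(x .^ 2)) / 2π

# same scheme as dm: one gradient evaluation, then repeated steps
# along that fixed direction while the density keeps increasing
function ascend(x; step = 0.05, nmaxstep = 100)
    dx = pdf2(x) * x          # -∇pdf2(x), so x .- step * dx moves uphill
    bst = pdf2(x)
    for _ in 1:nmaxstep
        cur = pdf2(x .- step * dx)
        cur > bst || break
        bst = cur
        x = x .- step * dx
    end
    return x
end
```

Starting from any point, the returned point has a strictly higher density and lies closer to the mode, which is all `cgmc` needs from the anchor step.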
Then implement the MTM algorithm for sampling the radius $r$:
```julia
using Distributions, StatsBase

# sample the radius r along `direction` from `anchor` via MTM
function MTM(anchor, direction; M = 100, k = 5, σ = 10)
    # use the simplified version: OBMC algorithm
    # initial r
    x = 2 * rand() - 1
    num = 0
    for m = 1:M
        # draw k trial radii from a symmetric Gaussian proposal
        y = ones(k)
        wy = ones(k)
        for i = 1:k
            y[i] = rand(Normal(x, σ))
            wy[i] = f(y[i], anchor, direction)
        end
        # select one trial with probability proportional to its weight
        yl = sample(y, pweights(wy))
        # draw reference points
        xprime = ones(k)
        xprime[k] = x
        wxprime = ones(k)
        wxprime[k] = f(x, anchor, direction)
        for i = 1:k-1
            xprime[i] = rand(Normal(yl, σ))
            wxprime[i] = f(xprime[i], anchor, direction)
        end
        # accept or not
        ρ = sum(wy) / sum(wxprime)
        if rand() < ρ
            x = yl
            num += 1
        end
    end
    return x, num / M
end
```
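The weight function `f` used inside `MTM` is defined in bimodal-example.jl and not shown here; presumably it evaluates the target density along the search line through the anchor. A sketch of that assumption, with a standard bivariate normal standing in for the actual `targetpdf`:

```julia
# hypothetical stand-in for the real targetpdf from bimodal-example.jl
targetpdf(x) = exp(-0.5 * sum(x .^ 2)) / 2π

# density of the target along the ray anchor + r * direction,
# which is the OBMC weight w(x, y) = π(x) restricted to the line
f(r, anchor, direction) = targetpdf(anchor .+ r .* direction)
```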
Finally, we can combine these sub-procedures:
```julia
using LinearAlgebra, StatsBase

function cgmc(M = 200000)
    # start points (each column is a stream)
    x = rand(2, 2) .- 0.5
    X = ones(M, 4)
    for m = 1:M
        # randomly choose the anchor stream xa
        xaIdx = sample([1, 2])
        xa = x[:, xaIdx]
        # gradient ascent from xa to a local mode y
        # (or conjugate gradient of the log pdf at xa)
        y = dm(xa)
        xcIdx = 3 - xaIdx
        xc = x[:, xcIdx]
        # unit direction from the current point xc towards the mode y
        e = y - xc
        e = e / norm(e)
        # sample the radius r along this direction
        r, = MTM(y, e)
        # update xc
        x[:, xcIdx] = y .+ r * e
        X[m, :] .= vec(x)
    end
    return X
end
```
Refer to bimodal-example.jl for the complete source code. Running the program, I can reproduce Fig. 11.1 successfully.

Evolutionary Monte Carlo (EMC)
