The algorithm produces a uniformly ergodic chain (Theorem), and the expected acceptance probability associated with the algorithm is at least $1/M$ when the chain is stationary; in that sense, the independent Metropolis-Hastings (IMH) algorithm is more efficient than the Accept-Reject algorithm.
Let's illustrate this algorithm with $Ga(\alpha, 1)$. We have introduced how to sample from the Gamma distribution via the Accept-Reject algorithm in Special Distributions, where the candidate is $Ga(a, b)$ with $a = \lfloor\alpha\rfloor$ and $b = a/\alpha$, and it is straightforward to get the acceptance ratio of the Gamma Metropolis-Hastings from $f/g$:

$$\rho(x_t, y_t) = \min\left\{\left(\frac{y_t}{x_t}\,e^{(x_t - y_t)/\alpha}\right)^{\alpha - a},\; 1\right\}.$$

We can implement this algorithm with the following Julia code:
function mh_gamma(T = 100, alpha = 1.5)
    a = Int(floor(alpha))
    b = a / alpha
    x = ones(T + 1) # initial value: 1
    for t = 1:T
        # candidate from the instrumental distribution Ga(a, a/alpha)
        yt = rgamma_int(a, b)
        # acceptance ratio f(yt)g(x[t]) / (f(x[t])g(yt))
        rt = (yt / x[t] * exp((x[t] - yt) / alpha))^(alpha - a)
        if rt >= 1
            x[t+1] = yt
        else
            u = rand()
            if u < rt
                x[t+1] = yt
            else
                x[t+1] = x[t]
            end
        end
    end
    return x
end
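The candidate generator `rgamma_int` was defined in the Special Distributions post; here is a minimal sketch under the assumption that it draws from $Ga(a, b)$ with integer shape $a$ and rate $b$ as a sum of exponential variables:

```julia
# Hypothetical re-implementation: Ga(a, b) with integer shape a and rate b
# is the sum of a independent Exp(b) variables, each generated as -log(U)/b.
rgamma_int(a::Int, b::Real) = -sum(log.(rand(a))) / b
```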
To illustrate, we sample from $Ga(2.43, 1)$ and estimate $E_f(X^2) = 2.43 + 2.43^2 = 8.33$.
# comparison with accept-reject
res = mh_gamma(5000, 2.43)[2:end]
est = cumsum(res.^2) ./ collect(1:5000)
res2 = ones(5000)
for i = 1:5000
    res2[i] = rgamma(2.43, 1)
end
est2 = cumsum(res2.^2) ./ collect(1:5000)
using Plots
plot(est, label="Independent MH")
plot!(est2, label="Accept-Reject")
hline!([8.33], label="True value")
Logistic Regression
We observe $(x_i, y_i), i = 1, \ldots, n$ according to the logistic model

$$P(Y_i = 1 \mid x_i) = p(x_i) = \frac{\exp(\alpha + \beta x_i)}{1 + \exp(\alpha + \beta x_i)}.$$
Take an exponential prior with mean $b$ on $e^\alpha$ (and a flat prior on $\beta$), so that $E[\alpha] = \log b - \gamma$, where $\gamma$ is the Euler–Mascheroni constant. Choose the data-dependent value of $b$ that makes $E[\alpha] = \hat\alpha$, where $\hat\alpha$ is the MLE of $\alpha$; then $\hat b = \exp(\hat\alpha + \gamma)$.
We can use maximum likelihood to estimate the coefficients of the logistic model (see my post for the derivation), and Julia can fit the model quickly.
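One possible way to fit the model is with the GLM.jl package (an assumption on my part; the original fitting code is not shown), with `temp` and `failure` denoting the covariate and binary response vectors:

```julia
# Sketch: logistic regression fit via GLM.jl (hypothetical reconstruction;
# temp and failure are the observed data vectors).
using DataFrames, GLM

df = DataFrame(temp = temp, failure = failure)
logit_fit = glm(@formula(failure ~ temp), df, Binomial(), LogitLink())
coef(logit_fit)      # [α̂, β̂]
stderror(logit_fit)  # standard errors, including σ̂_β
```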
The estimates of the parameters are $\hat\alpha = 15.0479$, $\hat\beta = -0.232163$, and $\hat\sigma_\beta = 0.108137$.
We write the following Julia code to implement the independent MH algorithm:
## metropolis-hastings
using Distributions

γ = 0.57721 # Euler–Mascheroni constant

# likelihood of the logistic model (temp and failure are the data vectors)
function ll(α::Float64, β::Float64)
    a = exp.(α .+ β * temp)
    return prod((a ./ (1 .+ a)).^failure .* (1 ./ (1 .+ a)).^(1 .- failure))
end

function mh_logit(T::Int, α_hat::Float64, β_hat::Float64, σ_hat::Float64)
    φ = Normal(β_hat, σ_hat)        # independent proposal for β
    π = Exponential(exp(α_hat + γ)) # exp(α) ~ Exp(b̂): prior and proposal for α
    Α = ones(T)
    Β = ones(T)
    for t = 1:T-1
        α = log(rand(π)) # α proposed from its prior, which cancels in the ratio
        β = rand(φ)
        # independence sampler ratio: the proposal density of the *current* β
        # goes in the numerator, that of the candidate β in the denominator
        r = (ll(α, β) / ll(Α[t], Β[t])) * (pdf(φ, Β[t]) / pdf(φ, β))
        if rand() < r
            Α[t+1] = α
            Β[t+1] = β
        else
            Α[t+1] = Α[t]
            Β[t+1] = Β[t]
        end
    end
    return Α, Β
end
Α, Β = mh_logit(10000, 15.04, -0.233, 0.108)
p1 = plot(Α, legend = false, xlab = "Intercept")
hline!([15.04])
p2 = plot(Β, legend = false, xlab = "Slope")
hline!([-0.233])
plot(p1, p2, layout = (1,2))
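Basic diagnostics can be read off the chains. A sketch (the burn-in length is an arbitrary choice, not from the original run):

```julia
using Statistics

# Empirical acceptance rate: fraction of iterations where the chain moved.
acc_rate(chain) = mean(chain[2:end] .!= chain[1:end-1])

burn = 1000  # assumed burn-in length
# posterior mean estimates after discarding burn-in
post_α, post_β = mean(Α[burn+1:end]), mean(Β[burn+1:end])
```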
Saddlepoint tail area approximation
If $K(\tau) = \log(E[\exp(\tau X)])$ is the cumulant generating function, solving the saddlepoint equation $K'(\tau) = x$ yields the saddlepoint. For a noncentral chi squared random variable $X$, the moment generating function is
$$\phi_X(t) = (1 - 2t)^{-p/2} \exp\left(\frac{2\lambda t}{1 - 2t}\right),$$
where $p$ is the number of degrees of freedom and $\lambda$ is the noncentrality parameter. Then $K(\tau) = -\frac{p}{2}\log(1 - 2\tau) + \frac{2\lambda\tau}{1 - 2\tau}$, so $K'(\tau) = \frac{p}{1 - 2\tau} + \frac{2\lambda}{(1 - 2\tau)^2}$, and solving $K'(\tau) = x$ gives the saddlepoint

$$\hat\tau(x) = \frac{2x - p - \sqrt{p^2 + 8\lambda x}}{4x}.$$
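As a quick numerical sanity check (a sketch with arbitrary test values of $p$, $\lambda$, and $x$), the closed form does solve $K'(\tau) = x$:

```julia
# K′(τ) for the noncentral chi-squared cumulant generating function
Kprime(τ, p, λ) = p / (1 - 2τ) + 2λ / (1 - 2τ)^2

# closed-form saddlepoint τ̂(x)
tau_hat(x, p, λ) = (2x - p - sqrt(p^2 + 8λ * x)) / (4x)

p, λ, x = 5.0, 2.0, 10.0
Kprime(tau_hat(x, p, λ), p, λ)  # recovers x
```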
The saddlepoint can be used to approximate the tail area of a distribution. We have the approximation