Gamma distribution
Robert and Casella (2013) present the following result for the independent Metropolis-Hastings (IMH) algorithm:
If there exists a constant $M$ such that
$$
f(x) \le Mg(x)\,,\qquad \forall x\in \mathrm{supp}\,f\,,
$$
the algorithm produces a uniformly ergodic chain (Theorem ), and the expected acceptance probability associated with the algorithm is at least $1/M$ when the chain is stationary; in that sense, the IMH is more efficient than the Accept-Reject algorithm.
Let's illustrate this algorithm with $\mathcal{G}a(\alpha, 1)$. We have introduced how to sample from the Gamma distribution via the Accept-Reject algorithm in Special Distributions, and it is straightforward to obtain the Gamma Metropolis-Hastings acceptance ratio from $f/g$: with candidate $Y\sim\mathcal{G}a(a, a/\alpha)$, where $a=\lfloor\alpha\rfloor$,
$$
\rho(x, y) = \left(\frac{y}{x}\,e^{(x-y)/\alpha}\right)^{\alpha-a}\,,
$$
and the candidate $y$ is accepted with probability $\min\{1,\rho(x, y)\}$.
We can implement this algorithm with the following Julia code:
```julia
function mh_gamma(T = 100, alpha = 1.5)
    a = Int(floor(alpha))          # integer shape of the candidate Ga(a, a/alpha)
    b = a / alpha
    x = ones(T + 1)                # initial value: 1
    for t = 1:T
        yt = rgamma_int(a, b)      # candidate from the integer-shape Gamma sampler
        rt = (yt / x[t] * exp((x[t] - yt) / alpha))^(alpha - a)
        if rt >= 1
            x[t + 1] = yt
        else
            u = rand()
            if u < rt
                x[t + 1] = yt
            else
                x[t + 1] = x[t]
            end
        end
    end
    return x
end
```
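The sampler relies on `rgamma_int` from the Special Distributions post. As a reminder of what it is assumed to do, here is a minimal sketch that draws a $\mathcal{G}a(a, b)$ variate with integer shape $a$ and rate $b$ by summing exponentials; the name and signature follow the call above, but the actual implementation in that post may differ.

```julia
# Assumed helper: Ga(a, b) with integer shape a and rate b,
# generated as the sum of a independent Exp(b) variables.
function rgamma_int(a::Int, b)
    return sum(-log.(rand(a))) / b
end
```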
To illustrate, we sample from $\mathcal{G}a(2.43, 1)$ and estimate $\mathrm{E}_f(X^2)=2.43+2.43^2\approx 8.33$:
```julia
# comparison with accept-reject
res = mh_gamma(5000, 2.43)[2:end]
est = cumsum(res .^ 2) ./ collect(1:5000)

res2 = ones(5000)
for i = 1:5000
    res2[i] = rgamma(2.43, 1)   # direct Accept-Reject sampler from the Special Distributions post
end
est2 = cumsum(res2 .^ 2) ./ collect(1:5000)

using Plots
plot(est, label = "Independent MH")
plot!(est2, label = "Accept-Reject")
hline!([8.33], label = "True value")
```
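The claim that the expected acceptance probability is at least $1/M$ can be eyeballed from the chain itself: with a continuous proposal the chain moves exactly when a candidate is accepted, so the fraction of distinct consecutive values estimates the acceptance rate. This quick check is not part of the original comparison.

```julia
# Fraction of iterations in which the chain moved, i.e. the empirical acceptance rate
acc_rate = sum(res[2:end] .!= res[1:end-1]) / (length(res) - 1)
println("empirical acceptance rate: ", acc_rate)
```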
Logistic Regression
We observe $(x_i,y_i),\ i=1,\ldots,n$ according to the model
$$
Y_i\sim\mathrm{Bernoulli}(p(x_i))\,,\qquad p(x) = \frac{\exp(\alpha+\beta x)}{1+\exp(\alpha+\beta x)}\,.
$$
The likelihood is
$$
L(\alpha,\beta\mid \mathbf y) \propto \prod_{i=1}^n \Big(\frac{\exp(\alpha+\beta x_i)}{1+\exp(\alpha+\beta x_i)}\Big)^{y_i}\Big(\frac{1}{1+\exp(\alpha+\beta x_i)}\Big)^{1-y_i}\,,
$$
and we let $\pi(e^\alpha) \sim \mathrm{Exp}(1/b)$ and put a flat prior on $\beta$, i.e.,
$$
\pi_\alpha(\alpha\mid b)\pi_\beta(\beta) = \frac 1b e^{-e^\alpha/b}\,de^\alpha\, d\beta=\frac 1b e^\alpha e^{-e^\alpha/b}\,d\alpha\, d\beta\,.
$$
Note that
$$
\begin{aligned}
\mathrm{E}[\alpha] &= \int_{-\infty}^\infty \frac{\alpha}{b}e^\alpha e^{-e^\alpha/b}\,d\alpha\\
&=\int_0^\infty \log w\,\frac 1b e^{-w/b}\,dw \\
&=\log b -\gamma\,,
\end{aligned}
$$
where
$$
\gamma = -\int_0^\infty e^{-x}\log x\, dx
$$
is Euler's constant. We choose the data-dependent value of $b$ that makes $\mathrm{E}[\alpha]=\hat\alpha$, where $\hat \alpha$ is the MLE of $\alpha$, namely $\hat b=\exp(\hat \alpha+\gamma)$.
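A quick Monte Carlo check of the identity $\mathrm{E}[\alpha]=\log b-\gamma$ (the scale $b=2$ below is an arbitrary choice for the check):

```julia
# Draw w = e^α from an exponential with scale b and compare E[log w] with log(b) - γ
using Distributions, Statistics
b = 2.0
w = rand(Exponential(b), 10^6)
println(mean(log.(w)), "  vs  ", log(b) - 0.57721)
```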
We can use maximum likelihood to estimate the coefficients of the logistic model (see my post for the derivation), and the following Julia code fits the model quickly.
```julia
using DataFrames, GLM, Plots

# observed temperatures and binary failure indicators
temp = [53, 57, 58, 63, 66, 67, 67, 67, 68, 69, 70, 70, 70, 70, 72, 73, 75, 75, 76, 76, 78, 79, 81]
failure = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0]
data = DataFrame(temp = temp, failure = failure)

logit_fit = glm(@formula(failure ~ temp), data, Binomial(), LogitLink())
plot(temp, predict(logit_fit), legend = false, xlabel = "Temperature", ylab = "Probability")
scatter!(temp, predict(logit_fit))
```
The estimates of the parameters are $\hat\alpha=15.0479$, $\hat\beta=-0.232163$, and $\hat\sigma_\beta = 0.108137$.
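These values can also be pulled from the fitted model programmatically, which is convenient for feeding the sampler below; this sketch assumes the usual StatsBase accessors `coef` and `stderror`, which GLM.jl provides.

```julia
# Point estimates and the standard error of the slope from the GLM fit
α_hat, β_hat = coef(logit_fit)
σ_hat = stderror(logit_fit)[2]   # standard error of the temp coefficient
```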
I wrote the following Julia code to implement the independent MH algorithm: the candidate $\alpha$ is drawn from its prior (with $b=\hat b$) and the candidate $\beta$ from $N(\hat\beta,\hat\sigma_\beta^2)$.
```julia
## metropolis-hastings
using Distributions

γ = 0.57721   # Euler's constant

# likelihood of the logistic model (uses the global temp and failure vectors)
function ll(α::Float64, β::Float64)
    a = exp.(α .+ β * temp)
    return prod((a ./ (1 .+ a)) .^ failure .* (1 ./ (1 .+ a)) .^ (1 .- failure))
end

function mh_logit(T::Int, α_hat::Float64, β_hat::Float64, σ_hat::Float64)
    φ = Normal(β_hat, σ_hat)             # independent proposal for β
    π = Exponential(exp(α_hat + γ))       # prior (and proposal) for exp(α), scale b̂
    Α = ones(T)
    Β = ones(T)
    for t = 1:T - 1
        α = log(rand(π))                  # candidate α drawn from its prior
        β = rand(φ)                       # candidate β
        # IMH ratio: the prior on α cancels with the proposal, leaving the likelihood
        # ratio times φ evaluated at the current β over φ evaluated at the candidate β
        r = (ll(α, β) / ll(Α[t], Β[t])) * (pdf(φ, Β[t]) / pdf(φ, β))
        if rand() < r
            Α[t + 1] = α
            Β[t + 1] = β
        else
            Α[t + 1] = Α[t]
            Β[t + 1] = Β[t]
        end
    end
    return Α, Β
end
```
```julia
Α, Β = mh_logit(10000, 15.04, -0.233, 0.108)
p1 = plot(Α, legend = false, xlab = "Intercept")
hline!([15.04])
p2 = plot(Β, legend = false, xlab = "Slope")
hline!([-0.233])
plot(p1, p2, layout = (1, 2))
```
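The chains can be summarized by their posterior means after discarding an initial stretch as burn-in (the 1000-iteration burn-in below is an arbitrary choice):

```julia
using Statistics
burn = 1000   # arbitrary burn-in length
println("posterior mean of α: ", mean(Α[burn + 1:end]))
println("posterior mean of β: ", mean(Β[burn + 1:end]))
```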
Saddlepoint tail area approximation
If $K(\tau) = \log({\mathbb E}\exp(\tau X))$ is the cumulant generating function, solving the saddlepoint equation $K'(\tau)=x$ yields the saddlepoint. For a noncentral chi-squared random variable $X$, the moment generating function is
$$
\phi_X(t) = \frac{\exp(2\lambda t/(1-2t))}{(1-2t)^{p/2}}\,,
$$
where $p$ is the number of degrees of freedom and $\lambda$ is the noncentrality parameter, and its saddlepoint is
$$
\hat\tau(x) = \frac{-p+2x-\sqrt{p^2+8\lambda x}}{4x}\,.
$$
The saddlepoint can be used to approximate the tail area of a distribution. We have the approximation
$$
\begin{aligned}
P(\bar X>a) &= \int_a^\infty \Big(\frac{n}{2\pi K_X''(\hat\tau(x))}\Big)^{1/2}\exp\{n[K_X(\hat\tau(x))-\hat\tau(x)x]\}\,dx\\
&= \int_{\hat\tau(a)}^{1/2} \Big(\frac{n}{2\pi}\Big)^{1/2}[K_X''(t)]^{1/2}\exp\{n[K_X(t)-tK_X'(t)]\}\,dt\\
&\approx \frac 1m\sum_{i=1}^m\mathbb{I}[Z_i>\hat \tau(a)]\,,
\end{aligned}
$$
where the $Z_i$ are samples from the saddlepoint distribution. Using a Taylor series approximation,
$$
\exp\{n[K_X(t)-tK_X'(t)]\}\approx \exp\Big\{-nK''_X(0)\frac{t^2}{2}\Big\}\,,
$$
so the instrumental density can be chosen as $N\big(0,\frac{1}{nK_X''(0)}\big)$, where
$$
K_X''(t)=\frac{2[p(1-2t) + 4\lambda]}{(1-2t)^3}\,.
$$
I implemented the independent MH algorithm with the following code to produce random variables from the saddlepoint distribution.
```julia
using Distributions

p = 6
λ = 9

# cumulant generating function of the noncentral chi-squared and its first two derivatives
function K(t)
    return 2λ * t / (1 - 2t) - p / 2 * log(1 - 2t)
end

function D1K(t)
    return (4λ * t) / (1 - 2t)^2 + (2λ + p) / (1 - 2t)
end

function D2K(t)
    return 2(p * (1 - 2t) + 4λ) / (1 - 2t)^3
end

# log of the (unnormalized) saddlepoint density and of the normal instrumental density
function logf(n, t)
    return n * (K(t) - t * D1K(t)) + 0.5 * log(D2K(t))
end

function logg(n, t)
    return -n * D2K(0) * t^2 / 2
end

function mh_saddle(T::Int = 10000; n::Int = 1)
    Z = zeros(T)
    g = Normal(0, 1 / sqrt(n * D2K(0)))
    for t = 1:T - 1
        z = rand(g)
        if z >= 0.5
            # candidate outside the support (t < 1/2): reject it
            Z[t + 1] = Z[t]
            continue
        end
        logr = logf(n, z) - logf(n, Z[t]) + logg(n, Z[t]) - logg(n, z)
        if log(rand()) < logr
            Z[t + 1] = z
        else
            Z[t + 1] = Z[t]
        end
    end
    return Z
end

Z = mh_saddle(n = 1)

# saddlepoint as a function of the cutpoint x
function tau(x)
    return (-p + 2x - sqrt(p^2 + 8λ * x)) / (4x)
end

println(sum(Z .> tau(36.225)) / 10000)
println(sum(Z .> tau(40.542)) / 10000)
println(sum(Z .> tau(49.333)) / 10000)
```
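As a sanity check on the saddlepoint formula, $\hat\tau(x)$ should solve $K'(\tau)=x$; with the functions defined above this is a one-liner (the test value 30.0 is arbitrary):

```julia
# τ̂(x) should satisfy K'(τ̂(x)) = x
println(D1K(tau(30.0)))   # ≈ 30.0
```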
I can successfully reproduce the results. For $n = 1$, the three intervals are
- $(36.225,\infty)$
- $(40.542,\infty)$
- $(49.333,\infty)$

whose cutpoints are (up to rounding) the 0.90, 0.95, and 0.99 quantiles of the corresponding noncentral chi-squared distribution, so the estimates printed above should be close to the exact tail areas 0.10, 0.05, and 0.01.
For $n=100$, note that if $X_i\sim \chi^2_p(\lambda)$, then $n\bar X=\sum_i X_i\sim \chi^2_{np}(n\lambda)$; in R's parameterization the noncentrality here is $2\lambda$ (the MGF above has $\exp(2\lambda t/(1-2t))$), so $n\bar X\sim\chi^2_{600}(1800)$, and the following R code gives the cutpoints.
```r
qchisq(0.90, 600, 1800) / 100
qchisq(0.95, 600, 1800) / 100
qchisq(0.99, 600, 1800) / 100
```
We then use these cutpoints to estimate the tail probabilities from samples produced by the independent MH algorithm; the exact values are again 0.10, 0.05, and 0.01, which can be verified in R with `1 - pchisq(x*100, 6*100, 9*2*100)`. The intervals are
- $(25.18054,\infty)$
- $(25.52361,\infty)$
- $(26.17395,\infty)$
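For reference, the analogous estimates for $n=100$ can be produced with the same functions and these cutpoints, following the $n=1$ code above:

```julia
# Re-run the sampler with n = 100 and estimate the three tail areas
Z100 = mh_saddle(n = 100)
for a in (25.18054, 25.52361, 26.17395)
    println(sum(Z100 .> tau(a)) / length(Z100))
end
```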