Generate R.V.
An important Lemma.
Note: Unless otherwise stated, the algorithms and the corresponding screenshots are adapted from Robert and Casella (2005).

Box-Muller Algorithm

```julia
function boxmuller()
    x = ones(2)
    u = rand(2)
    logu = sqrt(-2 * log(u[1]))
    x[1] = logu * cos(2 * pi * u[2])
    x[2] = logu * sin(2 * pi * u[2])
    return x
end

## example
boxmuller()
```
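As a quick sanity check, the pairs produced by the transform should behave like independent N(0, 1) draws. The snippet below re-derives the transform inline (so it is self-contained) and inspects the first two sample moments:

```julia
# Sanity check: Box-Muller samples should have mean ≈ 0 and variance ≈ 1.
using Statistics, Random

Random.seed!(42)
samples = Float64[]
for _ in 1:50_000
    u = rand(2)
    logu = sqrt(-2 * log(u[1]))
    push!(samples, logu * cos(2 * pi * u[2]))
    push!(samples, logu * sin(2 * pi * u[2]))
end
(mean(samples), var(samples))   # both should be close to (0, 1)
```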

Accept-Reject Method

Example:
$$f(x)\propto \exp(-x^2/2)\left(\sin(6x)^2+3\cos(x)^2\sin(4x)^2+1\right)$$
The Julia code is as follows:
```julia
## density function of N(0, 1)
function g(x)
    return exp(-0.5 * x^2) / sqrt(2 * pi)
end

## example target density, up to its normalizing constant
function f(x)
    return exp(-x^2 / 2) * (sin(6 * x)^2 + 3 * cos(x)^2 * sin(4 * x)^2 + 1)
end

function AccRej(f::Function, M)
    ## use the normal distribution N(0, 1) as the proposal density g
    while true
        x = randn()
        u = rand()
        cutpoint = f(x) / (M * g(x))
        if u <= cutpoint
            return [x, f(x)]
        end
    end
end

## example
N = 500;
data = ones(N, 2);
for i = 1:N
    data[i, :] = AccRej(f, sqrt(2 * pi) * 5)
end
```
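The bound $M = 5\sqrt{2\pi}$ used above works because $f(x)/g(x) = \sqrt{2\pi}\,(\sin(6x)^2 + 3\cos(x)^2\sin(4x)^2 + 1) \le 5\sqrt{2\pi}$. A quick numerical check of this claim on a fine grid:

```julia
# Verify that M = 5*sqrt(2*pi) dominates the ratio f(x)/g(x):
# f(x)/g(x) = sqrt(2*pi) * (sin(6x)^2 + 3*cos(x)^2*sin(4x)^2 + 1) <= 5*sqrt(2*pi),
# since each squared trigonometric term is at most 1.
ratio(x) = sqrt(2 * pi) * (sin(6 * x)^2 + 3 * cos(x)^2 * sin(4 * x)^2 + 1)
M = 5 * sqrt(2 * pi)
maximum(ratio.(range(-5, 5, length = 100_000))) <= M   # true
```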

Envelope Accept-Reject

It is easy to write the Julia code:
```julia
function EnvAccRej(f::Function, M, gl::Function)
    while true
        x = randn()   # still assume the proposal gm is N(0, 1)
        u = rand()
        ## squeeze step: accept via the cheap lower bound gl without evaluating f
        if u <= gl(x) / (M * g(x))
            return x
        ## otherwise fall back to the full accept-reject test
        elseif u <= f(x) / (M * g(x))
            return x
        end
    end
end
```
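For the example density above, a convenient squeeze is $g_l(x)=\exp(-x^2/2)$, since the trigonometric factor lies in $[1, 5]$ and hence $g_l(x)\le f(x)$. A self-contained sketch combining this squeeze with the envelope scheme:

```julia
# Envelope accept-reject for f(x) ∝ exp(-x^2/2)(sin(6x)^2 + 3cos(x)^2 sin(4x)^2 + 1),
# with proposal g = N(0,1), bound M = 5*sqrt(2*pi), and squeeze gl(x) = exp(-x^2/2)
# (valid because the trigonometric factor is at least 1).
g(x) = exp(-0.5 * x^2) / sqrt(2 * pi)
f(x) = exp(-x^2 / 2) * (sin(6 * x)^2 + 3 * cos(x)^2 * sin(4 * x)^2 + 1)
gl(x) = exp(-x^2 / 2)
const M_ENV = 5 * sqrt(2 * pi)

function env_acc_rej()
    while true
        x = randn()
        u = rand()
        u <= gl(x) / (M_ENV * g(x)) && return x   # cheap squeeze acceptance
        u <= f(x) / (M_ENV * g(x)) && return x    # full test, evaluates f
    end
end

xs = [env_acc_rej() for _ in 1:20_000]
sum(xs) / length(xs)   # close to 0, since f is symmetric about 0
```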

Atkinson's Poisson Simulation

It is necessary to note that the parameters in the algorithm are not the same as those in the density function. In other words, the density corresponding to the algorithm's proposal should be
$$f(x) = \beta \frac{\exp(\alpha-\beta x)}{[1+\exp(\alpha-\beta x)]^2}\,.$$
The following Julia code generates a Poisson random variable with mean $\lambda$.
```julia
function AtkinsonPois(lambda)
    # parameters (valid for lambda large enough that c > 0)
    beta = pi / sqrt(3 * lambda)
    alpha = lambda * beta
    c = 0.767 - 3.36 / lambda
    k = log(c) - lambda - log(beta)
    while true
        # step 1: propose x from the logistic envelope
        u1 = rand()
        x = (alpha - log((1 - u1) / u1)) / beta
        x > -0.5 || continue      # ensures N >= 0 below
        # step 2: round to the nearest integer
        N = floor(Int, x + 0.5)
        u2 = rand()
        # step 3: accept or reject; on rejection, go back to step 1
        lhs = alpha - beta * x + log(u2 / (1 + exp(alpha - beta * x))^2)
        rhs = k + N * log(lambda) - sum(log.(1:N))   # sum(log.(1:N)) = log(N!)
        if lhs <= rhs
            return N
        end
    end
end

## example
N = 100;
res = ones(Int, N);
for i = 1:N
    res[i] = AtkinsonPois(10)
end
# ans: 8 9 13 10 12 .......
```
As mentioned in the above exercise, another Poisson generation method can be derived from the following exercise: a Poisson($\lambda$) count is the number of Exp($\lambda$) inter-arrival times that fit within a unit interval.
We can write the following Julia code to implement this generation procedure.
```julia
function SimplePois(lambda)
    s = 0   # running sum of exponential inter-arrival times
    k = 0   # number of arrivals so far
    while true
        u = rand()
        x = -log(u) / lambda   # Exp(lambda) via inverse CDF
        s = s + x
        if s > 1
            return k
        end
        k = k + 1
    end
end

## example for the simple Poisson generator
res2 = ones(Int, N);
for i = 1:N
    res2[i] = SimplePois(10)
end
# ans: 5 7 16 7 10 .......
```
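Both generators can be sanity-checked against the Poisson mean. A self-contained check using the inter-arrival construction (the sample mean should be close to $\lambda$):

```julia
# Sanity check: the sample mean of Poisson(lambda) draws should be close to lambda.
# Self-contained re-implementation of the inter-arrival-time generator above.
function simple_pois(lambda)
    s = 0.0   # running sum of Exp(lambda) inter-arrival times
    k = 0     # arrivals so far
    while true
        s += -log(rand()) / lambda
        s > 1 && return k
        k += 1
    end
end

draws = [simple_pois(10) for _ in 1:20_000]
sum(draws) / length(draws)   # close to 10
```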

ARS Algorithm

ARS is based on the construction of an envelope and the derivation of a corresponding Accept-Reject algorithm. It provides a sequential evaluation of lower and upper envelopes of the density $f$ when $h=\log f$ is concave.

Let ${\cal S}_n$ be a set of points $x_i$, $i=0,1,\ldots,n+1$, in the support of $f$ such that $h(x_i)=\log f(x_i)$ is known up to the same constant. Given the concavity of $h$, the line $L_{i,i+1}$ through $(x_i,h(x_i))$ and $(x_{i+1},h(x_{i+1}))$ lies below the graph of $h$ on $[x_i,x_{i+1}]$ and above it outside this interval. For $x\in [x_i,x_{i+1}]$, define
$$\bar h_n(x)=\min\{L_{i-1,i}(x),L_{i+1,i+2}(x)\}\quad\text{and}\quad \underline h_n(x)=L_{i,i+1}(x)\,;$$
these envelopes satisfy $\underline h_n(x)\le h(x)\le \bar h_n(x)$ uniformly on the support of $f$. Thus,
$$\exp \underline h_n(x) = \underline f_n(x) \le f(x)\le \bar f_n(x)=\exp\bar h_n(x) = \bar w_n g_n(x)\,.$$
Davison (2008) provides another version of ARS and gives an illustrative example. Let's consider the slightly simpler version in which we do not need the lower bound $h_-(y)$. Consider the CDF of $g_+=\exp(h_+)$:
$$\begin{aligned} G_+(y) & = \int_{-\infty}^y \exp(h_+(x))\, dx \\ & = \int_{-\infty}^{\min\{z_1,y\}} \exp(h_+(x))\, dx \\ & \qquad + \int_{z_1}^{\min\{z_2,\max\{y, z_1\}\}} \exp(h_+(x))\, dx \\ & \qquad + \cdots \\ & \qquad + \int_{z_{k-1}}^{\min\{z_k,\max\{y, z_{k-1}\}\}} \exp(h_+(x))\, dx \\ & \qquad + \int_{z_k}^{\max\{z_k,y\}} \exp(h_+(x))\, dx\,, \end{aligned}$$
where $h_+$ is linear on each piece, so
$$\int_{z_{j}}^{z_{j+1}}\exp(h_+(x))\,dx = \exp({h(y_{j+1})})\cdot \frac{1}{h'(y_{j+1})}\exp((x-y_{j+1})h'(y_{j+1}))\Big|_{z_j}^{z_{j+1}},\quad j=1,\ldots,k-1\,.$$
Then use the inverse of $G_+(y)$ to sample a random variable whose density is $g_+(x)$. The following Julia program samples from $g_+(x)$.
```julia
# example 3.22 in Davison (2008)
r = 2; m = 10; mu = 0; sig2 = 1;
# range of y
yl = -2; yu = 2;

# log density h (up to an additive constant)
function h(y)
    return r * y - m * log(1 + exp(y)) - (y - mu)^2 / (2 * sig2)
end

function h(y::Array)
    return r * y .- m * log.(1 .+ exp.(y)) .- (y .- mu) .^ 2 ./ (2 * sig2)
end

# derivative of h
function dh(y)
    return r - m * exp(y) / (1 + exp(y)) - (y - mu) / sig2
end

function dh(y::Array)
    return r .- m * exp.(y) ./ (1 .+ exp.(y)) .- (y .- mu) ./ sig2
end

# intersection points of adjacent tangent lines
function zfix(yfixed::Array)
    yf0 = yfixed[1:end-1]
    yf1 = yfixed[2:end]
    zfixed = yf0 .+ (h(yf0) .- h(yf1) .+ (yf1 .- yf0) .* dh(yf1)) ./ (dh(yf1) .- dh(yf0))
    return zfixed
end

# evaluate the log upper envelope h_+
function hplus(y::Float64, yfixed::Array)
    zfixed = zfix(yfixed)
    n = size(zfixed, 1)
    for i = 1:n
        if i == 1 && y < zfixed[i]
            return h(yfixed[i]) + (y - yfixed[i]) * dh(yfixed[i])
        elseif i < n && y >= zfixed[i] && y < zfixed[i+1]
            return h(yfixed[i+1]) + (y - yfixed[i+1]) * dh(yfixed[i+1])
        elseif i == n && y >= zfixed[n]
            # last segment: tangent at the last support point
            return h(yfixed[i+1]) + (y - yfixed[i+1]) * dh(yfixed[i+1])
        end
    end
end

# calculate G_+(z_i), i.e. the mass of each linear segment
function gplus_cdf(yfixed::Array, zfixed::Array)
    n = size(zfixed, 1)
    s = zeros(n + 1)
    for i = 1:(n + 1)
        ## integral of exp(h_+) over segment i
        if i == 1
            s[i] = exp(h(yfixed[i])) / dh(yfixed[i]) * (exp((zfixed[i] - yfixed[i]) * dh(yfixed[i])) - exp((yl - yfixed[i]) * dh(yfixed[i])))
        elseif i == n + 1
            s[i] = exp(h(yfixed[i])) / dh(yfixed[i]) * (exp((yu - yfixed[i]) * dh(yfixed[i])) - exp((zfixed[n] - yfixed[i]) * dh(yfixed[i])))
        else
            s[i] = exp(h(yfixed[i])) / dh(yfixed[i]) * (exp((zfixed[i] - yfixed[i]) * dh(yfixed[i])) - exp((zfixed[i-1] - yfixed[i]) * dh(yfixed[i])))
        end
    end
    pr = s / sum(s)
    return cumsum(pr), sum(s)
end

# sample from the gplus density by inverting its CDF
function gplus_sample(yfixed)
    zfixed = zfix(yfixed)
    gp = gplus_cdf(yfixed, zfixed)
    zpr = gp[1]          # CDF evaluated at the segment endpoints
    norm_const = gp[2]   # normalizing constant of exp(h_+)
    n = size(zfixed, 1)
    u = rand()
    # invert the piecewise-exponential CDF
    for i = 1:n
        if i == 1 && u < zpr[i]
            ey = u * dh(yfixed[i]) * norm_const / exp(h(yfixed[i])) + exp((yl - yfixed[i]) * dh(yfixed[i]))
            return yfixed[i] + log(ey) / dh(yfixed[i])
        elseif i == n && u >= zpr[i]
            ey = (u - zpr[i]) * dh(yfixed[i+1]) * norm_const / exp(h(yfixed[i+1])) + exp((zfixed[i] - yfixed[i+1]) * dh(yfixed[i+1]))
            return yfixed[i+1] + log(ey) / dh(yfixed[i+1])
        elseif i < n && u >= zpr[i] && u < zpr[i+1]
            ey = (u - zpr[i]) * dh(yfixed[i+1]) * norm_const / exp(h(yfixed[i+1])) + exp((zfixed[i] - yfixed[i+1]) * dh(yfixed[i+1]))
            return yfixed[i+1] + log(ey) / dh(yfixed[i+1])
        end
    end
end
```
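The inversion inside `gplus_sample` has a closed form. On the segment $[z_{j-1}, z_j]$ the envelope is the tangent at $y_j$, $h_+(x) = h(y_j) + (x-y_j)h'(y_j)$, so with $c$ the normalizing constant of $\exp(h_+)$, $G_+$ the normalized CDF, and $u$ the uniform draw,
$$(u - G_+(z_{j-1}))\,c = \frac{e^{h(y_j)}}{h'(y_j)}\left[e^{(y-y_j)h'(y_j)} - e^{(z_{j-1}-y_j)h'(y_j)}\right],$$
which solves to
$$y = y_j + \frac{1}{h'(y_j)}\log\!\left[(u - G_+(z_{j-1}))\,c\,\frac{h'(y_j)}{e^{h(y_j)}} + e^{(z_{j-1}-y_j)h'(y_j)}\right];$$
the bracketed quantity is exactly the `ey` computed in each branch.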
Back to the main sampling algorithm, we can implement the procedure as follows:
```julia
## adaptive rejection sampling
function ars(yfixed::Array)
    x = gplus_sample(yfixed)
    u = rand()
    if u <= exp(h(x) - hplus(x, yfixed))
        return x
    else
        # on rejection, add x to the support points (kept sorted) and retry
        return ars(sort!(push!(yfixed, x)))
    end
end

## example
N = 100
res = ones(N);
for i = 1:N
    res[i] = ars([-1.8, -1.1, -0.5, -0.2])
end
```
Based on the ARS algorithm, we can also derive the Supplemental ARS algorithm:

Exponential Random Variable

The inverse transform sampling from a uniform distribution can easily be used to sample an exponential random variable. The CDF is
$$F(x) = 1-\exp(-\lambda x)\,,$$
whose inverse function is
$$F^{-1}(y) = -\frac{\log(1-y)}{\lambda}\,.$$
Thus, we can sample $U\sim U(0, 1)$ and, since $1-U$ is also uniform, perform the transformation
$$-\frac{\log U}{\lambda}\,.$$
Today, I came across another method on Xi'an's blog, which points to a question on StackExchange. I am still confused about some of the details, as noted in the code comments. The theoretical part is as follows.
I rewrote the provided C code in Julia and compared the resulting distribution with samples from the inverse CDF and from the Distributions package:
```julia
using Distributions, Plots

# exp_rand() is the Julia port of the C code mentioned above (not shown here)
a = [exp_rand() for i = 1:1000]
b = [-log(rand()) for i = 1:1000]
c = rand(Exponential(1), 1000)
histogram(a, bins = 40, label = "sexp")
histogram!(b, bins = 40, alpha = 0.5, label = "invF")
histogram!(c, bins = 40, alpha = 0.5, label = "rand")
```