Thursday, August 31, 2017

Review some Monte Carlo theorems

1. Inverse Transform Sampling

Suppose the random variable U has a Uniform(0,1) distribution, and let F be a continuous distribution function. Then the random variable X = F^(-1)(U) has distribution function F.

Note: F(x) = (x - a)/(b - a) for a uniform random variable on the interval [a, b]. For U on [0, 1], F(u) = u.
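A minimal sketch of the inverse transform in Python (NumPy assumed; the Exponential target and the specific rate value are chosen here only for illustration):

import numpy as np

rng = np.random.default_rng(42)

# Inverse transform sampling: if U ~ Uniform(0,1) and F is a continuous CDF,
# then X = F^(-1)(U) has CDF F.  Example target: Exponential(rate), where
# F(x) = 1 - exp(-rate*x)  =>  F^(-1)(u) = -log(1 - u)/rate.
rate = 2.0
u = rng.uniform(0.0, 1.0, size=100_000)
x = -np.log(1.0 - u) / rate

print(x.mean())  # should be close to 1/rate = 0.5
print(x.var())   # should be close to 1/rate^2 = 0.25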

2. Monte Carlo Integration

To estimate the definite integral \int_a^b g(x)\,dx, generate X_1, X_2, \ldots, X_n from Uniform(a, b) and compute Y_i = (b - a) g(X_i). Then the sample mean \bar{Y} is a consistent estimator of the integral.

Note: a definite integral is a number (a constant); it is the estimator \bar{Y} that is random.
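A minimal sketch of this estimator in Python (NumPy assumed; the integrand g(x) = exp(x) and the interval [0, 2] are chosen only for illustration, with exact value exp(2) - 1):

import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo integration: X_i ~ Uniform(a, b), Y_i = (b - a) * g(X_i),
# and the sample mean of the Y_i estimates the integral of g over [a, b].
a, b = 0.0, 2.0
g = np.exp

x = rng.uniform(a, b, size=100_000)
y = (b - a) * g(x)

estimate = y.mean()
std_err = y.std(ddof=1) / np.sqrt(len(y))
print(estimate)  # should be close to exp(2) - 1, about 6.389
print(std_err)   # Monte Carlo standard error of the estimate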

3. Accept-Reject Generation Algorithm
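The standard accept-reject scheme samples from a target density f using a proposal density g and a constant M with f(x) <= M g(x) for all x: draw Y from g and U from Uniform(0,1), and accept Y as a draw from f when U <= f(Y) / (M g(Y)); otherwise reject and repeat. A minimal sketch in Python (NumPy assumed; the Beta(2,2) target, Uniform(0,1) proposal, and M = 1.5 are chosen only for illustration):

import numpy as np

rng = np.random.default_rng(1)

# Accept-reject sampling from a Beta(2,2) target, f(x) = 6*x*(1-x) on [0,1],
# using a Uniform(0,1) proposal g(x) = 1.  f is bounded above by M = 1.5.
def sample_beta22(n):
    M = 1.5
    samples = []
    while len(samples) < n:
        y = rng.uniform(0.0, 1.0)      # draw from the proposal g
        u = rng.uniform(0.0, 1.0)
        f_y = 6.0 * y * (1.0 - y)      # target density at y
        if u <= f_y / M:               # accept with probability f(y) / (M * g(y)), g(y) = 1
            samples.append(y)
    return np.array(samples)

x = sample_beta22(50_000)
print(x.mean())  # Beta(2,2) has mean 0.5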



Tuesday, August 29, 2017

Saturday, August 26, 2017

Different likelihoods

Maximum Likelihood
Find β and θ that maximize L(β, θ|data).
Partial Likelihood
If we can write the likelihood function as:
L(β, θ|data) = L1(β|data) L2(θ|data)
Then we simply maximize L1(β|data).
Profile Likelihood
If we can express θ as a function of β, we replace θ with that function. Say θ = g(β), where g(β) is typically the value of θ that maximizes L(β, θ|data) for the given β. Then we maximize (see the sketch below):
L(β, g(β)|data)
Marginal Likelihood
We integrate out θ from the likelihood equation by exploiting the fact that we can identify the probability distribution of θ conditional on β.
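A minimal sketch of the profile likelihood idea in Python (NumPy and SciPy assumed; the Normal(mu, sigma^2) model and simulated data are chosen only for illustration, with mu playing the role of β and sigma^2 the role of θ):

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
data = rng.normal(loc=3.0, scale=2.0, size=200)
n = len(data)

# For a Normal(mu, sigma^2) sample, the sigma^2 maximizing the likelihood at a
# fixed mu is sigma_hat^2(mu) = mean((x - mu)^2), i.e. theta = g(beta).
def profile_loglik(mu):
    sigma2_hat = np.mean((data - mu) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sigma2_hat) + 1.0)

# Maximize the profile log-likelihood over mu (minimize its negative).
res = minimize_scalar(lambda mu: -profile_loglik(mu),
                      bounds=(data.min(), data.max()), method="bounded")
print(res.x)        # profile MLE of mu
print(data.mean())  # coincides with the sample mean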

Sunday, August 13, 2017

Transform or link?

https://ecommons.cornell.edu/bitstream/handle/1813/31620/BU-1049-MA.pdf?sequence=1

Sunday, August 6, 2017

Ranking and empirical distributions

In the absence of repeated values (ties), the cdf can be obtained computationally by sorting the observed data in ascending order, i.e., X_{s} = \{x_{(1)}, x_{(2)}, \ldots, x_{(N)}\}. Then F(x) = n_x / N, where n_x is the ascending rank of x. Likewise, the p-value can be obtained by sorting the data in descending order and using a similar formula, P(X \geqslant x) = \tilde{n}_x / N, where \tilde{n}_x is the descending rank of x.

https://brainder.org/2012/11/28/competition-ranking-and-empirical-distributions/
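A minimal sketch of the rank-based formulas in Python (NumPy and SciPy assumed; the sample values are hypothetical and contain no ties, matching the assumption above):

import numpy as np
from scipy.stats import rankdata

x = np.array([3.1, 1.2, 4.7, 2.5, 0.9])   # hypothetical sample, no ties
N = len(x)

# Empirical CDF via ascending ranks: F(x_i) = n_x / N.
F = rankdata(x) / N

# Empirical p-value via descending ranks: P(X >= x_i) = descending rank / N.
p = rankdata(-x) / N

print(F)  # [0.8 0.4 1.  0.6 0.2]
print(p)  # [0.4 0.8 0.2 0.6 1. ]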