http://rpubs.com/enwuliu/548305
This is R code for the example from the paper "That BLUP Is a Good Thing: The Estimation of Random Effects."
Robinson, George K. "That BLUP Is a Good Thing: The Estimation of Random Effects." Statistical Science 6, no. 1 (1991): 15-32.
In the book Generalized Additive Models: An Introduction with R, second edition, page 81, the function llm that solves the linear mixed model also uses Henderson's mixed model equations, although the equations themselves are not stated there.
mixed_model_blup.R
enliu
2019-11-10
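For the mixed model $y = X\beta + Zb + e$ with $b \sim N(0, G)$ and $e \sim N(0, R)$, Henderson's mixed model equations, which the code below assembles block by block, are

$$\begin{pmatrix} X^\top R^{-1} X & X^\top R^{-1} Z \\ Z^\top R^{-1} X & Z^\top R^{-1} Z + G^{-1} \end{pmatrix} \begin{pmatrix} \hat\beta \\ \hat b \end{pmatrix} = \begin{pmatrix} X^\top R^{-1} y \\ Z^\top R^{-1} y \end{pmatrix}.$$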
## Data and design matrices for the example
Y <- matrix(c(110,100,110,100,100,110,110,100,100), nrow=9, ncol=1)
X <- matrix(c(1,1,0,0,0,0,0,0,0,
              0,0,1,1,1,0,0,0,0,
              0,0,0,0,0,1,1,1,1), nrow=9, ncol=3)
Z <- matrix(c(1,0,0,0,0,0,0,0,0,
              0,0,1,0,0,0,0,0,0,
              0,0,0,0,0,1,1,0,0,
              0,1,0,1,1,0,0,1,1), nrow=9, ncol=4)
R <- diag(1, nrow=9, ncol=9)    # residual covariance matrix
G <- diag(0.1, nrow=4, ncol=4)  # random-effect covariance matrix
## Assemble the blocks of Henderson's mixed model equations
A <- t(X) %*% solve(R) %*% X
B <- t(X) %*% solve(R) %*% Z
C <- t(Z) %*% solve(R) %*% X
D <- t(Z) %*% solve(R) %*% Z + solve(G)
M <- rbind(cbind(A, B), cbind(C, D))  # coefficient matrix of the equations
t(X)%*%solve(R)%*%Y
## [,1]
## [1,] 210
## [2,] 310
## [3,] 420
t(Z)%*%solve(R)%*%Y
## [,1]
## [1,] 110
## [2,] 110
## [3,] 220
## [4,] 500
N<-rbind(t(X)%*%solve(R)%*%Y,t(Z)%*%solve(R)%*%Y)
solve(M,N)
## [,1]
## [1,] 105.6386574
## [2,] 104.2757378
## [3,] 105.4584366
## [4,] 0.3964857
## [5,] 0.5203875
## [6,] 0.7569272
## [7,] -1.6738004
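As a cross-check, the same estimates follow from the standard generalized-least-squares form of BLUP; a minimal sketch using the objects defined above:

## V = Z G Z' + R is the marginal covariance of Y
V <- Z %*% G %*% t(Z) + R
## GLS estimate of the fixed effects
beta.hat <- solve(t(X) %*% solve(V) %*% X, t(X) %*% solve(V) %*% Y)
## BLUP of the random effects
b.hat <- G %*% t(Z) %*% solve(V) %*% (Y - X %*% beta.hat)
rbind(beta.hat, b.hat)  # should reproduce solve(M, N) above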
## Another method, from Generalized Additive Models: An Introduction with R, second edition, p. 81:
llm <- function(theta, X, Z, y) {
  ## untransform parameters...
  sigma.b <- exp(theta[1])
  sigma <- exp(theta[2])
  ## extract dimensions...
  n <- length(y); pr <- ncol(Z); pf <- ncol(X)
  ## obtain \hat beta, \hat b...
  X1 <- cbind(X, Z)
  ipsi <- c(rep(0, pf), rep(1/sigma.b^2, pr))
  b1 <- solve(crossprod(X1)/sigma^2 + diag(ipsi), t(X1) %*% y/sigma^2)
  ## compute log|Z'Z/sigma^2 + I/sigma.b^2|...
  ldet <- sum(log(diag(chol(crossprod(Z)/sigma^2 + diag(ipsi[-(1:pf)])))))
  ## compute log profile likelihood...
  l <- (-sum((y - X1 %*% b1)^2)/sigma^2 - sum(b1^2 * ipsi) - n*log(sigma^2) -
          pr*log(sigma.b^2) - 2*ldet - n*log(2*pi))/2
  attr(l, "b") <- as.numeric(b1)  ## return \hat beta and \hat b
  -l  ## negative log likelihood, for minimization
}
y<-matrix(c(110,100,110,100,100,110,110,100,100),nrow=9,ncol=1)
x<-matrix(c(1,1,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,1,1,1,1),nrow=9,ncol=3)
z<-matrix(c(1,0,0,0,0,0,0,0,0,
0,0,1,0,0,0,0,0,0,
0,0,0,0,0,1,1,0,0,
0,1,0,1,1,0,0,1,1),nrow=9,ncol=4)
X1 <- cbind(x,z)
crossprod(X1)
## [,1] [,2] [,3] [,4] [,5] [,6] [,7]
## [1,] 2 0 0 1 0 0 1
## [2,] 0 3 0 0 1 0 2
## [3,] 0 0 4 0 0 2 2
## [4,] 1 0 0 1 0 0 0
## [5,] 0 1 0 0 1 0 0
## [6,] 0 0 2 0 0 2 0
## [7,] 1 2 2 0 0 0 5
## the same quantities, computed step by step outside the function:
n <- length(y); pr <- ncol(z); pf <- ncol(x)
theta <- c(log(sqrt(0.1)), log(1))  # sigma.b^2 = 0.1 and sigma^2 = 1, matching G and R above
sigma.b <- exp(theta[1])
sigma <- exp(theta[2])
ipsi <- c(rep(0, pf), rep(1/sigma.b^2, pr))
llm(theta,x,z,y)
## [1] 92.34967
## attr(,"b")
## [1] 105.6386574 104.2757378 105.4584366 0.3964857 0.5203875 0.7569272
## [7] -1.6738004
Every time I think I know what's going on, suddenly there's another layer of complications.
Saturday, November 9, 2019
crossprod function
http://www.endmemo.com/program/R/crossprod.php
R crossprod Function
crossprod() returns the matrix cross-product: crossprod(x, y) computes t(x) %*% y, and tcrossprod(x, y) computes x %*% t(y).
Usage: crossprod(x, y = NULL), tcrossprod(x, y = NULL)
x: numeric matrix
y: numeric matrix; if y = NULL, y is taken to be the same as x.
> x <- matrix(1:9, 3, 3)
> x
     [,1] [,2] [,3]
[1,]    1    4    7
[2,]    2    5    8
[3,]    3    6    9
> crossprod(x)
     [,1] [,2] [,3]
[1,]   14   32   50
[2,]   32   77  122
[3,]   50  122  194
> tcrossprod(x)
     [,1] [,2] [,3]
[1,]   66   78   90
[2,]   78   93  108
[3,]   90  108  126
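These match the defining identities; a quick check (a sketch, run after the snippet above):

all.equal(crossprod(x), t(x) %*% x)    # TRUE
all.equal(tcrossprod(x), x %*% t(x))   # TRUE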
Monday, November 4, 2019
Output the variance-covariance matrix from PROC GENMOD
proc genmod data=outmi;
  model Oxygen = RunTime RunPulse / covb;
  ods output ParameterEstimates=gmparms ParmInfo=gmpinfo CovB=gmcovb;
run;
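In R, a rough analogue might be the following sketch (assuming a data frame outmi with the same variables; the fit object name is arbitrary):

fit <- glm(Oxygen ~ RunTime + RunPulse, data = outmi)
summary(fit)$coefficients  # parameter estimates
vcov(fit)                  # variance-covariance matrix of the estimates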
Monday, September 16, 2019
Piecewise linear basis, tent function, B-spline in GAM
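A tent function basis represents a smooth as a piecewise linear interpolant of its coefficients: the jth basis function $b_j(x)$ is linear between the knots $x_k^*$ and satisfies

$$b_j(x_k^*) = \delta_{jk}, \qquad f(x) = \sum_{j=1}^{K} \beta_j\, b_j(x),$$

which is exactly what the tf() function below constructs by linearly interpolating an indicator vector over the knots.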
require(gamair)
data("engine")
engine
attach(engine)
plot(size,wear,xlab='Engine capacity',ylab='Wear index')
tf <- function(x, xj, j) {
  ## jth tent function from knot sequence xj, evaluated at x
  dj <- xj * 0
  dj[j] <- 1
  approx(xj, dj, x)$y  # approx() returns a list with components x and y; keep only y
}
tf.X <- function(x, xj) {
  ## tent function basis matrix given data x
  ## and knot sequence xj
  nk <- length(xj)
  n <- length(x)
  X <- matrix(NA, n, nk)
  for (j in 1:nk) {
    X[, j] <- tf(x, xj, j)
  }
  X
}
sj<-seq(min(size),max(size),length=6)
sj
X<-tf.X(size,sj)
X
b<-lm(wear~X-1)
b
s<-seq(min(size),max(size),length=200)
Xp<-tf.X(s,sj)
plot(size,wear)
lines(s,Xp%*%coef(b))
Xp
library(splines)
B=bs(size,knots=sj,Boundary.knots=c(1.420,2.980),degree=1)
B
b2<-lm(wear~B-1)
b2
s2<-seq(min(size),max(size),length=200)
s2
Xp2<-bs(s2,knots=sj,Boundary.knots=c(1.420,2.980),degree=1)
Xp2[,1:6]
plot(size,wear)
lines(s2,Xp2[,1:6]%*%coef(b))
B=bs(s,knots=sj,Boundary.knots=c(1.420,2.980),degree =1)
matplot(s,B,type="l")
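If the first six degree-1 B-spline columns reproduce the tent basis, the two fitted curves plotted above should coincide; a quick numerical check (a sketch, assuming the objects above are still in the workspace):

range(Xp %*% coef(b) - Xp2[, 1:6] %*% coef(b))  # should be essentially zero if the bases match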
data("engine")
engine
attach(engine)
plot(size,wear,xlab='Engine capacity',ylab='Wear index')
tf<-function(x,xj,j){
dj<-xj*0;
dj[j]<-1
approx(xj,dj,x)$y # approx return two subject, only return y here
}
tf.X<-function(x,xj){
##tent function basis matrix given data x
## and knot sequnce xj
nk<-length(xj);
n<-length(x)
X<-matrix(NA,n,nk)
for(j in 1:nk){
X[,j]<-tf(x,xj,j)
}
X
}
sj<-seq(min(size),max(size),length=6)
sj
X<-tf.X(size,sj)
X
b<-lm(wear~X-1)
b
s<-seq(min(size),max(size),length=200)
Xp<-tf.X(s,sj)
plot(size,wear)
lines(s,Xp%*%coef(b))
Xp
library(splines)
B=bs(size,knots=sj,Boundary.knots=c(1.420,2.980),degree=1)
B
b2<-lm(wear~B-1)
b2
s2<-seq(min(size),max(size),length=200)
s2
Xp2<-bs(s2,knots=sj,Boundary.knots=c(1.420,2.980),degre=1)
Xp2[,1:6]
plot(size,wear)
lines(s2,Xp2[,1:6]%*%coef(b))
B=bs(s,knots=sj,Boundary.knots=c(1.420,2.980),degree =1)
matplot(s,B,type="l")
Wednesday, September 11, 2019
A function for the Gauss-Seidel iterative method
The Gauss-Seidel iterative method is essentially the backfitting method used to fit GAM models.
For example, to solve the system of equations
5x1 - 2x2 + 3x3 = -1
-3x1 + 9x2 + x3 = 2
2x1 - x2 - 7x3 = 3
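Writing the coefficient matrix as $B = L + D + U$ (strictly lower triangular, diagonal, and strictly upper triangular parts), each Gauss-Seidel sweep computes

$$x^{(k+1)} = (L + D)^{-1}\left(C - U x^{(k)}\right),$$

which is the update implemented in the function below.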
library(MASS)  # needed for the ginv() function

Gauss_Seidel <- function(A, b, x0) {
  B <- t(A) %*% A  # normal-equations matrix
  C <- t(A) %*% b
  iter <- 100      # number of Gauss-Seidel iterations to run
  L <- lower.tri(B) * B  # strictly lower triangle of B; lower.tri() returns logicals, hence the * B
  U <- upper.tri(B) * B  # strictly upper triangle of B
  D <- diag(diag(B))     # diagonal of B
  n <- 1
  while (n <= iter) {
    print(x0)
    # Gauss-Seidel update
    x1 <- ginv(L + D) %*% (C - U %*% x0)
    x0 <- x1
    n <- n + 1
  }
  return(x0)
}

A <- matrix(c(5,-3,2,-2,9,-1,3,1,-7), ncol=3, nrow=3)
b <- matrix(c(-1,2,3), ncol=1, nrow=3)
x0 <- matrix(0, nrow=3, ncol=1)  # starting value
Gauss_Seidel(A, b, x0)
The starting value and the first ten iterations print:
> Gauss_Seidel(A,b,x0)
[,1]
[1,] 0
[2,] 0
[3,] 0
[,1]
[1,] -0.1315789
[2,] 0.1380049
[3,] -0.4007323
[,1]
[1,] -0.01103352
[2,] 0.23926763
[3,] -0.41380921
[,1]
[1,] 0.09220576
[2,] 0.28760601
[3,] -0.41850252
[,1]
[1,] 0.1415692
[2,] 0.3105375
[3,] -0.4207159
[,1]
[1,] 0.1649876
[2,] 0.3214148
[3,] -0.4217656
[,1]
[1,] 0.1760960
[2,] 0.3265744
[3,] -0.4222636
[,1]
[1,] 0.1813651
[2,] 0.3290218
[3,] -0.4224998
[,1]
[1,] 0.1838645
[2,] 0.3301827
[3,] -0.4226118
[,1]
[1,] 0.1850501
[2,] 0.3307334
[3,] -0.4226650
[,1]
[1,] 0.1856124
[2,] 0.3309946
[3,] -0.4226902
# Check the result with the solve() function:
solve(A, b)
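Note that Gauss_Seidel iterates on the normal equations $A^\top A x = A^\top b$ (the matrices B and C inside the function) rather than on $Ax = b$ directly; because $A^\top A$ is symmetric positive definite, Gauss-Seidel is guaranteed to converge. solve(A, b) should agree with the limit of the iterates above, approximately (0.1861, 0.3312, -0.4227).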
Friday, March 29, 2019
NLPNRA (nonlinear programming: Newton-Raphson algorithm) call in SAS/IML
CALL NLPNRA( rc, xr, "fun", x0 <,opt, blc, tc, par, "ptit", "grd", "hes">);
rc: a positive value indicates that the optimization succeeded; a negative value indicates failure.
xr: the returned parameter estimates.
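For comparison, a minimal Newton-Raphson sketch in R (the newton() helper and the example objective are hypothetical, written for this note; they are not part of IML):

## Newton-Raphson: repeatedly solve H(x) step = g(x) and subtract the step
newton <- function(grad, hess, x0, tol = 1e-8, maxit = 100) {
  x <- x0
  for (i in 1:maxit) {
    step <- solve(hess(x), grad(x))  # Newton step H^{-1} g
    x <- x - step
    if (max(abs(step)) < tol) break  # stop when the step is negligible
  }
  x
}

## Example: minimize f(x1, x2) = (x1 - 1)^2 + 2 * (x2 + 0.5)^2
grad <- function(x) c(2 * (x[1] - 1), 4 * (x[2] + 0.5))
hess <- function(x) diag(c(2, 4))
newton(grad, hess, c(0, 0))  # converges to (1, -0.5)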
Monday, March 18, 2019
Hermite and Smith forms
http://www.math.ubc.ca/~cass/siegel/Smith.pdf
https://en.wikipedia.org/wiki/Hermite_normal_form
SAS
HERMITE Function
The HERMITE function uses elementary row operations to compute the Hermite normal form of a matrix. For square matrices this normal form is upper triangular and idempotent.
If the argument is square and nonsingular, the result is the identity matrix. In general the result satisfies the following four conditions (Graybill, 1969):
- It is upper triangular.
- It has only values of 0 and 1 on the diagonal.
- If a row has a 0 on the diagonal, then every element in that row is 0.
- If a row has a 1 on the diagonal, then every off-diagonal element is 0 in the column in which the 1 appears.
The following statements compute an example from Graybill (1969):
a = {3 6 9, 1 2 5, 2 4 10};
h = hermite(a);
print h;
http://support.sas.com/documentation/cdl/en/imlug/65547/HTML/default/viewer.htm#imlug_langref_sect260.htm
For a square matrix, the Hermite normal form can be transformed into row-echelon form by moving the all-zero rows to the bottom.
The trace of the Hermite form equals the rank of the matrix.
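R has no built-in equivalent of HERMITE, but a minimal sketch of the same reduction (a hypothetical hermite_form(), written for this note) could look like:

hermite_form <- function(A, tol = 1e-9) {
  A <- as.matrix(A)
  n <- nrow(A)
  R <- A
  pivots <- integer(0)
  row <- 1
  for (col in 1:n) {
    if (row > n) break
    p <- which(abs(R[row:n, col]) > tol)[1]   # first usable pivot at or below 'row'
    if (is.na(p)) next                        # no pivot in this column
    p <- p + row - 1
    R[c(row, p), ] <- R[c(p, row), ]          # swap the pivot row up
    R[row, ] <- R[row, ] / R[row, col]        # scale the pivot to 1
    for (i in setdiff(1:n, row))              # eliminate the column elsewhere
      R[i, ] <- R[i, ] - R[i, col] * R[row, ]
    pivots <- c(pivots, col)
    row <- row + 1
  }
  H <- matrix(0, n, n)                        # rows without a pivot stay zero
  for (k in seq_along(pivots)) H[pivots[k], ] <- R[k, ]  # pivot rows onto the diagonal
  H
}
hermite_form(matrix(c(3, 1, 2, 6, 2, 4, 9, 5, 10), 3, 3))  # the Graybill example above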
Wednesday, March 6, 2019
A row vector is a function
A row vector is a function that takes a column vector as input and produces a scalar.
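In symbols, a row vector $a^\top$ is the linear map

$$a^\top : \mathbb{R}^n \to \mathbb{R}, \qquad x \mapsto a^\top x = \sum_{i=1}^{n} a_i x_i.$$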
Tuesday, March 5, 2019
Lagrange Multipliers
Lagrange multipliers connect the constrained and penalized formulations of ridge and lasso regression.
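For ridge regression, the constrained problem and its Lagrangian (penalized) form are

$$\min_{\beta} \|y - X\beta\|_2^2 \ \text{ subject to } \ \|\beta\|_2^2 \le t \quad\Longleftrightarrow\quad \min_{\beta} \|y - X\beta\|_2^2 + \lambda \|\beta\|_2^2,$$

and the lasso replaces the penalty with $\lambda\|\beta\|_1$; each bound $t$ corresponds to some multiplier $\lambda \ge 0$.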
http://www.math.harvard.edu/archive/21a_spring_09/PDF/11-08-Lagrange-Multipliers.pdf