Every time I think I know what's going on, suddenly there's another layer of complications.
Friday, June 30, 2017
Saturday, June 24, 2017
Derivatives of even and odd functions
The derivative of an even function is an odd function, and the derivative of an odd function is an even function. For instance, if f(-x) = f(x), differentiating both sides with the chain rule gives -f'(-x) = f'(x), so f'(-x) = -f'(x), i.e. f' is odd; the odd case is analogous.
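A quick numerical spot-check in R, using the hypothetical examples f(x) = x^2 (even) and g(x) = x^3 (odd):

```r
# f(x) = x^2 is even; its derivative 2x should be odd.
# g(x) = x^3 is odd; its derivative 3x^2 should be even.
f_prime <- function(x) 2 * x
g_prime <- function(x) 3 * x^2
f_prime(-2) == -f_prime(2)  # TRUE: 2x is odd
g_prime(-2) ==  g_prime(2)  # TRUE: 3x^2 is even
```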
Friday, June 23, 2017
Friday, June 16, 2017
Tuesday, June 13, 2017
Saturday, June 10, 2017
The median of a series of numbers minus its median is always zero
1. Suppose we have an odd number of values, say three: x1, x2, x3, already ordered,
so the median is x2.
The new sequence is x1 - x2, 0, x3 - x2,
and its median is 0.
2. Suppose we have an even number of values, say four: y1, y2, y3, y4, already ordered,
so the median is (y2 + y3)/2.
The new sequence is: y1 - (y2+y3)/2, y2 - (y2+y3)/2, y3 - (y2+y3)/2, y4 - (y2+y3)/2.
Its median is the average of the two middle values: [(y2 - (y2+y3)/2) + (y3 - (y2+y3)/2)]/2, which is also 0.
This observation is useful when we estimate the shift estimator of a sign-scores test.
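The observation is easy to verify numerically in R, for both an odd and an even sample size (the vectors below are made-up examples):

```r
# median(x - median(x)) is 0 regardless of whether n is odd or even.
x_odd  <- c(3, 1, 7)       # odd n: median is the middle value
x_even <- c(3, 1, 7, 10)   # even n: median averages the two middle values
median(x_odd  - median(x_odd))   # 0
median(x_even - median(x_even))  # 0
```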
Thursday, June 8, 2017
Numerically solving the estimating equation for the shift estimator of the Van der Waerden (normal scores) test
# Data from HMC p. 557.
s1 <- c(51.9,56.9,45.2,52.3,59.5,41.4,46.4,45.1,53.9,42.9,41.5,55.2,32.9,54.0,45.0)
s2 <- c(59.2,49.1,54.4,47.0,55.9,34.9,62.2,41.6,59.3,32.7,72.1,43.8,56.8,76.7,60.3)
L      <- vector("list", 21)  # shifted second sample, s2 - d
L2     <- vector("list", 21)  # combined sample c(s1, s2 - d)
L_rank <- vector("list", 21)  # ranks of the combined sample
## 21 candidate shifts: d runs from 4 to 6 in steps of 0.1
for (d in seq(from = 4, to = 6, by = 0.1)) {
  i <- round((d - 3.9) / 0.1)   # index 1..21 for this value of d
  L[[i]]      <- s2 - d
  L2[[i]]     <- c(s1, L[[i]])
  L_rank[[i]] <- rank(L2[[i]])
}
L_w     <- vector("list", 21)  # ranks belonging to the shifted s2
L_ns    <- vector("list", 21)  # normal scores qnorm(rank / (n + 1)), n = 30
L_sum_w <- vector("list", 21)  # value of the estimating equation at each d
for (i in 1:21) {
  for (j in 16:30) {            # positions 16..30 hold the shifted s2
    L_w[[i]][j - 15]  <- L_rank[[i]][j]
    L_ns[[i]][j - 15] <- qnorm(L_w[[i]][j - 15] / 31, 0, 1)
  }
  L_sum_w[[i]] <- sum(L_ns[[i]])
}
#### L_sum_w, one value per candidate shift (delta = 3.9 + 0.1 * index):
index  delta   value
  1    4.0     0.6743001
  2    4.1     0.3482201
  3    4.2     0.2545677
  4    4.3     0.2545677
  5    4.4     0.2545677
  6    4.5     0.21268
  7    4.6     0.171218
  8    4.7     0.171218
  9    4.8     0.171218
 10    4.9     0.1301031
 11    5.0     0.08926116
 12    5.1    -0.0229044
 13    5.2    -0.06941334
 14    5.3    -0.1639976
 15    5.4    -0.4882569
 16    5.5    -0.5844623
 17    5.6    -0.6919131
 18    5.7    -0.7492664
 19    5.8    -0.7492664
 20    5.9    -0.7492664
 21    6.0    -0.7492664
We can see that the 12th value, -0.0229044, is the closest to zero; the corresponding delta is 5.1 (from the grid 4 to 6 in steps of 0.1).
Therefore, 5.1 is the solution of the estimating equation.
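The same grid search can be written more compactly by wrapping the estimating equation in a function and vectorizing over the candidate shifts (a sketch, assuming the same s1 and s2 as above):

```r
s1 <- c(51.9,56.9,45.2,52.3,59.5,41.4,46.4,45.1,53.9,42.9,41.5,55.2,32.9,54.0,45.0)
s2 <- c(59.2,49.1,54.4,47.0,55.9,34.9,62.2,41.6,59.3,32.7,72.1,43.8,56.8,76.7,60.3)

# Value of the normal-scores estimating equation at shift d:
# sum of qnorm(rank / (n + 1)) over the ranks of the shifted second sample.
ns_sum <- function(d) {
  n <- length(s1) + length(s2)
  r <- rank(c(s1, s2 - d))                  # joint ranks of s1 and s2 - d
  sum(qnorm(r[(length(s1) + 1):n] / (n + 1)))
}
deltas <- seq(4, 6, by = 0.1)
sums   <- sapply(deltas, ns_sum)
deltas[which.min(abs(sums))]  # shift whose equation value is nearest zero: 5.1
```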
Tuesday, June 6, 2017
Difference between logit and probit models
https://stats.stackexchange.com/a/30909/61705
Family           | Default Link Function
binomial         | (link = "logit")
gaussian         | (link = "identity")
Gamma            | (link = "inverse")
inverse.gaussian | (link = "1/mu^2")
poisson          | (link = "log")
quasi            | (link = "identity", variance = "constant")
quasibinomial    | (link = "logit")
quasipoisson     | (link = "log")
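In R, switching between the two models is just a change of link inside the binomial family. A minimal sketch, using a small made-up binary dataset (y and x below are hypothetical, not from the linked answer):

```r
# Fit the same binary regression with a logit link and a probit link.
y <- c(0, 0, 0, 1, 0, 1, 1, 1)
x <- c(1, 2, 3, 4, 5, 6, 7, 8)
fit_logit  <- glm(y ~ x, family = binomial(link = "logit"))
fit_probit <- glm(y ~ x, family = binomial(link = "probit"))
# The fitted curves are nearly identical; logit slopes are roughly
# 1.6 times the probit slopes because of the different latent scales.
coef(fit_logit)
coef(fit_probit)
```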
Sunday, June 4, 2017
Normal score and rankit
The second meaning of normal score is associated with data values derived from the ranks of the observations within the dataset. A given data point is assigned a value which is either exactly, or approximately, the expectation of the order statistic of the same rank in a sample of standard normal random variables of the same size as the observed dataset.[1] Thus the meaning of a normal score of this type is essentially the same as a rankit, although the term "rankit" is becoming obsolete. In this case the transformation creates a set of values matched, in a certain way, to what would be expected had the original data values arisen from a normal distribution.