limma for Paired Samples vs. lme
wewolski (@wewolski-8499) wrote:

Hi,

I am comparing the output of limma and lme. The model I fit with lme is:

    library(nlme)  # lme() comes from the nlme package
    le <- lme(lmIntensity ~ Condition, random = ~1 | Donor, data = x)

Since I have repeated measures (Condition) on the same Donor and I am not interested in the Donor effect itself, I model Donor as a random effect.
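
The per-protein effect sizes and p-values that I compare below (CMP.fc, CMP.p) come from the coefficient table of this fit, roughly like this (a minimal sketch, assuming the condition of interest is coded as ConditionCMP):

    # coefficient table of the lme fit: Value, Std.Error, DF, t-value, p-value
    tt <- summary(le)$tTable
    CMP.fc <- tt["ConditionCMP", "Value"]      # effect size for condition CMP
    CMP.p  <- tt["ConditionCMP", "p-value"]    # unmoderated p-value from the mixed model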

For paired samples in limma, according to the limma user's guide (section 9.4.1), I just need the following design matrix:

library(limma)
designm <- with(des, model.matrix(~ Donor + Conditon))  # Donor as a fixed blocking factor
fit2 <- lmFit(intMatrix, designm)
fit2.eb <- eBayes(fit2)
  

(Q1: How would I specify in limma the equivalent of the linear model lm(lmIntensity ~ Donor + Condition)?)

Then, to compare lme with limma, I compute the ordinary (unmoderated) t-statistics and p-values from the eBayes output:

  # ordinary (unmoderated) t-statistics: coefficients / (sigma * stdev.unscaled)
  coefsigma <- with(fit2.eb, sweep(coefficients, 1, sigma, "/"))
  t.ord <- coefsigma / fit2.eb$stdev.unscaled
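
The ordinary two-sided p-values reported below (p.ord.ConditonCMP) then use the residual degrees of freedom of each fit; a minimal sketch:

  # two-sided p-values from the ordinary t-statistics, one residual df per protein
  p.ord <- 2 * pt(abs(t.ord), df = fit2.eb$df.residual, lower.tail = FALSE)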

The effect sizes for the conditions are the same (the intercepts differ, and I am not sure why; does anyone know? Q2). Since I am only interested in differences between the conditions, that is OK for the moment.

However, although many of the unmoderated p-values (computed as shown above) are essentially identical between the two methods, some are quite different (see output below).

> head(CMPComp)
   row.names Effect.ConditonCMP p.ord.ConditonCMP      CMP.fc        CMP.p
1 A0A075B759         0.01522833      0.7775432739  0.01522833 0.7775432740
2 A0A096LP55        -0.36954684      0.0111800249 -0.36954684 0.0111800250
3 A0A0B4J2A2        -0.01566147      0.8673074288 -0.01566147 0.8673073828
4     A0AVT1        -0.27562322      0.0003654401 -0.27562322 0.0003654401
5     A0FGR8         0.22901692      0.0036717127  0.22901692 0.0036717088
6     A1L0T0         0.16303605      0.1694970420  0.16303605 0.1694970431

> with(CMPComp,head(CMPComp[(CMP.p - p.ord.ConditonCMP) < -0.05,]))
     row.names Effect.ConditonCMP p.ord.ConditonCMP     CMP.fc     CMP.p
320     O75688         -0.0577627         0.3186220 -0.0577627 0.2672571
1978    Q5T8P6          0.2838912         0.2477060  0.2838912 0.1966852
2912    Q9NRA8          0.1834104         0.2209531  0.1834104 0.1674343

 

Here, CMP.p (and CMP.fc) are the p-values (and effect sizes) from the lme model above for condition CMP, while p.ord.ConditonCMP is the ordinary p-value for the same condition computed with limma.


Q3: What explains the differences? In most of the cases that differ, limma gives larger p-values than lme.

Tags: limma • linear model • paired samples
Aaron Lun (@alun) replied:

Q1. Well, you've already done it, so I'm not sure what the problem is.

Q2. This is probably due to differences in how the blocking factor is treated. In limma, the intercept represents the log-expression of the first condition in the first donor. In nlme, the random effect has a mean of zero, so the intercept just represents the log-expression of the first condition across all donors.

Q3. The most obvious reason for the difference is that you've asked limma to treat Donor as a fixed effect, which requires estimation of the coefficients for the blocking terms. In contrast, nlme treats Donor as a random effect, which avoids the need to estimate the blocking terms but involves estimation of the variance of the random effect. The differences in the p-values between the two methods reflect the differences in what terms are estimated and how uncertainty in estimation is accounted for. I wouldn't really worry about the exact magnitudes of the p-values, as long as the final decision (reject or accept the null hypothesis) is the same for most genes.

But why are you using random effects models in the first place? This is not necessary for your analysis - see A: How does edgeR compare to lme4? for a discussion of this topic (about edgeR, but the same arguments apply to limma). In particular, per-gene analyses with nlme do not benefit from information sharing between genes via empirical Bayes shrinkage in limma.
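
To see what that shrinkage does to the test statistics, you can compare the ordinary and moderated t-statistics directly; a small sketch using fields of the eBayes output (see ?eBayes for the exact definitions):

    # the moderated t keeps the same effect size but replaces each protein's variance
    # sigma^2 with the posterior s2.post, which is shrunk towards a common prior value
    t.mod <- with(fit2.eb, coefficients / (stdev.unscaled * sqrt(s2.post)))
    all.equal(t.mod, fit2.eb$t)  # this is the moderated t that eBayes() reports
    # its p-values use df.residual + df.prior degrees of freedom, hence the gain in power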

Ryan C. Thompson (@ryan-c-thompson-5618) replied:

I'm not familiar with the model specification syntax of lme, but it looks like you're treating Donor as a random effect in lme and a fixed effect in limma. limma can fit a mixed effects model with one random effect term using the duplicateCorrelation function. I would expect this to give you p-values closer to lme.

designm <- with(des, model.matrix(~ Conditon))
# estimate the consensus within-donor correlation, then pass it to lmFit()
dupcor <- duplicateCorrelation(intMatrix, designm, block = des$Donor)
fit2 <- lmFit(intMatrix, designm, block = des$Donor, correlation = dupcor$consensus)
fit2.eb <- eBayes(fit2)

Also, you don't need to compute p-values manually from the limma statistics. That's what the topTable function is for.
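
For example, something like this (a sketch; the coefficient name is taken from your output above):

    # ranked results for the CMP condition: log-fold changes, moderated t, raw and adjusted p-values
    topTable(fit2.eb, coef = "ConditonCMP", number = Inf)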

