2.5 years ago by Aaron Lun • Cambridge, United Kingdom

If your other data sets are not high-throughput, what's the harm in running the `t.test` function over a couple of loops? It probably wouldn't take too long for a small number of features. Besides, empirical Bayes is more versatile than you might think. Even though it's used in *limma* to share information across genes, it can also be used more generally, e.g., across genomic windows, TF binding sites or baseball players. So, you might consider whether you could just use the moderated t-test for your data, even if that data is lower-throughput.
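To make the looping suggestion concrete, here is a minimal sketch; `y` and `group` are made-up stand-ins for your actual data, not anything from *limma*:

```r
# Sketch: one t.test() per feature of a small matrix.
# 'y' and 'group' are hypothetical placeholders for the real data.
set.seed(42)
y <- matrix(rnorm(100), nrow=10)   # 10 features (rows), 10 samples (columns)
group <- rep(c("A", "B"), each=5)  # a simple two-group design

# A Welch t-test per row; perfectly tolerable when the feature count is small.
pvals <- apply(y, 1, function(row) {
    t.test(row[group=="A"], row[group=="B"])$p.value
})
```

With only tens or hundreds of features, this finishes essentially instantly, so there is little practical cost to the naive loop.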

With all that said, if you want to compute unmoderated t-statistics, I guess you'd do something like this:

```r
# where 'fit' is the output from lmFit() or contrasts.fit().
unmod.t <- fit$coefficients / fit$stdev.unscaled / fit$sigma
pval <- 2 * pt(-abs(unmod.t), fit$df.residual)
```
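To see the arithmetic in isolation, here is a self-contained sketch using a mock `fit` list with the same component names; in practice these values would come from `lmFit()`, and the numbers below are invented for illustration:

```r
# Mock of the relevant MArrayLM components (assumption: real values come from lmFit()).
fit <- list(
    coefficients   = c(1.2, -0.5, 0.1),   # estimated log-fold changes, one per feature
    stdev.unscaled = c(0.45, 0.45, 0.45), # unscaled standard deviations of the coefficients
    sigma          = c(0.8, 1.1, 0.9),    # residual standard deviations
    df.residual    = c(8, 8, 8)           # residual degrees of freedom
)

# Unmoderated t-statistic: coefficient divided by its full standard error,
# i.e., stdev.unscaled * sigma, with no empirical Bayes shrinkage of sigma.
unmod.t <- fit$coefficients / fit$stdev.unscaled / fit$sigma

# Two-sided p-value from the t distribution on the residual degrees of freedom.
pval <- 2 * pt(-abs(unmod.t), fit$df.residual)
```

The key difference from `eBayes()` output is that `sigma` here is the raw per-feature residual standard deviation, so nothing is borrowed across features.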

However, you had better have a large sample size; otherwise, your power to reject the null hypothesis will be low.

modified 2.5 years ago • written 2.5 years ago by Aaron Lun • **21k**