DESeq2 LFC shrinkage with few DEGs
Victor ▴ 10
@73f9cb53
Last seen 2.7 years ago
United States

I would like to calculate shrunken log fold changes from RNA-seq data, and I recently ran into the same case that was described in this old post by Mike Love: DESeq2 and shrinkage of log2 fold changes

One case where I would not use it, is if it is expected that nearly all genes will have no change and there is little to no variation across replicates (so near technical replication), and then say < 10 genes with very large fold changes. This scenario could occur in non-biological samples, for example technical replicates plus DE spike ins. The reason this would cause a problem is that the prior is formed according to a high percentile of the large fold changes, but it could miss if there were singular DE genes, and form a prior which is not wide enough to accommodate very large fold changes. It is trivial to turn off the prior in this case (betaPrior=FALSE).

In my case I actually do have biology where, for some contrasts, only 1 or 2 genes change significantly, and the built-in shrinkage methods indeed sometimes overshrink these to something close to zero.

I have been using the very hacky shrunkLFC = LFC * (1-pval), which at least moves things in approximately the right direction (and maybe, with sufficiently vigorous hand-waving, can be justified with a probabilistic interpretation?). But if anyone has more rigorous suggestions, I would love to hear them.
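
For what it's worth, against a standard results table this ad-hoc weighting is a one-liner (a sketch assuming `res` comes from `results(dds)`, with its usual `log2FoldChange` and `pvalue` columns):

```r
# Ad-hoc shrinkage: scale each MLE log2 fold change toward zero by
# (1 - p-value); genes with weak evidence (p near 1) are shrunk hardest.
res$shrunkLFC <- res$log2FoldChange * (1 - res$pvalue)
```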

Thanks!

RNASeq DESeq2
@mikelove
Last seen 6 days ago
United States

Another solution would be to manually set a default prior width (not an adaptive prior) for a distribution with wide tails, e.g. Cauchy. This is what the bayesglm function (in the arm package) does, but you can also do this within the DESeq2-apeglm framework.

If you set:

res <- lfcShrink(dds, ..., apeAdapt=FALSE, prior.control=...)

then for prior.control, see the man page ?apeglm, e.g. as described here:

https://rdrr.io/bioc/apeglm/man/apeglm.html

idx <- ... # put the indices of the coefficients *not* to shrink, e.g. 1 for intercept, etc.
prior.control <- list(no.shrink=idx,
  prior.mean=0, prior.scale=1, prior.df=1,
  prior.no.shrink.mean=0, prior.no.shrink.scale=15
)
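
Putting the pieces together, a minimal sketch (assuming `dds` is an already-fitted DESeqDataSet and that coefficient 2 in `resultsNames(dds)` is the contrast of interest; adjust both to your design):

```r
library(DESeq2)

# apeAdapt=FALSE turns off apeglm's adaptive prior; prior.control then
# fixes a heavy-tailed prior instead (prior.df=1 with prior.scale=1 is
# a standard Cauchy on the shrunken coefficients).
prior.control <- list(
  no.shrink = 1,                     # don't shrink the intercept
  prior.mean = 0, prior.scale = 1, prior.df = 1,
  prior.no.shrink.mean = 0, prior.no.shrink.scale = 15
)

res <- lfcShrink(dds, coef = 2, type = "apeglm",
                 apeAdapt = FALSE, prior.control = prior.control)
```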

This seems to work pretty well, thanks. I found that prior.scale=0.5 or prior.scale=0.25 gave reasonable-looking results, at least with the datasets I've tried so far.


I'm dealing with a similar situation. I assume the coefficients related to batch effects would also need to be included in idx, is that correct? Thanks for all your work.

Edit: answered my own question. I looked at the DESeq2 code and found that with default parameters only the coefficient of interest is shrunk, not the batch-effect coefficients. This might have been obvious to others.

For an approach that still calculates the prior based on the MLE, one can pass through multiplier, which scales the resulting prior.scale. Obviously this cannot be recommended without a very good reason to deviate from the published protocol.
