Hello,
I am trying to understand the limma diffSplice function. When calculating the genewise coefficients, diffSplice weights the exon coefficients by 1/stdev.unscaled^2:
# Genewise betas
u2 <- 1/exon.stdev.unscaled^2
u2.rowsum <- rowsum(u2,geneid,reorder=FALSE)
gene.betabar <- rowsum(exon.coefficients*u2,geneid,reorder=FALSE) / u2.rowsum
Why does it do that? And why not weight by the scaled standard deviations instead, i.e. by 1/(sigma * stdev.unscaled)^2? I guess I also have trouble understanding what the stdev.unscaled values actually correspond to.
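For concreteness, here is a self-contained toy version of that computation. The numbers are made up, and exon.coefficients, exon.stdev.unscaled and geneid are just stand-ins for the matrices limma builds internally:

# Three exons belong to gene G1, two to gene G2
exon.coefficients <- c(1.0, 1.4, 0.6, 2.0, 2.2)    # made-up exon log-fold-changes
exon.stdev.unscaled <- c(0.5, 1.0, 0.5, 0.7, 0.7)  # made-up unscaled standard deviations
geneid <- c("G1", "G1", "G1", "G2", "G2")

u2 <- 1/exon.stdev.unscaled^2
u2.rowsum <- rowsum(u2, geneid, reorder=FALSE)
gene.betabar <- rowsum(exon.coefficients*u2, geneid, reorder=FALSE) / u2.rowsum
gene.betabar   # per-gene inverse-variance weighted mean of the exon coefficients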
Further, diffSplice rescales the exon coefficients and the moderated t-statistics using the exon leverages:
exon.1mleverage <- 1 - (u2 / u2.rowsum[g,,drop=FALSE])
exon.coefficients <- exon.coefficients / exon.1mleverage
exon.t <- exon.t / sqrt(exon.1mleverage)
What is the reason behind that? I could not find anything in the documentation or in the limma User's Guide. Any pointers to literature or other explanations would be greatly appreciated.
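Continuing the toy example above, the division by 1 - leverage appears to turn "exon minus gene average" into "exon minus the weighted average of the other exons of the same gene". This is just my own numeric check, not anything from the limma docs (as far as I can tell from the source, the genewise mean has already been subtracted from exon.coefficients by this point):

h <- u2 / u2.rowsum[geneid, 1]                           # each exon's leverage within its gene
centered <- exon.coefficients - gene.betabar[geneid, 1]  # exon coefficient minus genewise mean
adjusted <- centered / (1 - h)

# For the first exon of G1, the adjusted value equals that exon's coefficient
# minus the weighted mean of the remaining exons of the same gene:
i <- 1
others <- setdiff(which(geneid == "G1"), i)
adjusted[i]                                                                   # 0.24
exon.coefficients[i] - weighted.mean(exon.coefficients[others], u2[others])   # 0.24

But I would still like to understand why the t-statistics get the sqrt(1 - leverage) version of this correction.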
Thanks,
Stefan
Thank you for your answer. I understand how stdev.unscaled is calculated, but what is the reason for, or advantage of, using this value instead of the scaled version when weighting the exons?
All the exons for the same gene are assumed to share the same sigma. A common factor of 1/sigma^2 in every exon's weight cancels out of the weighted average, so weighting by 1/stdev.unscaled^2 gives exactly the same genewise betas as weighting by the scaled variances would.
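A quick toy check of the cancellation:

b <- c(1.0, 1.4, 0.6)   # exon coefficients for one gene (made-up)
u <- c(0.5, 1.0, 0.5)   # unscaled standard deviations (made-up)
sigma <- 1.7            # any residual SD shared by the exons of the gene

weighted.mean(b, 1/u^2)          # unscaled weights
weighted.mean(b, 1/(sigma*u)^2)  # scaled weights: identical, sigma cancels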