When estimating quasi-likelihood dispersions via glmQLFit() in edgeR with a DGEList as input, the method glmQLFit.DGEList() explicitly passes lib.size = NULL to the lower-level function glmQLFit(). This forces the library sizes to be the column sums when they are eventually passed on to .compressOffsets(). I'm really struggling to see the justification for this.
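A minimal sketch of how I've been checking this (toy data; true.lib is a hypothetical set of library sizes chosen so they can't be confused with the column sums):

```r
library(edgeR)

set.seed(42)
# Toy data: 100 windows across 4 samples.
counts <- matrix(rnbinom(400, mu = 10, size = 5), ncol = 4)
true.lib <- c(2e6, 3e6, 2.5e6, 4e6)   # deliberately not colSums(counts)
y <- DGEList(counts, lib.size = true.lib)

group <- gl(2, 2)
design <- model.matrix(~ group)
y <- estimateDisp(y, design)
fit <- glmQLFit(y, design)

# Which library sizes ended up in the offsets?
fit$offset[1, ]        # offsets actually used for the first window
log(true.lib)          # what I supplied (up to an additive constant)
log(colSums(counts))   # the column sums
```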
My situation is that I'm using sliding windows via the workflow presented at https://f1000research.com/articles/4-1080/v2, and as such the library size is not the sum of the reads in each column. I would like to use the correct library sizes that I have assigned to the lib.size column in the samples element of my DGEList.
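For context, the DGEList in that workflow is built roughly like this (bam.files is a placeholder for my BAM files):

```r
library(csaw)

data <- windowCounts(bam.files, width = 150)
y <- asDGEList(data)

# The library sizes are the total reads per BAM file (data$totals),
# not the per-window column sums:
y$samples$lib.size
colSums(y$counts)
```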
Clearly, I can just write my own function to get around this, e.g. by calling the lower-level glmQLFit() directly (sketch below). However, is this an intentional decision with solid reasoning that's beyond me, or just one that slipped through?
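For reference, the sort of workaround I have in mind, sketched under the assumption that the lower-level glmQLFit() accepts lib.size directly (as implied above) and that dispersions have already been estimated with estimateDisp():

```r
# Bypass glmQLFit.DGEList() and call the lower-level glmQLFit() on the
# raw count matrix, supplying my own library sizes.
fit <- glmQLFit(y$counts, design,
                dispersion = y$trended.dispersion,
                lib.size = y$samples$lib.size)
```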