Hi all,
I have been looking around for this in the already-asked questions, but I could not find it, so I apologize if this has been asked before.
I would like to use quantile normalization. What I am doing is looking at 2 datasets in which treated samples are compared with control samples. The commands I am using are as follows (using the voom function from the limma package):
library(edgeR)
library(limma)
initDGE <- DGEList(counts = counts, group = groups)
quant <- voom(initDGE, design, normalize.method = "quantile")
quantNorm <- quant$E  # matrix of normalized logCPM values
I know that the 'quantNorm' object holds the normalized data.
But does quantile normalization, the way I run it here, take the library sizes into account and correct for them? I have a feeling that, given how quantile normalization works, it does not really matter whether it does. Is that correct?
Thank you for your help
Thanks, Gordon!
It was just that, with quantile normalization, it is not so directly transparent how one gets from the raw counts and library sizes to the normalized values.
It seems transparent to me. The voom help page says:
"voom performs the following specific calculations. First, the counts are converted to logCPM values, adding 0.5 to all the counts to avoid taking the logarithm of zero. The matrix of logCPM values is then optionally normalized."
It is clear from this that the library sizes have already been divided out (to form log-counts-per-million) before normalization is even done.
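The first step from the help page can be sketched in base R (no packages needed; the small 'counts' matrix here is just made-up illustration data, and this mimics voom's logCPM formula, not its full modelling):

```r
# Toy genes-x-samples count matrix (illustrative values only)
counts <- matrix(c(10, 0, 25, 100, 50, 5), nrow = 3)

# Library sizes are the column (sample) totals
lib.size <- colSums(counts)

# log2 counts-per-million with the 0.5 offset described in the help page;
# each column is divided by its own library size BEFORE any normalization
logCPM <- log2(t((t(counts) + 0.5) / (lib.size + 1)) * 1e6)
```

Quantile normalization is then applied to this logCPM matrix, so by the time it runs, the library-size correction has already happened.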