Q value obtained from the qvalue package is lower than the P value?
@tombernardryan-12173

Hi all, I'm wondering if someone could put me on the right path to using the "qvalue" package correctly. I have an actual P value of 0.05, and I have a list of 1,000 randomised p values: they range from 0.002 to 0.795, their mean is 0.399 and their median is 0.45; 869 of the randomised p values are > 0.05.

I want to work out the false discovery rate (FDR) (Q; as described by Storey and Tibshirani in 2003) for my original P value, defined as the number of expected false positives over the number of significant results at my original P value.

So, for my original P value, I want one Q value, calculated as described above from the 1,000 randomised p values.
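
For concreteness, here is a minimal R sketch of that calculation (the variable names are illustrative, and runif below is only a stand-in for the real randomised p values); with the counts quoted above, 131 of the 1,000 randomised p values are at or below 0.05, so it would come out to roughly 0.13.

obs_p <- 0.05
rand_pvals <- runif(1000)        # stand-in: use the 1,000 randomised p values here

# expected false positives at the observed threshold = fraction of
# randomised p values at or below obs_p (times the one observed test)
expected_fp <- mean(rand_pvals <= obs_p)

n_significant <- 1               # only the single observed test is called significant
empirical_fdr <- expected_fp / n_significant
empirical_fdr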

I wrote this code:

library(qvalue)

# the 1,000 randomised p values
pvals <- c(list_of_p_values_obtained_from_randomisations)

qobj <- qvalue(p = pvals)

# pi0 is the estimated proportion of true null hypotheses
r_output <- qobj$pi0

In this case, r_output (i.e. what I took to be the Q value) is 0.0062. I would have expected the FDR Q value (i.e. the number of expected false positives over the number of significant results) to be over 0.05, given that 869 of the randomised p values are > 0.05, which makes me suspect that qobj$pi0 isn't the right variable to be looking at. So my problem (or misunderstanding) is that I have an actual P value of 0.05 but a Q value that is lower, 0.006.

Could someone please tell me where I'm going wrong?

Thanks

Tom

qvalue pvalue r multiple comparisons
@storey-john-d-6576

It's a red flag that your largest p-value is 0.795. The assumption is that true null p-values are Uniform(0,1), so when this assumption is met and a reasonable number of tests are performed, the largest p-value should be close to 1.
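
As a quick illustration (simulated data, not your values): with 1,000 truly null p-values drawn from Uniform(0,1), the maximum is essentially always very close to 1.

set.seed(1)
null_p <- runif(1000)   # 1,000 p-values simulated under the null
max(null_p)             # close to 1; the expected maximum of n uniform draws is n/(n+1)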

From FAQ #1 (https://github.com/StoreyLab/qvalue/blob/master/README.md#frequently-asked-questions):

  1. This package produces "adjusted p-values", so how is it possible that my adjusted p-values are smaller than my original p-values?

The q-value is not an adjusted p-value, but rather a population quantity with an explicit definition. The package produces estimates of q-values and the local FDR, both of which are very different from p-values. The package does not perform a Bonferroni correction on p-values, which returns "adjusted p-values" that are larger than the original p-values. The maximum possible q-value is pi_0, the proportion of true null hypotheses. The maximum possible p-value is 1. When considering a large number of hypothesis tests where there is a nontrivial fraction of true alternative p-values, we will have both an estimate pi_0 < 1 and we will have some large p-values close to 1. Therefore, the maximal estimated q-value will be less than or equal to the estimated pi_0 but there will also be a number of p-values larger than the estimated pi_0. It must be the case then that at some point p-values become larger than estimated q-values.
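
A small demonstration of that last point with simulated data (assuming the Bioconductor qvalue package is installed): when a nontrivial fraction of the tests are truly alternative, the largest estimated q-value is capped at the estimated pi_0, so the largest p-values end up bigger than their q-values.

library(qvalue)

set.seed(1)
p <- c(runif(800), rbeta(200, 1, 20))   # 800 true nulls plus 200 p-values skewed toward 0
qobj <- qvalue(p)

qobj$pi0             # estimated proportion of true nulls (less than 1 here)
max(qobj$qvalues)    # never exceeds the estimated pi0
max(p)               # close to 1, so the largest p-values exceed their q-values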

 
