The QL framework provides more accurate type I error rate control, as it accounts for the uncertainty of the dispersion estimates. In contrast, the exact test assumes that the estimated dispersion is the true value, which can result in some inaccuracy. (The "exact" refers to the fact that the p-value is calculated exactly rather than relying on approximations; however, this only results in exact type I error control when the estimated and true dispersions are the same.) For this reason, I prefer using the QL methods whenever I apply edgeR.
The QL methods (and the GLM methods more generally) are also more flexible with respect to the experimental design. For example, if you obtained a second batch of samples, all you would need to do in the GLM framework is add a blocking factor for the batch to the design matrix, whereas the exact test cannot accommodate an extra blocking factor at all.
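To make that concrete, here is a minimal sketch of the QL pipeline with an additive batch term. It assumes you have a count matrix `counts` plus factors `group` and `batch` for your samples (these names are placeholders, not part of edgeR itself), and that edgeR is installed:

```r
library(edgeR)

y <- DGEList(counts = counts, group = group)
y <- calcNormFactors(y)

# Blocking on batch is just one extra term in the design matrix:
design <- model.matrix(~ batch + group)

y <- estimateDisp(y, design)
fit <- glmQLFit(y, design)
res <- glmQLFTest(fit, coef = ncol(design))  # test the group effect
topTags(res)
```

The exact test, by contrast, only compares two groups of a classic `DGEList` and has nowhere to put the `batch` term.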
In summary, while both methods will work for your data set, the QL F-test is probably the better choice. There are some situations where the QL F-test doesn't work well: for example, if you don't have replicates, you'd have to supply a fixed dispersion, which defeats the whole point of modelling estimation uncertainty. Another situation is where the dispersions are very large and the counts are very small, in which case some of the approximations in the QL framework seem to fail. In such cases, I usually switch to the LRT rather than the exact test, for the reasons of design flexibility mentioned above.
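For reference, switching to the LRT is a small change to the same GLM pipeline. This sketch again assumes a count matrix `counts` and factors `group` and `batch` (placeholder names) and an installed edgeR:

```r
library(edgeR)

y <- DGEList(counts = counts, group = group)
y <- calcNormFactors(y)
design <- model.matrix(~ batch + group)
y <- estimateDisp(y, design)

# Likelihood ratio test instead of the QL F-test; the design
# (and thus the batch blocking) carries over unchanged.
fit <- glmFit(y, design)
res <- glmLRT(fit, coef = ncol(design))
topTags(res)
```

The only difference from the QL code is `glmFit`/`glmLRT` in place of `glmQLFit`/`glmQLFTest`; the LRT treats the dispersions as known rather than modelling their uncertainty.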
Thanks Aaron!
I am also facing the question of which test I should use. I am working with miRNA data.
How small do the counts need to be to be considered very small?