Hi,
It seems like the new version of DESeq2 (version 1.10.0) always uses the standard model matrix by default, rather than the expanded model matrix, for designs with an interaction. I tried to switch it back to using the expanded model matrix as follows, but got an error. Please help.
My experiment has two factors (Type with 2 levels and Day with 3 levels).
The experimental design I use is ~Type + Day + Type:Day
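For reference, here is a minimal sketch of how my object is set up (the count matrix below is a simulated stand-in for my data):

library(DESeq2)

# simulated counts: 1000 genes x 12 samples (2 Types x 3 Days x 2 replicates)
counts <- matrix(rnbinom(1000 * 12, mu = 100, size = 1), ncol = 12)
coldata <- data.frame(
  Type = factor(rep(c("single", "mixed"), each = 6), levels = c("single", "mixed")),
  Day  = factor(rep(rep(c("1", "2", "3"), each = 2), times = 2))
)
dds <- DESeqDataSetFromMatrix(countData = counts,
                              colData   = coldata,
                              design    = ~ Type + Day + Type:Day)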
> dds <- DESeq(dds, modelMatrixType = "expanded")
estimating size factors
estimating dispersions
gene-wise dispersion estimates
mean-dispersion relationship
final dispersion estimates
fitting model and testing
Error in nbinomWaldTest(object, betaPrior = betaPrior, quiet = quiet, :
expanded model matrices require a beta prior
Hi Michael,
Thanks for your fast response. I have taken a look at the thread, and I must agree that it is a pity to take away the expanded model matrix; I find it very useful and convenient too.
I still have one question. With the design ~Type + Day + Type:Day and the expanded model matrix, I can use contrast=c("Type", "mixed", "single") to test the overall main effect of Type across all levels of Day. Now, with the standard model matrix, what should I contrast in order to make the same comparison? I know that using the same contrast=c("Type", "mixed", "single") with the standard model matrix will only give me the Type effect at the reference level of Day (see the sketch below). Note that I hope to keep the experimental design identical if possible. Thank you.
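To make this concrete, a rough sketch of what I mean (the coefficient names below are what I would expect given my factor levels, and may differ in detail on another object):

# with the standard model matrix and design ~Type + Day + Type:Day,
# resultsNames(dds) gives something like:
#   "Intercept", "Type_mixed_vs_single", "Day_2_vs_1", "Day_3_vs_1",
#   "Typemixed.Day2", "Typemixed.Day3"
res <- results(dds, contrast = c("Type", "mixed", "single"))
# this extracts only the Type_mixed_vs_single coefficient, i.e. the
# Type effect at the reference level of Day, not across all days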
hi,
You'll just have to trust me that the latest version is, in my opinion, the best implementation. The prior calculation for non-interaction terms was straightforward to implement, but the old way of dealing with interactions and priors made too many arbitrary algorithmic choices, and I'm happy to cut out that code for something simpler.
The way to test for an overall, or average, effect of type across all levels of day is to use the same design as in the link above, which here would be ~day + day:type. Then you will have a type effect for each day in resultsNames(dds). You can just average these to obtain the average type effect. This would look like results(dds, contrast=list(c("day1.typeT","day2.typeT","day3.typeT")), listValues=c(1/3, -1/3)); note that listValues must be a numeric of length two, and only the first value is used when the contrast list has a single element. You can also contrast these terms against each other to test whether the type effect is different on day 3 vs day 2, etc.
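A sketch of this with the Type/Day factors from the original question (the exact coefficient names depend on your factor level names, so check resultsNames(dds) on your own object first):

design(dds) <- ~ Day + Day:Type
dds <- DESeq(dds)
resultsNames(dds)
# e.g. "Intercept", "Day_2_vs_1", "Day_3_vs_1",
#      "Day1.Typemixed", "Day2.Typemixed", "Day3.Typemixed"

# average Type effect across the three days:
res_avg <- results(dds,
                   contrast   = list(c("Day1.Typemixed", "Day2.Typemixed", "Day3.Typemixed")),
                   listValues = c(1/3, -1/3))

# is the Type effect different on day 3 vs day 2?
res_diff <- results(dds, contrast = list("Day3.Typemixed", "Day2.Typemixed"))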
Hi Michael,
Just one more question: if I use a different design for each run, will I get different results with the same data set? I ask because I would really like to use ~type + day + type:day to get some outputs (basically the VST-transformed data) to run a PERMANOVA. Eventually, I will have no significant difference for the interaction (type:day) or for day. I will then just look at type by re-running with ~type. Do you think this is appropriate?
Yes, different designs will give different coefficients and test statistics, and these can be drastically different. You need to choose a design which makes sense for the experiment and go with that. You should use the same design for visualizing the dataset (VST, and here I would recommend blind=FALSE; see the vignette for details) and for significance testing (here we recommend testing with DESeq(), but ultimately what you do is up to you).
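For the transformation step, something like the following; the PERMANOVA call via the vegan package is just one way to do what you describe, and is outside of DESeq2:

# variance-stabilized data, using the full design
vsd <- varianceStabilizingTransformation(dds, blind = FALSE)
mat <- t(assay(vsd))   # samples in rows, genes in columns

# hypothetical downstream PERMANOVA with the vegan package
library(vegan)
adonis2(dist(mat) ~ Type * Day, data = as.data.frame(colData(dds)))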