I am running DESeq2 version 1.30.0, doing an LRT on a dataset with ~16K genes and 300 samples.
On my Mac laptop the processing is fast, but the same code, same dataset, and same DESeq2 version run slowly on a Linux server with 4 cores. The "mean-dispersion relationship" step is ~20x slower.
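One thing worth ruling out (my assumption, not something confirmed in this thread): if the server's R is linked against a threaded BLAS such as OpenBLAS, the BLAS threads can oversubscribe the cores and slow everything down. Pinning BLAS to one thread per process is a cheap first test; the script name in the comment is a placeholder.

```shell
# Pin the common threaded BLAS implementations to one thread each before
# launching R; the variable names cover OpenBLAS, generic OpenMP, and MKL.
export OPENBLAS_NUM_THREADS=1
export OMP_NUM_THREADS=1
export MKL_NUM_THREADS=1
# Rscript run_deseq2_lrt.R   # placeholder for the actual analysis script
echo "BLAS pinned to $OPENBLAS_NUM_THREADS thread per process"
```

If the slowdown disappears with these set, the problem is thread oversubscription rather than DESeq2 itself.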
I also tried `parallel = TRUE`; it does make things faster, but the server is still ~10x slower. (I did note that the default number of worker threads on the server was 135, which seemed rather high.)
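For context, capping the registered BiocParallel backend at the physical core count, instead of the 135-worker default the server auto-detected, would look roughly like this (a sketch; `dds` and the reduced design in the comment are placeholders for your own objects):

```r
library(BiocParallel)

# Register an explicit 4-worker backend instead of relying on the default,
# which on this server auto-detected 135 workers.
register(MulticoreParam(workers = 4))

# Confirm what DESeq2 will pick up via bpparam() when parallel = TRUE
bpworkers(bpparam())
# Then run, e.g.: DESeq(dds, test = "LRT", reduced = ~ 1, parallel = TRUE)
```

With far more workers than cores, the per-worker overhead can easily outweigh the parallel speedup.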
Any advice on how I can debug this and improve the server's performance?

Also, maybe the anomaly is that my laptop is not generating correct results. Again, it is the same code and same dataset, but for 16K genes my laptop produces results in a few minutes; could that indicate it is not actually doing the full processing?
Same data and same software version should produce identical results; I'd look for a bug or discrepancy.
I reviewed the code and couldn't find any differences that would account for the laptop-vs-server gap. Today I did a diff of the output results from the laptop and the server, and they are identical, so I think it is purely a performance issue.
I contacted IT, and they asked me to check back about the optimal Rcpp configuration: is there anything you can recommend? I looked at the documentation but didn't find anything on this topic.
(They did mention that our servers use the new AMD processors; not sure if that makes any difference.)
Rcpp and DESeq2 are at the same versions on the server and the laptop, but I noticed the server has RcppArmadillo 0.10.1.2.0 while my laptop has 0.10.1.0.0.
Does DESeq2 rely on RcppArmadillo, and could that be a reason for the performance drop?
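One way to check this directly is to read the `LinkingTo` field of the installed DESeq2 package's DESCRIPTION; this works offline and returns `NA` (with a warning) if DESeq2 is not installed:

```r
# If RcppArmadillo appears in LinkingTo, its headers were used when the
# installed DESeq2 binary was compiled.
linking <- suppressWarnings(
  packageDescription("DESeq2", fields = "LinkingTo")
)
linking
```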
Thanks