Question: Change lower bound 'optim' function in DESeq2::unmix() to 0.001 to prevent error
0
3 months ago by
a.v.vliet0 wrote:

Hi,

I have more of a suggestion than a question. I used the unmix() function from the DESeq2 package on my data and kept getting the same error:

Error in optim(par = rep(1, ncol(pure)), fn = sumLossVST, gr = NULL,  :
non-finite finite-difference value


A search for the error led me to a PDF describing a solution (see page 7). I retrieved the unmix() code from GitHub and changed line 98 to

method = "L-BFGS-B", lower = c(.0001, .0001), upper = c(Inf, Inf))$par


The function now works for me. Of course, all values that would have been zero are now 0.001 or slightly higher, but otherwise the output I got makes sense. I just wanted to report this because I'm not a statistician and don't know whether this could have changed my output in any significant way. If not, this might be helpful to people encountering the same problem.
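To illustrate the kind of change being described, here is a minimal, self-contained sketch of a bounded L-BFGS-B fit via optim(). This is not the actual unmix() internals: the data, the loss function name (sumLoss rather than DESeq2's sumLossVST), and the toy mixing proportions are all made up for illustration.

```r
# Toy deconvolution: recover mixing proportions of two made-up
# "pure" reference profiles from a noisy mixture.
set.seed(1)
pure <- cbind(ref1 = runif(50, 1, 10), ref2 = runif(50, 1, 10))
mix  <- pure %*% c(0.3, 0.7) + rnorm(50, sd = 0.1)

# Hypothetical stand-in for DESeq2's sumLossVST: sum of squared residuals.
sumLoss <- function(p) sum((mix - pure %*% p)^2)

# L-BFGS-B with a small positive lower bound, as in the change above.
# With gr = NULL the gradient is approximated by finite differences,
# which can produce non-finite values at a lower bound of exactly 0.
fit <- optim(par = rep(1, ncol(pure)), fn = sumLoss, gr = NULL,
             method = "L-BFGS-B",
             lower = rep(1e-4, ncol(pure)),
             upper = rep(Inf, ncol(pure)))$par
```

In this toy setup the fitted parameters land close to the true proportions (0.3 and 0.7), and every parameter respects the small positive lower bound.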

Answer: Change lower bound 'optim' function in DESeq2::unmix() to 0.001 to prevent error
2
3 months ago by
Michael Love22k
United States
Michael Love22k wrote:

Hi,

Thanks for the post. I’ve noticed that the optimization works stably and without error when the “pure” reference profiles are not too highly correlated. This is worth checking in any case, because highly correlated references will produce non-identifiable solutions.
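That check can be done quickly by looking at the pairwise correlations among the columns of the "pure" matrix. A sketch with made-up data (the matrix and any threshold are hypothetical; the post does not specify a cutoff):

```r
# Hypothetical "pure" matrix: two reference profiles as columns.
set.seed(2)
pure <- cbind(typeA = rpois(100, 20), typeB = rpois(100, 20))

# Pairwise correlations between reference profiles; values near 1
# suggest the mixture proportions may not be identifiable.
cormat <- cor(pure)
max_offdiag <- max(cormat[lower.tri(cormat)])
```

If the largest off-diagonal correlation is close to 1, the optimizer is being asked to distinguish nearly interchangeable references, which is where instability tends to appear.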

That’s just a word of advice; anyone is of course welcome to modify the code as needed. unmix() was added to DESeq2 in 2017 because some bioinformatics groups here at UNC wanted to do deconvolution in combination with our VST implementation. It’s a straightforward use of non-negative regression, comparing samples against the references on the VST scale.