Bug in maNormNN
Tarca, Adi (@tarca-adi-1500)
Hi Elizabeth,

Thanks for sending me the data, so I was able to see where the problem was. The maNormNN function uses the nnet function to fit a spatial- and intensity-based normalization model. Sometimes the nnet function converges without finding a proper solution. Although maNormNN was designed to deal with such rare situations by restarting the neural network training, with your particular data set that was not sufficient to ensure finding a good solution. I have now fixed this problem by increasing the number of attempts to find a solution before giving up, and in the very improbable situation that such a convergence problem still occurs, an explicit warning message will be given. This fix should appear soon in the devel version of the nnNorm library on the Bioconductor website, as I have already committed the change to the package.

Thanks again,
Laurentiu

-----Original Message-----
From: Elizabeth Thomas [mailto:ethomas@CGR.Harvard.edu]
Sent: Friday, November 11, 2005 2:05 PM
To: 'ltarca at rsvs.ulaval.ca'
Subject: Bug in maNormNN

Hello,

I need to do a spatial normalization of some microarray data for a project that I'm working on, and after some research on different methods, I decided to use two methods which have been implemented in R and are available as part of the Bioconductor library, namely the maNorm scaled print-tip loess method and your neural network method. I am seeing some odd artifacts in the output from maNormNN, and I was wondering if you've seen this before. I wrote a little visualization program for mapping the M log-ratio expression values onto the array layouts, and it demonstrates the problem.

Here's an example where everything is working as it should:

Before normalization
http://eli-nati.fletzet.com/eli/ArrayLayouts/Normalization/Gasch01y12/gasch01y12_noNorm.8525.xls.png

Print-tip intensity dependent (geonorm1)
http://eli-nati.fletzet.com/eli/ArrayLayouts/Normalization/Gasch01y12/gasch01y12_geoNorm1.8525.xls.png

Neural network intensity dependent (geonorm2)
http://eli-nati.fletzet.com/eli/ArrayLayouts/Normalization/Gasch01y12/gasch01y12_geoNorm2.8525.xls.png

And here's an example where something has gone wrong with geonorm2 (maNormNN):

Before normalization
http://eli-nati.fletzet.com/eli/ArrayLayouts/Normalization/Gasch01y12/gasch01y12_noNorm.1010.xls.png

Print-tip intensity dependent (geonorm1)
http://eli-nati.fletzet.com/eli/ArrayLayouts/Normalization/Gasch01y12/gasch01y12_geoNorm1.1010.xls.png

Neural network intensity dependent (geonorm2)
http://eli-nati.fletzet.com/eli/ArrayLayouts/Normalization/Gasch01y12/gasch01y12_geoNorm2.1010.xls.png

The problem is reproducible, and as you can see it doesn't affect the geonorm1 results, which go through almost identical processing (maNorm with the "s" method rather than maNormNN, but otherwise the same). Have you seen anything like this before? Any tips? I can send you the original data and exactly what I did.

Generally, the maNormNN method works better than maNorm, particularly for data sets where the block size is large and most errors are within a block rather than across blocks, as advertised. However, if these quirky artifacts keep popping up, then I won't be able to trust maNormNN for my analyses.

Thanks,
Elizabeth
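For readers hitting a similar convergence problem, here is a minimal sketch of the retry strategy described in the reply above. It is not the actual nnNorm source; the function name, tolerance, and number of attempts are illustrative. It simply refits the nnet model several times and keeps the fit with the smallest fitting criterion, warning if none is acceptable.

library(nnet)

## Illustrative only: refit the network several times and keep the best fit.
fitWithRestarts <- function(x, y, size = 3, maxit = 200, attempts = 10, tol = 0.01) {
  best <- NULL
  for (i in seq_len(attempts)) {
    fit <- nnet(x, y, size = size, linout = TRUE, maxit = maxit, trace = FALSE)
    if (is.null(best) || fit$value < best$value) best <- fit     # keep lowest criterion
    if (best$value < tol * length(y)) break                      # acceptable solution found
  }
  if (best$value >= tol * length(y))
    warning("no acceptable nnet solution after ", attempts, " attempts")
  best
}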
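And, for reference, a hedged sketch of the two normalizations being compared in the original message, assuming a marrayRaw object named mraw (the object name is illustrative). marray's maImage gives a spatial map of the M values, similar in spirit to the custom visualization described.

library(marray)
library(nnNorm)

## scaled print-tip loess normalization vs. the neural-network method
mnormS  <- maNorm(mraw, norm = "s")
mnormNN <- maNormNN(mraw)

## spatial maps of the normalized log-ratios (M) for the first array
maImage(mnormS[, 1],  x = "maM")
maImage(mnormNN[, 1], x = "maM")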
Tags: Microarray, Normalization, GO, Visualization, Network, nnNorm