Dear Bioconductor Community,
I would like to ask an "introductory" but important question regarding the appropriate interpretation/explanation of background correction for Illumina BeadArrays. I have recently acquired an Illumina HumanHT-12 v4 BeadChip microarray dataset. My main question is mostly general, but it relates to limma's normalization functions for Illumina microarrays:
In detail, my "sample probe profile" txt file contains the probe summary data, and a very small subset for the first sample looks like this:
PROBE_ID      SYMBOL  1A.AVG_Signal  1A.Detection.Pval  1A.BEAD_STDERR  1A.Avg_NBEADS
ILMN_1762337  7A5     113.6922       0.46233770         3.659430        23
ILMN_2055271  A1BG    162.7333       0.02077922         6.885669        23
ILMN_1736007  A1BG    114.4497       0.44545450         3.599054        31
ILMN_2383229  A1CF    122.7193       0.25974030         8.953144        15
ILMN_1806310  A1CF    134.9052       0.14285710         8.729850        18
ILMN_1779670  A1CF    130.3925       0.18441560         8.197718        19
Even though I have pre-processed Illumina microarrays in the past, my question is the following:
From the above file I can see (also after importing it with read.ilmn) that these are raw summarized intensities. However, can I also assume that they are not background subtracted? In other words, does the default output from BeadStudio/GenomeStudio already have some kind of background correction applied? Or am I mistaken and is it only an option (since I don't see any negative values in the above file)?
Finally, even if that is the case, should it not be of concern anyway, since neqc includes a default offset?
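For context, this is the kind of workflow I have in mind. The file names below are placeholders for my actual GenomeStudio exports, and the negative-value check is just my way of probing whether any background subtraction was already applied; please correct me if this reasoning is wrong:

```r
library(limma)

# Import the GenomeStudio probe summary and control probe profiles
# (file names are placeholders for my actual exported files)
x <- read.ilmn(files = "sample_probe_profile.txt",
               ctrlfiles = "control_probe_profile.txt",
               other.columns = "Detection")

# If GenomeStudio had already background-subtracted the intensities,
# I would expect some negative values among the regular probes
min(x$E)

# neqc() applies normexp background correction (estimated from the
# negative control probes), quantile normalization, and log2
# transformation; the default offset = 16 damps low-intensity variability
y <- neqc(x)
```

Is calling neqc on the raw (non-background-subtracted) import like this the intended usage?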
Please excuse my naive (even "silly") question, but I acquired the above file together with two more files (phenotype info and control probe info) without any other information, so this matter troubles me!
Any suggestions would be greatly appreciated!