Can anyone explain what data is used to compute the different features within computeFeatures?
I follow the naming convention spelled out in ?computeFeatures (features are named x.y.f, where x is the object name, y the reference layer, and f the feature name), but I don't understand the .0., .a., and .Ba. labels.
For example:
> library(EBImage)
> y = readImage(system.file("images", "nuclei.tif", package="EBImage"))[,,1]
> x = thresh(y, 10, 10, 0.05)
> x = opening(x, makeBrush(5, shape='disc'))
> x = bwlabel(x)
> ft = computeFeatures(x, y, xname="nucleus")
> colnames(ft)
 [1] "nucleus.0.m.cx"           "nucleus.0.m.cy"
 [3] "nucleus.0.m.majoraxis"    "nucleus.0.m.eccentricity"
<snip>
[11] "nucleus.0.s.radius.max"   "nucleus.a.b.mean"
[13] "nucleus.a.b.sd"           "nucleus.a.b.mad"
<snip>
[51] "nucleus.Ba.b.mean"        "nucleus.Ba.b.sd"
[53] "nucleus.Ba.b.mad"         "nucleus.Ba.b.q001"
[55] "nucleus.Ba.b.q005"        "nucleus.Ba.b.q05"
<snip>
My guess is that the nucleus.0.* features use only the binary masks contained in x, so nucleus.0.m.cy would be the y-axis centroid computed from the binary data. There are also nucleus.a.m.cy and nucleus.Ba.m.cy, but it is unclear how those computations differ (they are highly correlated with nucleus.0.m.cy but not identical).
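If that guess is right, I'd expect the lower-level helper computeFeatures.moment(), called without a reference image, to reproduce the nucleus.0.m.* columns. A minimal check, assuming (from my reading of ?computeFeatures) that the per-category helpers share computeFeatures' defaults:

m0 = computeFeatures.moment(x)  # no ref given: moments of the binary mask only?
head(cbind(mask.cy = m0[, "m.cy"], ft.cy = ft[, "nucleus.0.m.cy"]))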
I also suppose that the .a. and .Ba. features use the intensity values in y, but the details are vague. Features like nucleus.a.b.mean and nucleus.Ba.b.mean are similar (~0.80 correlation) but not identical. I assume both estimate the mean y intensity over the objects defined by the labels in x, but the difference between them is unclear.
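Similarly, if .a. really means "raw intensities of y", then computeFeatures.basic(x, ref = y) should match the nucleus.a.b.* columns, and the .a. vs .Ba. relationship can be quantified directly. Another sketch under the same assumption about the helpers:

b.a = computeFeatures.basic(x, ref = y)  # per-object intensity stats over y
all.equal(unname(b.a[, "b.mean"]), unname(ft[, "nucleus.a.b.mean"]))
cor(ft[, "nucleus.a.b.mean"], ft[, "nucleus.Ba.b.mean"])  # ~0.80 in my run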
Is there any documentation on this?
Thanks,
Max
> sessionInfo()
R Under development (unstable) (2014-08-23 r66461)
Platform: x86_64-apple-darwin10.8.0 (64-bit)

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] EBImage_4.7.16

loaded via a namespace (and not attached):
 [1] abind_1.4-0         BiocGenerics_0.11.4 grid_3.2.0
 [4] jpeg_0.1-8          lattice_0.20-29     locfit_1.5-9.1
 [7] parallel_3.2.0      png_0.1-7           tiff_0.1-5
[10] tools_3.2.0