Hi everybody, I’m quite new to working with both R and WGCNA, so I apologize for any silly beginner mistakes I may have made or may ask about. I tried the script in the tutorial on the supplied mouse data and it works perfectly. I then tried to adapt the script to my own data: 116 expression arrays (Affymetrix HG-U133 Plus 2.0), each chip containing more than 54,000 probe sets, on a machine with 20 GB of RAM. I tried the one-step network construction, increasing the number of probes up to 20,000, and the output is split across multiple TOM block files. Everything seems to work fine, but when I run:
dissTOM = 1-TOMsimilarityFromExpr(datExpr, power = 6)
I get an error message:
Error: cannot allocate vector of size 22.3 Gb
In addition: Warning messages:
1: In TOMsimilarityFromExpr(datExpr, power = 6) :
Reached total allocation of 20424Mb: see help(memory.size)
(warnings 2–4 repeat the same message)
I guess this is because the full TOM simply does not fit in memory, which is presumably also why the construction split it into multiple TOM block files in the first place. I read in the tutorial that the code can be adapted to perform the visualization for each block separately, but so far I haven’t managed to do it; my best guess at the per-block code is sketched below.
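If I compute correctly, a dense TOM for all ~54,675 probe sets needs about 8 × 54,675² bytes ≈ 22 GiB, which matches the error message, so a single full-matrix call can never fit into 20 GB of RAM. The following is only my attempt at adapting the tutorial, assuming the network object net came from blockwiseModules() with saveTOMs = TRUE and that moduleColors holds the module colors as in the tutorial; please correct me if the saved object or component names are different:

# Assumed construction call (parameter choices are my guesses):
# net = blockwiseModules(datExpr, power = 6, maxBlockSize = 20000,
#                        saveTOMs = TRUE, saveTOMFileBase = "myTOM")
block = 1                                 # repeat for each block
blockGenes = net$blockGenes[[block]]      # indices of the genes in this block
load(net$TOMFiles[block])                 # loads an object named TOM
dissTOM = 1 - as.matrix(TOM)              # per-block dissimilarity only
plotDiss = dissTOM^7                      # raise to a power to sharpen the plot
diag(plotDiss) = NA                       # white diagonal, as in the tutorial
TOMplot(plotDiss, net$dendrograms[[block]], moduleColors[blockGenes],
        main = paste("TOM heatmap, block", block))
rm(TOM, dissTOM, plotDiss); collectGarbage()  # free memory before the next block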
Does anybody have suggestions on how to do this, or any other advice?
Thank you,
Angela
Thank you very much for your help. Indeed, it works quite well after downsizing to 20,000 probes. I will now do a proper filtering as you suggest. Best regards, Angela
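PS: For the archives, this is roughly the filtering I plan to try. It is only a sketch: keeping the most variable probes is just one common criterion (MAD would be another), and I assume datExpr has samples in rows and probes in columns as in the tutorial.

# Keep the 20,000 probes with the highest variance across samples.
probeVar = apply(datExpr, 2, var, na.rm = TRUE)
keep = rank(-probeVar, ties.method = "first") <= 20000
datExpr = datExpr[, keep]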