"Out of memory error" is a common challenge of running a very large
batch of
Affymetrix chips with RMA or gcRMA.
To solve this program, I am working on a dcExpresso function. It is in
alpha
stage now. I will move it into beta release in the next two weeks.
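
(Until then, one workaround that often avoids the out-of-memory error is
justRMA() from the affy package, which computes RMA expression values
without holding a full probe-level AffyBatch in memory; a minimal sketch,
assuming the CEL files sit in the working directory:)

library(affy)
eset <- justRMA()                  # reads every *.CEL file in getwd()
# gcRMA has an analogous shortcut in the gcrma package:
# library(gcrma)
# eset <- justGCRMA()
write.exprs(eset, file = "rma-exprs.txt")   # save the expression matrix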
Best regards,
Simon
=================================================
Simon M. Lin, M.D.
Manager, Duke Bioinformatics Shared Resource
Assistant Research Professor, Biostatistics and Bioinformatics
Box 3958, Duke University Medical Center
Rm 286 Hanes, Trent Dr, Durham, NC 27710
Ph: (919) 681-9646 FAX: (919) 681-8028
Lin00025 (at) mc.duke.edu
http://dbsr.duke.edu
=================================================
Date: Mon, 7 Jun 2004 08:36:24 +0200 (MEST)
From: "R.G.W. Verhaak" <r.verhaak@erasmusmc.nl>
Subject: Re: [BioC] large amount of slides
To: bioconductor@stat.math.ethz.ch
I have successfully run GCRMA on a dataset of 285 HGU133A chips on a
machine with 8 Gb of RAM installed; I noticed a peak memory use of 5.5 Gb
(although I was not monitoring it continuously). I would expect 200 chips
to use proportionally less memory, so around 4 Gb.
Roel Verhaak
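
(If you want to verify the peak on your own machine, gc() in base R
reports a "max used" column that can be reset before the run; a minimal
sketch, assuming justRMA() from the affy package as the expensive step:)

library(affy)
gc(reset = TRUE)    # reset the "max used" memory counters
eset <- justRMA()   # the expensive step: read and preprocess all chips
gc()                # the "max used" column (Mb) now shows the run's peak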
>
> Date: Fri, 04 Jun 2004 10:06:14 -0500
> From: "Vada Wilcox" <v_wilcox@hotmail.com>
> Subject: [BioC] large amount of slides
> To: bioconductor@stat.math.ethz.ch
>
> Dear all,
>
> I have been using RMA successfully for a while now, but in the past I
> have only used it on a small number of slides. I would like to do my
> study on a larger scale now, with data (series of experiments) from
> other researchers as well. My question is the following: if I want to
> study, let's say, 200 slides, do I have to read them all into R at once
> (so together, I mean, with read.affy() in package affy), or is it OK to
> read them series by series (so all wild types and controls of one
> researcher at a time)?
> If it is really necessary to read all of them in at one time, how much
> RAM would I need (for, let's say, 200 CEL files), and how can I raise
> the RAM? I know it's possible to raise it by using '--max-vsize=...',
> but I haven't been able to do it successfully for 200 experiments.
> Can somebody help me on this?
>
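
(For reference: since RMA's quantile normalization uses all chips
jointly, the 200 CEL files are typically read in together. A minimal
sketch using ReadAffy() and list.celfiles() from the affy package, with
a hypothetical directory path:)

library(affy)
cel.files <- list.celfiles("/path/to/celfiles", full.names = TRUE)
abatch <- ReadAffy(filenames = cel.files)  # one AffyBatch holding all 200 chips
eset <- rma(abatch)                        # background-correct, normalize, summarize
# If this exhausts memory, starting R with a larger vector heap, e.g.
#   R --max-vsize=4G
# or switching to justRMA() (which skips the AffyBatch) may help.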