Filtering normalised counts
Rob • 0
@9a3af295
Last seen 18 months ago
United Kingdom

Hello

I am trying to filter out counts of 0.000 from my normalised counts data frame so that I can run a power calculation on it using the RnaSeqSampleSize package.

I have tried to filter using dplyr (the package is loaded), but this doesn't seem to work.

Ideally, I would like 'normalized_counts' (the final data frame) to contain only rows whose normalised counts are above 0.0, i.e. no values of 0.0.

Thanks for any help. I've gone through a few tutorials to work this out, to no avail. If I glimpse 'normalized_counts', the values in the columns are dbl.

library(DESeq2)
library(dplyr)
library(tibble)
library(org.Mm.eg.db)   # also attaches AnnotationDbi, which provides mapIds()

## Normalised counts as a data frame with an explicit Ensembl ID column
normalized_counts <- counts(dds, normalized = TRUE) %>%
                     data.frame(check.names = FALSE) %>%
                     rownames_to_column(var = "ensembl")

## Map the Ensembl IDs to gene symbols
normalized_counts$symbol <- mapIds(org.Mm.eg.db, 
                            keys = rownames(res), 
                            column = "SYMBOL",
                            keytype = "ENSEMBL",
                            multiVals = "first")

normalized_counts %>% filter(if_any(starts_with("ZER", "TWO", "THI", "SEV"), ~ . > 0.0))
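
For reference, a minimal sketch of how that filter could be written, assuming the sample columns really do begin with "ZER", "TWO", "THI" or "SEV" and that the aim is to drop every row containing a zero: starts_with() takes its prefixes as a single character vector rather than as separate arguments, if_all() (rather than if_any()) requires every selected column to be above zero, and the result needs to be assigned back:

## Sketch only: keep rows where every sample column has a normalised count above 0
normalized_counts <- normalized_counts %>%
                     filter(if_all(starts_with(c("ZER", "TWO", "THI", "SEV")), ~ .x > 0))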
DESeq2 • 764 views
@james-w-macdonald-5106
Last seen 4 days ago
United States

Orthogonal to your question, but it doesn't appear that you need to filter your counts, so it's not clear why you are doing that. Maybe just use the data as is? The relevant RnaSeqSampleSize functions have a minAveCount argument, which should take care of whatever problem you are trying to fix.
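
For illustration, a rough sketch of that route, which leaves low or zero counts to the package instead of pre-filtering the data frame. The condition column and the n/f values are placeholders, and the exact argument names should be checked against ?est_count_dispersion and ?est_power_distribution in RnaSeqSampleSize:

library(RnaSeqSampleSize)

## Estimate the count/dispersion distribution from the experiment's count matrix;
## genes whose average count falls below minAveCount are ignored internally
distribution <- est_count_dispersion(counts(dds),
                                     group = as.character(dds$condition),  # assumed design column
                                     minAveCount = 1)

## Power estimate for n samples per group at FDR level f, again skipping
## genes below minAveCount
est_power_distribution(n = 10,
                       f = 0.05,
                       distributionObject = distribution,
                       minAveCount = 5)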
