Filtering normalised counts
Rob • 0
@9a3af295
Last seen 10 days ago
United Kingdom

Hello

I am trying to filter out counts of 0.000 from my normalised counts data frame so that I can run a power calculation from it using the RnaSeqSampleSize library.

I have tried to filter using dplyr (the library is loaded), but this doesn't seem to work.

Ideally, I would like 'normalized_counts' (the final data frame) to contain only rows whose normalised counts are above 0.0, i.e. no values of 0.0.

Thanks in advance for any help. I've gone through a few tutorials to work this out, to no avail. If I glimpse 'normalized_counts', the values in the columns are dbl.

normalised_counts <- counts(dds, normalized = TRUE)

normalized_counts <- counts(dds, normalized = TRUE) %>%
  data.frame(check.names = FALSE) %>%
  rownames_to_column(var = "ensembl")

normalized_counts$symbol <- mapIds(org.Mm.eg.db,
                                   keys = rownames(res),
                                   column = "SYMBOL",
                                   keytype = "ENSEMBL",
                                   multiVals = "first")

normalized_counts %>% filter(if_any(starts_with("ZER", "TWO", "THI", "SEV"), ~ . > 0.0))

DESeq2
@james-w-macdonald-5106
Last seen 18 hours ago
United States

Orthogonal to your question, but it doesn't appear that you need to filter your counts at all, so it's not clear why you are doing that. Maybe just use the data as is? The RnaSeqSampleSize functions have a `minAveCount` argument, which appears to address whatever problem you are trying to fix.
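If you do still want the dplyr filter to work, here is a minimal sketch of one way to write it, assuming your sample columns really do start with "ZER", "TWO", "THI", and "SEV" as in your post. Two likely problems with the original call: `starts_with()` takes a single vector of prefixes, so they must be wrapped in `c()`, and `if_any()` keeps a row if *any* selected column is above zero, whereas dropping rows that contain any 0.0 needs `if_all()`. The result must also be assigned back, otherwise the filtered frame is only printed and discarded.

```r
library(dplyr)

# Keep only rows where EVERY sample column (columns whose names start
# with one of the prefixes below) has a normalised count above 0.
# Note: the prefixes go in a single c(...) vector, and if_all() (not
# if_any()) requires the condition to hold in all selected columns.
normalized_counts <- normalized_counts %>%
  filter(if_all(starts_with(c("ZER", "TWO", "THI", "SEV")), ~ .x > 0))
```

Assigning the result back to `normalized_counts` (or a new name) is what actually retains the filtered data frame for the downstream power calculation.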