Question: DiffBind counts vs. MACS2 scores
mm2489, 4.6 years ago, United States, wrote:

Hi Rory,
 

I have a question I was hoping you could clarify for me.

So in my ChIP-seq experiment, I have two conditions (two biological replicates each). Whenever I visualize the peaks in IGV or plot the MACS2 output bed scores in R, I see that overall, counts are much lower in one condition than in the other.

However, whenever I compare counts from the consensus peakset generated by DiffBind, the dramatic difference is gone.

I don't exactly understand the source of the discrepancy between the IGV peak intensities and the DiffBind counts.

I would really appreciate your help.
 

diffbind
Answer: DiffBind counts vs. MACS2 scores
Rory Stark (CRUK, Cambridge, UK), 4.6 years ago, wrote:

The answer could have to do with how the data have been normalized.

There are a number of ways you can verify that the counts are working the way you expect, and what the normalization is doing.

To see the raw read counts, instead of normalized scores, you can set the score to DBA_SCORE_READS. You can switch between scores without having to recount:

> DBA <- dba.count(DBA, peaks=NULL, score=DBA_SCORE_READS)

Then when you retrieve the binding matrix with dba.peakset() (using bRetrieve=TRUE), you'll have raw reads, and you can see if these agree with what you see in IGV.
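For example, to pull the raw binding matrix into a data frame (a sketch; it assumes the DBA object has already been through dba.count() as above):

> rawCounts <- dba.peakset(DBA, bRetrieve=TRUE, DataType=DBA_DATA_FRAME)
> head(rawCounts)
> colSums(rawCounts[,-(1:3)])   # per-sample totals; first three columns are the peak coordinates

If the per-sample totals show the same imbalance you see in IGV, the counting is behaving as expected and the difference is being introduced by the score normalization.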

If you've done an analysis, even if it doesn't turn up any differentially bound sites, you can see even more useful things. For example,

> par(mfrow=c(1,2))
> dba.plotBox(DBA, contrast=1, bAll=TRUE, bDB=FALSE, bDBIncreased=FALSE, bDBDecreased=FALSE, bNormalized=FALSE)
> dba.plotBox(DBA, contrast=1, bAll=TRUE, bDB=FALSE, bDBIncreased=FALSE, bDBDecreased=FALSE, bNormalized=TRUE)

will give you boxplots of the counts. If you see a big difference in the first one, but they look even in the second one, that means the normalization is responsible.

I always look at MA plots, one normalized and one not, to see the effect of the normalization:

> par(mfrow=c(1,2))
> dba.plotMA(DBA, bNormalized=FALSE)
> dba.plotMA(DBA, bNormalized=TRUE)

If the most dense part of the first plot is above or below the line, but on the line in the second one, again, this is the normalization.

You can also retrieve raw and normalized scores in the report:

> reportRaw  <- dba.report(DBA, contrast=1, th=1, bNormalized=FALSE)
> reportNorm <- dba.report(DBA, contrast=1, th=1, bNormalized=TRUE)

If the issue is not normalization, let me know and we can troubleshoot. If it is, then you have to think carefully about whether the normalization is removing technical variance or whether it is removing "real" biological variance. Most normalization methods assume to some degree that the bulk of the signal should be similar between the two groups. If you have a case where there is very little binding in one group and a lot more binding in the other, this poses a challenge for normalization. The TMM method that DiffBind uses by default (from edgeR) can handle this to a certain extent, but if the imbalance is too extreme it will end up equalising things that should remain different. This is a problem for all of these read-based sequencing experiments, including RNA-seq. I recommend reading the TMM paper by Robinson and Oshlack: http://genomebiology.com/2010/11/3/r25
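To see how hard TMM is working on your data, you can compute the scaling factors directly with edgeR (an illustrative sketch, outside of DiffBind; here counts is a hypothetical peaks-by-samples matrix of raw read counts):

> library(edgeR)
> y <- DGEList(counts=counts, group=c("A","A","B","B"))
> y <- calcNormFactors(y, method="TMM")
> y$samples$norm.factors   # factors far from 1 mean TMM is re-scaling that sample substantially

If the factors for one condition are consistently far from 1, TMM is compensating for a global shift in signal, which may be exactly the biology you care about.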

Cheers-

Rory



Thank you so much Rory!

So I ran the tests that you suggested and it looks like normalization is indeed responsible. I should mention that this ChIP-seq was done for a histone mark, and we expect widespread loss of this mark in one of our experimental conditions.

My main concern at this point is that whenever I run DiffBind (using default parameters), edgeR gives ~3400 differentially bound sites (almost half up and half down), whereas DESeq finds 0 and DESeq2 finds only 1. So which one is more appropriate in this case?

Also, the TMM-normalized counts don't seem to be very different across the two conditions. Is it more appropriate to run edgeR without TMM normalization in this case?

written 4.6 years ago by mm2489

At this point, I'd ignore DESeq and just compare edgeR and DESeq2 (I'm thinking of deprecating DESeq support in favor of DESeq2). Comparing the MA plots may offer some insight into how they are being normalised differently, for example whether the values seem shifted up or down in one relative to the other. You can also see how the two methods are normalising by calling dba.report() with bCounts=TRUE for each method and comparing the read scores for the same peaks.

Beyond that it is difficult to know what to do when the biology may be contrary to the assumptions being made by the normalisation. One thing to remember is that the confidence statistics being computed (p-value/FDR) can't be taken as absolute truth; in some cases, they are most useful for ranking which regions you should be looking at. You can compare the top N peaks from each method and see if they are at least ranking the same regions highly, and/or whether the fold changes are similar.
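A sketch of that comparison in DiffBind (assuming the DBA object was analysed with method=DBA_ALL_METHODS, and using th=1 so all sites are returned, ranked by significance):

> repEdgeR  <- dba.report(DBA, method=DBA_EDGER, contrast=1, th=1)
> repDESeq2 <- dba.report(DBA, method=DBA_DESEQ2, contrast=1, th=1)
> N <- 100
> # how many of the top N regions do the two methods share?
> sum(countOverlaps(repEdgeR[1:N], repDESeq2[1:N]) > 0)

(countOverlaps() comes from GenomicRanges, which DiffBind loads.)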

Ultimately, you need to look at those regions (ie in the browser) and see if they make sense, and hopefully have some other method of validating the results. For example, if the enrichment is on genes, and the genes that are changing are in the pathways you'd expect.

DiffBind itself doesn't actually provide a way of running edgeR or DESeq2 analyses without normalizing. If you want to do that, you'd need to run those packages directly and use some of the more advanced options for controlling normalisation.
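As an illustrative sketch of running edgeR directly with TMM disabled (counts and group are hypothetical; setting the norm factors to 1 leaves only library-size scaling):

> library(edgeR)
> y <- DGEList(counts=counts, group=group)
> y$samples$norm.factors <- 1   # skip TMM; library-size-only normalisation
> y <- estimateCommonDisp(y)
> y <- estimateTagwiseDisp(y)
> et <- exactTest(y)
> topTags(et)

Whether that is appropriate depends on whether you believe library sizes alone capture the technical differences between your samples.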

-Rory

written 4.6 years ago by Rory Stark
Powered by Biostar version 16.09