This is a modified version of the R package vignette Analyzing RNA-seq data with DESeq2 by Love, Anders, & Huber.
A basic task in the analysis of count data from RNA-seq is the detection of differentially expressed genes. The count data are presented as a table which reports, for each sample, the number of sequence fragments that have been assigned to each gene. Analogous data also arise for other assay types, including comparative ChIP-Seq, HiC, shRNA screening, and mass spectrometry. An important analysis question is the quantification and statistical inference of systematic changes between conditions, as compared to within-condition variability. The package DESeq2 provides methods to test for differential expression by use of negative binomial generalized linear models; the estimates of dispersion and logarithmic fold changes incorporate data-driven prior distributions. This vignette explains the use of the package and demonstrates typical workflows. An RNA-seq workflow on the Bioconductor website covers similar material to this vignette but at a slower pace, including the generation of count matrices from FASTQ files.
Note: if you use DESeq2 in published research, please cite:
Love, M.I., Huber, W., Anders, S. (2014) Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biology, 15:550. 10.1186/s13059-014-0550-8
Other Bioconductor packages with similar aims are edgeR, limma, DSS, EBSeq, and baySeq.
As input, the DESeq2 package expects count data as obtained, e.g., from RNA-seq or another high-throughput sequencing experiment, in the form of a matrix of integer values. The value in the i-th row and the j-th column of the matrix tells how many reads can be assigned to gene i in sample j. Analogously, for other types of assays, the rows of the matrix might correspond e.g. to binding regions (with ChIP-Seq) or peptide sequences (with quantitative mass spectrometry). We will list methods for obtaining count matrices in the sections below.
The values in the matrix should be un-normalized counts or estimated counts of sequencing reads (for single-end RNA-seq) or fragments (for paired-end RNA-seq). The RNA-seq workflow describes multiple techniques for preparing such count matrices. It is important to provide raw count matrices as input, as only the count values allow DESeq2's statistical model to correctly assess measurement precision. The DESeq2 model internally corrects for library size, so transformed or normalized values such as counts scaled by library size should not be used as input.
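As a quick sanity check (not part of the original vignette), one can verify that a candidate matrix, such as the cts object loaded below, really holds raw, non-negative integer counts:
# Sketch: raw counts should be non-negative integers,
# not normalized or transformed values
stopifnot(is.numeric(cts), all(cts >= 0), all(cts == round(cts)))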
The object class used by the DESeq2 package to store the read counts and the intermediate estimated quantities during statistical analysis is the DESeqDataSet, which will usually be represented in the code here as an object dds.
A technical detail is that the DESeqDataSet class extends the RangedSummarizedExperiment class of the SummarizedExperiment package. The “Ranged” part refers to the fact that the rows of the assay data (here, the counts) can be associated with genomic ranges (the exons of genes). This association facilitates downstream exploration of results, making use of other Bioconductor packages’ range-based functionality (e.g. find the closest ChIP-seq peaks to the differentially expressed genes).
A DESeqDataSet object must have an associated design formula. The design formula expresses the variables which will be used in modeling. The formula should be a tilde (~) followed by the variables with plus signs between them (it will be coerced into a formula if it is not already). The design can be changed later; however, all differential analysis steps should then be repeated, as the design formula is used to estimate the dispersions and to estimate the log2 fold changes of the model.
Note: In order to benefit from the default settings of the package, you should put the variable of interest at the end of the formula and make sure the control level is the first level.
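For example, and purely as an illustration with hypothetical variable names, design formulas typically look like this, with a nuisance variable such as batch listed before the variable of interest:
# Illustrative design formulas (hypothetical variable names):
design_single <- ~ condition          # one factor of interest
design_batch  <- ~ batch + condition  # control for batch; condition last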
There are four ways of constructing a DESeqDataSet, depending on what pipeline was used upstream of DESeq2 to generate counts or estimated counts. For this demo, we will show how to generate a DESeqDataSet from a count matrix. See the original vignette for more information on the other ways to create a dataset for DESeq2.
The function DESeqDataSetFromMatrix can be used if you already have a matrix of read counts prepared from another source. To use DESeqDataSetFromMatrix, the user should provide the counts matrix, the information about the samples (the columns of the count matrix) as a DataFrame or data.frame, and the design formula.
To demonstrate the use of DESeqDataSetFromMatrix, we will read in count data from the pasilla package. You may either import the data from the local folder data if you are on the demo server, or from the web location where I have posted the data if you are running this on your local machine. We read in a count matrix, which we will name cts, and the sample information table, which we will name coldata.
library(DESeq2)
# Option 1: read the data from the local folder (if you are on the demo server)
cts <- as.matrix(read.csv('data/pasilla_gene_counts.tsv', sep="\t", row.names="gene_id"))
coldata <- read.csv('data/pasilla_sample_annotation.csv', row.names=1)
# Option 2: read the data from the web (if you are on your local machine)
cts <- as.matrix(read.csv('https://usda-ree-ars.github.io/SEAStatsData/omics/pasilla_gene_counts.tsv', sep="\t", row.names="gene_id"))
coldata <- read.csv('https://usda-ree-ars.github.io/SEAStatsData/omics/pasilla_sample_annotation.csv', row.names=1)
coldata <- coldata[,c("condition","type")]
coldata$condition <- factor(coldata$condition)
coldata$type <- factor(coldata$type)
We examine the count matrix and column data to see if they are consistent in terms of sample order.
head(cts,2)
## untreated1 untreated2 untreated3 untreated4 treated1 treated2
## FBgn0000003 0 0 0 0 0 0
## FBgn0000008 92 161 76 70 140 88
## treated3
## FBgn0000003 1
## FBgn0000008 70
coldata
## condition type
## treated1fb treated single-read
## treated2fb treated paired-end
## treated3fb treated paired-end
## untreated1fb untreated single-read
## untreated2fb untreated single-read
## untreated3fb untreated paired-end
## untreated4fb untreated paired-end
Note that these are not in the same order with respect to samples!
It is absolutely critical that the columns of the count matrix and the rows of the column data (information about samples) are in the same order. DESeq2 will not make guesses as to which column of the count matrix belongs to which row of the column data; these must be provided to DESeq2 already in consistent order.
As they are not in the correct order as given, we need to re-arrange one or the other so that they are consistent in terms of sample order (if we do not, later functions would produce an error). We additionally need to chop off the "fb" from the row names of coldata, so the naming is consistent.
rownames(coldata) <- sub("fb", "", rownames(coldata))
all(rownames(coldata) %in% colnames(cts))
## [1] TRUE
all(rownames(coldata) == colnames(cts))
## [1] FALSE
cts <- cts[, rownames(coldata)]
all(rownames(coldata) == colnames(cts))
## [1] TRUE
With the count matrix, cts, and the sample information, coldata, we can construct a DESeqDataSet:
dds <- DESeqDataSetFromMatrix(countData = cts,
colData = coldata,
design = ~ condition)
dds
## class: DESeqDataSet
## dim: 14599 7
## metadata(1): version
## assays(1): counts
## rownames(14599): FBgn0000003 FBgn0000008 ... FBgn0261574 FBgn0261575
## rowData names(0):
## colnames(7): treated1 treated2 ... untreated3 untreated4
## colData names(2): condition type
While it is not necessary to pre-filter low count genes before running the DESeq2 functions, there are two reasons which make pre-filtering useful: by removing rows in which there are very few reads, we reduce the memory size of the dds data object, and we increase the speed of count modeling within DESeq2. It can also improve visualizations, as features with no information for differential expression are not plotted in dispersion plots or MA-plots.
Here we perform pre-filtering to keep only rows that have a count of at least 10 for a minimal number of samples. The count of 10 is a reasonable choice for bulk RNA-seq. A recommendation for the minimal number of samples is to specify the smallest group size, e.g. here there are 3 treated samples. If there are not discrete groups, one can use the minimal number of samples where non-zero counts would be considered interesting. One can also omit this step entirely and just rely on the independent filtering procedures available in results(), either IHW or genefilter. See the independent filtering section in the complete vignette.
smallestGroupSize <- 3
keep <- rowSums(counts(dds) >= 10) >= smallestGroupSize
dds <- dds[keep,]
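To see what the filter did, we can tally the logical vector and count the remaining rows:
table(keep)
nrow(dds)  # number of genes remaining after filtering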
By default, R will choose a reference level for factors based on alphabetical order. Then, if you never tell the DESeq2 functions which level you want to compare against (e.g. which level represents the control group), the comparisons will be based on the alphabetical order of the levels. There are two solutions: you can either explicitly tell results which comparison to make using the contrast argument (this will be shown later), or you can explicitly set the factor levels. In order to see the change of reference levels reflected in the results names, you need to either run DESeq or nbinomWaldTest/nbinomLRT after the re-leveling operation. Setting the factor levels can be done in two ways, either using factor:
dds$condition <- factor(dds$condition, levels = c("untreated","treated"))
…or using relevel, just specifying the reference level:
dds$condition <- relevel(dds$condition, ref = "untreated")
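Either way, it is worth confirming that the reference level now comes first:
levels(dds$condition)  # the first level is the reference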
If you need to subset the columns of a DESeqDataSet, i.e., when removing certain samples from the analysis, it is possible that all the samples for one or more levels of a variable in the design formula would be removed. In this case, the droplevels function can be used to remove those levels which do not have samples in the current DESeqDataSet:
dds$condition <- droplevels(dds$condition)
DESeq2 provides a function collapseReplicates which can assist in combining the counts from technical replicates into single columns of the count matrix. The term technical replicate implies multiple sequencing runs of the same library. You should not collapse biological replicates using this function. See the manual page for an example of the use of collapseReplicates.
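As an illustration only (the pasilla colData does not contain these columns), a call might look like the following, where sample identifies the biological sample and run the sequencing run:
# Hypothetical sketch: 'sample' and 'run' are illustrative colData columns
ddsColl <- collapseReplicates(dds, groupby=dds$sample, run=dds$run)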
We continue with the pasilla data constructed from the count matrix method above. This data set is from an experiment on Drosophila melanogaster cell cultures and investigated the effect of RNAi knock-down of the splicing factor pasilla. The detailed transcript of the production of the pasilla data is provided in the vignette of the data package pasilla.
The standard differential expression analysis steps are wrapped into a single function, DESeq. The estimation steps performed by this function are described in the manual page for ?DESeq and in the Methods section of the DESeq2 publication.
Results tables are generated using the function results, which extracts a results table with log2 fold changes, p values and adjusted p values. With no additional arguments to results, the log2 fold change and Wald test p value will be for the last variable in the design formula, and if this is a factor, the comparison will be the last level of this variable over the reference level (see the previous note on factor levels). However, the order of the variables of the design does not matter so long as the user specifies the comparison to build a results table for, using the name or contrast arguments of results.
Details about the comparison are printed to the console, directly above the results table. The text, condition treated vs untreated, tells you that the estimates are of the logarithmic fold change log2(treated/untreated).
dds <- DESeq(dds)
res <- results(dds)
res
## log2 fold change (MLE): condition treated vs untreated
## Wald test p-value: condition treated vs untreated
## DataFrame with 8148 rows and 6 columns
## baseMean log2FoldChange lfcSE stat pvalue padj
## <numeric> <numeric> <numeric> <numeric> <numeric> <numeric>
## FBgn0000008 95.28865 0.00399148 0.225010 0.0177391 0.9858470 0.996699
## FBgn0000017 4359.09632 -0.23842494 0.127094 -1.8759764 0.0606585 0.289604
## FBgn0000018 419.06811 -0.10185506 0.146568 -0.6949338 0.4870968 0.822681
## FBgn0000024 6.41105 0.21429657 0.691557 0.3098756 0.7566555 0.939146
## FBgn0000032 990.79225 -0.08896298 0.146253 -0.6082822 0.5430003 0.848881
## ... ... ... ... ... ... ...
## FBgn0261564 1160.028 -0.0857255 0.108354 -0.7911643 0.4288481 0.789246
## FBgn0261565 620.388 -0.2943294 0.140496 -2.0949303 0.0361772 0.206423
## FBgn0261570 3212.969 0.2971841 0.126742 2.3447877 0.0190379 0.133380
## FBgn0261573 2243.936 0.0146611 0.111365 0.1316493 0.8952617 0.977565
## FBgn0261574 4863.807 0.0179729 0.194137 0.0925784 0.9262385 0.986726
Note that we could have specified the coefficient or contrast we want to build a results table for, using either of the following equivalent commands:
res <- results(dds, name="condition_treated_vs_untreated")
res <- results(dds, contrast=c("condition","treated","untreated"))
One exception to the equivalence of these two commands is that using contrast will additionally set the estimated LFC to 0 in a comparison of two groups where all of the counts in the two groups are equal to 0 (while other groups have positive counts). As it may be a desired feature to have the LFC set to 0 in these cases, one can use contrast to build these results tables. More information about extracting specific coefficients from a fitted DESeqDataSet object can be found in the help page ?results. The use of the contrast argument is also further discussed below.
Shrinkage of effect size (LFC estimates) is useful for visualization and ranking of genes. To shrink the LFC, we pass the dds object to the function lfcShrink. Below we specify to use the apeglm method for effect size shrinkage, which improves on the previous estimator. We provide the dds object and the name or number of the coefficient we want to shrink, where the number refers to the order of the coefficient as it appears in resultsNames(dds).
resultsNames(dds)
## [1] "Intercept" "condition_treated_vs_untreated"
resLFC <- lfcShrink(dds, coef="condition_treated_vs_untreated", type="apeglm")
## using 'apeglm' for LFC shrinkage. If used in published research, please cite:
## Zhu, A., Ibrahim, J.G., Love, M.I. (2018) Heavy-tailed prior distributions for
## sequence count data: removing the noise and preserving large differences.
## Bioinformatics. https://doi.org/10.1093/bioinformatics/bty895
resLFC
## log2 fold change (MAP): condition treated vs untreated
## Wald test p-value: condition treated vs untreated
## DataFrame with 8148 rows and 5 columns
## baseMean log2FoldChange lfcSE pvalue padj
## <numeric> <numeric> <numeric> <numeric> <numeric>
## FBgn0000008 95.28865 0.00195376 0.152654 0.9858470 0.996699
## FBgn0000017 4359.09632 -0.18810628 0.120870 0.0606585 0.289604
## FBgn0000018 419.06811 -0.06893831 0.122805 0.4870968 0.822681
## FBgn0000024 6.41105 0.01786546 0.199499 0.7566555 0.939146
## FBgn0000032 990.79225 -0.06001511 0.121962 0.5430003 0.848881
## ... ... ... ... ... ...
## FBgn0261564 1160.028 -0.0669829 0.0976567 0.4288481 0.789246
## FBgn0261565 620.388 -0.2284564 0.1362122 0.0361772 0.206423
## FBgn0261570 3212.969 0.2395981 0.1237304 0.0190379 0.133380
## FBgn0261573 2243.936 0.0115395 0.0981689 0.8952617 0.977565
## FBgn0261574 4863.807 0.0101618 0.1417667 0.9262385 0.986726
Shrinkage estimation is discussed more in the complete original vignette.
We can order our results table by the smallest p value:
resOrdered <- res[order(res$pvalue),]
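We can then inspect the most significant genes at the top of the ordered table:
head(resOrdered)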
We can summarize some basic tallies using the summary function.
summary(res)
##
## out of 8148 with nonzero total read count
## adjusted p-value < 0.1
## LFC > 0 (up) : 533, 6.5%
## LFC < 0 (down) : 536, 6.6%
## outliers [1] : 0, 0%
## low counts [2] : 0, 0%
## (mean count < 5)
## [1] see 'cooksCutoff' argument of ?results
## [2] see 'independentFiltering' argument of ?results
How many adjusted p-values were less than 0.1?
sum(res$padj < 0.1, na.rm=TRUE)
## [1] 1069
The results function contains a number of arguments to customize the results table which is generated. You can read about these arguments by looking up ?results. Note that the results function automatically performs independent filtering based on the mean of normalized counts for each gene, optimizing the number of genes which will have an adjusted p value below a given FDR cutoff, alpha. Independent filtering is further discussed in the complete original vignette. By default the argument alpha is set to \(0.1\). If the adjusted p value cutoff will be a value other than \(0.1\), alpha should be set to that value:
res05 <- results(dds, alpha=0.05)
summary(res05)
##
## out of 8148 with nonzero total read count
## adjusted p-value < 0.05
## LFC > 0 (up) : 416, 5.1%
## LFC < 0 (down) : 437, 5.4%
## outliers [1] : 0, 0%
## low counts [2] : 0, 0%
## (mean count < 5)
## [1] see 'cooksCutoff' argument of ?results
## [2] see 'independentFiltering' argument of ?results
sum(res05$padj < 0.05, na.rm=TRUE)
## [1] 853
A generalization of the idea of p value filtering is to weight hypotheses to optimize power. A Bioconductor package, IHW, is available that implements the method of Independent Hypothesis Weighting. Here we show the use of IHW for p value adjustment of DESeq2 results. For more details, please see the vignette of the IHW package. The IHW result object is stored in the metadata.
Note: If the results of independent hypothesis weighting are used in published research, please cite:
Ignatiadis, N., Klaus, B., Zaugg, J.B., Huber, W. (2016) Data-driven hypothesis weighting increases detection power in genome-scale multiple testing. Nature Methods, 13:7. 10.1038/nmeth.3885
# (unevaluated code chunk)
library(IHW)
resIHW <- results(dds, filterFun=ihw)
summary(resIHW)
sum(resIHW$padj < 0.1, na.rm=TRUE)
metadata(resIHW)$ihwResult
For advanced users, note that all the values calculated by the DESeq2 package are stored in the DESeqDataSet object or the DESeqResults object, and access to these values is discussed in the complete original vignette.
In DESeq2, the function plotMA shows the log2 fold changes attributable to a given variable over the mean of normalized counts for all the samples in the DESeqDataSet. Points will be colored blue if the adjusted p value is less than 0.1. Points which fall out of the window are plotted as open triangles pointing either up or down.
plotMA(res, ylim=c(-2,2))
It is more useful to visualize the MA-plot for the shrunken log2 fold changes, which remove the noise associated with log2 fold changes from low count genes without requiring arbitrary filtering thresholds.
plotMA(resLFC, ylim=c(-2,2))
After calling plotMA, one can use the function identify to interactively detect the row number of individual genes by clicking on the plot. One can then recover the gene identifiers by saving the resulting indices:
idx <- identify(res$baseMean, res$log2FoldChange)
rownames(res)[idx]
The moderated log fold changes proposed by Love et al. (2014) use a normal prior distribution, centered on zero and with a scale that is fit to the data. The shrunken log fold changes are useful for ranking and visualization, without the need for arbitrary filters on low count genes. The normal prior can sometimes produce too strong of shrinkage for certain datasets. In DESeq2 version 1.18, we include two additional adaptive shrinkage estimators, available via the type argument of lfcShrink. For more details, see ?lfcShrink.
The options for type are:
- apeglm is the adaptive t prior shrinkage estimator from the apeglm package. As of version 1.28.0, it is the default estimator.
- ashr is the adaptive shrinkage estimator from the ashr package. Here DESeq2 uses the ashr option to fit a mixture of Normal distributions to form the prior, with method="shrinkage".
- normal is the original DESeq2 shrinkage estimator, an adaptive Normal distribution as prior.
If the shrinkage estimator apeglm is used in published research, please cite:
Zhu, A., Ibrahim, J.G., Love, M.I. (2018) Heavy-tailed prior distributions for sequence count data: removing the noise and preserving large differences. Bioinformatics. 10.1093/bioinformatics/bty895
If the shrinkage estimator ashr is used in published research, please cite:
Stephens, M. (2016) False discovery rates: a new deal. Biostatistics, 18:2. 10.1093/biostatistics/kxw041
In the LFC shrinkage code above, we specified coef="condition_treated_vs_untreated". We can also just specify the coefficient by the order that it appears in resultsNames(dds), in this case coef=2. For more details explaining how the shrinkage estimators differ, and what kinds of designs, contrasts and output are provided by each, see the extended section on shrinkage estimators.
resultsNames(dds)
## [1] "Intercept" "condition_treated_vs_untreated"
# because we are interested in treated vs untreated, we set 'coef=2'
resNorm <- lfcShrink(dds, coef=2, type="normal")
## using 'normal' for LFC shrinkage, the Normal prior from Love et al (2014).
##
## Note that type='apeglm' and type='ashr' have shown to have less bias than type='normal'.
## See ?lfcShrink for more details on shrinkage type, and the DESeq2 vignette.
## Reference: https://doi.org/10.1093/bioinformatics/bty895
resAsh <- lfcShrink(dds, coef=2, type="ashr")
## using 'ashr' for LFC shrinkage. If used in published research, please cite:
## Stephens, M. (2016) False discovery rates: a new deal. Biostatistics, 18:2.
## https://doi.org/10.1093/biostatistics/kxw041
par(mfrow=c(1,3), mar=c(4,4,2,1))
xlim <- c(1,1e5); ylim <- c(-3,3)
plotMA(resLFC, xlim=xlim, ylim=ylim, main="apeglm")
plotMA(resNorm, xlim=xlim, ylim=ylim, main="normal")
plotMA(resAsh, xlim=xlim, ylim=ylim, main="ashr")
Note: We have sped up the apeglm method so it takes roughly the same amount of time as normal, e.g. ~5 seconds for the pasilla dataset of ~10,000 genes and 7 samples. If fast shrinkage estimation of LFC is needed, but the posterior standard deviation is not needed, setting apeMethod="nbinomC" will produce a ~10x speedup, but the lfcSE column will be returned with NA. A variant of this fast method, apeMethod="nbinomC*", includes random starts.
Note: If there is unwanted variation present in the data (e.g. batch effects), it is always recommended to correct for this. This can be accommodated in DESeq2 by including in the design any known batch variables, or by using functions/packages such as svaseq in sva or the RUV functions in RUVSeq to estimate variables that capture the unwanted variation. In addition, the ashr developers have a specific method for accounting for unwanted variation in combination with ashr.
It can also be useful to examine the counts of reads for a single gene across the groups. A simple function for making this plot is plotCounts, which normalizes counts by the estimated size factors (or normalization factors if these were used) and adds a pseudocount of 1/2 to allow for log scale plotting. The counts are grouped by the variables in intgroup, where more than one variable can be specified. Here we specify the gene which had the smallest p value from the results table created above. You can select the gene to plot by rowname or by numeric index.
plotCounts(dds, gene=which.min(res$padj), intgroup="condition")
For customized plotting, an argument returnData specifies that the function should only return a data.frame for plotting with ggplot.
d <- plotCounts(dds, gene=which.min(res$padj), intgroup="condition",
returnData=TRUE)
library(ggplot2)
ggplot(d, aes(x=condition, y=count)) +
geom_point(position=position_jitter(w=0.1,h=0)) +
scale_y_log10(breaks=c(25,100,400))
Information about which variables and tests were used can be found by calling the function mcols on the results object.
mcols(res)$description
## [1] "mean of normalized counts for all samples"
## [2] "log2 fold change (MLE): condition treated vs untreated"
## [3] "standard error: condition treated vs untreated"
## [4] "Wald statistic: condition treated vs untreated"
## [5] "Wald test p-value: condition treated vs untreated"
## [6] "BH adjusted p-values"
For a particular gene, a log2 fold change of -1 for condition treated vs untreated means that the treatment induces a multiplicative change in observed gene expression level of \(2^{-1} = 0.5\) compared to the untreated condition. If the variable of interest is continuous-valued, then the reported log2 fold change is per unit of change of that variable.
Note on p-values set to NA: some values in the results table can be set to NA for one of the following reasons:
- If within a row, all samples have zero counts, the baseMean column will be zero, and the log2 fold change estimates, p value and adjusted p value will all be set to NA.
- If a row contains a sample with an extreme count outlier, then the p value and adjusted p value will be set to NA. These outlier counts are detected by Cook's distance. Customization of this outlier filtering and description of functionality for replacement of outlier counts and refitting is described in the original vignette.
- If a row is filtered by automatic independent filtering for having a low mean normalized count, then only the adjusted p value will be set to NA.
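If needed, the NA values can be tallied, and both behaviors can be disabled; a short sketch using documented arguments of results:
# Count NA adjusted p values in the results table
sum(is.na(res$padj))
# Sketch: disable Cook's outlier detection and independent filtering
resNoFilt <- results(dds, cooksCutoff=FALSE, independentFiltering=FALSE)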
regionReport: An HTML and PDF summary of the results with plots can also be generated using the regionReport package. The DESeq2Report function should be run on a DESeqDataSet that has been processed by the DESeq function. For more details see the manual page for DESeq2Report and an example vignette in the regionReport package.
Glimma: Interactive visualization of DESeq2 output, including MA-plots (also called MD-plots), can be generated using the Glimma package. See the manual page for glMDPlot.DESeqResults.
pcaExplorer: Interactive visualization of DESeq2 output, including PCA plots, boxplots of counts and other useful summaries, can be generated using the pcaExplorer package. See the Launching the application section of the package vignette.
iSEE: Provides functions for creating an interactive Shiny-based graphical user interface for exploring data stored in SummarizedExperiment objects, including row- and column-level metadata. Particular attention is given to single-cell data in a SingleCellExperiment object with visualization of dimensionality reduction results. iSEE is on Bioconductor. An example wrapper function for converting a DESeqDataSet to a SingleCellExperiment object for use with iSEE can be found at the following gist, written by Federico Marini:
The iSEEde package provides additional panels that facilitate the interactive visualisation of differential expression results in iSEE applications.
DEvis: DEvis is a powerful, integrated solution for the analysis of differential expression data. This package includes an array of tools for manipulating and aggregating data, as well as a wide range of customizable visualizations, and project management functionality that simplify RNA-Seq analysis and provide a variety of ways of exploring and analyzing data. DEvis can be found on CRAN and GitHub.
A plain-text file of the results can be exported using the base R functions write.csv or write.delim. We suggest using a descriptive file name indicating the variable and levels which were tested.
write.csv(as.data.frame(resOrdered),
file="condition_treated_results.csv")
Exporting only the results which pass an adjusted p value threshold can be accomplished with the subset function, followed by the write.csv function.
resSig <- subset(resOrdered, padj < 0.1)
resSig
## log2 fold change (MLE): condition treated vs untreated
## Wald test p-value: condition treated vs untreated
## DataFrame with 1069 rows and 6 columns
## baseMean log2FoldChange lfcSE stat pvalue
## <numeric> <numeric> <numeric> <numeric> <numeric>
## FBgn0039155 730.992 -4.61695 0.1667827 -27.6824 1.13645e-168
## FBgn0025111 1504.272 2.90205 0.1261989 22.9958 5.13139e-117
## FBgn0029167 3709.741 -2.19491 0.0957474 -22.9240 2.68013e-116
## FBgn0003360 4344.597 -3.17683 0.1416166 -22.4326 1.89322e-111
## FBgn0035085 638.757 -2.55819 0.1359972 -18.8106 6.18406e-79
## ... ... ... ... ... ...
## FBgn0003890 1644.2002 -0.495185 0.199269 -2.48501 0.0129549
## FBgn0010053 178.3259 -0.516894 0.208034 -2.48466 0.0129675
## FBgn0034997 5125.8312 0.250348 0.100801 2.48360 0.0130063
## FBgn0004359 84.1178 0.646359 0.260308 2.48306 0.0130260
## FBgn0027604 283.2465 -0.578675 0.233077 -2.48277 0.0130366
## padj
## <numeric>
## FBgn0039155 9.25983e-165
## FBgn0025111 2.09053e-113
## FBgn0029167 7.27923e-113
## FBgn0003360 3.85649e-108
## FBgn0035085 1.00775e-75
## ... ...
## FBgn0003890 0.0991143
## FBgn0010053 0.0991176
## FBgn0034997 0.0993210
## FBgn0004359 0.0993659
## FBgn0027604 0.0993659
Experiments with more than one factor influencing the counts can be analyzed using design formulas that include the additional variables. In fact, DESeq2 can analyze any possible experimental design that can be expressed with fixed effects terms (multiple factors, designs with interactions, designs with continuous variables, splines, and so on are all possible).
By adding variables to the design, one can control for additional variation in the counts. For example, if the condition samples are balanced across experimental batches, by including the batch factor in the design, one can increase the sensitivity for finding differences due to condition. There are multiple ways to analyze experiments when the additional variables are of interest and not just controlling factors (see the section on interactions).
Experiments with many samples: in experiments with many samples (e.g. 50, 100, etc.) it is highly likely that there will be technical variation affecting the observed counts. Failing to model this additional technical variation will lead to spurious results. Many methods exist that can be used to model technical variation, which can be easily included in the DESeq2 design to control for technical variation while estimating effects of interest. See the RNA-seq workflow for examples of using RUV or SVA in combination with DESeq2. For more details on why it is important to control for technical variation in large sample experiments, see the following thread, also archived here by Frederik Ziebell.
The data in the pasilla package have a condition of interest (the column condition), as well as information on the type of sequencing which was performed (the column type), as we can see below:
colData(dds)
## DataFrame with 7 rows and 3 columns
## condition type sizeFactor
## <factor> <factor> <numeric>
## treated1 treated single-read 1.629707
## treated2 treated paired-end 0.761162
## treated3 treated paired-end 0.830312
## untreated1 untreated single-read 1.143904
## untreated2 untreated single-read 1.791281
## untreated3 untreated paired-end 0.645994
## untreated4 untreated paired-end 0.750728
We create a copy of the DESeqDataSet, so that we can rerun the analysis using a multi-factor design.
ddsMF <- dds
We change the levels of type so it only contains letters (numbers, underscore and period are also allowed in design factor levels). Be careful when changing level names to use the same order as the current levels.
levels(ddsMF$type)
## [1] "paired-end" "single-read"
levels(ddsMF$type) <- sub("-.*", "", levels(ddsMF$type))
levels(ddsMF$type)
## [1] "paired" "single"
We can account for the different types of sequencing, and get a clearer picture of the differences attributable to the treatment. As condition is the variable of interest, we put it at the end of the formula. Thus the results function will by default pull the condition results unless the contrast or name arguments are specified.
Then we can re-run DESeq:
design(ddsMF) <- formula(~ type + condition)
ddsMF <- DESeq(ddsMF)
Again, we access the results using the results function.
resMF <- results(ddsMF)
head(resMF)
## log2 fold change (MLE): condition treated vs untreated
## Wald test p-value: condition treated vs untreated
## DataFrame with 6 rows and 6 columns
## baseMean log2FoldChange lfcSE stat pvalue padj
## <numeric> <numeric> <numeric> <numeric> <numeric> <numeric>
## FBgn0000008 95.28865 -0.0390130 0.218997 -0.178144 0.8586100 0.947833
## FBgn0000017 4359.09632 -0.2548984 0.113535 -2.245099 0.0247617 0.131475
## FBgn0000018 419.06811 -0.0625571 0.129956 -0.481372 0.6302523 0.852180
## FBgn0000024 6.41105 0.3097331 0.750231 0.412850 0.6797164 0.877741
## FBgn0000032 990.79225 -0.0465134 0.120215 -0.386918 0.6988171 0.886082
## FBgn0000037 14.11443 0.4541562 0.523436 0.867644 0.3855893 0.691941
It is also possible to retrieve the log2 fold changes, p values and adjusted p values of variables other than the last one in the design. While in this case, type is not biologically interesting as it indicates differences across sequencing protocol, for other hypothetical designs, such as ~genotype + condition + genotype:condition, we may actually be interested in the difference in baseline expression across genotype, which is not the last variable in the design.
In any case, the contrast argument of the function results takes a character vector of length three: the name of the variable, the name of the factor level for the numerator of the log2 ratio, and the name of the factor level for the denominator. The contrast argument can also take other forms, as described in the help page for results and below.
resMFType <- results(ddsMF,
contrast=c("type", "single", "paired"))
head(resMFType)
## log2 fold change (MLE): type single vs paired
## Wald test p-value: type single vs paired
## DataFrame with 6 rows and 6 columns
## baseMean log2FoldChange lfcSE stat pvalue padj
## <numeric> <numeric> <numeric> <numeric> <numeric> <numeric>
## FBgn0000008 95.28865 -0.265123 0.217459 -1.219189 0.2227725 0.492729
## FBgn0000017 4359.09632 -0.103203 0.113399 -0.910090 0.3627748 0.642817
## FBgn0000018 419.06811 0.225857 0.128864 1.752669 0.0796589 0.271610
## FBgn0000024 6.41105 0.302083 0.745703 0.405099 0.6854049 NA
## FBgn0000032 990.79225 0.233891 0.119646 1.954855 0.0506002 0.206081
## FBgn0000037 14.11443 -0.053260 0.521939 -0.102043 0.9187228 0.969866
If the variable is continuous or an interaction term (see the section on interactions), then the results can be extracted using the name argument to results, where the name is one of the elements returned by resultsNames(dds).
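For instance, here is a hypothetical sketch assuming the interaction design mentioned above and DESeq2's usual coefficient naming; the coefficient name shown is illustrative only:
resultsNames(dds)  # lists the valid 'name' values for the current design
# Hypothetical: with design ~ genotype + condition + genotype:condition,
# an interaction coefficient might be extracted as, e.g.:
# res_int <- results(dds, name="genotypeII.conditiontreated")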
In order to test for differential expression, we operate on raw counts and use discrete distributions as described in the previous section on differential expression. However for other downstream analyses – e.g. for visualization or clustering – it might be useful to work with transformed versions of the count data.
Maybe the most obvious choice of transformation is the logarithm. Since count values for a gene can be zero in some conditions (and non-zero in others), some advocate the use of pseudocounts, i.e. transformations of the form:
\[ y = \log_2(n + n_0) \]
where \(n\) represents the count values and \(n_0\) is a positive constant.
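In R, the shifted logarithm with \(n_0 = 1\), applied to size-factor-normalized counts, can be computed directly; a minimal sketch (equivalent to the normTransform call shown later):
# Shifted log transform with pseudocount n0 = 1
logCounts <- log2(counts(dds, normalized=TRUE) + 1)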
In this section, we discuss two alternative approaches that offer more theoretical justification and a rational way of choosing parameters equivalent to \(n_0\) above. One makes use of the concept of variance stabilizing transformations (VST), and the other is the regularized logarithm or rlog, which incorporates a prior on the sample differences. Both transformations produce transformed data on the log2 scale which has been normalized with respect to library size or other normalization factors.
The point of these two transformations, the VST and the rlog, is to remove the dependence of the variance on the mean, particularly the high variance of the logarithm of count data when the mean is low. Both VST and rlog use the experiment-wide trend of variance over mean, in order to transform the data to remove the experiment-wide trend. Note that we do not require or desire that all the genes have exactly the same variance after transformation. Indeed, in a figure below, you will see that after the transformations the genes with the same mean do not have exactly the same standard deviations, but that the experiment-wide trend has flattened. It is those genes with row variance above the trend which will allow us to cluster samples into interesting groups.
Note on running time: if you have many samples (e.g. 100s), the rlog function might take too long, and so the vst function will be a faster choice. The rlog and VST have similar properties, but the rlog requires fitting a shrinkage term for each sample and each gene which takes time. See the DESeq2 paper for more discussion on the differences.
The two functions, vst and rlog, have an argument blind, for whether the transformation should be blind to the sample information specified by the design formula. When blind equals TRUE (the default), the functions will re-estimate the dispersions using only an intercept. This setting should be used in order to compare samples in a manner wholly unbiased by the information about experimental groups, for example to perform sample QA (quality assurance) as demonstrated below.
However, blind dispersion estimation is not the appropriate choice if one expects that many or the majority of genes (rows) will have large differences in counts which are explainable by the experimental design, and one wishes to transform the data for downstream analysis. In this case, using blind dispersion estimation will lead to large estimates of dispersion, as it attributes differences due to experimental design as unwanted noise, and will result in overly shrinking the transformed values towards each other. By setting blind to FALSE, the dispersions already estimated will be used to perform transformations, or if not present, they will be estimated using the current design formula. Note that only the fitted dispersion estimates from the mean-dispersion trend line are used in the transformation (the global dependence of dispersion on mean for the entire experiment). So setting blind to FALSE is still for the most part not using the information about which samples were in which experimental group in applying the transformation.
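For reference, the default blind behavior is obtained by simply omitting the argument; compare this one-line sketch with the blind=FALSE calls below:
# blind=TRUE is the default: dispersions are re-estimated with a ~1 design,
# appropriate for fully unsupervised sample QA
vsdBlind <- vst(dds)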
These transformation functions return an object of class DESeqTransform which is a subclass of RangedSummarizedExperiment. For ~20 samples, running on a newly created DESeqDataSet, rlog may take 30 seconds, while vst takes less than 1 second. The running times are shorter when using blind=FALSE and if the function DESeq has already been run, because then it is not necessary to re-estimate the dispersion values. The assay function is used to extract the matrix of normalized values.
vsd <- vst(dds, blind=FALSE)
rld <- rlog(dds, blind=FALSE)
head(assay(vsd), 3)
## treated1 treated2 treated3 untreated1 untreated2 untreated3
## FBgn0000008 7.777746 7.984491 7.765448 7.735026 7.807720 7.997378
## FBgn0000017 11.954934 12.035700 12.028610 12.049838 12.295662 12.471723
## FBgn0000018 9.219162 9.089390 9.028791 9.374577 9.173538 9.052143
## untreated4
## FBgn0000008 7.832458
## FBgn0000017 12.089675
## FBgn0000018 9.142848
Above, we used a parametric fit for the dispersion. In this case, the closed-form expression for the variance stabilizing transformation is used by the vst function. If a local fit is used (option fitType="locfit" to estimateDispersions) a numerical integration is used instead. The transformed data should be approximately variance stabilized and also includes correction for size factors or normalization factors. The transformed data is on the log2 scale for large counts.
The function rlog, which stands for regularized log, transforms the original count data to the log2 scale by fitting a model with a term for each sample and a prior distribution on the coefficients which is estimated from the data. This is the same kind of shrinkage (sometimes referred to as regularization, or moderation) of log fold changes used by DESeq and nbinomWaldTest. The resulting data contains elements defined as:
\[ \log_2(q_{ij}) = \beta_{i0} + \beta_{ij} \]
where \(q_{ij}\) is a parameter proportional to the expected true concentration of fragments for gene i and sample j (see formula in the original vignette), \(\beta_{i0}\) is an intercept which does not undergo shrinkage, and \(\beta_{ij}\) is the sample-specific effect which is shrunk toward zero based on the dispersion-mean trend over the entire dataset. The trend typically captures high dispersions for low counts, and therefore these genes exhibit higher shrinkage from the rlog.
Note that, as \(q_{ij}\) represents the part of the mean value \(\mu_{ij}\) after the size factor \(s_j\) has been divided out, it is clear that the rlog transformation inherently accounts for differences in sequencing depth. Without priors, this design matrix would lead to a non-unique solution, however the addition of a prior on non-intercept betas allows for a unique solution to be found.
The figure below plots the standard deviation of the transformed data, across samples, against the mean, using the shifted logarithm transformation, the regularized log transformation and the variance stabilizing transformation. The shifted logarithm has elevated standard deviation in the lower count range, and the regularized log to a lesser extent, while for the variance stabilized data the standard deviation is roughly constant along the whole dynamic range.
Note that the vertical axis in such plots is the square root of the variance over all samples, so including the variance due to the experimental conditions. While a flat curve of the square root of variance over the mean may seem like the goal of such transformations, this may be unreasonable in the case of datasets with many true differences due to the experimental conditions.
# this gives log2(n + 1)
ntd <- normTransform(dds)
library(vsn)
meanSdPlot(assay(ntd))
meanSdPlot(assay(vsd))
meanSdPlot(assay(rld))
Data quality assessment and quality control (i.e. the removal of insufficiently good data) are essential steps of any data analysis. These steps should typically be performed very early in the analysis of a new data set, preceding or in parallel to the differential expression testing.
We define the term quality as fitness for purpose. Our purpose is the detection of differentially expressed genes, and we are looking in particular for samples whose experimental treatment suffered from an abnormality that renders the data points obtained from these particular samples detrimental to our purpose.
To explore a count matrix, it is often instructive to look at it as a heatmap. Below we show how to produce such a heatmap for various transformations of the data.
library(pheatmap)
select <- order(rowMeans(counts(dds,normalized=TRUE)),
decreasing=TRUE)[1:20]
df <- as.data.frame(colData(dds)[,c("condition","type")])
pheatmap(assay(ntd)[select,], cluster_rows=FALSE, show_rownames=FALSE,
cluster_cols=FALSE, annotation_col=df)
pheatmap(assay(vsd)[select,], cluster_rows=FALSE, show_rownames=FALSE,
cluster_cols=FALSE, annotation_col=df)
pheatmap(assay(rld)[select,], cluster_rows=FALSE, show_rownames=FALSE,
cluster_cols=FALSE, annotation_col=df)
Another use of the transformed data is sample clustering. Here, we apply the dist function to the transpose of the transformed count matrix to get sample-to-sample distances.
sampleDists <- dist(t(assay(vsd)))
A heatmap of this distance matrix gives us an overview over similarities and dissimilarities between samples. We have to provide a hierarchical clustering hc to the heatmap function based on the sample distances, or else the heatmap function would calculate a clustering based on the distances between the rows/columns of the distance matrix.
library(RColorBrewer)
sampleDistMatrix <- as.matrix(sampleDists)
rownames(sampleDistMatrix) <- paste(vsd$condition, vsd$type, sep="-")
colnames(sampleDistMatrix) <- NULL
colors <- colorRampPalette( rev(brewer.pal(9, "Blues")) )(255)
pheatmap(sampleDistMatrix,
clustering_distance_rows=sampleDists,
clustering_distance_cols=sampleDists,
col=colors)
Related to the distance matrix is the PCA plot, which shows the samples in the 2D plane spanned by their first two principal components. This type of plot is useful for visualizing the overall effect of experimental covariates and batch effects.
plotPCA(vsd, intgroup=c("condition", "type"))
## using ntop=500 top features by variance
It is also possible to customize the PCA plot using the ggplot function.
pcaData <- plotPCA(vsd, intgroup=c("condition", "type"), returnData=TRUE)
## using ntop=500 top features by variance
percentVar <- round(100 * attr(pcaData, "percentVar"))
ggplot(pcaData, aes(PC1, PC2, color=condition, shape=type)) +
geom_point(size=3) +
xlab(paste0("PC1: ",percentVar[1],"% variance")) +
ylab(paste0("PC2: ",percentVar[2],"% variance")) +
coord_fixed()
At this point, the vignette continues with more details, including discussion of contrasts, interactions, the theory behind the method, and other very interesting points. We do not have time to cover them in this lesson today but it is worth spending additional time on.