---
output: html_document
bibliography: ref.bib
---
# Quality Control
## Motivation {#quality-control-motivation}
Low-quality libraries in scRNA-seq data can arise from a variety of sources such as cell damage during dissociation or failure in library preparation (e.g., inefficient reverse transcription or PCR amplification).
These usually manifest as "cells" with low total counts, few expressed genes and high mitochondrial or spike-in proportions.
These low-quality libraries are problematic as they can contribute to misleading results in downstream analyses:
- They form their own distinct cluster(s), complicating interpretation of the results.
This is most obviously driven by increased mitochondrial proportions or enrichment for nuclear RNAs after cell damage.
In the worst case, low-quality libraries generated from different cell types can cluster together based on similarities in the damage-induced expression profiles, creating artificial intermediate states or trajectories between otherwise distinct subpopulations.
Additionally, very small libraries can form their own clusters due to shifts in the mean upon transformation [@lun2018overcoming].
- They interfere with characterization of population heterogeneity during variance estimation or principal components analysis.
The first few principal components will capture differences in quality rather than biology, reducing the effectiveness of dimensionality reduction.
Similarly, genes with the largest variances will be driven by differences between low- and high-quality cells.
The most obvious example involves low-quality libraries with very low counts where scaling normalization inflates the apparent variance of genes that happen to have a non-zero count in those libraries.
- They contain genes that appear to be strongly "upregulated" due to aggressive scaling to normalize for small library sizes.
This is most problematic for contaminating transcripts (e.g., from the ambient solution) that are present in all libraries at low but constant levels.
Increased scaling in low-quality libraries transforms small counts for these transcripts into large normalized expression values, resulting in apparent upregulation compared to other cells.
This can be misleading as the affected genes are often biologically sensible but are actually expressed in another subpopulation.
To avoid - or at least mitigate - these problems, we need to remove the problematic cells at the start of the analysis.
This step is commonly referred to as quality control (QC) on the cells.
(We will use "library" and "cell" rather interchangeably here, though the distinction will become important when dealing with droplet-based data.)
We demonstrate using a small scRNA-seq dataset from @lun2017assessing, which is provided with no prior QC so that we can apply our own procedures.
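For completeness, a minimal sketch of one way to obtain this dataset - via the *[scRNAseq](https://bioconductor.org/packages/3.16/scRNAseq)* Bioconductor package - is shown below (assuming that package is installed).
```r
# A sketch of how the 416B dataset can be obtained (assuming the scRNAseq
# Bioconductor package is installed).
library(scRNAseq)
sce.416b <- LunSpikeInData(which="416b")
```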
```r
sce.416b
```
```
## class: SingleCellExperiment
## dim: 46604 192
## metadata(0):
## assays(1): counts
## rownames(46604): ENSMUSG00000102693 ENSMUSG00000064842 ...
## ENSMUSG00000095742 CBFB-MYH11-mcherry
## rowData names(1): Length
## colnames(192): SLX-9555.N701_S502.C89V9ANXX.s_1.r_1
## SLX-9555.N701_S503.C89V9ANXX.s_1.r_1 ...
## SLX-11312.N712_S508.H5H5YBBXX.s_8.r_1
## SLX-11312.N712_S517.H5H5YBBXX.s_8.r_1
## colData names(9): Source Name cell line ... spike-in addition block
## reducedDimNames(0):
## mainExpName: endogenous
## altExpNames(2): ERCC SIRV
```
## Common choices of QC metrics
We use several common QC metrics to identify low-quality cells based on their expression profiles.
These metrics are described below in terms of reads for SMART-seq2 data, but the same definitions apply to UMI data generated by other technologies like MARS-seq and droplet-based protocols.
- The library size is defined as the total sum of counts across all relevant features for each cell.
Here, we will consider the relevant features to be the endogenous genes.
Cells with small library sizes are of low quality as the RNA has been lost at some point during library preparation,
either due to cell lysis or inefficient cDNA capture and amplification.
- The number of expressed features in each cell is defined as the number of endogenous genes with non-zero counts for that cell.
Any cell with very few expressed genes is likely to be of poor quality as the diverse transcript population has not been successfully captured.
- The proportion of reads mapped to spike-in transcripts is calculated relative to the total count across all features (including spike-ins) for each cell.
As the same amount of spike-in RNA should have been added to each cell, any enrichment in spike-in counts is symptomatic of loss of endogenous RNA.
Thus, high proportions are indicative of poor-quality cells where endogenous RNA has been lost due to, e.g., partial cell lysis or RNA degradation during dissociation.
- In the absence of spike-in transcripts, the proportion of reads mapped to genes in the mitochondrial genome can be used.
High proportions are indicative of poor-quality cells [@islam2014quantitative;@ilicic2016classification], presumably because of loss of cytoplasmic RNA from perforated cells.
The reasoning is that, in the presence of modest damage, the holes in the cell membrane permit efflux of individual transcript molecules but are too small to allow mitochondria to escape, leading to a relative enrichment of mitochondrial transcripts.
For single-nuclei RNA-seq experiments, high proportions are also useful as they can mark cells where the cytoplasm has not been successfully stripped.
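To make these definitions concrete, the sketch below computes rough equivalents of the first three metrics directly from the count matrix. This is for illustration only; it considers only the ERCC spike-ins in the total, whereas the dedicated functions used below also account for any other alternative experiments (e.g., the SIRVs).
```r
# Illustration only: rough manual equivalents of the QC metrics.
endo <- counts(sce.416b)                   # endogenous gene counts
spike <- counts(altExp(sce.416b, "ERCC"))  # ERCC spike-in counts
lib.size <- colSums(endo)                  # library size per cell
n.features <- colSums(endo > 0)            # number of expressed genes per cell
spike.prop <- 100 * colSums(spike) / (colSums(endo) + colSums(spike))
```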
For each cell, we calculate these QC metrics using the `perCellQCMetrics()` function from the *[scuttle](https://bioconductor.org/packages/3.16/scuttle)* package, a low-level companion to *[scater](https://bioconductor.org/packages/3.16/scater)* [@mccarthy2017scater].
The `sum` column contains the total count for each cell and the `detected` column contains the number of detected genes.
The `subsets_Mito_percent` column contains the percentage of reads mapped to mitochondrial transcripts.
Finally, the `altexps_ERCC_percent` column contains the percentage of reads mapped to ERCC transcripts.
```r
# Identifying the mitochondrial transcripts in our SingleCellExperiment.
location <- rowRanges(sce.416b)
is.mito <- any(seqnames(location)=="MT")
library(scuttle)
df <- perCellQCMetrics(sce.416b, subsets=list(Mito=is.mito))
summary(df$sum)
```
```
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 27084 856350 1111252 1165865 1328301 4398883
```
```r
summary(df$detected)
```
```
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 5609 7502 8341 8397 9208 11380
```
```r
summary(df$subsets_Mito_percent)
```
```
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 4.59 7.29 8.14 8.15 9.04 15.62
```
```r
summary(df$altexps_ERCC_percent)
```
```
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 2.24 4.29 6.03 6.41 8.13 19.43
```
Alternatively, users may prefer to use the `addPerCellQCMetrics()` function.
This computes and appends the per-cell QC statistics to the `colData` of the `SingleCellExperiment` object,
allowing us to retain all relevant information in a single object for later manipulation.
```r
sce.416b <- addPerCellQCMetrics(sce.416b, subsets=list(Mito=is.mito))
colnames(colData(sce.416b))
```
```
## [1] "Source Name" "cell line"
## [3] "cell type" "single cell well quality"
## [5] "genotype" "phenotype"
## [7] "strain" "spike-in addition"
## [9] "block" "sum"
## [11] "detected" "subsets_Mito_sum"
## [13] "subsets_Mito_detected" "subsets_Mito_percent"
## [15] "altexps_ERCC_sum" "altexps_ERCC_detected"
## [17] "altexps_ERCC_percent" "altexps_SIRV_sum"
## [19] "altexps_SIRV_detected" "altexps_SIRV_percent"
## [21] "total"
```
A key assumption here is that the QC metrics are independent of the biological state of each cell.
Poor values (e.g., low library sizes, high mitochondrial proportions) are presumed to be driven by technical factors rather than biological processes, meaning that the subsequent removal of cells will not misrepresent the biology in downstream analyses.
Major violations of this assumption would potentially result in the loss of cell types that have, say, systematically low RNA content or high numbers of mitochondria.
We can check for such violations using diagnostic plots described in Section \@ref(quality-control-plots) and [Advanced Section 1.5](http://bioconductor.org/books/3.16/OSCA.advanced/quality-control-redux.html#qc-discard-cell-types).
## Identifying low-quality cells
### With fixed thresholds {#fixed-qc}
The simplest approach to identifying low-quality cells involves applying fixed thresholds to the QC metrics.
For example, we might consider cells to be low quality if they have library sizes below 100,000 reads; express fewer than 5,000 genes; have spike-in proportions above 10%; or have mitochondrial proportions above 10%.
```r
qc.lib <- df$sum < 1e5
qc.nexprs <- df$detected < 5e3
qc.spike <- df$altexps_ERCC_percent > 10
qc.mito <- df$subsets_Mito_percent > 10
discard <- qc.lib | qc.nexprs | qc.spike | qc.mito
# Summarize the number of cells removed for each reason.
DataFrame(LibSize=sum(qc.lib), NExprs=sum(qc.nexprs),
SpikeProp=sum(qc.spike), MitoProp=sum(qc.mito), Total=sum(discard))
```
```
## DataFrame with 1 row and 5 columns
## LibSize NExprs SpikeProp MitoProp Total
## <integer> <integer> <integer> <integer> <integer>
## 1 3 0 19 14 33
```
While simple, this strategy requires considerable experience to determine appropriate thresholds for each experimental protocol and biological system.
Thresholds for read count-based data are not applicable for UMI-based data, and vice versa.
Differences in mitochondrial activity or total RNA content require constant adjustment of the mitochondrial and spike-in thresholds, respectively, for different biological systems.
Indeed, even with the same protocol and system, the appropriate threshold can vary from run to run due to the vagaries of cDNA capture efficiency and sequencing depth per cell.
### With adaptive thresholds {#quality-control-outlier}
Here, we assume that most of the dataset consists of high-quality cells.
We then identify cells that are outliers for the various QC metrics, based on the median absolute deviation (MAD) from the median value of each metric across all cells.
By default, we consider a value to be an outlier if it is more than 3 MADs from the median in the "problematic" direction.
This is loosely motivated by the fact that such a filter will retain 99% of non-outlier values that follow a normal distribution.
We demonstrate by using the `perCellQCFilters()` function on the QC metrics from the 416B dataset.
```r
reasons <- perCellQCFilters(df,
sub.fields=c("subsets_Mito_percent", "altexps_ERCC_percent"))
colSums(as.matrix(reasons))
```
```
## low_lib_size low_n_features high_subsets_Mito_percent
## 4 0 2
## high_altexps_ERCC_percent discard
## 1 6
```
This function will identify cells with log-transformed library sizes that are more than 3 MADs below the median.
A log-transformation is used to improve resolution at small values when `type="lower"`
and to avoid negative thresholds that would be meaningless for a non-negative metric.
Furthermore, it is not uncommon for the distribution of library sizes to exhibit a heavy right tail;
the log-transformation avoids inflation of the MAD in a manner that might compromise outlier detection on the left tail.
(More generally, it makes the distribution seem more normal to justify the 99% rationale mentioned above.)
The function will also do the same for the log-transformed number of expressed genes.
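For intuition, the sketch below shows equivalent calls to scuttle's `isOutlier()` for these two metrics; `perCellQCFilters()` essentially wraps this logic, though the exact internals may differ slightly.
```r
# Sketch of the underlying outlier calls for the library size and the
# number of expressed genes (3 MADs on the log scale, lower tail only).
low.lib <- isOutlier(df$sum, log=TRUE, type="lower")
low.nexprs <- isOutlier(df$detected, log=TRUE, type="lower")
attr(low.lib, "thresholds") # adaptive thresholds are stored as an attribute
```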
`perCellQCFilters()` will also identify outliers for the proportion-based metrics specified in the `sub.fields=` argument.
These distributions frequently exhibit a heavy right tail, but unlike the two previous metrics, it is the right tail itself that contains the putative low-quality cells.
Thus, we do not perform any transformation to shrink the tail - rather, our hope is that the cells in the tail are identified as large outliers.
(While it is theoretically possible to obtain a meaningless threshold above 100%, this is rare enough to not be of practical concern.)
A cell that is an outlier for any of these metrics is considered to be of low quality and discarded.
This is captured in the `discard` column, which can be used for later filtering (Section \@ref(quality-control-discarded)).
```r
summary(reasons$discard)
```
```
## Mode FALSE TRUE
## logical 186 6
```
We can also extract the exact filter thresholds from the attributes of each of the logical vectors.
This may be useful for checking whether the automatically selected thresholds are appropriate.
```r
attr(reasons$low_lib_size, "thresholds")
```
```
## lower higher
## 434083 Inf
```
```r
attr(reasons$low_n_features, "thresholds")
```
```
## lower higher
## 5231 Inf
```
With this strategy, the thresholds adapt to both the location and spread of the distribution of values for a given metric.
This allows the QC procedure to adjust to changes in sequencing depth, cDNA capture efficiency, mitochondrial content, etc. without requiring any user intervention or prior experience.
However, the underlying assumption of a high-quality majority may not always be appropriate, which is discussed in more detail in [Advanced Section 1.3](http://bioconductor.org/books/3.16/OSCA.advanced/quality-control-redux.html#outlier-assumptions).
### Other approaches
Another strategy is to identify outliers in high-dimensional space based on the QC metrics for each cell.
We use methods from *[robustbase](https://CRAN.R-project.org/package=robustbase)* to quantify the "outlyingness" of each cell based on its QC metrics, and then use `isOutlier()` to identify low-quality cells that exhibit unusually high levels of outlyingness.
```r
stats <- cbind(log10(df$sum), log10(df$detected),
df$subsets_Mito_percent, df$altexps_ERCC_percent)
library(robustbase)
outlying <- adjOutlyingness(stats, only.outlyingness = TRUE)
multi.outlier <- isOutlier(outlying, type = "higher")
summary(multi.outlier)
```
```
## Mode FALSE TRUE
## logical 182 10
```
This and related approaches like PCA-based outlier detection and support vector machines can provide more power to distinguish low-quality cells from high-quality counterparts [@ilicic2016classification] as they can exploit patterns across many QC metrics.
However, this comes at some cost to interpretability, as the reason for removing a given cell may not always be obvious.
For completeness, we note that outliers can also be identified from the gene expression profiles, rather than QC metrics.
We consider this to be a risky strategy as it can remove high-quality cells in rare populations.
## Checking diagnostic plots {#quality-control-plots}
It is good practice to inspect the distributions of QC metrics (Figure \@ref(fig:qc-dist-416b)) to identify possible problems.
In the ideal case, we would see normal distributions that would justify the 3 MAD threshold used in outlier detection.
A large proportion of cells in another mode suggests that the QC metrics might be correlated with some biological state, potentially leading to the loss of distinct cell types during filtering;
or that there were inconsistencies with library preparation for a subset of cells, a not-uncommon phenomenon in plate-based protocols.
```r
colData(sce.416b) <- cbind(colData(sce.416b), df)
sce.416b$block <- factor(sce.416b$block)
sce.416b$phenotype <- ifelse(grepl("induced", sce.416b$phenotype),
"induced", "wild type")
sce.416b$discard <- reasons$discard
library(scater)
gridExtra::grid.arrange(
plotColData(sce.416b, x="block", y="sum", colour_by="discard",
other_fields="phenotype") + facet_wrap(~phenotype) +
scale_y_log10() + ggtitle("Total count"),
plotColData(sce.416b, x="block", y="detected", colour_by="discard",
other_fields="phenotype") + facet_wrap(~phenotype) +
scale_y_log10() + ggtitle("Detected features"),
plotColData(sce.416b, x="block", y="subsets_Mito_percent",
colour_by="discard", other_fields="phenotype") +
facet_wrap(~phenotype) + ggtitle("Mito percent"),
plotColData(sce.416b, x="block", y="altexps_ERCC_percent",
colour_by="discard", other_fields="phenotype") +
facet_wrap(~phenotype) + ggtitle("ERCC percent"),
ncol=1
)
```
(\#fig:qc-dist-416b)Distribution of QC metrics for each batch and phenotype in the 416B dataset. Each point represents a cell and is colored according to whether it was discarded.
Another useful diagnostic involves plotting the proportion of mitochondrial counts against some of the other QC metrics.
The aim is to confirm that there are no cells with both large total counts and large mitochondrial counts, to ensure that we are not inadvertently removing high-quality cells that happen to be highly metabolically active (e.g., hepatocytes).
We demonstrate using data from a larger experiment involving the mouse brain [@zeisel2015brain];
in this case, we do not observe any points in the top-right corner in Figure \@ref(fig:qc-mito-zeisel) that might potentially correspond to metabolically active, undamaged cells.
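The sketch below shows how such a plot could be generated, assuming that `sce.zeisel` has already been loaded (e.g., via the *scRNAseq* package) with mitochondrial genes flagged in its `rowData`; the QC calls mirror those used above for the 416B dataset.
```r
# A sketch, assuming sce.zeisel is loaded with a rowData column 'featureType'
# that labels mitochondrial genes as "mito".
sce.zeisel <- addPerCellQCMetrics(sce.zeisel,
    subsets=list(Mt=rowData(sce.zeisel)$featureType=="mito"))
qc.zeisel <- perCellQCFilters(colData(sce.zeisel),
    sub.fields=c("altexps_ERCC_percent", "subsets_Mt_percent"))
sce.zeisel$discard <- qc.zeisel$discard
plotColData(sce.zeisel, x="sum", y="subsets_Mt_percent", colour_by="discard")
```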
(\#fig:qc-mito-zeisel)Percentage of UMIs assigned to mitochondrial transcripts in the Zeisel brain dataset, plotted against the total number of UMIs. Each point represents a cell and is colored according to whether it was considered low-quality and discarded.
Comparison of the ERCC and mitochondrial percentages can also be informative (Figure \@ref(fig:qc-mito-spike-zeisel)).
Low-quality cells with small mitochondrial percentages, large spike-in percentages and small library sizes are likely to be stripped nuclei, i.e., they have been so extensively damaged that they have lost all cytoplasmic content.
On the other hand, cells with high mitochondrial percentages and low ERCC percentages may represent undamaged cells that are metabolically active.
This interpretation also applies for single-nuclei studies but with a switch of focus:
the stripped nuclei become the libraries of interest while the undamaged cells are considered to be low quality.
```r
plotColData(sce.zeisel, x="altexps_ERCC_percent", y="subsets_Mt_percent",
colour_by="discard")
```
(\#fig:qc-mito-spike-zeisel)Percentage of UMIs assigned to mitochondrial transcripts in the Zeisel brain dataset, plotted against the percentage of UMIs assigned to spike-in transcripts. Each point represents a cell and is colored according to whether it was considered low-quality and discarded.
We see that all of these metrics exhibit weak correlations to each other,
presumably a manifestation of a common underlying effect of cell damage.
The weakness of the correlations motivates the use of several metrics to capture different aspects of technical quality.
Of course, the flipside is that these metrics may also represent different aspects of biology, increasing the risk of inadvertently discarding entire cell types.
## Removing low-quality cells {#quality-control-discarded}
Once low-quality cells have been identified, we can choose to either remove them or mark them.
Removal is the most straightforward option and is achieved by subsetting the `SingleCellExperiment` by column.
In this case, we use the low-quality calls from Section \@ref(quality-control-outlier) to generate a subsetted `SingleCellExperiment` that we would use for downstream analyses.
```r
# Keeping the columns we DON'T want to discard.
filtered <- sce.416b[,!reasons$discard]
```
The other option is to simply mark the low-quality cells as such and retain them in the downstream analysis.
The aim here is to allow clusters of low-quality cells to form, and then to identify and ignore such clusters during interpretation of the results.
This approach avoids discarding cell types that have poor values for the QC metrics, deferring the decision on whether a cluster of such cells represents a genuine biological state.
```r
marked <- sce.416b
marked$discard <- reasons$discard
```
The downside is that it shifts the burden of QC to the manual interpretation of the clusters, which is already a major bottleneck in scRNA-seq data analysis (Chapters \@ref(clustering), \@ref(marker-detection) and \@ref(cell-type-annotation)).
Indeed, if we do not trust the QC metrics, we would have to distinguish between genuine cell types and low-quality cells based only on marker genes, and this is not always easy due to the tendency of the latter to "express" interesting genes (Section \@ref(quality-control-motivation)).
Retention of low-quality cells also compromises the accuracy of the variance modelling, requiring, e.g., use of more PCs to offset the fact that the early PCs are driven by differences between low-quality and other cells.
For routine analyses, we suggest performing removal by default to avoid complications from low-quality cells.
This allows most of the population structure to be characterized with no - or, at least, fewer - concerns about its validity.
Once the initial analysis is done, and if there are any concerns about discarded cell types ([Advanced Section 1.5](http://bioconductor.org/books/3.16/OSCA.advanced/quality-control-redux.html#qc-discard-cell-types)), a more thorough re-analysis can be performed where the low-quality cells are only marked.
This recovers cell types with low RNA content, high mitochondrial proportions, etc. that only need to be interpreted insofar as they "fill the gaps" in the initial analysis.
## Session Info {-}