Benchmarking transcriptional host response signatures for infection diagnosis

gene_x

Tags: bioinformatics

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9768893/

Abstract

  • Identification of host transcriptional response signatures has emerged as a new paradigm for infection diagnosis.
  • For clinical applications, signatures must robustly detect the pathogen of interest without cross-reacting with unintended conditions.
  • To evaluate the performance of infectious disease signatures, we developed a framework that includes a compendium of 17,105 transcriptional profiles capturing infectious and non-infectious conditions and a standardized methodology to assess robustness and cross-reactivity.
  • Applied to 30 published signatures of infection, the analysis showed that signatures were generally robust in detecting viral and bacterial infections in independent data.
  • Asymptomatic and chronic infections were also detectable, albeit with decreased performance.
  • However, many signatures were cross-reactive with unintended infections and aging.
  • In general, we found robustness and cross-reactivity to be conflicting objectives, and we identified signature properties associated with this trade-off.
  • The data compendium and evaluation framework developed here provide a foundation for the development of signatures for clinical application.
  • A record of this paper’s transparent peer review process is included in the supplemental information.

Introduction

  • The ability to diagnose infectious diseases has a profound impact on global health.
  • Most recently, diagnostic testing for SARS-CoV-2 infection has helped contain the COVID-19 pandemic, lessening the strain on healthcare systems.
  • As a further example, diagnostic technologies that discriminate bacterial from viral infections can inform the prescription of antibiotics.
  • This is a high-stakes clinical decision: if prescribed for bacterial infections, the use of antibiotics substantially reduces mortality,1 but if prescribed for viral infections, their misuse exacerbates antimicrobial resistance.2
  • Standard tests for infection diagnosis involve a variety of technologies including microbial cultures, PCR assays, and antigen-binding assays.
  • Despite the diversity in technologies, standard tests generally share a common design principle, which is to directly quantify pathogen material in patient samples.
  • As a consequence, standard tests have limited sensitivity, particularly early after infection, before the pathogen has replicated to detectable levels.
  • For example, PCR-based tests for SARS-CoV-2 infection may miss 60%–100% of cases within the first few days of infection due to insufficient viral genetic material.3 , 4
  • Similarly, a study of community acquired pneumonia found that pathogen-based tests failed to identify the causative pathogen in over 60% of patients.5
  • To overcome these limitations, new tools for infection diagnosis are urgently needed.

  • Host transcriptional response assays have emerged as a new paradigm to diagnose infections.6 , 7 , 8 , 9 , 10

  • Research in the field has produced a variety of host response signatures to detect general viral or bacterial infections as well as signatures for specific pathogens such as influenza virus.6 , 11 , 12 , 13 , 14 , 15
  • Unlike standard tests that measure pathogen material, these assays monitor changes in gene expression in response to infection.16
  • For example, transcriptional upregulation of IFN response genes may indicate an ongoing viral infection because these genes take part in the host antiviral response.17
  • Host response assays have a major potential advantage over pathogen-based tests because they may detect an infection even when the pathogen material is undetectable through direct measurements.

  • Development of host response assays that can be implemented clinically poses new methodological problems.

  • The most challenging problem is identifying the so-called “infection signature” for a pathogen of interest, that is, a set of host transcriptional changes induced in response to that pathogen.
  • Signature performance is characterized along two axes, robustness and cross-reactivity.
  • Robustness is defined as the ability of a signature to detect the intended infectious condition consistently in multiple independent cohorts.
  • Cross-reactivity is defined as the extent to which a signature predicts any condition other than the intended one.
  • To be clinically viable, an infection signature must simultaneously demonstrate high robustness and low cross-reactivity.
  • A robust but cross-reactive signature would detect unintended conditions, such as other infections (e.g., viral signatures detecting bacterial infections) and/or non-infectious conditions involving abnormal immune activation.

  • The clinical applicability of host response signatures ultimately depends on a rigorous evaluation of their robustness and cross-reactivity properties.

  • Such an evaluation is a complex task because it requires integrating and analyzing massive amounts of transcriptional studies involving the pathogen of interest along with a wide variety of other infectious and non-infectious conditions that may cause cross-reactivity.
  • Despite recent progress in this direction,10 , 18 , 19 a general framework to benchmark both robustness and cross-reactivity of candidate signatures is still lacking.

  • Here, we establish a general framework for systematic quantification of robustness and cross-reactivity of a candidate signature, based on a fine-grained curation of massive public data and development of a standardized signature scoring method.

  • Using this framework, we demonstrated that published signatures are generally robust but substantially cross-reactive with infectious and non-infectious conditions.
  • Further analysis of 200,000 synthetic signatures identified an inherent trade-off between robustness and cross-reactivity and determined signature properties associated with this trade-off.
  • Our framework, freely accessible at https://kleinsteinlab.shinyapps.io/compendium_shiny_app/, lays the foundation for the discovery of signatures of infection for clinical application.

Result 1: A curated set of human transcriptional infection signatures

  • While many transcriptional host response signatures of infection have been published, their robustness and cross-reactivity properties have not been systematically evaluated.
  • To identify published signatures for inclusion in our systematic evaluation, we performed a search of NCBI PubMed for publications describing immune profiling of viral or bacterial infections (Figure 1A).
  • We initially focused our curation on general viral or bacterial (rather than pathogen-specific) signatures from human whole-blood or peripheral-blood mononuclear cells (PBMCs).
  • We identified 24 signatures that were derived using a wide range of computational approaches, including differential expression analyses,7 , 20 , 21 , 22 gene clustering,23 , 24 regularized logistic regression,19 , 20 , 25 and meta-analyses.8 , 11

  • The signatures were annotated with multiple characteristics that were needed for the evaluation of performance.

  • The most important characteristic was the intended use of the signatures.
  • The intended use of the included signatures was to detect viral infection (V), bacterial infection (B), or directly discriminate between viral and bacterial infections (V/B).
  • For each signature, we recorded a set of genes and a group I versus group II comparison capturing the design of the signature, where group I was the intended infection type and group II was a control group. For most viral and bacterial signatures, group II comprised healthy controls; in a few cases, it comprised non-infectious illness controls. For signatures distinguishing viral and bacterial infections (V/B), we conventionally took the bacterial infection group as the control group.

  • We parsed the genes in these signatures as either “positive” or “negative” based on whether they were upregulated or downregulated in the intended group, respectively.

  • We also manually annotated the PubMed identifiers for the publication in which the signature was reported, accession records to identify discovery datasets used to build each signature, association of the signature with either acute or chronic infection, and additional metadata related to demographics and experimental design (Table S1).

  • This curation process identified 11 viral (V) signatures intended to capture transcriptional responses that are common across many viral pathogens, 7 bacterial (B) signatures intended to capture transcriptional responses common across bacterial pathogens, and 6 viral versus bacterial (V/B) signatures discriminating between viral and bacterial infections.

  • Viral signatures varied in size between 3 and 396 genes.
  • Several genes appeared in multiple viral signatures. For example, OASL, an interferon-induced gene with antiviral function,26 appeared in 6 of 11 signatures.
  • Enrichment analysis on the pool of viral signature genes showed significantly enriched terms consistent with antiviral immunity, including response to type I interferon (Figure 1B).
  • Bacterial signatures ranged in size from 2 to 69 genes, and enrichment analysis again highlighted expected pathways associated with antibacterial immunity (Figure 1C).
  • V/B signatures varied in size from 2 to 69 genes. The most common genes among V/B signatures were OASL and IFI27, both of which were also highly represented viral signature genes, and many of the same antiviral pathways were significantly enriched among V/B signature genes (Figure 1D).
  • We further investigated the similarity between viral, bacterial, and V/B signatures and found that many viral signatures shared genes with each other and V/B signatures, but bacterial signatures shared fewer similarities with each other (Figure 1E).
  • Overall, our curation produced a structured and well-annotated set of transcriptional signatures for systematic evaluation.
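The pairwise signature similarity described above can be quantified with a Jaccard index over the signatures' gene sets. A minimal sketch, using illustrative stand-in gene sets rather than the curated signatures from Table S1:

```python
# Hypothetical gene sets standing in for curated signatures (Table S1).
signatures = {
    "viral_A": {"OASL", "IFI27", "IFI44L", "ISG15"},
    "viral_B": {"OASL", "ISG15", "RSAD2"},
    "bact_A": {"HK3", "MMP9"},
}

def jaccard(a, b):
    """Jaccard index: shared genes / total distinct genes."""
    return len(a & b) / len(a | b)

# Overlap for every unordered signature pair.
pairs = {
    (s1, s2): round(jaccard(signatures[s1], signatures[s2]), 3)
    for s1 in signatures for s2 in signatures if s1 < s2
}
print(pairs)
```

With real signatures, high Jaccard values among viral and V/B signatures (driven by shared interferon-stimulated genes such as OASL) would reproduce the pattern in Figure 1E.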

Result 2: A compendium of human transcriptional infection datasets

  • To profile the performance of the curated infection signatures, we compiled a large compendium of datasets capturing host blood transcriptional responses to a wide diversity of pathogens.
  • We carried out a comprehensive search in the NCBI Gene Expression Omnibus (GEO)27 selecting transcriptional responses to in vivo viral, bacterial, parasitic, and fungal infections in human whole blood or PBMCs.
  • We screened over 8,000 GEO records and identified 136 transcriptional datasets that met our inclusion criteria (see STAR Methods).
  • Furthermore, to evaluate whether infection signatures cross-react with non-infectious conditions with documented immunomodulating effects, we also compiled an additional 14 datasets containing transcriptomes from the blood of aged and obese individuals.28 , 29
  • All datasets were downloaded from GEO and passed through a standardized pipeline.
  • Briefly, the pipeline included (1) uniform pre-processing of raw data files where possible, (2) remapping of available gene identifiers to Entrez Gene IDs, and (3) detection of outlier samples.30
  • In aggregate, we compiled, processed, and annotated 150 datasets to include in our data compendium (Figure 2A; Table S2; see STAR Methods for details).
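Steps (2) and (3) of such a pipeline can be sketched as follows. This is a simplified illustration, not the paper's actual pipeline (which follows ref. 30): the symbol-to-Entrez map is a hypothetical stand-in for a full annotation resource, and the outlier rule is one common correlation-based heuristic.

```python
import numpy as np

# Hypothetical symbol-to-Entrez mapping; a real pipeline would use a full
# annotation resource such as NCBI Gene.
SYMBOL_TO_ENTREZ = {"OASL": "8638", "IFI27": "3429", "MMP9": "4318"}

def remap_to_entrez(expr, gene_ids):
    """Step 2: keep rows whose identifier maps to an Entrez Gene ID.
    expr is a genes x samples matrix."""
    keep = [i for i, g in enumerate(gene_ids) if g in SYMBOL_TO_ENTREZ]
    return expr[keep, :], [SYMBOL_TO_ENTREZ[gene_ids[i]] for i in keep]

def flag_outlier_samples(expr, z_cut=3.0):
    """Step 3 (one common heuristic): flag samples whose mean correlation
    with all other samples is unusually low."""
    corr = np.corrcoef(expr.T)          # sample-by-sample correlation
    np.fill_diagonal(corr, np.nan)      # ignore self-correlation
    mean_corr = np.nanmean(corr, axis=1)
    z = (mean_corr - mean_corr.mean()) / mean_corr.std()
    return z < -z_cut                   # True = candidate outlier
```

Uniform pre-processing (step 1) is platform-specific (e.g., different normalizations for Illumina, Affymetrix, and RNA-seq data) and is therefore not sketched here.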

  • The compendium datasets showed wide variability in study design, sample composition, and available metadata, necessitating annotation both at the study level and at the finer-grained sample level.

  • Datasets followed either cross-sectional study designs, where individual subjects were profiled once for a snapshot of their infection, or longitudinal study designs, in which individual subjects were profiled at multiple time points over the course of an infection.
  • For longitudinal datasets, we also recorded subject identifiers and labeled time points.
  • Many datasets contained multiple subgroups, each profiling infection with a different pathogen.
  • Detailed review of the clinical methods and metadata for each study enabled us to annotate individual samples with infectious class (e.g., viral or bacterial) and causative pathogen.
  • For clinical variables, we manually recorded whether datasets profiled acute or chronic infections according to the authors and annotated symptom severity when available.
  • We further supplemented this information with biological sex, which we inferred computationally (see STAR Methods).
  • In total, we annotated 16,173 infection and control samples in a consistent way, capturing host responses to viral, bacterial, and parasitic infections.
  • We similarly annotated the additional 932 samples from aging and obesity datasets, including young and lean controls, respectively. In aggregate, we captured a broad range of more than 35 unique pathogens and non-infectious conditions (Figure 2B).

  • Most of our compendium datasets were composed of viral and bacterial infection response profiles.

  • We examined several technical factors that may bias the signature performance evaluation between these pathogen categories.
  • Datasets profiling viral infections and datasets profiling bacterial infections contained similar numbers of samples, with median sample sizes of 75.5 and 63.0, respectively, though the largest viral studies contained more samples than the largest bacterial studies (Figure 2C).
  • The number of cross-sectional studies was also nearly identical for both viral and bacterial infection datasets, but our compendium contained 20 viral longitudinal datasets (35% of viral) compared with 6 bacterial longitudinal datasets (10% of bacterial) (Figure 2D).
  • We also examined the distribution of platforms used to generate viral and bacterial infection datasets and found that gene expression was measured most commonly using Illumina platforms, followed by Affymetrix, for both viral and bacterial datasets (Figure 2E).
  • We also examined the frequency of whole blood and PBMC samples in our compendium (Figure 2F).
  • We did not identify systematic differences in the viral and bacterial datasets within our compendium, and therefore we do not expect these differences to impact the interpretation of our signature evaluations.

Result 3: Establishing a general framework for signature evaluation

  • We sought to quantify two measures of performance for all curated signatures: (1) robustness, the ability of a signature to predict its target infection in independent datasets not used for signature discovery, and (2) cross-reactivity, which we quantify as the undesired extent to which a signature predicts unrelated infections or conditions.
  • An ideal signature would demonstrate robustness but not cross-reactivity, e.g., an ideal viral signature would predict viral infections in independent datasets but would not be associated with infections caused by pathogens such as bacteria or parasites.

  • To score each signature in a standardized way, we leveraged the geometric mean scoring approach described in Haynes et al.31

  • For each signature (i.e., a set of positive genes and an optional set of negative genes), we calculated its sample score from log-transformed expression values by taking the difference between the geometric mean of positive signature gene expression values and the geometric mean of negative signature gene expression values.
  • For cross-sectional study designs, this generates a single signature score for each subject, but for longitudinal study designs, this approach produces a vector of scores across time points for each subject (each dataset here is a GEO record, e.g., the viral dataset GSE117827, the bacterial dataset GSE128557, or the parasite dataset GSE122737).
  • The scores at different time points can vary dramatically as the transcriptional program underlying an immune response changes over the course of an infection.11 , 16 , 32
  • In this case, we chose the maximally discriminative time point, so that a signature is considered robust if it can detect the infection at any time point but also considered cross-reactive if it would produce a false-positive call at any time point (see STAR Methods).
  • These subject scores were then used to quantify signature performance as the area under a receiver operating characteristic curve (AUROC) associated with each group comparison.
  • This approach is advantageous because it is computationally efficient and model-free.
  • The model-free property presents an advantage over parameterized models because it does not require transferring or re-training model coefficients between datasets.
  • Overall, this framework enables us to evaluate the performance of all signatures in a standardized and consistent way in any dataset (Figure 3A).
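The scoring and evaluation steps above can be sketched as follows. This is a simplified reading of the approach, assuming log-transformed expression (so the geometric mean of a gene set reduces to the arithmetic mean of its log values, as in Haynes et al.), a rank-based AUROC without tie handling, and a per-subject maximum score as a stand-in for the paper's maximally discriminative time point (see STAR Methods for the exact procedure).

```python
import numpy as np

def signature_score(log_expr, pos_idx, neg_idx=()):
    """Per-sample score: mean log expression of positive signature genes
    minus mean log expression of negative ones; log_expr is genes x samples."""
    pos = log_expr[list(pos_idx), :].mean(axis=0)
    neg = log_expr[list(neg_idx), :].mean(axis=0) if len(neg_idx) else 0.0
    return pos - neg

def collapse_longitudinal(scores, subject_ids):
    """Simplified stand-in: keep each subject's maximum score across
    time points (the paper selects the maximally discriminative one)."""
    best = {}
    for s, sid in zip(scores, subject_ids):
        best[sid] = max(best.get(sid, -np.inf), s)
    return best

def auroc(scores, labels):
    """Model-free AUROC via the rank-sum identity (assumes no tied scores);
    labels are 1 for group I (intended infection), 0 for group II (control)."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUROC of 1.0 means the signature perfectly separates group I from group II, while 0.5 is chance; no model coefficients need to be transferred or re-trained between datasets.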

  • signature --> transcriptomics --> subject score (a subject is a patient or a healthy control in a GEO dataset) --> metric (AUROC) --> evaluation (robustness and cross-reactivity)

TODO

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6716367/#R67 Benchmarking Metagenomics Tools for Taxonomic Classification

DNA classifiers

  • Bracken
  • Centrifuge
  • CLARK
  • CLARK-S
  • Kraken
  • Kraken2
  • KrakenUniq
  • k-SLAM
  • MegaBLAST
  • metaOthello
  • PathSeq
  • prophyle
  • taxMaps

Protein classifiers

  • DIAMOND
  • Kaiju
  • MMseqs2

Marker classifiers

  • MetaPhlAn2
  • mOTUs2



© 2023 XGenes.com Impressum