Tags: bioinformatics
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9768893/
Introduction
To overcome these limitations, new tools for infection diagnosis are urgently needed.
Host transcriptional response assays have emerged as a new paradigm to diagnose infections.6,7,8,9,10
Host response assays have a major potential advantage over pathogen-based tests because they may detect an infection even when the pathogen material is undetectable through direct measurements.
Development of 'host response assays' that can be implemented clinically poses new methodological problems.
A signature that is robust but highly cross-reactive would detect unintended conditions, such as other infections (e.g., a viral signature detecting bacterial infections) and/or non-infectious conditions involving abnormal immune activation.
The clinical applicability of host response signatures ultimately depends on a rigorous evaluation of their robustness and cross-reactivity properties.
Despite recent progress in this direction,10,18,19 a general framework to benchmark both robustness and cross-reactivity of candidate signatures is still lacking.
Here, we establish a general framework for systematic quantification of robustness and cross-reactivity of a candidate signature, based on a fine-grained curation of massive public data and development of a standardized signature scoring method.
Result 1: A curated set of human transcriptional infection signatures
We identified 24 signatures that were derived using a wide range of computational approaches, including differential expression analyses,7,20,21,22 gene clustering,23,24 regularized logistic regression,19,20,25 and meta-analyses.8,11
The signatures were annotated with multiple characteristics that were needed for the evaluation of performance.
For each signature, we recorded a set of genes and a group I versus group II comparison capturing the design of the signature, where group I was the intended infection type and group II was a control group. For most viral and bacterial signatures, group II comprised healthy controls; in a few cases, it comprised non-infectious illness controls. For signatures distinguishing viral from bacterial infections (V/B), we conventionally took the bacterial infection group as the control group.
We parsed the genes in these signatures as either “positive” or “negative” based on whether they were upregulated or downregulated in the intended group, respectively.
We also manually annotated the PubMed identifier of the publication in which each signature was reported, the accession records identifying the discovery datasets used to build each signature, whether the signature was associated with acute or chronic infection, and additional metadata related to demographics and experimental design (Table S1).
This curation process identified 11 viral (V) signatures intended to capture transcriptional responses that are common across many viral pathogens, 7 bacterial (B) signatures intended to capture transcriptional responses common across bacterial pathogens, and 6 viral versus bacterial (V/B) signatures discriminating between viral and bacterial infections.
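As an illustration only (the class name, field names, and gene symbols below are my assumptions, not the paper's data model), one curated signature with its gene lists and group I versus group II design could be held in a small data structure:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative container for one curated signature; the class name and fields
# are assumptions for this note, not the schema of the paper's Table S1.
@dataclass
class InfectionSignature:
    name: str                       # label for the signature
    signature_class: str            # "V", "B", or "V/B"
    positive_genes: List[str] = field(default_factory=list)   # upregulated in group I
    negative_genes: List[str] = field(default_factory=list)   # downregulated in group I
    group_I: str = ""               # intended infection type
    group_II: str = ""              # control group (healthy, non-infectious, or bacterial for V/B)
    pubmed_id: str = ""             # publication reporting the signature
    discovery_accessions: List[str] = field(default_factory=list)  # discovery datasets

# Toy example with illustrative gene symbols and no real accession numbers:
example = InfectionSignature(
    name="toy_viral_signature",
    signature_class="V",
    positive_genes=["IFI27", "IFI44L"],
    negative_genes=["FCER1A"],
    group_I="viral infection",
    group_II="healthy controls",
)
```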
Result 2: A compendium of human transcriptional infection datasets
In aggregate, we compiled, processed, and annotated 150 datasets to include in our data compendium (Figure 2A; Table S2; see STAR Methods for details).
The compendium datasets showed wide variability in study design, sample composition, and available metadata, necessitating annotation both at the study level and at the finer-grained sample level.
We similarly annotated an additional 932 samples from aging and obesity datasets, including young and lean controls, respectively. In aggregate, we captured a broad range of more than 35 unique pathogens and non-infectious conditions (Figure 2B).
Most of our compendium datasets were composed of viral and bacterial infection response profiles.
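For illustration (dataset names, sample names, and column names below are made up), sample-level annotation of a compendium like this can be kept in a flat table, from which study-level summaries are derived:

```python
import pandas as pd

# Hypothetical sample-level annotation table; dataset/sample identifiers and
# column names are made up for illustration.
samples = pd.DataFrame([
    {"dataset": "dataset_A", "sample": "sample_1", "condition": "influenza", "label": "viral"},
    {"dataset": "dataset_A", "sample": "sample_2", "condition": "healthy",   "label": "control"},
    {"dataset": "dataset_B", "sample": "sample_3", "condition": "sepsis",    "label": "bacterial"},
])

# Study-level summaries can then be derived from the sample-level table.
print(samples.groupby("dataset")["label"].value_counts())
```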
Result 3: Establishing a general framework for signature evaluation
An ideal signature would demonstrate robustness but not cross-reactivity, e.g., an ideal viral signature would predict viral infections in independent datasets but would not be associated with infections caused by pathogens such as bacteria or parasites.
To score each signature in a standardized way, we leveraged the geometric mean scoring approach described in Haynes et al.31
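A minimal sketch of that geometric-mean idea, assuming the score is roughly the geometric mean of the positive genes' expression minus the geometric mean of the negative genes' expression (the +1 offset and the handling of missing genes are my own choices, not the authors' code):

```python
import numpy as np

def signature_score(expr, positive_genes, negative_genes):
    """Sketch of a geometric-mean signature score for one subject.

    expr: dict mapping gene symbol -> non-negative expression value
          (e.g., normalized counts) for a single subject.
    Returns the geometric mean over the positive genes minus the geometric
    mean over the negative genes; genes missing from expr are skipped.
    """
    def geo_mean(genes):
        values = np.array([expr[g] for g in genes if g in expr], dtype=float)
        if values.size == 0:
            return 0.0
        # +1 offset avoids log(0) for zero counts (an arbitrary choice here)
        return float(np.exp(np.mean(np.log(values + 1.0))))

    return geo_mean(positive_genes) - geo_mean(negative_genes)

# Toy subject with made-up expression values:
subject = {"IFI27": 8.2, "IFI44L": 6.9, "FCER1A": 2.1}
print(signature_score(subject, ["IFI27", "IFI44L"], ["FCER1A"]))
```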
Overall, this framework enables us to evaluate the performance of all signatures in a standardized and consistent way in any dataset (Figure 3A).
signature --> transcriptomics --> subject score (a subject is a patient or a healthy control in a GEO dataset) --> metric --> evaluation (robustness and cross-reactivity)
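To make the "metric" step concrete, one common choice (assumed here; not necessarily the paper's exact metric) is an AUROC per dataset computed over the subject scores:

```python
from sklearn.metrics import roc_auc_score

# Toy per-subject scores and binary labels for one dataset
# (1 = intended condition, 0 = control); values are made up.
scores = [2.3, 1.8, 0.4, 0.2, 1.1]
labels = [1, 1, 0, 0, 1]

# A high AUC on datasets of the intended infection type suggests robustness;
# a high AUC on unintended conditions suggests cross-reactivity
# (this interpretation is my paraphrase of the flow above).
print(f"dataset AUC: {roc_auc_score(labels, scores):.2f}")
```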
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6716367/#R67 Benchmarking Metagenomics Tools for Taxonomic Classification
DNA classifiers
Protein classifiers
Marker classifiers