SwePub

Hit list for the search "WFRF:(Yoon TH) srt2:(2020)"


  • Results 1-9 of 9
3.
  • Federico, A, et al. (authors)
  • Transcriptomics in Toxicogenomics, Part II: Preprocessing and Differential Expression Analysis for High Quality Data
  • 2020
  • In: Nanomaterials (Basel, Switzerland). - : MDPI AG. - 2079-4991. ; 10:5
  • Journal article (peer-reviewed). Abstract:
    • Preprocessing of transcriptomics data plays a pivotal role in the development of toxicogenomics-driven tools for chemical toxicity assessment. The generation and exploitation of large volumes of molecular profiles, following an appropriate experimental design, allows the employment of toxicogenomics (TGx) approaches for a thorough characterisation of the mechanism of action (MOA) of different compounds. To date, a plethora of data preprocessing methodologies have been suggested. However, in most cases, building the optimal analytical workflow is not straightforward. A careful selection of the right tools must be carried out, since it will affect the downstream analyses and modelling approaches. Transcriptomics data preprocessing spans across multiple steps such as quality check, filtering, normalization, batch effect detection and correction. Currently, there is a lack of standard guidelines for data preprocessing in the TGx field. Defining the optimal tools and procedures to be employed in the transcriptomics data preprocessing will lead to the generation of homogeneous and unbiased data, allowing the development of more reliable, robust and accurate predictive models. In this review, we outline methods for the preprocessing of three main transcriptomic technologies including microarray, bulk RNA-Sequencing (RNA-Seq), and single cell RNA-Sequencing (scRNA-Seq). Moreover, we discuss the most common methods for the identification of differentially expressed genes and to perform a functional enrichment analysis. This review is the second part of a three-article series on Transcriptomics in Toxicogenomics.
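The normalize-then-test idea outlined in the abstract above can be sketched in a deliberately minimal form. This is an illustration only, not the pipelines the review recommends (real workflows use dedicated tools such as limma or DESeq2); all function names and numbers below are invented:

```python
import math
from statistics import mean, stdev

def log2_cpm(counts):
    """Library-size normalize one sample to counts per million, then log2-transform."""
    total = sum(counts)
    return [math.log2(c / total * 1_000_000 + 1) for c in counts]

def welch_t(a, b):
    """Welch t statistic comparing two groups of normalized expression values."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / math.sqrt(va + vb)

# invented toy data: rows are samples, columns are two genes
control = [[100, 10], [120, 12], [110, 9]]
exposed = [[100, 50], [130, 60], [90, 55]]

ctrl_n = [log2_cpm(s) for s in control]
expo_n = [log2_cpm(s) for s in exposed]

for gene in range(2):
    t = welch_t([s[gene] for s in expo_n], [s[gene] for s in ctrl_n])
    print(f"gene {gene}: t = {t:+.2f}")
```

Even in this toy, normalization matters: the large shift in the second gene changes the library size, so the first gene's normalized values move too.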
4.
  • James, SL, et al. (authors)
  • Estimating global injuries morbidity and mortality: methods and data used in the Global Burden of Disease 2017 study
  • 2020
  • In: Injury prevention : journal of the International Society for Child and Adolescent Injury Prevention. - : BMJ. - 1475-5785. ; 26:Suppl 1, pp. 125-153
  • Journal article (peer-reviewed). Abstract:
    • While there is a long history of measuring death and disability from injuries, modern research methods must account for the wide spectrum of disability that can occur in an injury, and must provide estimates with sufficient demographic, geographical and temporal detail to be useful for policy makers. The Global Burden of Disease (GBD) 2017 study used methods to provide highly detailed estimates of global injury burden that meet these criteria. Methods: In this study, we report and discuss the methods used in GBD 2017 for injury morbidity and mortality burden estimation. In summary, these methods included estimating cause-specific mortality for every cause of injury, and then estimating incidence for every cause of injury. Non-fatal disability for each cause is then calculated based on the probabilities of suffering from different types of bodily injury experienced. Results: GBD 2017 produced morbidity and mortality estimates for 38 causes of injury. Estimates were produced in terms of incidence, prevalence, years lived with disability, cause-specific mortality, years of life lost and disability-adjusted life-years for a 28-year period for 22 age groups, 195 countries and both sexes. Conclusions: GBD 2017 demonstrated a complex and sophisticated series of analytical steps using the largest known database of morbidity and mortality data on injuries. GBD 2017 results should be used to help inform injury prevention policy making and resource allocation. We also identify important avenues for improving injury burden estimation in the future.
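The burden metrics named in the abstract above combine as DALY = YLL + YLD (disability-adjusted life-years are years of life lost to mortality plus years lived with disability). A toy arithmetic illustration, with all numbers invented:

```python
def yll(deaths, remaining_life_expectancy):
    """Years of life lost: deaths times remaining life expectancy at age of death."""
    return deaths * remaining_life_expectancy

def yld(prevalent_cases, disability_weight):
    """Years lived with disability (prevalence-based): cases times disability weight."""
    return prevalent_cases * disability_weight

def daly(deaths, remaining_life_expectancy, prevalent_cases, disability_weight):
    """Disability-adjusted life-years: YLL plus YLD."""
    return yll(deaths, remaining_life_expectancy) + yld(prevalent_cases, disability_weight)

# invented example: 1,000 deaths with 30 years of remaining life expectancy,
# plus 50,000 prevalent cases with a disability weight of 0.2
print(daly(1_000, 30, 50_000, 0.2))  # 30000 + 10000.0 = 40000.0
```

GBD estimation itself is far more involved (age-specific rates, comorbidity correction, uncertainty propagation); this only shows how the headline metrics relate.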
5.
  • Kinaret, PAS, et al. (authors)
  • Transcriptomics in Toxicogenomics, Part I: Experimental Design, Technologies, Publicly Available Data, and Regulatory Aspects
  • 2020
  • In: Nanomaterials (Basel, Switzerland). - : MDPI AG. - 2079-4991. ; 10:4
  • Journal article (peer-reviewed). Abstract:
    • The starting point of successful hazard assessment is the generation of unbiased and trustworthy data. Conventional toxicity testing deals with extensive observations of phenotypic endpoints in vivo and complementing in vitro models. The increasing development of novel materials and chemical compounds dictates the need for a better understanding of the molecular changes occurring in exposed biological systems. Transcriptomics enables the exploration of organisms’ responses to environmental, chemical, and physical agents by observing the molecular alterations in more detail. Toxicogenomics integrates classical toxicology with omics assays, thus allowing the characterization of the mechanism of action (MOA) of chemical compounds, novel small molecules, and engineered nanomaterials (ENMs). Lack of standardization in data generation and analysis currently hampers the full exploitation of toxicogenomics-based evidence in risk assessment. To fill this gap, TGx methods need to take into account appropriate experimental design and possible pitfalls in the transcriptomic analyses as well as data generation and sharing that adhere to the FAIR (Findable, Accessible, Interoperable, and Reusable) principles. In this review, we summarize the recent advancements in the design and analysis of DNA microarray, RNA sequencing (RNA-Seq), and single-cell RNA-Seq (scRNA-Seq) data. We provide guidelines on exposure time, dose and complex endpoint selection, sample quality considerations and sample randomization. Furthermore, we summarize publicly available data resources and highlight applications of TGx data to understand and predict chemical toxicity potential. Additionally, we discuss the efforts to implement TGx into regulatory decision making to promote alternative methods for risk assessment and to support the 3R (reduction, refinement, and replacement) concept. This review is the first part of a three-article series on Transcriptomics in Toxicogenomics. 
These initial considerations on experimental design, technologies, publicly available data, and regulatory aspects are the starting point for the rigorous and reliable data preprocessing and modelling described in the second and third parts of the review series.
6.
  • Serra, A, et al. (authors)
  • Transcriptomics in Toxicogenomics, Part III: Data Modelling for Risk Assessment
  • 2020
  • In: Nanomaterials (Basel, Switzerland). - : MDPI AG. - 2079-4991. ; 10:4
  • Journal article (peer-reviewed). Abstract:
    • Transcriptomics data are relevant to address a number of challenges in Toxicogenomics (TGx). After careful planning of exposure conditions and data preprocessing, the TGx data can be used in predictive toxicology, where more advanced modelling techniques are applied. The large volume of molecular profiles produced by omics-based technologies allows the development and application of artificial intelligence (AI) methods in TGx. Indeed, the publicly available omics datasets are constantly increasing together with a plethora of different methods that are made available to facilitate their analysis, interpretation and the generation of accurate and stable predictive models. In this review, we present the state-of-the-art of data modelling applied to transcriptomics data in TGx. We show how the benchmark dose (BMD) analysis can be applied to TGx data. We review read across and adverse outcome pathways (AOP) modelling methodologies. We discuss how network-based approaches can be successfully employed to clarify the mechanism of action (MOA) or specific biomarkers of exposure. We also describe the main AI methodologies applied to TGx data to create predictive classification and regression models and we address current challenges. Finally, we present a short description of deep learning (DL) and data integration methodologies applied in these contexts. Modelling of TGx data represents a valuable tool for more accurate chemical safety assessment. This review is the third part of a three-article series on Transcriptomics in Toxicogenomics.
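Benchmark dose (BMD) analysis, mentioned in the abstract above, finds the dose at which a response crosses a chosen benchmark response level. The helper below is a toy linear-interpolation sketch for an increasing dose-response series; real BMD analysis fits parametric dose-response models with confidence intervals, and the function name and data here are invented:

```python
def bmd_linear(doses, responses, bmr):
    """Dose at which an increasing response first reaches the benchmark
    response (bmr), by linear interpolation between observed points."""
    for i in range(len(doses) - 1):
        r0, r1 = responses[i], responses[i + 1]
        if r0 == bmr:
            return doses[i]  # exact hit at an observed dose
        if r0 <= bmr <= r1:
            frac = (bmr - r0) / (r1 - r0)
            return doses[i] + frac * (doses[i + 1] - doses[i])
    return None  # benchmark response not reached in the tested range

# invented dose-response data (arbitrary units)
doses = [0.0, 1.0, 3.0, 10.0]
responses = [0.00, 0.05, 0.20, 0.60]
print(bmd_linear(doses, responses, 0.10))  # dose at 10% benchmark response
```

With these numbers the 10% benchmark falls between the 1.0 and 3.0 doses, so the interpolated BMD lands between them.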
