SwePub
Search the SwePub database


Result list for the search "WFRF:(Trygg Johan) ;pers:(Eliasson Mattias)"


  • Result 1-6 of 6
1.
  • Eliasson, Mattias, et al. (author)
  • From data processing to multivariate validation : essential steps in extracting interpretable information from metabolomics data
  • 2011
  • In: Current Pharmaceutical Biotechnology. - : Bentham Science Publishers. - 1389-2010 .- 1873-4316. ; 12:7, s. 996-1004
  • Research review (peer-reviewed), abstract:
    • Metabolomics studies generate steadily increasing amounts of data, which calls for both a battery of suitable analysis methods and validation procedures able to handle large data sets. In this review, an overview of the metabolomics data processing pipeline is presented. A selection of recently developed and widely cited data processing methods is discussed. In addition, commonly used chemometric and machine learning analysis methods, as well as validation approaches, are described.
2.
3.
  • Eliasson, Mattias, et al. (author)
  • Strategy for optimizing LC-MS data processing in Metabolomics : A design of experiments approach
  • 2012
  • In: Analytical Chemistry. - : American Chemical Society (ACS). - 0003-2700 .- 1520-6882. ; 84:15, s. 6869-6876
  • Journal article (peer-reviewed), abstract:
    • A strategy for optimizing LC-MS metabolomics data processing is proposed. We applied this strategy to the XCMS open-source package, written in R, on both human and plant biology data. The strategy is a sequential design of experiments (DoE) based on a dilution series from a pooled sample and a measure of correlation between diluted concentrations and integrated peak areas. The reliability index metric, used to define peak quality, simultaneously favors reliable peaks and disfavors unreliable peaks using a weighted ratio between peaks with high and low response linearity. In the case studies, DoE optimization improved the reliability index by more than 57% compared to the default settings. The proposed strategy can be applied to any other data processing software with parameters to be tuned, e.g., MZmine 2. It can also be fully automated and used as a module in a complete metabolomics data processing pipeline.
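The reliability index in the abstract above is not given as an explicit formula here; the sketch below assumes a simple variant — Pearson correlation of each peak's integrated areas against the dilution series, with an illustrative threshold and weight — rather than the paper's exact definition:

```python
import numpy as np

def reliability_index(areas, concentrations, r_threshold=0.9, w=1.0):
    """Illustrative reliability-index-style metric: a (weighted) ratio of
    peaks whose areas correlate well with the dilution series to those
    that do not. The paper's exact definition may differ."""
    areas = np.asarray(areas, dtype=float)          # (n_peaks, n_dilutions)
    conc = np.asarray(concentrations, dtype=float)  # (n_dilutions,)
    a = areas - areas.mean(axis=1, keepdims=True)
    c = conc - conc.mean()
    # Pearson correlation of each peak's areas with the dilution levels
    r = (a @ c) / (np.linalg.norm(a, axis=1) * np.linalg.norm(c) + 1e-12)
    reliable = int(np.sum(r >= r_threshold))
    unreliable = int(np.sum(r < r_threshold))
    return w * reliable / max(unreliable, 1)

# Two peaks that track the dilution series linearly, one that does not
conc = [1.0, 0.5, 0.25, 0.125]
areas = [[100, 52, 24, 13],   # responds linearly -> reliable
         [80, 41, 19, 11],    # responds linearly -> reliable
         [30, 90, 10, 70]]    # noise -> unreliable
print(reliability_index(areas, conc))  # → 2.0 (two reliable / one unreliable)
```

A DoE over processing parameters would then maximize such a score across parameter settings.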
4.
  • Gerber, Lorenz, et al. (author)
  • Multivariate curve resolution provides a high-throughput data processing pipeline for pyrolysis-gas chromatography/mass spectrometry
  • 2012
  • In: Journal of Analytical and Applied Pyrolysis. - : Elsevier BV. - 0165-2370 .- 1873-250X. ; 95, s. 95-100
  • Journal article (peer-reviewed), abstract:
    • We present a data processing pipeline for Pyrolysis-Gas Chromatography/Mass Spectrometry (Py-GC/MS) data that is suitable for high-throughput analysis of lignocellulosic samples. The approach applies multivariate curve resolution by alternate regression (MCR-AR) and automated peak assignment. MCR-AR employs parallel processing of multiple chromatograms, as opposed to the sequential processing used in prevailing applications. Parallel processing provides a global peak list that is consistent across all chromatograms and therefore does not require tedious manual curation. We evaluated this approach on wood samples from aspen and Norway spruce and found that parallel processing results in an overall higher precision of peak areas from integrated peaks. To further increase the speed of data processing, we evaluated automated peak assignment based solely on base-peak mass. This approach gave estimates of the proportions of lignin (as syringyl-, guaiacyl- and p-hydroxyphenyl-type lignin) and carbohydrate polymers in the wood samples that were in close agreement with those obtained when peak assignments were based on full spectra. This method establishes Py-GC/MS as a sensitive, robust and versatile high-throughput screening platform well suited to a non-specialist operator.
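Multivariate curve resolution is only summarized in the abstract above; as an illustration, the sketch below implements a closely related textbook variant (alternating least squares with non-negativity clipping) on synthetic data. The paper's actual MCR-AR constraints and regression steps may differ:

```python
import numpy as np

def mcr_als(D, n_components, n_iter=100, seed=0):
    """Illustrative curve resolution: alternate between solving for spectra
    given profiles and profiles given spectra, clipping to non-negativity.
    D: (n_times, n_channels) data matrix, e.g. a chromatogram."""
    rng = np.random.default_rng(seed)
    C = rng.random((D.shape[0], n_components))          # elution profiles
    for _ in range(n_iter):
        S = np.linalg.lstsq(C, D, rcond=None)[0]        # spectra given C
        S = np.clip(S, 0.0, None)
        C = np.linalg.lstsq(S.T, D.T, rcond=None)[0].T  # profiles given S
        C = np.clip(C, 0.0, None)
    return C, S

# Synthetic rank-2 data: two separated "peaks" with distinct spectra
t = np.linspace(0.0, 1.0, 50)
C_true = np.column_stack([np.exp(-((t - 0.3) / 0.05) ** 2),
                          np.exp(-((t - 0.6) / 0.05) ** 2)])
S_true = np.array([[1.0, 0.2, 0.0],
                   [0.0, 0.3, 1.0]])
D = C_true @ S_true
C_hat, S_hat = mcr_als(D, 2)
print(np.linalg.norm(D - C_hat @ S_hat) / np.linalg.norm(D))  # near zero
```

The parallel processing described in the paper would stack multiple chromatograms into one such matrix so all samples share a single set of resolved components.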
5.
  • Pinto, Rui Climaco, 1972-, et al. (author)
  • Strategy for minimizing between-study variation of large-scale phenotypic experiments using multivariate analysis
  • 2012
  • In: Analytical Chemistry. - : American Chemical Society (ACS). - 0003-2700 .- 1520-6882. ; 84:20, s. 8675-8681
  • Journal article (peer-reviewed), abstract:
    • We have developed a multistep strategy that integrates data from several large-scale experiments suffering from systematic between-experiment variation, removing variation that would otherwise mask differences of interest. It was applied to the evaluation of wood chemical analyses of 736 hybrid aspen trees: wild-type controls and trees carrying transgenes potentially involved in wood formation. The trees were grown in four different greenhouse experiments, which imposed significant variation between experiments. Pyrolysis coupled to gas chromatography/mass spectrometry (Py-GC/MS) was used as a high-throughput screening platform for fingerprinting the wood chemotype. Our proposed strategy includes quality control, outlier detection, gene-specific classification, and consensus analysis. The orthogonal projections to latent structures discriminant analysis (OPLS-DA) method was used to generate consensus chemotype profiles for each transgenic line. These were thereafter compiled into a global dataset. Multivariate analysis and cluster analysis techniques revealed a drastic reduction in between-experiment variation, which enabled a global analysis of all transgenic lines from the four independent experiments. In-depth analysis of specific transgenic lines and independent peak identification validated the proposed strategy.
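The full strategy (QC, outlier detection, gene-specific classification, OPLS-DA consensus) is beyond a short sketch, but its core goal — removing systematic between-experiment offsets so effects become comparable — can be illustrated with a much simpler assumed step: centering every sample on the wild-type control mean of its own experiment. This is not the paper's method, only a minimal stand-in for the idea:

```python
import numpy as np

def center_on_controls(X, experiment, is_control):
    """Assumed simplification: subtract, from every sample, the mean of
    the wild-type controls measured in the same experiment, removing the
    systematic per-experiment offset."""
    X = np.asarray(X, dtype=float)
    experiment = np.asarray(experiment)
    is_control = np.asarray(is_control, dtype=bool)
    out = X.copy()
    for e in np.unique(experiment):
        mask = experiment == e
        out[mask] -= X[mask & is_control].mean(axis=0)
    return out

# Two experiments with the same transgenic effect but a constant offset of 10
X = np.array([[1.0, 2.0], [1.0, 2.0], [2.0, 3.0],        # experiment 0
              [11.0, 12.0], [11.0, 12.0], [12.0, 13.0]])  # experiment 1
experiment = np.array([0, 0, 0, 1, 1, 1])
is_control = np.array([1, 1, 0, 1, 1, 0], dtype=bool)
centered = center_on_controls(X, experiment, is_control)
print(centered[2], centered[5])  # both [1. 1.]: offset removed, effect kept
```

After such a correction, samples from independent experiments can be pooled into one global dataset for joint multivariate analysis.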
6.
  • Trygg, Johan, et al. (author)
  • Optimization of Data Processing Parameters
  • 2012
  • Patent (pop. science, debate, etc.), abstract:
    • Described are computer-based methods and apparatuses, including computer program products, for optimizing data processing parameters. A data set is received that represents a plurality of samples. The data set is processed using a data processing algorithm that includes one or more processing stages, each stage using a respective first set of data processing parameters to generate processed data. A design of experiment model is generated for the data processing algorithm based on the processed data and a set of response values. For each stage of the data processing algorithm, a second set of data processing parameters is calculated based on at least the design of experiment model.
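The patent abstract describes DoE-based tuning of data processing parameters per stage. As a toy illustration, the sketch below runs a full-factorial design over two hypothetical parameters and returns the best-scoring setting; the patent additionally fits an explicit DoE model per stage, which is omitted here, and the parameter names and response function are invented for the example:

```python
import itertools

def doe_optimize(process, param_levels):
    """Evaluate a full-factorial design over the given parameter levels and
    return the best parameter set and its response. `process` maps a dict of
    parameters to a scalar response (higher is better)."""
    names = list(param_levels)
    best_params, best_score = None, float("-inf")
    for combo in itertools.product(*(param_levels[n] for n in names)):
        params = dict(zip(names, combo))
        score = process(params)   # e.g. a reliability index of the output
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical response surface peaking at peak_width=5, snr_threshold=3
toy = lambda p: -((p["peak_width"] - 5) ** 2 + (p["snr_threshold"] - 3) ** 2)
best, score = doe_optimize(toy, {"peak_width": [3, 5, 7],
                                 "snr_threshold": [1, 3, 10]})
print(best)   # {'peak_width': 5, 'snr_threshold': 3}
```

In a sequential design, the levels for the next round would be narrowed around `best` and the loop repeated stage by stage.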

 