SwePub
Search the SwePub database


Result list for the query "WFRF:(Landfors Mattias) srt2:(2010-2014)"


  • Result 1-7 of 7
1.
  • Degerman, Sofie, et al. (author)
  • Immortalization of T-Cells Is Accompanied by Gradual Changes in CpG Methylation Resulting in a Profile Resembling a Subset of T-Cell Leukemias
  • 2014
  • In: Neoplasia. - : Elsevier BV. - 1522-8002 .- 1476-5586. ; 16:7, s. 606-615
  • Journal article (peer-reviewed), abstract:
    • We have previously described gene expression changes during spontaneous immortalization of T-cells, thereby identifying cellular processes important for cell growth crisis escape and unlimited proliferation. Here, we analyze the same model to investigate the role of genome-wide methylation in the immortalization process at different time points pre-crisis and post-crisis using high-resolution arrays. We show that over time in culture there is an overall accumulation of methylation alterations, with preferential increased methylation close to transcription start sites (TSSs), islands, and shore regions. Methylation and gene expression alterations did not correlate for the majority of genes, but for the fraction that correlated, gain of methylation close to TSS was associated with decreased gene expression. Interestingly, the pattern of CpG site methylation observed in immortal T-cell cultures was similar to clinical T-cell acute lymphoblastic leukemia (T-ALL) samples classified as CpG island methylator phenotype positive. These sites were highly enriched for polycomb target genes and were involved in developmental, cell adhesion, and cell signaling processes. The presence of non-random methylation events in in vitro immortalized T-cell cultures and diagnostic T-ALL samples indicates altered methylation of CpG sites with a possible role in malignant hematopoiesis.
  •  
2.
  • Degerman, Sofie, et al. (author)
  • Long Leukocyte Telomere Length at Diagnosis Is a Risk Factor for Dementia Progression in Idiopathic Parkinsonism
  • 2014
  • In: PLOS ONE. - : Public Library of Science (PLoS). - 1932-6203. ; 9:12
  • Journal article (peer-reviewed), abstract:
    • Telomere length (TL) is regarded as a marker of cellular aging due to the gradual shortening by each cell division, but it is influenced by a number of factors including oxidative stress and inflammation. Parkinson's disease and atypical forms of parkinsonism occur mainly in the elderly, with oxidative stress and inflammation in afflicted cells. In this study, the relationship between blood TL and prognosis was investigated in 168 patients with idiopathic parkinsonism (136 Parkinson's disease [PD], 17 Progressive Supranuclear Palsy [PSP], and 15 Multiple System Atrophy [MSA]) and 30 controls. TL and motor and cognitive performance were assessed at baseline (diagnosis) and repeatedly during up to three to five years of follow-up. No difference in TL between controls and patients was found at baseline, nor was there any significant difference in TL stability or attrition during follow-up. Interestingly, a significant relationship between TL at diagnosis and cognitive phenotype at follow-up was found in PD and PSP patients, with longer mean TL at diagnosis in patients that developed dementia within three years.
  •  
3.
  • Del Peso-Santos, Teresa, et al. (author)
  • Pr is a member of a restricted class of σ70-dependent promoters that lack a recognizable -10 element
  • 2012
  • In: Nucleic Acids Research. - : Oxford University Press (OUP). - 0305-1048 .- 1362-4962. ; 40:22, s. 11308-11320
  • Journal article (peer-reviewed), abstract:
    • The Pr promoter is the first verified member of a class of bacterial σ70-promoters that possess only a single match to consensus within their -10 elements. In its native context, the activity of this promoter determines the ability of Pseudomonas putida CF600 to degrade phenolic compounds, which provides proof-of-principle for the significance of such promoters. Lack of identity within the -10 element leads to non-detection of Pr-like promoters by current search engines, because of their bias for detection of the -10 motif. Here, we report a mutagenesis analysis of Pr that reveals strict sequence requirements for its activity, including an essential -15 element and preservation of non-consensus bases within its -35 and -10 elements. We found that highly similar promoters control plasmid- and chromosomally encoded phenol degradative systems in various Pseudomonads. However, using a purpose-designed promoter-search algorithm and activity analysis of potential candidate promoters, no bona fide Pr-like promoter could be found in the entire genome of P. putida KT2440. Hence, Pr-like σ70-promoters, which have the potential to be a widely distributed class of previously unrecognized promoters, are in fact highly restricted and remain in a class of their own.
  •  
4.
  • Freyhult, Eva, et al. (author)
  • Challenges in microarray class discovery : a comprehensive examination of normalization, gene selection and clustering
  • 2010
  • In: BMC Bioinformatics. - : BioMed Central. - 1471-2105. ; 11
  • Journal article (peer-reviewed), abstract:
    • Background: Cluster analysis, and in particular hierarchical clustering, is widely used to extract information from gene expression data. The aim is to discover new classes, or sub-classes, of either individuals or genes. Performing a cluster analysis commonly involves decisions on how to handle missing values, standardize the data and select genes. In addition, pre-processing, involving various types of filtration and normalization procedures, can have an effect on the ability to discover biologically relevant classes. Here we consider cluster analysis in a broad sense and perform a comprehensive evaluation that covers several aspects of cluster analyses, including normalization. Results: We evaluated 2780 cluster analysis methods on seven publicly available 2-channel microarray data sets with common reference designs. Each cluster analysis method differed in data normalization (5 normalizations were considered), missing value imputation (2), standardization of data (2), gene selection (19) or clustering method (11). The cluster analyses were evaluated using known classes, such as cancer types, and the adjusted Rand index. The performance of the different analyses varies between the data sets and it is difficult to give general recommendations. However, normalization, gene selection and clustering method are all variables that have a significant impact on performance. In particular, gene selection is important, and it is generally necessary to include a relatively large number of genes in order to get good performance. Selecting genes with high standard deviation or using principal component analysis are shown to be the preferred gene selection methods. Hierarchical clustering using Ward's method, k-means clustering and Mclust are the clustering methods considered in this paper that achieve the highest adjusted Rand index. Normalization can have a significant positive impact on the ability to cluster individuals, and there are indications that background correction is preferable, in particular if the gene selection is successful. However, this is an area that needs to be studied further in order to draw any general conclusions. Conclusions: The choice of cluster analysis method, and in particular of gene selection, has a large impact on the ability to cluster individuals correctly based on expression profiles. Normalization has a positive effect, but the relative performance of different normalizations is an area that needs more research. In summary, although clustering, gene selection and normalization are considered standard methods in bioinformatics, our comprehensive analysis shows that selecting the right methods, and the right combinations of methods, is far from trivial and that much is still unexplored in what is considered to be the most basic analysis of genomic data.
  •  
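The adjusted Rand index used as the evaluation criterion in the abstract above compares a clustering against known classes while correcting for chance agreement. A minimal stdlib sketch of the standard formula (this is illustrative, not code from the paper; degenerate partitions such as a single cluster are not guarded against):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """Agreement between two partitions of the same items, chance-corrected.

    Returns 1.0 for identical partitions (up to label permutation) and
    values near 0 for random labelings.
    """
    n = len(labels_true)
    pair_counts = Counter(zip(labels_true, labels_pred))  # contingency table
    row_sums = Counter(labels_true)
    col_sums = Counter(labels_pred)
    sum_ij = sum(comb(c, 2) for c in pair_counts.values())
    sum_rows = sum(comb(c, 2) for c in row_sums.values())
    sum_cols = sum(comb(c, 2) for c in col_sums.values())
    expected = sum_rows * sum_cols / comb(n, 2)
    max_index = (sum_rows + sum_cols) / 2
    return (sum_ij - expected) / (max_index - expected)

# The label names differ, but the partitions are identical: ARI is 1.0.
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # -> 1.0
```

Because the index is computed from pair co-memberships, it is invariant to relabeling of clusters, which is why it suits comparing discovered clusters to known cancer types.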
5.
  • Landfors, Mattias, 1979- (author)
  • Normalization and analysis of high-dimensional genomics data
  • 2012
  • Doctoral thesis (other academic/artistic), abstract:
    • In the mid-1990s the microarray technology was introduced. The technology allowed for genome-wide analysis of gene expression in one experiment. Since its introduction, similar high-throughput methods have been developed in other fields of molecular biology. These high-throughput methods provide measurements for hundreds up to millions of variables in a single experiment, and a rigorous data analysis is necessary in order to answer the underlying biological questions. Further complications arise in the data analysis because technological variation is introduced into the data, due to the complexity of the experimental procedures. This technological variation needs to be removed in order to draw relevant biological conclusions from the data. The process of removing the technical variation is referred to as normalization or pre-processing. During the last decade a large number of normalization and data analysis methods have been proposed. In this thesis, data from two types of high-throughput methods are used to evaluate the effect pre-processing methods have on downstream analyses. In areas where problems in current methods are identified, novel normalization methods are proposed. The evaluations of known and novel methods are performed on simulated data, real data and data from an in-house produced spike-in experiment.
  •  
6.
  • Landfors, Mattias, et al. (author)
  • Normalization of high dimensional genomics data where the distribution of the altered variables is skewed
  • 2011
  • In: PLOS ONE. - : Public Library of Science (PLoS). - 1932-6203. ; 6:11, s. e27942-
  • Journal article (peer-reviewed), abstract:
    • Genome-wide analysis of gene expression or protein binding patterns using different array- or sequencing-based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data to remove technical variation introduced in the course of the experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several types of experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increase. We propose the following workflow for analyzing high-dimensional experiments with regions of altered variables: (1) pre-process the raw data using one of the standard normalization techniques; (2) investigate whether the distribution of the altered variables is skewed; (3) if the distribution is not believed to be skewed, no additional normalization is needed; otherwise, re-normalize the data using a novel HMM-assisted normalization procedure; (4) perform the downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the workflow. It was found that skewed distributions can be detected using the novel DSE-test (Detection of Skewed Experiments). Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher sensitivity and lower bias than can be attained using standard and invariant normalization methods.
  •  
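The failure mode that motivates the paper above can be shown with a toy simulation (this sketch is not the DSE-test or the HMM-assisted procedure; the distribution parameters and the 40% altered fraction are made up for illustration): when a large one-sided fraction of variables is truly altered, median centering, which assumes the bulk of variables are unaltered, pulls the unaltered variables off zero.

```python
import random
import statistics

random.seed(0)

# Toy log-ratios: 60% truly unaltered (centered at 0) plus a skewed
# alteration in which 40% of the variables are all shifted upwards.
unaltered = [random.gauss(0.0, 0.3) for _ in range(600)]
altered = [random.gauss(2.0, 0.3) for _ in range(400)]
data = unaltered + altered

# Standard median normalization centers the whole distribution on its
# median, on the assumption that most variables are unaltered.
med = statistics.median(data)
normalized = [x - med for x in data]

# With a large one-sided altered fraction, the median is pulled upward,
# so the truly unaltered variables now look spuriously down-regulated.
bias = statistics.mean(normalized[:600])
print(f"bias of unaltered variables after median centering: {bias:.2f}")
```

The bias is clearly negative here, which is exactly the kind of systematic error the abstract says standard and invariant normalization cannot remove when the altered variables are skewed.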
7.
  • Önskog, Jenny, et al. (author)
  • Classification of microarrays : synergistic effects between normalization, gene selection and machine learning
  • 2011
  • In: BMC Bioinformatics. - : BioMed Central. - 1471-2105. ; 12:1
  • Journal article (peer-reviewed), abstract:
    • BACKGROUND: Machine learning is a powerful approach for describing and predicting classes in microarray data. Although several comparative studies have investigated the relative performance of various machine learning methods, these often do not account for the fact that performance (e.g. error rate) is the result of a series of analysis steps, of which the most important are data normalization, gene selection and machine learning. RESULTS: In this study, we used seven previously published cancer-related microarray data sets to compare the effects on classification performance of five normalization methods, three gene selection methods with 21 different numbers of selected genes, and eight machine learning methods. Performance in terms of error rate was rigorously estimated by repeatedly employing a double cross-validation approach. Since performance varies greatly between data sets, we devised an analysis method that first compares methods within individual data sets and then visualizes the comparisons across data sets. We discovered both well-performing individual methods and synergies between different methods. CONCLUSION: Support Vector Machines with a radial basis kernel, linear kernel or polynomial kernel of degree 2 all performed consistently well across data sets. We show that there is a synergistic relationship between these methods and gene selection based on the T-test and the selection of a relatively high number of genes. Also, we find that these methods benefit significantly from using normalized data, although it is hard to draw general conclusions about the relative performance of different normalization procedures.
  •  
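The double cross-validation scheme mentioned above can be sketched as follows. This is a schematic, not the paper's pipeline: the data are simulated, a simple nearest-centroid classifier stands in for the SVMs, and the inner loop tunes only one parameter (the number of selected genes). The key property it illustrates is that gene selection and tuning happen inside the training data only, so the outer test folds never influence the model choice.

```python
import random
from statistics import mean

random.seed(1)

# Simulated two-class data: 40 samples x 10 "genes", alternating classes;
# class 1 is shifted upwards on the first three genes only.
def make_sample(label):
    x = [random.gauss(1.5 if (label == 1 and g < 3) else 0.0, 1.0)
         for g in range(10)]
    return x, label

data = [make_sample(i % 2) for i in range(40)]

def k_folds(items, k):
    # Contiguous chunks; the alternating class pattern keeps folds balanced.
    size = len(items) // k
    return [items[i * size:(i + 1) * size] for i in range(k)]

def centroid_classifier(train, n_genes):
    # Gene selection and centroid estimation use the training data only.
    by_class = {0: [], 1: []}
    for x, y in train:
        by_class[y].append(x)
    means = {c: [mean(col) for col in zip(*xs)] for c, xs in by_class.items()}
    ranked = sorted(range(10), key=lambda g: -abs(means[0][g] - means[1][g]))
    genes = ranked[:n_genes]

    def predict(x):
        dist = {c: sum((x[g] - means[c][g]) ** 2 for g in genes)
                for c in (0, 1)}
        return min(dist, key=dist.get)
    return predict

def cv_error(items, k, n_genes):
    folds = k_folds(items, k)
    errors = []
    for i, held_out in enumerate(folds):
        train = [s for j, f in enumerate(folds) if j != i for s in f]
        clf = centroid_classifier(train, n_genes)
        errors.append(mean(clf(x) != y for x, y in held_out))
    return mean(errors)

# Double (nested) cross-validation: the inner loop selects the number of
# genes, the outer loop estimates the error of the complete pipeline.
outer_errors = []
outer = k_folds(data, 4)
for i, held_out in enumerate(outer):
    train = [s for j, f in enumerate(outer) if j != i for s in f]
    best_n = min((3, 5, 10), key=lambda n: cv_error(train, 3, n))  # inner CV
    clf = centroid_classifier(train, best_n)
    outer_errors.append(mean(clf(x) != y for x, y in held_out))

print(f"estimated error rate: {mean(outer_errors):.2f}")
```

A single cross-validation that both tunes and evaluates on the same folds would report an optimistically low error; the nested structure avoids that, at the cost of running the inner loop once per outer fold.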