SwePub
Search the SwePub database

  Extended search

Search: WFRF:(Jörnsten Rebecka 1971)

  • Result 1-10 of 39
1.
  • Abel, Frida, 1974, et al. (author)
  • A 6-gene signature identifies four molecular subgroups of neuroblastoma
  • 2011
  • In: Cancer Cell International. - : Springer Science and Business Media LLC. - 1475-2867. ; 11:9
  • Journal article (peer-reviewed)abstract
    • Background: There are currently three postulated genomic subtypes of the childhood tumour neuroblastoma (NB): Type 1, Type 2A, and Type 2B. The most aggressive forms of NB are characterized by amplification of the oncogene MYCN (MNA) and low expression of the favourable marker NTRK1. Recently, mutations in or high expression of the familial predisposition gene Anaplastic Lymphoma Kinase (ALK) were associated with unfavourable biology of sporadic NB. Various other genes have also been linked to NB pathogenesis. Results: The present study explores subgroup discrimination by gene expression profiling using three published microarray studies on NB (47 samples). Four distinct clusters were identified by Principal Components Analysis (PCA) in two separate data sets, which could be verified by unsupervised hierarchical clustering in a third independent data set (101 NB samples) using a set of 74 discriminative genes. The expression signature of six NB-associated genes, ALK, BIRC5, CCND1, MYCN, NTRK1, and PHOX2B, significantly discriminated the four clusters (p
  •  
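The PCA step described in this abstract, projecting expression profiles onto leading principal components and reading off cluster structure, can be sketched with plain numpy. Everything below is an illustrative assumption: the matrix stands in for a samples-by-genes expression matrix over the six signature genes, with two planted clusters rather than the four reported in the paper.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project samples onto the top principal components.

    X: (samples x genes) expression matrix.
    Returns the (samples x n_components) score matrix.
    """
    Xc = X - X.mean(axis=0)            # centre each gene
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T    # scores along the leading PCs

# Synthetic stand-in for an expression matrix over the six signature
# genes (ALK, BIRC5, CCND1, MYCN, NTRK1, PHOX2B): two planted clusters.
rng = np.random.default_rng(0)
cluster_a = rng.normal(0.0, 0.3, size=(20, 6))
cluster_b = rng.normal(3.0, 0.3, size=(20, 6))
X = np.vstack([cluster_a, cluster_b])

scores = pca_scores(X, n_components=2)
# The first PC captures the between-cluster direction, so a simple
# threshold on PC1 recovers the planted grouping.
labels = (scores[:, 0] > scores[:, 0].mean()).astype(int)
```

In the paper the cluster assignments found this way were then verified by unsupervised hierarchical clustering in an independent data set.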
2.
  • Abenius, Tobias, 1979, et al. (author)
  • System-scale network modeling of cancer using EPoC
  • 2012
  • In: Advances in Experimental Medicine and Biology. - New York, NY : Springer New York. - 0065-2598. - 9781441972095 ; 736:5, pp. 617-643
  • Journal article (peer-reviewed)abstract
    • One of the central problems of cancer systems biology is to understand the complex molecular changes of cancerous cells and tissues, and use this understanding to support the development of new targeted therapies. EPoC (Endogenous Perturbation analysis of Cancer) is a network modeling technique for tumor molecular profiles. EPoC models are constructed from combined copy number aberration (CNA) and mRNA data and aim to (1) identify genes whose copy number aberrations significantly affect target mRNA expression and (2) generate markers for long- and short-term survival of cancer patients. Models are constructed by a combination of regression and bootstrapping methods. Prognostic scores are obtained from a singular value decomposition of the networks. We have previously analyzed the performance of EPoC using glioblastoma data from The Cancer Genome Atlas (TCGA) consortium, and have shown that resulting network models contain both known and candidate disease-relevant genes as network hubs, as well as uncover predictors of patient survival. Here, we give a practical guide how to perform EPoC modeling in practice using R, and present a set of alternative modeling frameworks.
  •  
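The two EPoC steps named in the abstract, sparse regression of mRNA on copy number followed by an SVD of the resulting network for prognostic scoring, can be sketched schematically. This is not the actual EPoC package (which is distributed for R); the coordinate-descent lasso solver, the synthetic data, and the placeholder network below are illustrative assumptions only.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent lasso (columns of X roughly standardised)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]      # partial residual
            beta[j] = soft_threshold(X[:, j] @ r / n, lam) / (X[:, j] @ X[:, j] / n)
    return beta

# Synthetic CNA matrix (samples x genes) and one target gene's mRNA levels,
# driven by copy number changes in genes 0 and 3.
rng = np.random.default_rng(1)
n, p = 100, 10
cna = rng.normal(size=(n, p))
mrna = 2.0 * cna[:, 0] - 1.5 * cna[:, 3] + rng.normal(scale=0.1, size=n)

# Step 1: sparse CNA -> mRNA regression (one row of the network model).
row = lasso_cd(cna, mrna, lam=0.1)

# Step 2 (schematic): a prognostic score from the SVD of the network
# matrix; here a placeholder 3-gene network built from that single row.
network = np.outer(np.ones(3), row)
U, s, Vt = np.linalg.svd(network, full_matrices=False)
score = cna @ Vt[0]                 # project patients on the leading component
```

In full EPoC modeling, bootstrapping over samples is used to assess which network edges are stable before the SVD-based scoring step.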
3.
  • Alevronta, Eleftheria, et al. (author)
  • Dose-response relationships of intestinal organs and excessive mucus discharge after gynaecological radiotherapy
  • 2021
  • In: PLoS ONE. - : Public Library of Science (PLoS). - 1932-6203. ; 16:4, April
  • Journal article (peer-reviewed)abstract
    • Background: The study aims to determine possible dose-volume response relationships between the rectum, sigmoid colon and small intestine and the ‘excessive mucus discharge’ syndrome after pelvic radiotherapy for gynaecological cancer. Methods and materials: From a larger cohort, 98 gynaecological cancer survivors were included in this study. These survivors, who were followed for 2 to 14 years, received external beam radiation therapy but not brachytherapy, and did not have a stoma. Thirteen of the 98 developed excessive mucus discharge syndrome. Three self-assessed symptoms were weighted together, based on the factor loadings from a factor analysis, to produce a score interpreted as ‘excessive mucus discharge’ syndrome. The dose-volume histograms (DVHs) for the rectum, sigmoid colon and small intestine of each survivor were exported from the treatment planning systems. The dose-volume response relationships for excessive mucus discharge and each organ at risk were estimated by fitting the data to the Probit, RS, LKB and gEUD models. Results: The small intestine was found to have steep dose-response curves, with estimated dose-response parameters γ: 1.28, 1.23, 1.32 and D50: 61.6, 63.1, 60.2 for Probit, RS and LKB, respectively. The sigmoid colon (AUC: 0.68) and the small intestine (AUC: 0.65) had the highest AUC values. For the small intestine, the DVHs for survivors with and without excessive mucus discharge were well separated for low to intermediate doses; this was not true for the sigmoid colon. Based on all results, we interpret the small-intestine findings as reflecting a relevant link. Conclusion: An association was found between the mean dose to the small intestine and the occurrence of ‘excessive mucus discharge’. When trying to reduce or even eliminate the incidence of ‘excessive mucus discharge’, it would be useful and important to separately delineate the small intestine and apply the dose-response estimations reported in the study.
  •  
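The probit dose-response fit reported above has a compact closed form. The sketch below uses one common NTCP parametrisation in terms of D50 and the slope γ50; the paper fits Probit, RS and LKB models and its exact parametrisation may differ, so treat both the formula and the parameter values as illustrative.

```python
import math

def ntcp_probit(dose, d50, gamma50):
    """Probit NTCP curve parametrised by D50 and the slope gamma50.

    Uses the common relation t = sqrt(2*pi) * gamma50 * (dose/d50 - 1),
    NTCP = Phi(t), with Phi the standard normal CDF.
    """
    t = math.sqrt(2.0 * math.pi) * gamma50 * (dose / d50 - 1.0)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Parameters in the ballpark of those reported above for the small
# intestine (gamma ~ 1.28, D50 ~ 61.6 Gy).
d50, gamma50 = 61.6, 1.28
half = ntcp_probit(d50, d50, gamma50)   # 0.5 at D50 by construction
low = ntcp_probit(30.0, d50, gamma50)   # low dose, low complication risk
high = ntcp_probit(90.0, d50, gamma50)  # high dose, high complication risk
```

The steepness parameter γ50 is the normalised slope of the curve at D50, which is why a steep curve (as reported for the small intestine) makes the complication probability sensitive to modest dose changes around D50.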
4.
  • Allerbo, Oskar, 1985, et al. (author)
  • Elastic Gradient Descent, an Iterative Optimization Method Approximating the Solution Paths of the Elastic Net
  • 2023
  • In: Journal of Machine Learning Research. - 1533-7928 .- 1532-4435. ; 24, pp. 1-35
  • Journal article (peer-reviewed)abstract
    • The elastic net combines lasso and ridge regression to fuse the sparsity property of lasso with the grouping property of ridge regression. The connections between ridge regression and gradient descent and between lasso and forward stagewise regression have previously been shown. Similar to how the elastic net generalizes lasso and ridge regression, we introduce elastic gradient descent, a generalization of gradient descent and forward stagewise regression. We theoretically analyze elastic gradient descent and compare it to the elastic net and forward stagewise regression. Parts of the analysis are based on elastic gradient flow, a piecewise analytical construction, obtained for elastic gradient descent with infinitesimal step size. We also compare elastic gradient descent to the elastic net on real and simulated data and show that it provides similar solution paths, but is several orders of magnitude faster. Compared to forward stagewise regression, elastic gradient descent selects a model that, although still sparse, provides considerably lower prediction and estimation errors.
  •  
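Elastic gradient descent generalizes the two update rules below in the same way the elastic net generalizes ridge regression and lasso. The sketch shows only the two endpoints named in the abstract, full gradient descent (ridge-like) and ε-forward stagewise (lasso-like), on a synthetic least-squares problem; it is not the authors' interpolating algorithm.

```python
import numpy as np

def gradient_descent_step(beta, X, y, lr):
    """Ridge-like endpoint: move all coordinates along the negative gradient."""
    g = X.T @ (X @ beta - y) / len(y)
    return beta - lr * g

def stagewise_step(beta, X, y, eps):
    """Lasso-like endpoint: move only the coordinate with the largest
    |gradient| by a fixed small amount eps (forward stagewise regression)."""
    g = X.T @ (X @ beta - y) / len(y)
    j = np.argmax(np.abs(g))
    beta = beta.copy()
    beta[j] -= eps * np.sign(g[j])
    return beta

# Synthetic sparse regression problem.
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, 0.0, 0.0, 0.0, -0.5]) + rng.normal(scale=0.05, size=50)

b_gd = np.zeros(5)
b_fs = np.zeros(5)
for _ in range(500):
    b_gd = gradient_descent_step(b_gd, X, y, lr=0.05)
    b_fs = stagewise_step(b_fs, X, y, eps=0.01)
```

Run long enough, both endpoints approach the least-squares solution; the interesting behaviour (and the point of the paper) is the path taken on the way, which elastic gradient descent interpolates between.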
5.
  • Allerbo, Oskar, 1985, et al. (author)
  • Flexible, non-parametric modeling using regularized neural networks
  • 2022
  • In: Computational Statistics. - : Springer Science and Business Media LLC. - 0943-4062 .- 1613-9658. ; 37:4, pp. 2029-2047
  • Journal article (peer-reviewed)abstract
    • Non-parametric, additive models are able to capture complex data dependencies in a flexible, yet interpretable way. However, choosing the format of the additive components often requires non-trivial data exploration. Here, as an alternative, we propose PrAda-net, a one-hidden-layer neural network, trained with proximal gradient descent and adaptive lasso. PrAda-net automatically adjusts the size and architecture of the neural network to reflect the complexity and structure of the data. The compact network obtained by PrAda-net can be translated to additive model components, making it suitable for non-parametric statistical modelling with automatic model selection. We demonstrate PrAda-net on simulated data, where we compare the test error performance, variable importance and variable subset identification properties of PrAda-net to other lasso-based regularization approaches for neural networks. We also apply PrAda-net to the massive U.K. black smoke data set, to demonstrate how PrAda-net can be used to model complex and heterogeneous data with spatial and temporal components. In contrast to classical, statistical non-parametric approaches, PrAda-net requires no preliminary modeling to select the functional forms of the additive components, yet still results in an interpretable model representation. © 2021, The Author(s).
  •  
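The core primitive behind PrAda-net's training, a proximal gradient step with an adaptive lasso penalty, is easy to isolate. The sketch below applies it to a plain linear model rather than a one-hidden-layer network; the adaptive weights taken from an initial least-squares fit follow the standard adaptive-lasso recipe, and the data are synthetic.

```python
import numpy as np

def prox_adaptive_lasso(w, grad, lr, lam, adaptive_weights):
    """One proximal gradient step with an adaptive-lasso penalty:
    a gradient step followed by per-parameter soft thresholding."""
    z = w - lr * grad
    t = lr * lam * adaptive_weights    # larger weight => stronger shrinkage
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Toy quadratic objective ||Xw - y||^2 / (2n) with a sparse truth.
rng = np.random.default_rng(3)
X = rng.normal(size=(80, 6))
y = X @ np.array([2.0, 0.0, 0.0, 0.0, 0.0, -1.0]) + rng.normal(scale=0.05, size=80)

# Adaptive weights from an initial least-squares fit, as in adaptive lasso:
# parameters that start small are penalized more heavily.
w_init = np.linalg.lstsq(X, y, rcond=None)[0]
a = 1.0 / (np.abs(w_init) + 1e-8)

w = np.zeros(6)
for _ in range(300):
    grad = X.T @ (X @ w - y) / len(y)
    w = prox_adaptive_lasso(w, grad, lr=0.1, lam=0.05, adaptive_weights=a)
```

Because the soft-thresholding step sets parameters exactly to zero, the same mechanism applied to network weights is what lets PrAda-net prune its own architecture during training.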
6.
  • Allerbo, Oskar, 1985, et al. (author)
  • Non-linear, sparse dimensionality reduction via path lasso penalized autoencoders
  • 2021
  • In: Journal of Machine Learning Research. - : Microtome Publishing. - 1532-4435 .- 1533-7928. ; 22
  • Journal article (peer-reviewed)abstract
    • High-dimensional data sets are often analyzed and explored via the construction of a latent low-dimensional space which enables convenient visualization and efficient predictive modeling or clustering. For complex data structures, linear dimensionality reduction techniques like PCA may not be sufficiently flexible to enable low-dimensional representation. Non-linear dimension reduction techniques, like kernel PCA and autoencoders, suffer from loss of interpretability since each latent variable is dependent of all input dimensions. To address this limitation, we here present path lasso penalized autoencoders. This structured regularization enhances interpretability by penalizing each path through the encoder from an input to a latent variable, thus restricting how many input variables are represented in each latent dimension. Our algorithm uses a group lasso penalty and non-negative matrix factorization to construct a sparse, non-linear latent representation. We compare the path lasso regularized autoencoder to PCA, sparse PCA, autoencoders and sparse autoencoders on real and simulated data sets. We show that the algorithm exhibits much lower reconstruction errors than sparse PCA and parameter-wise lasso regularized autoencoders for low-dimensional representations. Moreover, path lasso representations provide a more accurate reconstruction match, i.e. preserved relative distance between objects in the original and reconstructed spaces. ©2021 Oskar Allerbo and Rebecka Jörnsten.
  •  
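The idea of penalizing whole input-to-latent paths can be illustrated on a one-hidden-layer encoder. The grouping below (one group per input/latent pair, containing all weight products along paths between them) is a schematic reading of the abstract, not the authors' exact path lasso formulation.

```python
import numpy as np

def path_group_penalty(W1, W2):
    """Schematic path-wise group penalty for a one-hidden-layer encoder.

    W1: (hidden x inputs) first-layer weights, W2: (latent x hidden).
    For each (input i, latent k) pair, the group is the vector of path
    products W1[h, i] * W2[k, h] over hidden units h; the penalty is the
    sum of the L2 norms of these groups, so all paths between an input
    and a latent dimension can be driven to zero together.
    """
    hidden, n_in = W1.shape
    n_lat = W2.shape[0]
    total = 0.0
    for i in range(n_in):
        for k in range(n_lat):
            group = W2[k, :] * W1[:, i]   # all paths i -> h -> k
            total += np.linalg.norm(group)
    return total

# If every path from input 1 is zero, input 1 contributes nothing to any
# latent dimension -- exactly the sparsity pattern the penalty targets.
W1 = np.array([[1.0, 0.0],
               [0.5, 0.0]])              # second input column all zero
W2 = np.array([[2.0, 1.0]])
pen = path_group_penalty(W1, W2)         # only input 0 contributes
```

Grouping at the path level (rather than penalizing individual weights) is what restricts how many input variables appear in each latent dimension, which is the interpretability gain over a plain sparse autoencoder.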
7.
  • Almstedt, Elin, 1988-, et al. (author)
  • Integrative discovery of treatments for high-risk neuroblastoma
  • 2020
  • In: Nature Communications. - : Springer Science and Business Media LLC. - 2041-1723 .- 2041-1723. ; 11:1
  • Journal article (peer-reviewed)abstract
    • Despite advances in the molecular exploration of paediatric cancers, approximately 50% of children with high-risk neuroblastoma lack effective treatment. To identify therapeutic options for this group of high-risk patients, we combine predictive data mining with experimental evaluation in patient-derived xenograft cells. Our proposed algorithm, TargetTranslator, integrates data from tumour biobanks, pharmacological databases, and cellular networks to predict how targeted interventions affect mRNA signatures associated with high patient risk or disease processes. We find more than 80 targets to be associated with neuroblastoma risk and differentiation signatures. Selected targets are evaluated in cell lines derived from high-risk patients to demonstrate reversal of risk signatures and malignant phenotypes. Using neuroblastoma xenograft models, we establish CNR2 and MAPK8 as promising candidates for the treatment of high-risk neuroblastoma. We expect that our method, available as a public tool (targettranslator.org), will enhance and expedite the discovery of risk-associated targets for paediatric and adult cancers.
  •  
8.
  •  
9.
  • Andersson, Viktor, 1995, et al. (author)
  • Controlled Descent Training
  • 2023
  • Journal article (other academic/artistic)abstract
    • In this work, a novel, model-based artificial neural network (ANN) training method is developed, supported by optimal control theory. The method augments training labels in order to robustly guarantee training loss convergence and improve the training convergence rate. Dynamic label augmentation is proposed within the framework of gradient descent training where the convergence of training loss is controlled. First, we capture the training behavior with the help of empirical Neural Tangent Kernels (NTK) and borrow tools from systems and control theory to analyze both the local and global training dynamics (e.g. stability, reachability). Second, we propose to dynamically alter the gradient descent training mechanism via fictitious labels as control inputs and an optimal state feedback policy. In this way, we enforce locally H2 optimal and convergent training behavior. The novel algorithm, Controlled Descent Training (CDT), guarantees local convergence. CDT unleashes new potential in the analysis, interpretation, and design of ANN architectures. The applicability of the method is demonstrated on standard regression and classification problems.
  •  
10.
  • Andersson, Viktor, 1995, et al. (author)
  • Controlled Descent Training
  • 2024
  • In: International Journal of Robust and Nonlinear Control. - 1099-1239 .- 1049-8923.
  • Journal article (peer-reviewed)abstract
    • In this work, a novel, model-based artificial neural network (ANN) training method is developed, supported by optimal control theory. The method augments training labels in order to robustly guarantee training loss convergence and improve the training convergence rate. Dynamic label augmentation is proposed within the framework of gradient descent training where the convergence of training loss is controlled. First, we capture the training behavior with the help of empirical Neural Tangent Kernels (NTK) and borrow tools from systems and control theory to analyze both the local and global training dynamics (e.g. stability, reachability). Second, we propose to dynamically alter the gradient descent training mechanism via fictitious labels as control inputs and an optimal state feedback policy. In this way, we enforce locally H2 optimal and convergent training behavior. The novel algorithm, Controlled Descent Training (CDT), guarantees local convergence. CDT unleashes new potential in the analysis, interpretation, and design of ANN architectures. The applicability of the method is demonstrated on standard regression and classification problems.
  •  
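The empirical NTK that both CDT records build on is the Gram matrix of parameter gradients. The sketch below computes it for a linear model, where the Jacobian rows are simply the inputs themselves; it illustrates the object and why its spectrum governs gradient-flow training dynamics, not the CDT algorithm itself.

```python
import numpy as np

def empirical_ntk(jac):
    """Empirical NTK from the parameter Jacobian.

    jac: (n_samples x n_params) matrix with rows d f(x_i) / d theta.
    Returns K with K[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>.
    """
    return jac @ jac.T

# For a linear model f(x) = theta . x, the Jacobian rows are the inputs
# themselves, so the empirical NTK is just the input Gram matrix.
X = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
K = empirical_ntk(X)

# Under gradient flow on squared loss, the residual r = f(X) - y obeys
# r'(t) = -K r(t), so K's eigenvalues set the convergence rates -- the
# training dynamics that CDT's label augmentation is designed to shape.
eigvals = np.linalg.eigvalsh(K)
```

K is symmetric positive semi-definite by construction, which is what makes the linearised residual dynamics stable and analysable with control-theoretic tools.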
Type of publication
journal article (32)
conference paper (4)
research review (2)
other publication (1)
Type of content
peer-reviewed (31)
other academic/artistic (8)
Author/Editor
Jörnsten, Rebecka, 1 ... (39)
Nelander, Sven (8)
Kling, Teresia, 1985 (6)
Larsson, Ida (6)
Nielsen, Jens B, 196 ... (5)
Elgendy, Ramy (5)
Nelander, Sven, 1974 (4)
Steineck, Gunnar, 19 ... (4)
Benson, Mikael (4)
Alevronta, Eleftheri ... (4)
Skokic, Viktor, 1982 (4)
Rosén, Emil (4)
Doroszko, Milena (4)
Robinson, Jonathan, ... (4)
Gustafsson, Mika (4)
Gustafsson, Johan, 1 ... (4)
Schmidt, Linnéa, 198 ... (3)
Sánchez, José, 1979 (3)
Sjöberg, Fei (3)
Bull, Cecilia, 1977 (3)
Bergmark, Karin, 196 ... (3)
Dunberger, Gail (3)
Wilderäng, Ulrica (3)
Allerbo, Oskar, 1985 (3)
Johansson, Patrik (3)
Hekmati, Neda (2)
Kogner, Per (2)
Abenius, Tobias, 197 ... (2)
Snygg, Johan, 1963 (2)
Kerkhoven, Eduard, 1 ... (2)
Björnson, Elias, 198 ... (2)
Wang, Hui (2)
Sander, Chris (2)
Malmgren, Helge, 194 ... (2)
Nilsson, Michael, 19 ... (2)
Andersson, Viktor, 1 ... (2)
Szolnoky, Vincent, 1 ... (2)
Syren, Andreas (2)
Kulcsár, Balázs Adam ... (2)
Nestor, Colm (2)
Martinez, David (2)
Kundu, Soumi (2)
Krona, Cecilia (2)
Zhang, Huan (2)
Gawel, Danuta (2)
Roshanzamir, Fariba, ... (2)
Elfineh, Ludmila (2)
Olausson, Karl Holmb ... (2)
Ekström, Seth Reino (2)
Vickhoff, Björn, 194 ... (2)
University
Chalmers University of Technology (36)
University of Gothenburg (28)
Karolinska Institutet (7)
Uppsala University (6)
Linköping University (5)
Royal Institute of Technology (3)
Lund University (3)
Marie Cederschiöld högskola (3)
Language
English (39)
Research subject (UKÄ/SCB)
Medical and Health Sciences (28)
Natural sciences (26)
Engineering and Technology (4)
Agricultural Sciences (1)
Humanities (1)
