SwePub
Search the SwePub database


Result list for search "WFRF:(Kaski K.)"


  • Result 1-25 of 27
1.
  • Menden, MP, et al. (author)
  • Community assessment to advance computational prediction of cancer drug combinations in a pharmacogenomic screen
  • 2019
  • In: Nature communications. - : Springer Science and Business Media LLC. - 2041-1723. ; 10:1, s. 2674-
  • Journal article (peer-reviewed) abstract:
    • The effectiveness of most cancer targeted therapies is short-lived. Tumors often develop resistance that might be overcome with drug combinations. However, the number of possible combinations is vast, necessitating data-driven approaches to find optimal patient-specific treatments. Here we report AstraZeneca’s large drug combination dataset, consisting of 11,576 experiments from 910 combinations across 85 molecularly characterized cancer cell lines, and results of a DREAM Challenge to evaluate computational strategies for predicting synergistic drug pairs and biomarkers. 160 teams participated, providing comprehensive methodological development and benchmarking. Winning methods incorporate prior knowledge of drug-target interactions. Synergy is predicted with an accuracy matching biological replicates for >60% of combinations. However, 20% of drug combinations are poorly predicted by all methods. A genomic rationale for synergy predictions is identified, including antagonism of ADAM17 inhibition when combined with PIK3CB/D inhibition, in contrast to synergy when combined with other PI3K-pathway inhibitors in PIK3CA mutant cells.
  •  
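The excess-over-expectation idea behind scoring synergistic drug pairs in the entry above can be sketched with the Bliss independence model. This is purely illustrative: it is not the challenge's actual scoring method, and the inhibition values below are invented.

```python
# Illustrative Bliss-independence synergy score (a sketch; the DREAM
# challenge used its own synergy metrics). Inputs are fractional
# inhibition values in [0, 1].

def bliss_excess(fa, fb, fab):
    """Excess inhibition of the combination over the Bliss expectation.

    fa, fb -- fractional inhibition of each single agent
    fab    -- observed fractional inhibition of the combination
    Positive values suggest synergy, negative values antagonism.
    """
    expected = fa + fb - fa * fb  # Bliss independence expectation
    return fab - expected

# A combination inhibiting more than independence predicts (synergy):
print(round(bliss_excess(0.4, 0.3, 0.7), 2))
# One inhibiting less (antagonism):
print(round(bliss_excess(0.4, 0.3, 0.5), 2))
```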
2.
  • Sieberts, SK, et al. (author)
  • Crowdsourced assessment of common genetic contribution to predicting anti-TNF treatment response in rheumatoid arthritis
  • 2016
  • In: Nature communications. - : Springer Science and Business Media LLC. - 2041-1723. ; 7, s. 12460-
  • Journal article (peer-reviewed) abstract:
    • Rheumatoid arthritis (RA) affects millions world-wide. While anti-TNF treatment is widely used to reduce disease progression, treatment fails in ∼one-third of patients. No biomarker currently exists that identifies non-responders before treatment. A rigorous community-based assessment of the utility of SNP data for predicting anti-TNF treatment efficacy in RA patients was performed in the context of a DREAM Challenge (http://www.synapse.org/RA_Challenge). An open challenge framework enabled the comparative evaluation of predictions developed by 73 research groups using the most comprehensive available data and covering a wide range of state-of-the-art modelling methodologies. Despite a significant genetic heritability estimate of the treatment non-response trait (h² = 0.18, P value = 0.02), no significant genetic contribution to prediction accuracy is observed. The results formally confirm the expectations of the rheumatology community that SNP information does not significantly improve predictive performance relative to standard clinical traits, thereby justifying a refocusing of future efforts on the collection of other data.
  •  
3.
  •  
4.
  • Dan, GA, et al. (author)
  • Corrigendum
  • 2018
  • In: Europace : European pacing, arrhythmias, and cardiac electrophysiology : journal of the working groups on cardiac pacing, arrhythmias, and cardiac cellular electrophysiology of the European Society of Cardiology. - : Oxford University Press (OUP). - 1532-2092. ; 20:5, s. 738-738
  • Journal article (peer-reviewed)
  •  
5.
  •  
6.
  • Hemingway, H, et al. (author)
  • The effectiveness and cost-effectiveness of biomarkers for the prioritisation of patients awaiting coronary revascularisation: a systematic review and decision model.
  • 2010
  • In: Health Technology Assessment. - : National Coordinating Centre for Health Technology Assessment. - 1366-5278 .- 2046-4924. ; 14:9, s. 1-178
  • Journal article (peer-reviewed) abstract:
    • OBJECTIVE: To determine the effectiveness and cost-effectiveness of a range of strategies based on conventional clinical information and novel circulating biomarkers for prioritising patients with stable angina awaiting coronary artery bypass grafting (CABG). DATA SOURCES: MEDLINE and EMBASE were searched from 1966 until 30 November 2008. REVIEW METHODS: We carried out systematic reviews and meta-analyses of literature-based estimates of the prognostic effects of circulating biomarkers in stable coronary disease. We assessed five routinely measured biomarkers and the eight emerging (i.e. not currently routinely measured) biomarkers recommended by the European Society of Cardiology Angina guidelines. The cost-effectiveness of prioritising patients on the waiting list for CABG using circulating biomarkers was compared against a range of alternative formal approaches to prioritisation as well as no formal prioritisation. A decision-analytic model was developed to synthesise data on a range of effectiveness, resource use and value parameters necessary to determine cost-effectiveness. A total of seven strategies was evaluated in the final model. RESULTS: We included 390 reports of biomarker effects in our review. The quality of individual study reports was variable, with evidence of small study (publication) bias and incomplete adjustment for simple clinical information such as age, sex, smoking, diabetes and obesity. The risk of cardiovascular events while on the waiting list for CABG was 3 per 10,000 patients per day within the first 90 days (184 events in 9935 patients with a mean of 59 days at risk). Risk factors associated with an increased risk, and included in the basic risk equation, were age, diabetes, heart failure, previous myocardial infarction and involvement of the left main coronary artery or three-vessel disease. The optimal strategy in terms of cost-effectiveness considerations was a prioritisation strategy employing biomarker information. Evaluating shorter maximum waiting times did not alter the conclusion that a prioritisation strategy with a risk score using estimated glomerular filtration rate (eGFR) was cost-effective. These results were robust to most alternative scenarios investigating other sources of uncertainty. However, the cost-effectiveness of the strategy using a risk score with both eGFR and C-reactive protein (CRP) was potentially sensitive to the cost of the CRP test itself (assumed to be £6 in the base-case scenario). CONCLUSIONS: Formally employing more information in the prioritisation of patients awaiting CABG appears to be a cost-effective approach and may result in improved health outcomes. The most robust results relate to a strategy employing a risk score using conventional clinical information together with a single biomarker (eGFR). The additional prognostic information conferred by collecting the more costly novel circulating biomarker CRP, singly or in combination with other biomarkers, in terms of waiting list prioritisation is unlikely to be cost-effective.
  •  
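The waiting-list event rate quoted in the abstract above can be checked by simple arithmetic. The sketch below assumes the stated figures: 184 events among 9935 patients with a mean of 59 days at risk.

```python
# Sanity check of the reported waiting-list event rate: 184 events in
# 9935 patients with a mean of 59 days at risk works out to roughly
# 3 events per 10,000 patients per day, as the review states.

events = 184
patients = 9935
mean_days_at_risk = 59

rate_per_patient_day = events / (patients * mean_days_at_risk)
print(round(rate_per_patient_day * 10_000, 1))  # events per 10,000 patient-days
```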
7.
  • Sahin, O, et al. (author)
  • International Multi-Specialty Expert Physician Preoperative Identification of Extranodal Extension in Oropharyngeal Cancer Patients using Computed Tomography: Prospective Blinded Human Inter-Observer Performance Evaluation
  • 2024
  • In: medRxiv : the preprint server for health sciences. - : Cold Spring Harbor Laboratory.
  • Journal article (peer-reviewed) abstract:
    • Background: Extranodal extension (ENE) is an important adverse prognostic factor in oropharyngeal cancer (OPC) and is often employed in therapeutic decision making. Clinician-based determination of ENE from radiological imaging is a difficult task with high inter-observer variability. However, the role of clinical specialty in the determination of ENE has been unexplored. Methods: Pre-therapy computed tomography (CT) images for 24 human papillomavirus-positive (HPV+) OPC patients were selected for the analysis; 6 scans were randomly chosen to be duplicated, resulting in a total of 30 scans of which 21 had pathologically-confirmed ENE. 34 expert clinician annotators, comprising 11 radiologists, 12 surgeons, and 11 radiation oncologists, separately evaluated the 30 CT scans for ENE and noted the presence or absence of specific radiographic criteria and confidence in their prediction. Discriminative performance was measured using accuracy, sensitivity, specificity, area under the receiver operating characteristic curve (AUC), and Brier score for each physician. Statistical comparisons of discriminative performance were calculated using Mann-Whitney U tests. Significant radiographic factors in correct discrimination of ENE status were determined through a logistic regression analysis. Interobserver agreement was measured using Fleiss’ kappa. Results: The median accuracy for ENE discrimination across all specialties was 0.57. There were significant differences between radiologists and surgeons for Brier score (0.33 vs. 0.26), radiation oncologists and surgeons for sensitivity (0.48 vs. 0.69), and radiation oncologists and radiologists/surgeons for specificity (0.89 vs. 0.56). There were no significant differences between specialties for accuracy or AUC. Indistinct capsular contour, nodal necrosis, and nodal matting were significant factors in the regression analysis. Fleiss’ kappa was less than 0.6 for all the radiographic criteria, regardless of specialty. Conclusions: Detection of ENE in HPV+ OPC patients on CT imaging remains a difficult task with high variability, regardless of clinician specialty. Although some differences do exist between the specialists, they are often minimal. Further research in automated analysis of ENE from radiographic images is likely needed.
  •  
8.
  •  
9.
  • Austrin, Per, 1981-, et al. (author)
  • Tensor network complexity of multilinear maps
  • 2019
  • In: Leibniz International Proceedings in Informatics, LIPIcs. - : Schloss Dagstuhl- Leibniz-Zentrum fur Informatik GmbH, Dagstuhl Publishing. - 9783959770958
  • Conference paper (peer-reviewed) abstract:
    • We study tensor networks as a model of arithmetic computation for evaluating multilinear maps. These capture any algorithm based on low border rank tensor decompositions, such as O(n^(ω+ϵ)) time matrix multiplication, and in addition many other algorithms such as O(n log n) time discrete Fourier transform and O*(2^n) time for computing the permanent of a matrix. However tensor networks sometimes yield faster algorithms than those that follow from low-rank decompositions. For instance the fastest known O(n^((ω+ϵ)t)) time algorithms for counting 3t-cliques can be implemented with tensor networks, even though the underlying tensor has border rank n^(3t) for all t ≥ 2. For counting homomorphisms of a general pattern graph P into a host graph on n vertices we obtain an upper bound of O(n^((ω+ϵ)·bw(P)/2)) where bw(P) is the branchwidth of P. This essentially matches the bound for counting cliques, and yields small improvements over previous algorithms for many choices of P. While powerful, the model still has limitations, and we are able to show a number of unconditional lower bounds for various multilinear maps, including: (a) an Ω(n^(bw(P))) time lower bound for counting homomorphisms from P to an n-vertex graph, matching the upper bound if ω = 2. In particular for P a v-clique this yields an Ω(n^⌈2v/3⌉) time lower bound for counting v-cliques, and for P a k-uniform v-hyperclique we obtain an Ω(n^v) time lower bound for k ≥ 3, ruling out tensor networks as an approach to obtaining non-trivial algorithms for hyperclique counting and the Max-3-CSP problem. (b) an Ω(2^(0.918n)) time lower bound for the permanent of an n × n matrix.
  •  
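The multilinear maps discussed above include clique counting. As a toy illustration of the underlying contraction A_ij·A_jk·A_ki summed over all indices (evaluated naively in O(n³), not via an optimised tensor network), triangles in a small graph can be counted like this; the example graph is invented.

```python
# Triangle (3-clique) counting as a contraction of three copies of the
# adjacency matrix: sum over i,j,k of A[i][j]*A[j][k]*A[k][i], the kind
# of multilinear map the paper models with tensor networks (naive sketch).

def count_triangles(A):
    n = len(A)
    total = sum(A[i][j] * A[j][k] * A[k][i]
                for i in range(n) for j in range(n) for k in range(n))
    return total // 6  # each triangle is counted 6 times (3! index orders)

# 4-vertex graph with edges {0-1, 0-2, 0-3, 1-2, 2-3}: two triangles
A = [[0, 1, 1, 1],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [1, 0, 1, 0]]
print(count_triangles(A))  # → 2
```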
10.
  •  
11.
  •  
12.
  • Henriksson, Martin, et al. (author)
  • Assessing the cost effectiveness of using prognostic biomarkers with decision models: case study in prioritising patients waiting for coronary artery surgery
  • 2010
  • In: BRITISH MEDICAL JOURNAL. - : BMJ. - 0959-535X. ; 340
  • Journal article (peer-reviewed) abstract:
    • Objective To determine the effectiveness and cost effectiveness of using information from circulating biomarkers to inform the prioritisation process of patients with stable angina awaiting coronary artery bypass graft surgery. Design Decision analytical model comparing four prioritisation strategies without biomarkers (no formal prioritisation, two urgency scores, and a risk score) and three strategies based on a risk score using biomarkers: a routinely assessed biomarker (estimated glomerular filtration rate), a novel biomarker (C reactive protein), or both. The order in which to perform coronary artery bypass grafting in a cohort of patients was determined by each prioritisation strategy, and mean lifetime costs and quality adjusted life years (QALYs) were compared. Data sources Swedish Coronary Angiography and Angioplasty Registry (9935 patients with stable angina awaiting coronary artery bypass grafting and then followed up for cardiovascular events after the procedure for 3.8 years), and meta-analyses of prognostic effects (relative risks) of biomarkers. Results The observed risk of cardiovascular events while on the waiting list for coronary artery bypass grafting was 3 per 10 000 patients per day within the first 90 days (184 events in 9935 patients). Using a cost effectiveness threshold of £20 000-£30 000 (€22 000-€33 000; $32 000-$48 000) per additional QALY, a prioritisation strategy using a risk score with estimated glomerular filtration rate was the most cost effective strategy (cost per additional QALY was <£410 compared with the Ontario urgency score). The impact on population health of implementing this strategy was 800 QALYs per 100 000 patients at an additional cost of £245 000 to the National Health Service. The prioritisation strategy using a risk score with C reactive protein was associated with lower QALYs and higher costs compared with a risk score using estimated glomerular filtration rate. Conclusion Evaluating the cost effectiveness of prognostic biomarkers is important even when effects at an individual level are small. Formal prioritisation of patients awaiting coronary artery bypass grafting using a routinely assessed biomarker (estimated glomerular filtration rate) along with simple, routinely collected clinical information was cost effective. Prioritisation strategies based on the prognostic information conferred by C reactive protein, which is not currently measured in this context, or a combination of C reactive protein and estimated glomerular filtration rate, are unlikely to be cost effective. The widespread practice of using only implicit or informal means of clinically ordering the waiting list may be harmful and should be replaced with formal prioritisation approaches.
  •  
13.
  • Kohonen, P, et al. (author)
  • A transcriptomics data-driven gene space accurately predicts liver cytopathology and drug-induced liver injury
  • 2017
  • In: Nature communications. - : Springer Science and Business Media LLC. - 2041-1723. ; 8, s. 15932-
  • Journal article (peer-reviewed) abstract:
    • Predicting unanticipated harmful effects of chemicals and drug molecules is a difficult and costly task. Here we utilize a ‘big data compacting and data fusion’ concept to capture diverse adverse outcomes on cellular and organismal levels. The approach generates from a transcriptomics data set a ‘predictive toxicogenomics space’ (PTGS) tool composed of 1,331 genes distributed over 14 overlapping cytotoxicity-related gene space components. Involving ∼2.5 × 10⁸ data points and 1,300 compounds to construct and validate the PTGS, the tool serves to: explain dose-dependent cytotoxicity effects, provide a virtual cytotoxicity probability estimate intrinsic to omics data, predict chemically-induced pathological states in liver resulting from repeated dosing of rats, and furthermore, predict human drug-induced liver injury (DILI) from hepatocyte experiments. Analysing 68 DILI-annotated drugs, the PTGS tool outperforms and complements existing tests, leading to a hitherto-unseen level of DILI prediction accuracy.
  •  
14.
  •  
15.
  •  
16.
  •  
17.
  • Sahlsten, J, et al. (author)
  • Application of simultaneous uncertainty quantification for image segmentation with probabilistic deep learning: Performance benchmarking of oropharyngeal cancer target delineation as a use-case
  • 2023
  • In: medRxiv : the preprint server for health sciences. - : Cold Spring Harbor Laboratory.
  • Journal article (other academic/artistic) abstract:
    • Background: Oropharyngeal cancer (OPC) is a widespread disease, with radiotherapy being a core treatment modality. Manual segmentation of the primary gross tumor volume (GTVp) is currently employed for OPC radiotherapy planning, but is subject to significant interobserver variability. Deep learning (DL) approaches have shown promise in automating GTVp segmentation, but comparative (auto)confidence metrics of these models' predictions have not been well explored. Quantifying instance-specific DL model uncertainty is crucial to improving clinician trust and facilitating broad clinical implementation. Therefore, in this study, probabilistic DL models for GTVp auto-segmentation were developed using large-scale PET/CT datasets, and various uncertainty auto-estimation methods were systematically investigated and benchmarked. Methods: We utilized the publicly available 2021 HECKTOR Challenge training dataset with 224 co-registered PET/CT scans of OPC patients with corresponding GTVp segmentations as a development set. A separate set of 67 co-registered PET/CT scans of OPC patients with corresponding GTVp segmentations was used for external validation. Two approximate Bayesian deep learning methods, the MC Dropout Ensemble and Deep Ensemble, both with five submodels, were evaluated for GTVp segmentation and uncertainty performance. The segmentation performance was evaluated using the volumetric Dice similarity coefficient (DSC), mean surface distance (MSD), and Hausdorff distance at 95% (95HD). The uncertainty was evaluated using four measures from the literature: coefficient of variation (CV), structure expected entropy, structure predictive entropy, and structure mutual information, and additionally with our novel Dice-risk measure. The utility of uncertainty information was evaluated with the accuracy of uncertainty-based segmentation performance prediction using the Accuracy vs Uncertainty (AvU) metric, and by examining the linear correlation between uncertainty estimates and DSC. In addition, batch-based and instance-based referral processes were examined, where the patients with high uncertainty were rejected from the set. In the batch referral process, the area under the referral curve with DSC (R-DSC AUC) was used for evaluation, whereas in the instance referral process, the DSC at various uncertainty thresholds was examined. Results: Both models behaved similarly in terms of segmentation performance and uncertainty estimation. Specifically, the MC Dropout Ensemble had 0.776 DSC, 1.703 mm MSD, and 5.385 mm 95HD. The Deep Ensemble had 0.767 DSC, 1.717 mm MSD, and 5.477 mm 95HD. The uncertainty measure with the highest DSC correlation was structure predictive entropy, with correlation coefficients of 0.699 and 0.692 for the MC Dropout Ensemble and the Deep Ensemble, respectively. The highest AvU value was 0.866 for both models. The best performing uncertainty measure for both models was the CV, which had R-DSC AUC of 0.783 and 0.782 for the MC Dropout Ensemble and Deep Ensemble, respectively. With referring patients based on uncertainty thresholds from 0.85 validation DSC for all uncertainty measures, on average the DSC improved from the full dataset by 4.7% and 5.0% while referring 21.8% and 22% of patients for the MC Dropout Ensemble and Deep Ensemble, respectively. Conclusion: We found that many of the investigated methods provide overall similar but distinct utility in terms of predicting segmentation quality and referral performance. These findings are a critical first step towards more widespread implementation of uncertainty quantification in OPC GTVp segmentation.
  •  
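Two quantities central to the benchmark above, the Dice similarity coefficient and a predictive entropy, can be sketched as follows. The masks and probability value are invented for illustration, and real use operates on 3D voxel arrays rather than flat lists.

```python
# Sketch of a Dice similarity coefficient between binary masks and a
# binary (voxel-wise) predictive entropy, two building blocks of the
# segmentation-uncertainty evaluation described above.
import math

def dice(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks of equal length."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2 * inter / (sum(mask_a) + sum(mask_b))

def predictive_entropy(p):
    """Binary predictive entropy (nats) of a foreground probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 0, 0, 1, 1]
print(round(dice(pred, truth), 3))        # overlap quality
print(round(predictive_entropy(0.5), 3))  # maximal uncertainty at p = 0.5
```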
18.
  • Sahlsten, J, et al. (author)
  • Segmentation stability of human head and neck cancer medical images for radiotherapy applications under de-identification conditions: Benchmarking data sharing and artificial intelligence use-cases
  • 2023
  • In: Frontiers in oncology. - : Frontiers Media SA. - 2234-943X. ; 13, s. 1120392-
  • Journal article (peer-reviewed) abstract:
    • Demand for head and neck cancer (HNC) radiotherapy data in algorithmic development has prompted increased image dataset sharing. Medical images must comply with data protection requirements so that re-use is enabled without disclosing patient identifiers. Defacing, i.e., the removal of facial features from images, is often considered a reasonable compromise between data protection and re-usability for neuroimaging data. While defacing tools have been developed by the neuroimaging community, their acceptability for radiotherapy applications has not been explored. Therefore, this study systematically investigated the impact of available defacing algorithms on HNC organs at risk (OARs). Methods: A publicly available dataset of magnetic resonance imaging scans for 55 HNC patients with eight segmented OARs (bilateral submandibular glands, parotid glands, level II neck lymph nodes, level III neck lymph nodes) was utilized. Eight publicly available defacing algorithms were investigated: afni_refacer, DeepDefacer, defacer, fsl_deface, mask_face, mri_deface, pydeface, and quickshear. Using a subset of scans where defacing succeeded (N=29), a 5-fold cross-validation 3D U-net based OAR auto-segmentation model was utilized to perform two main experiments: 1) comparing original and defaced data for training when evaluated on original data; 2) using original data for training and comparing the model evaluation on original and defaced data. Models were primarily assessed using the Dice similarity coefficient (DSC). Results: Most defacing methods were unable to produce any usable images for evaluation, while mask_face, fsl_deface, and pydeface were unable to remove the face for 29%, 18%, and 24% of subjects, respectively. When using the original data for evaluation, the composite OAR DSC was statistically higher (p ≤ 0.05) for the model trained with the original data, with a DSC of 0.760 compared to the mask_face, fsl_deface, and pydeface models with DSCs of 0.742, 0.736, and 0.449, respectively. Moreover, the model trained with original data had decreased performance (p ≤ 0.05) when evaluated on the defaced data, with DSCs of 0.673, 0.693, and 0.406 for mask_face, fsl_deface, and pydeface, respectively. Conclusion: Defacing algorithms may have a significant impact on HNC OAR auto-segmentation model training and testing. This work highlights the need for further development of HNC-specific image anonymization methods.
  •  
19.
  •  
20.
  •  
21.
  • Snellman, Jan E., et al. (author)
  • Modelling the interplay between epidemics and regional socio-economics
  • 2022
  • In: Physica A. - : Elsevier BV. - 0378-4371 .- 1873-2119. ; 604
  • Journal article (peer-reviewed) abstract:
    • In this study we present a dynamical agent-based model to investigate the interplay between the socio-economy of a geographical area and SEIRS-type epidemic spreading over it, with the area divided into districts and these further into smaller cells. The model treats the populations of cells and the authorities of districts as agents, such that the former can reduce their economic activity and the latter can recommend economic activity reduction, both with the overall goal of slowing down the epidemic spreading. The agents make decisions with the aim of attaining as high socio-economic standings as possible relative to other agents of the same type, evaluating their standings based on the local and regional infection rates, compliance with the authorities’ regulations, regional drops in economic activity, and efforts to mitigate the spread of the epidemic. We find that the willingness of the population to comply with the authorities’ recommendations has the most drastic effect on the epidemic spreading: periodic waves spread almost unimpeded in non-compliant populations, while in compliant ones the spread is minimal, with a chaotic spreading pattern and significantly lower infection rates. Health and economic concerns of agents turn out to have lesser roles, the former increasing their efforts and the latter decreasing them.
  •  
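The SEIRS backbone of the agent-based model above can be sketched as a discrete-time compartment update on population fractions. The parameter values below are illustrative, not the paper's.

```python
# Minimal discrete-time SEIRS compartment update (sketch). Fractions
# s + e + i + r sum to 1 and are conserved by construction.

def seirs_step(s, e, i, r, beta=0.3, sigma=0.2, gamma=0.1, omega=0.01):
    """One time step of SEIRS dynamics on population fractions."""
    new_exposed   = beta * s * i  # S -> E (infection)
    new_infected  = sigma * e     # E -> I (incubation ends)
    new_recovered = gamma * i     # I -> R (recovery)
    new_susc      = omega * r     # R -> S (waning immunity)
    return (s - new_exposed + new_susc,
            e + new_exposed - new_infected,
            i + new_infected - new_recovered,
            r + new_recovered - new_susc)

# Start with 1% infected and iterate for 100 steps
state = (0.99, 0.0, 0.01, 0.0)
for _ in range(100):
    state = seirs_step(*state)
print(round(sum(state), 6))  # fractions stay normalised → 1.0
```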
22.
  • Snellman, Jan E., et al. (author)
  • Socio-economic pandemic modelling : case of Spain
  • 2024
  • In: Scientific Reports. - : Nature Research. - 2045-2322. ; 14:1
  • Journal article (peer-reviewed) abstract:
    • A global disaster, such as the recent Covid-19 pandemic, affects every aspect of our lives, and there is a need to investigate such highly complex phenomena if one aims to diminish their impact on the health of the population as well as on socio-economic stability. In this paper we present an attempt to understand the role of the governmental authorities and the response of the rest of the population facing such emergencies. We present a mathematical model that takes into account the epidemiological features of the pandemic and also the actions of people responding to it, focusing on three aspects of the system: the fear of catching this serious disease, the impact on economic activities, and the compliance of the people with the mitigating measures adopted by the authorities. We apply the model to the specific case of Spain, since accurate data are available about these three features. We focused on tourism as an example of economic activity, since this sector of the economy is among the most likely to be affected by the restrictions imposed by the authorities, and because it represents an important part of the Spanish economy. The results of numerical calculations agree with the empirical data in such a way that we can gain better insight into the different processes at play in such a complex situation, and also in other circumstances.
  •  
23.
  •  
24.
  •  
25.
  • Tukiainen, T., et al. (author)
  • Mild cognitive impairment associates with concurrent decreases in serum cholesterol and cholesterol-related lipoprotein subclasses
  • 2012
  • In: The Journal of Nutrition, Health & Aging. - : Springer Science and Business Media LLC. - 1279-7707 .- 1760-4788. ; 16:7, s. 631-635
  • Journal article (peer-reviewed) abstract:
    • Background and objective: Accumulating evidence suggests that serum lipids are associated with cognitive decline and dementias. However, the majority of the existing information concerns only serum total cholesterol (TC), and data at the level of lipoprotein fractions and subclasses are limited. The aim of this study was to explore the levels and trends of the main cholesterol and triglyceride measures and eight lipoprotein subclasses during normal aging and the development of mild cognitive impairment by following a group of elderly people for six years. Design: Longitudinal. Setting: City of Kuopio, Finland. Participants: 45 elderly individuals, of whom 20 developed mild cognitive impairment (MCI) during the follow-up. Measurement: On each visit participants underwent an extensive neuropsychological and clinical assessment. Lipoprotein levels were measured via ¹H NMR from native serum samples. Results: Serum cholesterol and many primarily cholesterol-associated lipoprotein measures clearly decreased in MCI, while the trends were increasing for those elderly people who maintained normal cognition. Conclusion: These findings suggest that a decreasing trend in serum cholesterol measures in elderly individuals may suffice as an indication for a more detailed inspection for potential signs of cognitive decline.
  •  