SwePub
Search the SwePub database

  Extended search

Result list for search "WFRF:(Townend Paul)"

  • Result 1-10 of 12
1.
  • Matsushita, Kunihiro, et al. (author)
  • Estimated glomerular filtration rate and albuminuria for prediction of cardiovascular outcomes : a collaborative meta-analysis of individual participant data
  • 2015
  • In: Lancet Diabetes & Endocrinology. - 2213-8587. ; 3:7, pp. 514-525
  • Journal article (peer-reviewed). Abstract:
    • Background: The usefulness of estimated glomerular filtration rate (eGFR) and albuminuria for prediction of cardiovascular outcomes is controversial. We aimed to assess the addition of creatinine-based eGFR and albuminuria to traditional risk factors for prediction of cardiovascular risk with a meta-analytic approach.
      Methods: We meta-analysed individual-level data for 637 315 individuals without a history of cardiovascular disease from 24 cohorts (median follow-up 4.2-19.0 years) included in the Chronic Kidney Disease Prognosis Consortium. We assessed C statistic difference and reclassification improvement for cardiovascular mortality and fatal and non-fatal cases of coronary heart disease, stroke, and heart failure in a 5-year timeframe, contrasting prediction models for traditional risk factors with and without creatinine-based eGFR, albuminuria (either albumin-to-creatinine ratio [ACR] or semi-quantitative dipstick proteinuria), or both.
      Findings: The addition of eGFR and ACR significantly improved the discrimination of cardiovascular outcomes beyond traditional risk factors in general populations, but the improvement was greater with ACR than with eGFR, and more evident for cardiovascular mortality (C statistic difference 0.0139 [95% CI 0.0105-0.0174] for ACR and 0.0065 [0.0042-0.0088] for eGFR) and heart failure (0.0196 [0.0108-0.0284] and 0.0109 [0.0059-0.0159]) than for coronary disease (0.0048 [0.0029-0.0067] and 0.0036 [0.0019-0.0054]) and stroke (0.0105 [0.0058-0.0151] and 0.0036 [0.0004-0.0069]). Dipstick proteinuria showed smaller improvement than ACR. The discrimination improvement with eGFR or ACR was especially evident in individuals with diabetes or hypertension, but remained significant with ACR for cardiovascular mortality and heart failure in those without either of these disorders. In individuals with chronic kidney disease, the combination of eGFR and ACR for risk discrimination outperformed most single traditional predictors; the C statistic for cardiovascular mortality fell by 0.0227 (0.0158-0.0296) after omission of eGFR and ACR, compared with less than 0.007 for any single modifiable traditional predictor.
      Interpretation: Creatinine-based eGFR and albuminuria should be taken into account for cardiovascular prediction, especially when these measures are already assessed for clinical purposes or if cardiovascular mortality and heart failure are outcomes of interest. ACR could have particularly broad implications for cardiovascular prediction. In populations with chronic kidney disease, the simultaneous assessment of eGFR and ACR could facilitate improved classification of cardiovascular risk, supporting current guidelines for chronic kidney disease. Our results lend some support to also incorporating eGFR and ACR into assessments of cardiovascular risk in the general population. [See the note after this entry for the standard C statistic definition.]
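
The abstract above reports discrimination as C statistic differences. For orientation, here is the standard (concordance) definition in LaTeX notation - a textbook statement, not necessarily the consortium's exact estimator - together with the reported difference between the enriched and traditional models:

    C = \Pr\!\left( \hat{r}_i > \hat{r}_j \,\middle|\, T_i < T_j \right),
    \qquad
    \Delta C = C_{\text{traditional}\,+\,\text{eGFR},\,\text{ACR}} - C_{\text{traditional}}

Here \hat{r} denotes predicted 5-year risk and T the time to the cardiovascular event; a positive \Delta C means the kidney measures improve the ranking of who experiences events first.
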
2.
  • Kidane, Lidia, 1990-, et al. (author)
  • Automated hyperparameter tuning for adaptive cloud workload prediction
  • 2023
  • In: UCC '23. - New York : Association for Computing Machinery (ACM). - 9798400702341
  • Conference paper (peer-reviewed). Abstract:
    • Efficient workload prediction is essential for enabling timely resource provisioning in cloud computing environments. However, achieving accurate predictions, ensuring adaptability to changing conditions, and minimizing computation overhead pose significant challenges for workload prediction models. Furthermore, the continuous streaming nature of workload metrics requires careful consideration when applying machine learning and data mining algorithms, as manual hyperparameter optimization can be time-consuming and suboptimal. We propose an automated parameter tuning and adaptation approach for workload prediction models and the concept drift detection algorithms used in predicting future workload. Our method leverages a pre-built knowledge base of statistical features extracted from historical data, enabling automatic adjustment of model weights and concept drift detection parameters. Additionally, model adaptation is facilitated through a transfer learning approach. We evaluate the effectiveness of our automated approach by comparing it with static approaches using synthetic and real-world datasets. In our experiments, automating the parameter tuning process and integrating concept drift detection improved the accuracy and efficiency of workload prediction models by 50%. [A hypothetical sketch of the knowledge-base lookup follows this entry.]
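
To picture the approach described above, here is a hypothetical sketch (our construction; all names, features, and values are invented, not the authors' code). A workload window is reduced to simple statistical features, which are matched against a pre-built knowledge base mapping feature profiles to previously tuned parameters:

    import numpy as np

    # Hypothetical knowledge base: (feature vector [mean, std, lag-1
    # autocorrelation], previously tuned parameters for that profile).
    KNOWLEDGE_BASE = [
        (np.array([120.0, 15.0, 0.9]), {"window": 48, "drift_threshold": 0.05}),
        (np.array([300.0, 90.0, 0.4]), {"window": 12, "drift_threshold": 0.15}),
    ]

    def features(window: np.ndarray) -> np.ndarray:
        # Summarize a workload window by basic statistics.
        lag1 = np.corrcoef(window[:-1], window[1:])[0, 1]
        return np.array([window.mean(), window.std(), lag1])

    def suggest_params(window: np.ndarray) -> dict:
        # Nearest-neighbour lookup on relative feature distance.
        f = features(window)
        dists = [np.linalg.norm((f - kf) / (np.abs(kf) + 1e-9))
                 for kf, _ in KNOWLEDGE_BASE]
        return KNOWLEDGE_BASE[int(np.argmin(dists))][1]

    recent = 120 + 15 * np.random.randn(100)  # synthetic request-rate window
    print(suggest_params(recent))

A production system would additionally adapt the matched model via transfer learning, as the paper describes; this sketch covers only parameter selection.
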
3.
  • Kidane, Lidia, 1990-, et al. (author)
  • When and How to Retrain Machine Learning-based Cloud Management Systems
  • 2022
  • In: 2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). - : IEEE. - 9781665497473 - 9781665497480 ; pp. 688-698
  • Conference paper (peer-reviewed). Abstract:
    • Cloud management systems increasingly rely on machine learning (ML) models to predict incoming workload rates, load, and other system behaviors for efficient dynamic resource management. Current state-of-the-art prediction models demonstrate high accuracy, but assume that data patterns remain stable. However, in production use, systems may face hardware upgrades, changes in user behavior, etc., that lead to concept drifts - significant changes in the characteristics of data streams over time. To mitigate prediction deterioration, ML models need to be updated - but the questions of when and how to best retrain these models are unsolved in the context of cloud management. We present a pilot study that addresses these questions for one of the most common models for adaptive prediction - Long Short-Term Memory (LSTM) - using synthetic and real-world workload data. Our analysis of when to retrain explores approaches for detecting when retraining is required, using both concept drift detection and prediction error thresholds, and at what point retraining should actually take place. Our analysis of how to retrain focuses on the data required for retraining, and what proportion should be taken from before and after the need for retraining is detected. We present initial results indicating that retraining of existing models can achieve prediction accuracy close to that of newly trained models but at much lower cost, and present initial advice for how to provide cloud management systems with support for automatic retraining of ML-based methods. [A hedged sketch of the trigger and data-mix ideas follows this entry.]
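
The two questions above - when to retrain and how to assemble the retraining data - can be made concrete with a hedged sketch. This is our construction under stated assumptions (a rolling mean-error threshold as the trigger, a fixed pre/post data mix), not the paper's code:

    import collections

    class RetrainTrigger:
        """Signals retraining when rolling mean prediction error exceeds a threshold."""

        def __init__(self, error_threshold: float, window: int = 50):
            self.errors = collections.deque(maxlen=window)
            self.threshold = error_threshold

        def observe(self, predicted: float, actual: float) -> bool:
            # Returns True when retraining is deemed necessary.
            self.errors.append(abs(predicted - actual))
            return sum(self.errors) / len(self.errors) > self.threshold

    def retraining_set(history, detect_idx, pre_fraction=0.3, size=200):
        # Mix samples from before and after the detection point so the
        # retrained model sees both the old and the new regime.
        n_pre = int(size * pre_fraction)
        pre = history[max(0, detect_idx - n_pre):detect_idx]
        post = history[detect_idx:detect_idx + (size - n_pre)]
        return list(pre) + list(post)

The pre_fraction knob corresponds to the paper's question of what proportion of retraining data should come from before versus after detection; a drift detector could replace or complement the error threshold.
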
4.
  • Patel, Yashwant Singh, et al. (author)
  • Formal models for the energy-aware cloud-edge computing continuum : analysis and challenges
  • 2023
  • In: 2023 IEEE International Conference on Service-Oriented System Engineering (SOSE). - : IEEE. - 9798350322392 - 9798350322408 ; pp. 48-59
  • Conference paper (peer-reviewed). Abstract:
    • Cloud infrastructures are rapidly evolving from centralised systems to geographically distributed federations of edge devices, fog nodes, and clouds. These federations (often referred to as the Cloud-Edge Continuum) are the foundation upon which most modern digital systems depend, and they consume enormous amounts of energy. This consumption is becoming a critical issue as society's energy challenges grow, and is a great concern for power grids, which must balance the needs of clouds against other users. The Continuum is highly dynamic, mobile, and complex; new methods to improve energy efficiency must be based on formal scientific models that identify and take into account a huge range of heterogeneous components, interactions, stochastic properties, and (potentially contradictory) service-level agreements and stakeholder objectives. Currently, few formal models of federated Cloud-Edge systems exist - and none adequately represent and integrate energy considerations (e.g. multiple providers, renewable energy sources, pricing, and the need to balance consumption over large areas with other non-Cloud consumers). This paper conducts a systematic analysis of current approaches to modelling Cloud, Cloud-Edge, and federated Continuum systems with an emphasis on the integration of energy considerations. We identify key omissions in the literature, and propose an initial high-level architecture and approach to begin addressing these - with the ultimate goal of developing a set of integrated models that include data centres, edge devices, fog nodes, energy providers, software workloads, end users, and stakeholder requirements and objectives. We conclude by highlighting the key research challenges that must be addressed to enable meaningful energy-aware Cloud-Edge Continuum modelling and simulation.
5.
  • Patel, Yashwant Singh, et al. (author)
  • Modeling the green cloud continuum : integrating energy considerations into cloud-edge models
  • 2024
  • In: Cluster Computing. - : Springer. - 1386-7857 .- 1573-7543. ; 27:4, pp. 4095-4125
  • Journal article (peer-reviewed). Abstract:
    • The energy consumption of Cloud–Edge systems is becoming a critical concern economically, environmentally, and societally; some studies suggest data centers and networks will collectively consume 18% of global electrical power by 2030. New methods are needed to mitigate this consumption, e.g. energy-aware workload scheduling, improved usage of renewable energy sources, etc. These schemes need to understand the interaction between energy considerations and Cloud–Edge components. Model-based approaches are an effective way to do this; however, current theoretical Cloud–Edge models are limited, and few consider energy factors. This paper analyses all relevant models proposed between 2016 and 2023, identifies key omissions, and determines the major energy considerations that need to be addressed for Green Cloud–Edge systems (including interaction with energy providers). We investigate how these can be integrated into existing and aggregated models, and conclude with the high-level architecture of our proposed solution for integrating energy and Cloud–Edge models. [An illustrative node power model is sketched after this entry.]
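
To make the energy integration discussed above concrete, the sketch below applies the widely used linear utilization-based power model, P(u) = P_idle + (P_max - P_idle) * u, across a toy federation. This is our illustration only; the node parameters are invented, and the models surveyed in the paper are far richer (providers, renewables, pricing, SLAs):

    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        p_idle_w: float     # power draw at 0% utilization (watts)
        p_max_w: float      # power draw at 100% utilization (watts)
        utilization: float  # current utilization in [0, 1]

        def power_w(self) -> float:
            # Linear utilization-based power model.
            return self.p_idle_w + (self.p_max_w - self.p_idle_w) * self.utilization

    federation = [
        Node("edge-device", 2.0, 6.0, 0.70),
        Node("fog-node", 40.0, 120.0, 0.35),
        Node("cloud-server", 100.0, 350.0, 0.55),
    ]

    print(f"Estimated federation draw: {sum(n.power_w() for n in federation):.1f} W")
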
6.
  • Porsbjerg, Celeste M., et al. (author)
  • Association between pre-biologic T2-biomarker combinations and response to biologics in patients with severe asthma
  • 2024
  • In: Frontiers in Immunology. - : Frontiers Media S.A.. - 1664-3224. ; 15
  • Journal article (peer-reviewed). Abstract:
    • Background: To date, studies investigating the association between pre-biologic biomarker levels and post-biologic outcomes have been limited to single biomarkers and assessment of biologic efficacy from structured clinical trials.
      Aim: To elucidate the associations of pre-biologic individual biomarker levels or their combinations with pre-to-post biologic changes in asthma outcomes in real life.
      Methods: This was a registry-based cohort study using data from 23 countries, which shared data with the International Severe Asthma Registry (May 2017-February 2023). The investigated biomarkers (highest pre-biologic levels) were immunoglobulin E (IgE), blood eosinophil count (BEC) and fractional exhaled nitric oxide (FeNO). Pre- to approximately 12-month post-biologic change for each of three asthma outcome domains (i.e. exacerbation rate, symptom control and lung function), and the association of this change with pre-biologic biomarkers, was investigated for individual and combined biomarkers.
      Results: Overall, 3751 patients initiated biologics and were included in the analysis. No association was found between pre-biologic BEC and pre-to-post biologic change in exacerbation rate for any biologic class. However, higher pre-biologic BEC and FeNO were both associated with greater post-biologic improvement in FEV1 for both anti-IgE and anti-IL5/5R, with a trend for anti-IL4Rα. Mean FEV1 improved by 27-178 mL post-anti-IgE as pre-biologic BEC increased (250 to 1000 cells/μL), and by 43-216 mL and 129-250 mL post-anti-IL5/5R and post-anti-IL4Rα, respectively, along the same BEC gradient. Corresponding improvements along a FeNO gradient (25-100 ppb) were 41-274 mL, 69-207 mL and 148-224 mL for anti-IgE, anti-IL5/5R, and anti-IL4Rα, respectively. Higher baseline BEC was also associated with a lower probability of uncontrolled asthma (OR 0.392; p=0.001) post-biologic for anti-IL5/5R. Pre-biologic IgE was a poor predictor of subsequent pre-to-post-biologic change for all outcomes assessed for all biologics. The combination of BEC + FeNO marginally improved the prediction of post-biologic FEV1 increase (adjusted R²: 0.751), compared to BEC (adjusted R²: 0.747) or FeNO alone (adjusted R²: 0.743) (p=0.005 and p<0.001, respectively); however, this prediction was not improved by the addition of IgE.
      Conclusions: The ability of higher baseline BEC, FeNO and their combination to predict biologic-associated lung function improvement may encourage earlier intervention in patients with impaired lung function or at risk of accelerated lung function decline. [The standard adjusted R² formula used in this comparison is given after this entry.]
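
The biomarker-combination comparison above is phrased in terms of adjusted R², which penalizes each added predictor. Its standard form (a textbook definition, not study-specific), in LaTeX notation:

    \bar{R}^2 = 1 - \left(1 - R^2\right)\frac{n - 1}{n - p - 1}

where n is the number of observations and p the number of predictors; this is why adding IgE can leave the adjusted value unchanged or lower even if the raw R² rises slightly.
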
7.
  • Saleh Sedghpour, Mohammad Reza, 1989-, et al. (author)
  • Service mesh and eBPF-powered microservices : a survey and future directions
  • 2022
  • In: 2022 IEEE International Conference on Service-Oriented System Engineering (SOSE). - : IEEE. - 9781665475341 - 9781665475358 ; pp. 176-184
  • Conference paper (peer-reviewed). Abstract:
    • Modern software development practice has seen a profound shift in architectural design, moving from monolithic approaches to distributed, microservice-based architectures. This allows for much simpler and faster application orchestration and management, especially in cloud-based systems, with the result being that orchestration systems themselves are becoming a key focus of computing research.
      Orchestration system research addresses many different subject areas, including scheduling, automation, and security. However, the key characteristic that is common throughout is the complex and dynamic nature of distributed, multi-tenant cloud-based microservice systems that must be orchestrated. This complexity has led to many challenges in areas such as inter-service communication, observability, reliability, single-cluster to multi-cluster operation, hybrid environments, and multi-tenancy.
      The concept of service meshes has been introduced to handle this complexity. In essence, a service mesh is an infrastructure layer built directly into the microservices - or the nodes of orchestrators - as a set of configurable proxies that are responsible for the management, observability, and security of microservices.
      Service meshes aim to be a full networking solution for microservices; however, they also introduce overhead into a system. This can be significant for low-powered edge devices, as service mesh proxies work in user space and are responsible for processing the incoming and outgoing traffic of each service. To mitigate performance issues caused by these proxies, the industry is pushing the boundaries of monitoring and security to kernel space by employing eBPF for faster and more efficient responses. We propose that the movement towards the use of service meshes as a networking solution for most of the features required by industry - combined with their integration with eBPF - is the next key trend in the evolution of microservices. This paper highlights the challenges of this movement, explores its current state, and discusses future opportunities in the context of microservices. [A minimal eBPF-based observability sketch follows this entry.]
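
The kernel-space observability trend described above can be illustrated with a minimal sketch using the bcc toolkit (our example, not code from the paper; it assumes bcc is installed and requires root privileges). Instead of proxying traffic through a user-space sidecar, an eBPF program counts TCP send events directly in the kernel:

    from time import sleep
    from bcc import BPF  # requires the bcc toolkit and root privileges

    # Count tcp_sendmsg() calls per process entirely in kernel space.
    prog = r"""
    BPF_HASH(counts, u32, u64);

    int on_tcp_sendmsg(struct pt_regs *ctx) {
        u32 pid = bpf_get_current_pid_tgid() >> 32;
        counts.increment(pid);
        return 0;
    }
    """

    b = BPF(text=prog)
    b.attach_kprobe(event="tcp_sendmsg", fn_name="on_tcp_sendmsg")

    sleep(5)  # sample for five seconds
    for pid, count in sorted(b["counts"].items(), key=lambda kv: -kv[1].value):
        print(f"pid={pid.value:<8} tcp_sendmsg calls={count.value}")

The data path stays in kernel space, which is the performance argument the survey highlights; full eBPF service-mesh integrations (e.g. Cilium) go much further.
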
8.
  • Townend, Paul, et al. (author)
  • COGNIT: challenges and vision for a serverless and multi-provider cognitive cloud-edge continuum
  • 2023
  • In: 2023 IEEE International Conference on Edge Computing and Communications (EDGE). - : IEEE. - 9798350304831 - 9798350304848 ; pp. 12-22
  • Conference paper (peer-reviewed). Abstract:
    • Use of the serverless paradigm in cloud application development is growing rapidly, primarily driven by its promise to free developers from the responsibility of provisioning, operating, and scaling the underlying infrastructure. However, modern cloud-edge infrastructures are characterized by large numbers of disparate providers, constrained resource devices, platform heterogeneity, infrastructural dynamicity, and the need to orchestrate geographically distributed nodes and devices over public networks. This presents significant management complexity that must be addressed if serverless technologies are to be used in production systems. This position paper introduces COGNIT, a major new European initiative aiming to integrate AI technology into cloud-edge management systems to create a Cognitive Cloud reference framework and associated tools for serverless computing at the edge. COGNIT aims to: 1) support an innovative new serverless paradigm for edge application management and enhanced digital sovereignty for users and developers; 2) enable on-demand deployment of large-scale, highly distributed and self-adaptive serverless environments using existing cloud resources; 3) optimize data placement according to changes in energy efficiency heuristics and application demands and behavior; 4) enable secure and trusted execution of serverless runtimes. We identify and discuss seven research challenges related to the integration of serverless technologies with multi-provider Edge infrastructures and present our vision for how these challenges can be solved. We introduce a high-level view of our reference architecture for serverless cloud-edge continuum systems, and detail four motivating real-world use cases that will be used for validation, drawing from domains within Smart Cities, Agriculture and Environment, Energy, and Cybersecurity.
9.
10.
  • Tärneberg, William, et al. (author)
  • The 6G Computing Continuum (6GCC) : Meeting the 6G computing challenges
  • 2022
  • In: 2022 1st International Conference on 6G Networking, 6GNet 2022. - : IEEE Computer Society. - 9781665467636
  • Conference paper (peer-reviewed). Abstract:
    • 6G systems, such as Large Intelligent Surfaces, will require distributed, complex, and coordinated decisions throughout a very heterogeneous and cell-free infrastructure. This will require a fundamentally redesigned software infrastructure accompanied by massively distributed and heterogeneous computing resources, vastly different from current wireless networks. To address these challenges, in this paper we propose and motivate the concept of a 6G Computing Continuum (6GCC) and two research testbeds to advance the rate and quality of research. The 6G Computing Continuum is an end-to-end compute and software platform for realizing Large Intelligent Surfaces and their tenant users and applications. The first testbed addresses the challenges of orchestrating shared computational resources in the wireless domain and is implemented on a Large Intelligent Surfaces testbed; the second, simulation-based testbed is intended to address scalability and global-scale orchestration challenges.