SwePub
Search the SwePub database


Result list for search "WFRF:(Taheri M)"


  • Results 1-50 of 52
1.
  • Tran, K. B., et al. (authors)
  • The global burden of cancer attributable to risk factors, 2010-19: a systematic analysis for the Global Burden of Disease Study 2019
  • 2022
  • In: Lancet. - 0140-6736. ; 400:10352, pp. 563-591
  • Journal article (peer-reviewed) abstract
    • Background Understanding the magnitude of cancer burden attributable to potentially modifiable risk factors is crucial for development of effective prevention and mitigation strategies. We analysed results from the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019 to inform cancer control planning efforts globally. Methods The GBD 2019 comparative risk assessment framework was used to estimate cancer burden attributable to behavioural, environmental and occupational, and metabolic risk factors. A total of 82 risk-outcome pairs were included on the basis of the World Cancer Research Fund criteria. Estimated cancer deaths and disability-adjusted life-years (DALYs) in 2019 and change in these measures between 2010 and 2019 are presented. Findings Globally, in 2019, the risk factors included in this analysis accounted for 4.45 million (95% uncertainty interval 4.01-4.94) deaths and 105 million (95.0-116) DALYs for both sexes combined, representing 44.4% (41.3-48.4) of all cancer deaths and 42.0% (39.1-45.6) of all DALYs. There were 2.88 million (2.60-3.18) risk-attributable cancer deaths in males (50.6% [47.8-54.1] of all male cancer deaths) and 1.58 million (1.36-1.84) risk-attributable cancer deaths in females (36.3% [32.5-41.3] of all female cancer deaths). The leading risk factors at the most detailed level globally for risk-attributable cancer deaths and DALYs in 2019 for both sexes combined were smoking, followed by alcohol use and high BMI. Risk-attributable cancer burden varied by world region and Socio-demographic Index (SDI), with smoking, unsafe sex, and alcohol use being the three leading risk factors for risk-attributable cancer DALYs in low SDI locations in 2019, whereas DALYs in high SDI locations mirrored the top three global risk factor rankings. 
From 2010 to 2019, global risk-attributable cancer deaths increased by 20.4% (12.6-28.4) and DALYs by 16.8% (8.8-25.0), with the greatest percentage increase in metabolic risks (34.7% [27.9-42.8] and 33.3% [25.8-42.0]). Interpretation The leading risk factors contributing to global cancer burden in 2019 were behavioural, whereas metabolic risk factors saw the largest increases between 2010 and 2019. Reducing exposure to these modifiable risk factors would decrease cancer mortality and DALY rates worldwide, and policies should be tailored appropriately to local cancer risk factor burden. Copyright (C) 2022 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY 4.0 license.
3.
  • Alvarez, E. M., et al. (authors)
  • The global burden of adolescent and young adult cancer in 2019: a systematic analysis for the Global Burden of Disease Study 2019
  • 2022
  • In: Lancet Oncology. - : Elsevier BV. - 1470-2045. ; 23:1, pp. 27-52
  • Journal article (peer-reviewed) abstract
    • Background In estimating the global burden of cancer, adolescents and young adults with cancer are often overlooked, despite being a distinct subgroup with unique epidemiology, clinical care needs, and societal impact. Comprehensive estimates of the global cancer burden in adolescents and young adults (aged 15-39 years) are lacking. To address this gap, we analysed results from the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019, with a focus on the outcome of disability-adjusted life-years (DALYs), to inform global cancer control measures in adolescents and young adults. Methods Using the GBD 2019 methodology, international mortality data were collected from vital registration systems, verbal autopsies, and population-based cancer registry inputs modelled with mortality-to-incidence ratios (MIRs). Incidence was computed with mortality estimates and corresponding MIRs. Prevalence estimates were calculated using modelled survival and multiplied by disability weights to obtain years lived with disability (YLDs). Years of life lost (YLLs) were calculated as age-specific cancer deaths multiplied by the standard life expectancy at the age of death. The main outcome was DALYs (the sum of YLLs and YLDs). Estimates were presented globally and by Socio-demographic Index (SDI) quintiles (countries ranked and divided into five equal SDI groups), and all estimates were presented with corresponding 95% uncertainty intervals (UIs). For this analysis, we used the age range of 15-39 years to define adolescents and young adults. Findings There were 1.19 million (95% UI 1.11-1.28) incident cancer cases and 396 000 (370 000-425 000) deaths due to cancer among people aged 15-39 years worldwide in 2019. 
The highest age-standardised incidence rates occurred in high SDI (59.6 [54.5-65.7] per 100 000 person-years) and high-middle SDI countries (53.2 [48.8-57.9] per 100 000 person-years), while the highest age-standardised mortality rates were in low-middle SDI (14.2 [12.9-15.6] per 100 000 person-years) and middle SDI (13.6 [12.6-14.8] per 100 000 person-years) countries. In 2019, adolescent and young adult cancers contributed 23.5 million (21.9-25.2) DALYs to the global burden of disease, of which 2.7% (1.9-3.6) came from YLDs and 97.3% (96.4-98.1) from YLLs. Cancer was the fourth leading cause of death and tenth leading cause of DALYs in adolescents and young adults globally. Interpretation Adolescent and young adult cancers contributed substantially to the overall adolescent and young adult disease burden globally in 2019. These results provide new insights into the distribution and magnitude of the adolescent and young adult cancer burden around the world. With notable differences observed across SDI settings, these estimates can inform global and country-level cancer control efforts. Copyright (C) 2021 The Author(s). Published by Elsevier Ltd.
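The burden arithmetic described in the methods (YLLs as age-specific deaths multiplied by standard life expectancy at the age of death, DALYs as the sum of YLLs and YLDs, and incidence recovered from mortality via MIRs) can be sketched as simple arithmetic. All numbers and function names below are illustrative, not GBD estimates.

```python
# Illustrative sketch of the GBD accounting identities summarised above.
# All inputs are made-up numbers, not GBD 2019 estimates.

def years_of_life_lost(deaths_by_age, life_expectancy_at_age):
    # YLLs: age-specific deaths multiplied by standard life expectancy
    # at the age of death, summed over age groups
    return sum(d * le for d, le in zip(deaths_by_age, life_expectancy_at_age))

def dalys(ylls, ylds):
    # DALYs are the sum of fatal (YLL) and non-fatal (YLD) burden
    return ylls + ylds

def incidence_from_mortality(mortality, mir):
    # Incidence computed from mortality estimates and the
    # mortality-to-incidence ratio (MIR)
    return mortality / mir

ylls = years_of_life_lost([100, 250], [60.0, 45.0])  # 100*60 + 250*45 = 17250
print(dalys(ylls, ylds=750.0))                       # 18000.0
print(incidence_from_mortality(396_000, 0.33))
```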
4.
  • Bryazka, D., et al. (authors)
  • Population-level risks of alcohol consumption by amount, geography, age, sex, and year: a systematic analysis for the Global Burden of Disease Study 2020
  • 2022
  • In: Lancet. - 0140-6736. ; 400:10347, pp. 185-235
  • Journal article (peer-reviewed) abstract
    • Background The health risks associated with moderate alcohol consumption continue to be debated. Small amounts of alcohol might lower the risk of some health outcomes but increase the risk of others, suggesting that the overall risk depends, in part, on background disease rates, which vary by region, age, sex, and year. Methods For this analysis, we constructed burden-weighted dose-response relative risk curves across 22 health outcomes to estimate the theoretical minimum risk exposure level (TMREL) and non-drinker equivalence (NDE), the consumption level at which the health risk is equivalent to that of a non-drinker, using disease rates from the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2020 for 21 regions, including 204 countries and territories, by 5-year age group, sex, and year for individuals aged 15-95 years and older from 1990 to 2020. Based on the NDE, we quantified the population consuming harmful amounts of alcohol. Findings The burden-weighted relative risk curves for alcohol use varied by region and age. Among individuals aged 15-39 years in 2020, the TMREL varied between 0 (95% uncertainty interval 0-0) and 0.603 (0.400-1.00) standard drinks per day, and the NDE varied between 0.002 (0-0) and 1.75 (0.698-4.30) standard drinks per day. Among individuals aged 40 years and older, the burden-weighted relative risk curve was J-shaped for all regions, with a 2020 TMREL that ranged from 0.114 (0-0.403) to 1.87 (0.500-3.30) standard drinks per day and an NDE that ranged between 0.193 (0-0.900) and 6.94 (3.40-8.30) standard drinks per day. Among individuals consuming harmful amounts of alcohol in 2020, 59.1% (54.3-65.4) were aged 15-39 years and 76.9% (7.0-81.3) were male. Interpretation There is strong evidence to support recommendations on alcohol consumption varying by age and location. 
Stronger interventions, particularly those tailored towards younger individuals, are needed to reduce the substantial global health loss attributable to alcohol. Copyright (C) 2022 The Author(s). Published by Elsevier Ltd.
5.
  • Ikuta, K. S., et al. (authors)
  • Global mortality associated with 33 bacterial pathogens in 2019: a systematic analysis for the Global Burden of Disease Study 2019
  • 2022
  • In: Lancet. - : Elsevier BV. - 0140-6736. ; 400:10369, pp. 2221-2248
  • Journal article (peer-reviewed) abstract
    • Background Reducing the burden of death due to infection is an urgent global public health priority. Previous studies have estimated the number of deaths associated with drug-resistant infections and sepsis and found that infections remain a leading cause of death globally. Understanding the global burden of common bacterial pathogens (both susceptible and resistant to antimicrobials) is essential to identify the greatest threats to public health. To our knowledge, this is the first study to present global comprehensive estimates of deaths associated with 33 bacterial pathogens across 11 major infectious syndromes. Methods We estimated deaths associated with 33 bacterial genera or species across 11 infectious syndromes in 2019 using methods from the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019, in addition to a subset of the input data described in the Global Burden of Antimicrobial Resistance 2019 study. This study included 343 million individual records or isolates covering 11 361 study-location-years. We used three modelling steps to estimate the number of deaths associated with each pathogen: deaths in which infection had a role, the fraction of deaths due to infection that are attributable to a given infectious syndrome, and the fraction of deaths due to an infectious syndrome that are attributable to a given pathogen. Estimates were produced for all ages and for males and females across 204 countries and territories in 2019. 95% uncertainty intervals (UIs) were calculated for final estimates of deaths and infections associated with the 33 bacterial pathogens following standard GBD methods by taking the 2.5th and 97.5th percentiles across 1000 posterior draws for each quantity of interest. 
Findings From an estimated 13.7 million (95% UI 10.9-17.1) infection-related deaths in 2019, there were 7.7 million deaths (5.7-10.2) associated with the 33 bacterial pathogens (both resistant and susceptible to antimicrobials) across the 11 infectious syndromes estimated in this study. We estimated deaths associated with the 33 bacterial pathogens to comprise 13.6% (10.2-18.1) of all global deaths and 56.2% (52.1-60.1) of all sepsis-related deaths in 2019. Five leading pathogens (Staphylococcus aureus, Escherichia coli, Streptococcus pneumoniae, Klebsiella pneumoniae, and Pseudomonas aeruginosa) were responsible for 54.9% (52.9-56.9) of deaths among the investigated bacteria. The deadliest infectious syndromes and pathogens varied by location and age. The age-standardised mortality rate associated with these bacterial pathogens was highest in the sub-Saharan Africa super-region, with 230 deaths (185-285) per 100 000 population, and lowest in the high-income super-region, with 52.2 deaths (37.4-71.5) per 100 000 population. S aureus was the leading bacterial cause of death in 135 countries and was also associated with the most deaths in individuals older than 15 years, globally. Among children younger than 5 years, S pneumoniae was the pathogen associated with the most deaths. In 2019, more than 6 million deaths occurred as a result of three bacterial infectious syndromes, with lower respiratory infections and bloodstream infections each causing more than 2 million deaths and peritoneal and intra-abdominal infections causing more than 1 million deaths. Interpretation The 33 bacterial pathogens that we investigated in this study are a substantial source of health loss globally, with considerable variation in their distribution across infectious syndromes and locations. 
Compared with GBD Level 3 underlying causes of death, deaths associated with these bacteria would rank as the second leading cause of death globally in 2019; hence, they should be considered an urgent priority for intervention within the global health community. Strategies to address the burden of bacterial infections include infection prevention, optimised use of antibiotics, improved capacity for microbiological analysis, vaccine development, and improved and more pervasive use of available vaccines. These estimates can be used to help set priorities for vaccine need, demand, and development. Copyright (c) 2022 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY 4.0 license.
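The three modelling steps described in the methods multiply an overall death envelope by successive attributable fractions. A minimal sketch, with invented fractions rather than the study's estimates:

```python
# Sketch of the three-step attribution chain described above: deaths in which
# infection had a role, the fraction due to a syndrome, and the fraction of
# that syndrome due to a pathogen. Fractions here are invented placeholders.

def pathogen_deaths(infection_related_deaths, frac_syndrome, frac_pathogen):
    return infection_related_deaths * frac_syndrome * frac_pathogen

# e.g. 13.7 million infection-related deaths, 30% attributed to one syndrome,
# 25% of that syndrome attributed to one pathogen (illustrative only)
print(pathogen_deaths(13_700_000, 0.30, 0.25))
```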
6.
  • Sheena, B. S., et al. (authors)
  • Global, regional, and national burden of hepatitis B, 1990-2019: a systematic analysis for the Global Burden of Disease Study 2019
  • 2022
  • In: Lancet Gastroenterology & Hepatology. - : Elsevier BV. - 2468-1253. ; 7:9, pp. 796-829
  • Journal article (peer-reviewed) abstract
    • Background Combating viral hepatitis is part of the UN Sustainable Development Goals (SDGs), and WHO has put forth hepatitis B elimination targets in its Global Health Sector Strategy on Viral Hepatitis (WHO-GHSS) and Interim Guidance for Country Validation of Viral Hepatitis Elimination (WHO Interim Guidance). We estimated the global, regional, and national prevalence of hepatitis B virus (HBV), as well as mortality and disability-adjusted life-years (DALYs) due to HBV, as part of the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019. This included estimates for 194 WHO member states, for which we compared our estimates to WHO elimination targets. Methods The primary data sources were population-based serosurveys, claims and hospital discharges, cancer registries, vital registration systems, and published case series. We estimated chronic HBV infection and the burden of HBV-related diseases, defined as an aggregate of cirrhosis due to hepatitis B, liver cancer due to hepatitis B, and acute hepatitis B. We used DisMod-MR 2.1, a Bayesian mixed-effects meta-regression tool, to estimate the prevalence of chronic HBV infection, cirrhosis, and aetiological proportions of cirrhosis. We used mortality-to-incidence ratios modelled with spatiotemporal Gaussian process regression to estimate the incidence of liver cancer. We used the Cause of Death Ensemble modelling (CODEm) model, a tool that selects models and covariates on the basis of out-of-sample performance, to estimate mortality due to cirrhosis, liver cancer, and acute hepatitis B. Findings In 2019, the estimated global, all-age prevalence of chronic HBV infection was 4.1% (95% uncertainty interval [UI] 3.7 to 4.5), corresponding to 316 million (284 to 351) infected people. 
There was a 31.3% (29.0 to 33.9) decline in all-age prevalence between 1990 and 2019, with a more marked decline of 76.8% (76.2 to 77.5) in prevalence in children younger than 5 years. HBV-related diseases resulted in 555 000 global deaths (487 000 to 630 000) in 2019. The number of HBV-related deaths increased between 1990 and 2019 (by 5.9% [-5.6 to 19.2]) and between 2015 and 2019 (by 2.9% [-5.9 to 11.3]). By contrast, all-age and age-standardised death rates due to HBV-related diseases decreased during these periods. We compared estimates for 2019 in 194 WHO locations to WHO-GHSS 2020 targets, and found that four countries achieved a 10% reduction in deaths, 15 countries achieved a 30% reduction in new cases, and 147 countries achieved a 1% prevalence in children younger than 5 years. As of 2019, 68 of 194 countries had already achieved the 2030 target proposed in WHO Interim Guidance of an all-age HBV-related death rate of four per 100 000. Interpretation The prevalence of chronic HBV infection declined over time, particularly in children younger than 5 years, since the introduction of hepatitis B vaccination. HBV-related death rates also decreased, but HBV-related death counts increased as a result of population growth, ageing, and cohort effects. By 2019, many countries had met the interim seroprevalence target for children younger than 5 years, but few countries had met the WHO-GHSS interim targets for deaths and new cases. Progress according to all indicators must be accelerated to meet 2030 targets, and there are marked disparities in burden and progress across the world. HBV interventions, such as vaccination, testing, and treatment, must be strategically supported and scaled up to achieve elimination.
7.
  • Sharma, R., et al. (authors)
  • Global, regional, and national burden of colorectal cancer and its risk factors, 1990-2019: a systematic analysis for the Global Burden of Disease Study 2019
  • 2022
  • In: Lancet Gastroenterology & Hepatology. - : Elsevier BV. - 2468-1253. ; 7:7, pp. 627-647
  • Journal article (peer-reviewed) abstract
    • Background Colorectal cancer is the third leading cause of cancer deaths worldwide. Given the recent increasing trends in colorectal cancer incidence globally, up-to-date information on the colorectal cancer burden could guide screening, early detection, and treatment strategies, and help effectively allocate resources. We examined the temporal patterns of the global, regional, and national burden of colorectal cancer and its risk factors in 204 countries and territories across the past three decades. Methods Estimates of incidence, mortality, and disability-adjusted life years (DALYs) for colorectal cancer were generated as a part of the Global Burden of Diseases, Injuries and Risk Factors Study (GBD) 2019 by age, sex, and geographical location for the period 1990-2019. Mortality estimates were produced using the cause of death ensemble model. We also calculated DALYs attributable to risk factors that had evidence of causation with colorectal cancer. Findings Globally, between 1990 and 2019, colorectal cancer incident cases more than doubled, from 842 098 (95% uncertainty interval [UI] 810 408-868 574) to 2.17 million (2.00-2.34), and deaths increased from 518 126 (493 682-537 877) to 1.09 million (1.02-1.15). The global age-standardised incidence rate increased from 22.2 (95% UI 21.3-23.0) per 100 000 to 26.7 (24.6-28.9) per 100 000, whereas the age-standardised mortality rate decreased from 14.3 (13.5-14.9) per 100 000 to 13.7 (12.6-14.5) per 100 000 and the age-standardised DALY rate decreased from 308.5 (294.7-320.7) per 100 000 to 295.5 (275.2-313.0) per 100 000 from 1990 through 2019. Taiwan (province of China; 62.0 [48.9-80.0] per 100 000), Monaco (60.7 [48.5-73.6] per 100 000), and Andorra (56.6 [42.8-71.9] per 100 000) had the highest age-standardised incidence rates, while Greenland (31.4 [26.0-37.1] per 100 000), Brunei (30.3 [26.6-34.1] per 100 000), and Hungary (28.6 [23.6-34.0] per 100 000) had the highest age-standardised mortality rates. 
From 1990 through 2019, a substantial rise in incidence rates was observed in younger adults (age <50 years), particularly in high Socio-demographic Index (SDI) countries. Globally, a diet low in milk (15.6%), smoking (13.3%), a diet low in calcium (12.9%), and alcohol use (9.9%) were the main contributors to colorectal cancer DALYs in 2019. Interpretation The increase in incidence rates in people younger than 50 years requires vigilance from researchers, clinicians, and policy makers and a possible reconsideration of screening guidelines. The fast-rising burden in low SDI and middle SDI countries in Asia and Africa calls for colorectal cancer prevention approaches, greater awareness, and cost-effective screening and therapeutic options in these regions. Copyright (C) 2022 The Author(s). Published by Elsevier Ltd.
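The age-standardised rates quoted above come from direct standardisation: age-specific rates weighted by a standard population. A minimal sketch with hypothetical rates and weights:

```python
# Direct age-standardisation: weighted average of age-specific rates using
# standard population weights. Rates and weights below are hypothetical.

def age_standardised_rate(age_specific_rates, standard_pop_weights):
    # Weights are population shares, so they must sum to 1
    assert abs(sum(standard_pop_weights) - 1.0) < 1e-9
    return sum(r * w for r, w in zip(age_specific_rates, standard_pop_weights))

rates = [5.0, 40.0, 120.0]   # per-100 000 rates in three age bands
weights = [0.5, 0.3, 0.2]    # standard population shares
print(age_standardised_rate(rates, weights))  # approximately 38.5
```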
18.
  • Fages, A., et al. (authors)
  • Tracking Five Millennia of Horse Management with Extensive Ancient Genome Time Series
  • 2019
  • In: Cell. - : Elsevier BV. - 0092-8674. ; 177:6
  • Journal article (peer-reviewed) abstract
    • Horse domestication revolutionized warfare and accelerated travel, trade, and the geographic expansion of languages. Here, we present the largest DNA time series for a non-human organism to date, including genome-scale data from 149 ancient animals and 129 ancient genomes (>= 1-fold coverage), 87 of which are new. This extensive dataset allows us to assess the modern legacy of past equestrian civilisations. We find that two extinct horse lineages existed during early domestication, one at the far western (Iberia) and the other at the far eastern range (Siberia) of Eurasia. Neither of these contributed significantly to modern diversity. We show that the influence of Persian-related horse lineages increased following the Islamic conquests in Europe and Asia. Multiple alleles associated with elite racing, including at the MSTN "speed gene," only rose in popularity within the last millennium. Finally, the development of modern breeding impacted genetic diversity more dramatically than the previous millennia of human management.
19.
  • Wortman, J. R., et al. (authors)
  • The 2008 update of the Aspergillus nidulans genome annotation: A community effort
  • 2009
  • In: Fungal Genetics and Biology. - : Elsevier BV. - 1096-0937 .- 1087-1845. ; 46, pp. S2-S13
  • Journal article (peer-reviewed) abstract
    • The identification and annotation of protein-coding genes is one of the primary goals of whole-genome sequencing projects, and the accuracy of predicting the primary protein products of gene expression is vital to the interpretation of the available data and the design of downstream functional applications. Nevertheless, the comprehensive annotation of eukaryotic genomes remains a considerable challenge. Many genomes submitted to public databases, including those of major model organisms, contain significant numbers of wrong and incomplete gene predictions. We present a community-based reannotation of the Aspergillus nidulans genome with the primary goal of increasing the number and quality of protein functional assignments through the careful review of experts in the field of fungal biology. (C) 2009 Elsevier Inc. All rights reserved.
20.
  • Abdulov, N. A., et al. (authors)
  • TMDlib2 and TMDplotter: a platform for 3D hadron structure studies
  • 2021
  • In: European Physical Journal C. - : Springer Science and Business Media LLC. - 1434-6044 .- 1434-6052. ; 81:8
  • Journal article (peer-reviewed) abstract
    • A common library, TMDlib2, for Transverse-Momentum-Dependent distributions (TMDs) and unintegrated parton distributions (uPDFs) is described, which allows for easy access of commonly used TMDs and uPDFs, providing a three-dimensional (3D) picture of the partonic structure of hadrons. The tool TMDplotter allows for web-based plotting of distributions implemented in TMDlib2, together with collinear pdfs as available in LHAPDF.
21.
  • Carreras, A., et al. (authors)
  • In vivo genome and base editing of a human PCSK9 knock-in hypercholesterolemic mouse model
  • 2019
  • In: BMC Biology. - : Springer Science and Business Media LLC. - 1741-7007. ; 17
  • Journal article (peer-reviewed) abstract
    • Background Plasma concentration of low-density lipoprotein (LDL) cholesterol is a well-established risk factor for cardiovascular disease. Inhibition of proprotein convertase subtilisin/kexin type 9 (PCSK9), which regulates cholesterol homeostasis, has recently emerged as an approach to reduce cholesterol levels. The development of humanized animal models is an important step to validate and study human drug targets, and use of genome and base editing has been proposed as a means to target disease alleles. Results To address the lack of validated models to test the safety and efficacy of techniques to target human PCSK9, we generated a liver-specific human PCSK9 knock-in mouse model (hPCSK9-KI). We showed that plasma concentrations of total cholesterol were higher in hPCSK9-KI than in wildtype mice and increased with age. Treatment with evolocumab, a monoclonal antibody that targets human PCSK9, reduced cholesterol levels in hPCSK9-KI but not in wildtype mice, showing that the hypercholesterolemic phenotype was driven by overexpression of human PCSK9. CRISPR-Cas9-mediated genome editing of human PCSK9 reduced plasma levels of human but not mouse PCSK9, and in parallel reduced plasma concentrations of total cholesterol; genome editing of mouse Pcsk9 did not reduce cholesterol levels. Base editing using a guide RNA that targeted human and mouse PCSK9 reduced plasma levels of human and mouse PCSK9 and total cholesterol. In our mouse model, base editing was more precise than genome editing, and no off-target editing or chromosomal translocations were identified. Conclusions Here, we describe a humanized mouse model with liver-specific expression of human PCSK9 and a human-like hypercholesterolemia phenotype, and demonstrate that this mouse can be used to evaluate antibody and gene editing-based (genome and base editing) therapies to modulate the expression of human PCSK9 and reduce cholesterol levels. 
We predict that this mouse model will be used in the future to understand the efficacy and safety of novel therapeutic approaches for hypercholesterolemia.
22.
  • Abdulhamid, M. I., et al. (authors)
  • Azimuthal correlations of high transverse momentum jets at next-to-leading order in the parton branching method
  • 2022
  • In: European Physical Journal C. - : Springer Science and Business Media LLC. - 1434-6044 .- 1434-6052. ; 82:1
  • Journal article (peer-reviewed) abstract
    • The azimuthal correlation, Δϕ12, of high transverse momentum jets in pp collisions at √s = 13 TeV is studied by applying PB-TMD distributions to NLO calculations via MCatNLO together with the PB-TMD parton shower. A very good description of the cross section as a function of Δϕ12 is observed. In the back-to-back region of Δϕ12 → π, very good agreement is observed with the PB-TMD Set 2 distributions, while significant deviations are obtained with the PB-TMD Set 1 distributions. Set 1 uses the evolution scale while Set 2 uses transverse momentum as the argument of αs, and the above observation therefore confirms the importance of an appropriate soft-gluon coupling in angular-ordered parton evolution. The total uncertainties of the predictions are dominated by the scale uncertainties of the matrix element, while the uncertainties coming from the PB-TMDs and the corresponding PB-TMD shower are very small. The Δϕ12 measurements are also compared with predictions using MCatNLO together with Pythia8, illustrating the importance of details of the parton shower evolution.
23.
  • Wimberger, Sandra, 1987, et al. (authors)
  • Simultaneous inhibition of DNA-PK and Pol ϴ improves integration efficiency and precision of genome editing
  • 2023
  • In: Nature Communications. ; 14:1
  • Journal article (peer-reviewed) abstract
    • Genome editing, specifically CRISPR/Cas9 technology, has revolutionized biomedical research and offers potential cures for genetic diseases. Despite rapid progress, low efficiency of targeted DNA integration and generation of unintended mutations represent major limitations for genome editing applications caused by the interplay with DNA double-strand break repair pathways. To address this, we conduct a large-scale compound library screen to identify targets for enhancing targeted genome insertions. Our study reveals DNA-dependent protein kinase (DNA-PK) as the most effective target to improve CRISPR/Cas9-mediated insertions, confirming previous findings. We extensively characterize AZD7648, a selective DNA-PK inhibitor, and find it to significantly enhance precise gene editing. We further improve integration efficiency and precision by inhibiting DNA polymerase theta (Pol ϴ). The combined treatment, named 2iHDR, boosts templated insertions to 80% efficiency with minimal unintended insertions and deletions. Notably, 2iHDR also reduces off-target effects of Cas9, greatly enhancing the fidelity and performance of CRISPR/Cas9 gene editing. Low efficiency of target DNA integration remains a challenge in genome engineering. Here the authors perform large-scale compound library and genetic screens to identify targets that enhance gene editing: they see that combined DNA-PK and Pol ϴ inhibition with potent compounds increases editing efficiency and precision.
24.
  • Taheri, M., et al. (authors)
  • DeepAxe: A Framework for Exploration of Approximation and Reliability Trade-offs in DNN Accelerators
  • 2023
  • In: Proceedings - International Symposium on Quality Electronic Design, ISQED. - : IEEE Computer Society. - 9798350334753
  • Conference paper (peer-reviewed) abstract
    • While the role of Deep Neural Networks (DNNs) in a wide range of safety-critical applications is expanding, emerging DNNs experience massive growth in computation power requirements. This raises the necessity of improving the reliability of DNN accelerators while reducing the computational burden on the hardware platforms, i.e. reducing energy consumption and execution time and increasing the efficiency of DNN accelerators. Therefore, the trade-off between hardware performance, i.e. area, power, and delay, and the reliability of the DNN accelerator implementation becomes critical and requires tools for analysis. In this paper, we propose DeepAxe, a framework for design space exploration of FPGA-based implementations of DNNs that considers the trilateral impact of applying functional approximation on accuracy, reliability, and hardware performance. The framework enables selective approximation of reliability-critical DNNs, providing a set of Pareto-optimal DNN implementation design space points for the target resource utilization requirements. The design flow starts with a pre-trained network in Keras, uses an innovative high-level synthesis environment, DeepHLS, and results in a set of Pareto-optimal design space points as a guide for the designer. The framework is demonstrated on a case study of custom and state-of-the-art DNNs and datasets. 
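The Pareto-optimal design space points that the abstract mentions can be obtained with a standard dominance filter. The sketch below uses hypothetical (area, error) pairs and is not DeepAxe's implementation.

```python
# Standard Pareto-dominance filter over design points, assuming both
# objectives (here: area, error) are to be minimised. Points are invented.

def pareto_front(points):
    def dominated(p, q):
        # q dominates p: no worse in every objective, and q differs from p
        return all(qi <= pi for pi, qi in zip(p, q)) and q != p
    return [p for p in points if not any(dominated(p, q) for q in points)]

designs = [(10, 0.05), (8, 0.09), (12, 0.04), (9, 0.05), (11, 0.10)]
print(pareto_front(designs))  # keeps (8, 0.09), (12, 0.04) and (9, 0.05)
```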
25.
  • Ahmadilivani, M. H., et al. (authors)
  • A Systematic Literature Review on Hardware Reliability Assessment Methods for Deep Neural Networks
  • 2024
  • In: ACM Computing Surveys. - : Association for Computing Machinery. - 0360-0300 .- 1557-7341. ; 56:6
  • Journal article (peer-reviewed) abstract
    • Artificial Intelligence (AI) and, in particular, Machine Learning (ML), have emerged to be utilized in various applications due to their capability to learn how to solve complex problems. Over the past decade, rapid advances in ML have produced Deep Neural Networks (DNNs) consisting of a large number of neurons and layers. DNN Hardware Accelerators (DHAs) are leveraged to deploy DNNs in target applications. Safety-critical applications, where hardware faults/errors would result in catastrophic consequences, also benefit from DHAs. Therefore, the reliability of DNNs is an essential subject of research. In recent years, several studies have been published that assess the reliability of DNNs, and various reliability assessment methods have been proposed for a variety of platforms and applications. Hence, there is a need to summarize the state of the art and identify the gaps in the study of DNN reliability. In this work, we conduct a Systematic Literature Review (SLR) on the reliability assessment methods of DNNs to collect as many relevant research works as possible, present a categorization of them, and address the open challenges. Through this SLR, three kinds of methods for reliability assessment of DNNs are identified: Fault Injection (FI), Analytical, and Hybrid methods. Since the majority of works assess DNN reliability by FI, we comprehensively characterize the different approaches and platforms of the FI method. Analytical and Hybrid methods are also examined, and the identified methods are compared in terms of the DNN platforms they target and the reliability evaluation metrics they use. Finally, we highlight the advantages and disadvantages of the identified methods and address the open challenges in the research area. 
We have concluded that Analytical and Hybrid methods are light-weight yet sufficiently accurate and have the potential to be extended in future research and to be utilized in establishing novel DNN reliability assessment frameworks.
  •  
26.
  • Ahmadilivani, M. H., et al. (författare)
  • Enhancing Fault Resilience of QNNs by Selective Neuron Splitting
  • 2023
  • Ingår i: AICAS 2023 - IEEE International Conference on Artificial Intelligence Circuits and Systems, Proceeding. - : Institute of Electrical and Electronics Engineers Inc.. - 9798350332674
  • Konferensbidrag (refereegranskat)abstract
    • The superior performance of Deep Neural Networks (DNNs) has led to their application in various aspects of human life. Safety-critical applications are no exception and impose rigorous reliability requirements on DNNs. Quantized Neural Networks (QNNs) have emerged to tackle the complexity of DNN accelerators; however, they are more prone to reliability issues. In this paper, a recent analytical resilience assessment method is adapted for QNNs to identify critical neurons based on a Neuron Vulnerability Factor (NVF). Thereafter, a novel method for splitting the critical neurons is proposed that enables the design of a Lightweight Correction Unit (LCU) in the accelerator without redesigning its computational part. The method is validated by experiments on different QNNs and datasets. The results demonstrate that the proposed fault-correction method incurs half the overhead of a selective Triple Modular Redundancy (TMR) while achieving a similar level of fault resiliency.
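The core "neuron splitting" idea in this abstract can be illustrated with a toy sketch: a critical neuron is duplicated with halved outgoing weights, so each copy carries half the contribution and a fault in one copy perturbs the downstream sum by at most half. The function name, the plain-list layer representation, and the split rule below are illustrative assumptions; the paper's actual NVF computation and LCU design are not reproduced here.

```python
# Toy sketch of neuron splitting for fault resilience in a plain fully
# connected layer. Names and data layout are hypothetical illustrations.

def split_neuron(layer_weights, next_in_weights, idx):
    """Duplicate neuron `idx`: same incoming weights, halved outgoing ones."""
    # layer_weights: rows = neurons of this layer (each row = incoming weights)
    # next_in_weights: rows = next-layer neurons, columns = this layer's outputs
    new_layer = layer_weights + [list(layer_weights[idx])]   # clone the neuron
    new_next = []
    for row in next_in_weights:
        half = row[idx] / 2.0
        # Replace the original outgoing weight by half, append the clone's half.
        new_next.append(row[:idx] + [half] + row[idx + 1:] + [half])
    return new_layer, new_next

layer = [[1.0, 2.0], [3.0, 4.0]]   # two neurons, two incoming weights each
nxt = [[0.5, -1.0]]                # one downstream neuron
new_layer, new_nxt = split_neuron(layer, nxt, idx=1)
assert new_nxt == [[0.5, -0.5, -0.5]]   # downstream sum unchanged: -1.0 = -0.5 + -0.5
assert len(new_layer) == 3
```

The fault-free output of the network is unchanged by the split, which is why the technique needs no retraining, only a correction unit that compares the two halves.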
  •  
27.
  • Al-Wathinani, A. M., et al. (författare)
  • Raising Awareness of Hearing and Communication Disorders Among Emergency Medical Services Students: Are Knowledge Translation Workshops Useful?
  • 2022
  • Ingår i: Disaster Medicine and Public Health Preparedness. - : Cambridge University Press (CUP). - 1935-7893 .- 1938-744X. ; 17
  • Tidskriftsartikel (refereegranskat)abstract
    • Objective: In numerous countries, emergency medical services (EMS) students receive curriculum training in effective patient-provider communication, but most of this training assumes patients have intact communication capabilities, leading to a lack of preparedness to interact with patients who have communication disorders. In such cases, first responders could end up delivering suboptimal care or possibly wrong procedures that could harm the disabled person. Method: A quasi-experimental design (pretest-posttest) was used to assess the knowledge of EMS students both before and after a translation workshop on how to deal with patients who have hearing and communication disorders during emergencies. Comparisons between pretest and posttest scores were examined using the Wilcoxon signed rank test. The level of knowledge scores was compared before and after the workshop. Results: The results indicated that EMS students' scores improved after the workshop. There was a 0.763 increase in the average score of knowledge level. The results of this study show that knowledge translation workshops are a useful intervention to enhance the level of knowledge among EMS students when interacting with patients who have hearing and communication disorders. Conclusions: Our results show that such training workshops lead to better performance. Communication is a vital element in a medical encounter between health care providers and patients at all levels of health care, but specifically in the prehospital arena. Insufficient or lacking communication with a vulnerable population, who may suffer from various disabilities, has a significant impact on the outcome of treatment or emergency management.
  •  
28.
  • Angeles-Martinez, R., et al. (författare)
  • Transverse Momentum Dependent (TMD) Parton Distribution Functions: Status and Prospects
  • 2015
  • Ingår i: Acta Physica Polonica. Series B: Elementary Particle Physics, Nuclear Physics, Statistical Physics, Theory of Relativity, Field Theory. - 0587-4254. ; 46:12, s. 2501-2534
  • Tidskriftsartikel (refereegranskat)abstract
    • We review transverse momentum dependent (TMD) parton distribution functions, their application to topical issues in high-energy physics phenomenology, and their theoretical connections with QCD resummation, evolution and factorization theorems. We illustrate the use of TMDs via examples of multi-scale problems in hadronic collisions. These include transverse momentum q(T) spectra of Higgs and vector bosons for low q(T), and azimuthal correlations in the production of multiple jets associated with heavy bosons at large jet masses. We discuss computational tools for TMDs, and present the application of a new tool, TMDLIB, to parton density fits and parameterizations.
  •  
29.
  •  
30.
  • Martinsson, U., et al. (författare)
  • Why are not ALL Paediatric Patients Treated With Protons? : A Complete National Cohort From Sweden 2016-2019
  • 2020
  • Ingår i: Pediatric Blood & Cancer. - : John Wiley & Sons. - 1545-5009 .- 1545-5017. ; 67:S4, s. S79-S79
  • Tidskriftsartikel (övrigt vetenskapligt/konstnärligt)abstract
    • Background and Aims: A proton therapy (PT) facility – the Skandion clinic – opened in Uppsala in August 2015. It was stated that all children in Sweden benefitting from PT ought to be sent there. PT plans are prepared at six university-based radiotherapy (RT) centres and assessed by a national board. There are no additional costs for the families compared to other RT options, not even for travel and lodging. Methods: Since 2008 all children receiving RT with any modality are registered in the Swedish Radtox registry. Inclusion is population-based, prospective and complete. Radiation oncologists from the six centres performing paediatric RT retrospectively reviewed patients who had not received PT. Results: 354 treatments were given, 252 (71%) were not PT. The reasons for choosing conventional RT were dismal prognosis in 66 patients, uncertainty due to internal movement and lack of motion control technique at the PT centre in 63 patients, and lack of dosimetric advantage for protons in 47 patients. Infrequent reasons were lack of set-up for CSI or very superficial treatment in the beginning [n=11], variations of air in the field [n=6], not robust treatment plan for other reasons (mainly metal in the field) [n=6], gamma-knife/other stereotactic treatment [n=5], brachytherapy [n=4], TBI [n=31], acute RT start necessary [n=10], and social reasons [n=3]. Conclusions: Even though proton therapy, due to fewer side effects, has been advocated as the best treatment modality for children, this might not always be the case. Sometimes conventional RT may be advantageous, despite the increased exit dose, due to the wider penumbra of PT. Social needs and palliative situations may be more important than an optimal dose distribution. However, technical improvement of PT by application of gating and treatment planning systems (TPS) handling dose-deposition uncertainties should make the modality available for a wider range of paediatric patients.
  •  
31.
  • Taheri, M., et al. (författare)
  • AdAM : Adaptive Fault-Tolerant Approximate Multiplier for Edge DNN Accelerators
  • 2024
  • Ingår i: Proceedings of the European Test Workshop. - : Institute of Electrical and Electronics Engineers (IEEE). - 9798350349320
  • Konferensbidrag (refereegranskat)abstract
    • Multiplication is the most resource-hungry operation in the neural network's processing elements. In this paper, we propose an architecture of a novel adaptive fault-tolerant approximate multiplier tailored for ASIC-based DNN accelerators. AdAM employs an adaptive adder relying on an unconventional use of the leading one position value of the inputs for fault detection through the optimization of unutilized adder resources. The proposed architecture uses a lightweight fault mitigation technique that sets the detected faulty bits to zero. The hardware resource utilization and the DNN accelerator's reliability metrics are used to compare the proposed solution against the triple modular redundancy (TMR) in multiplication, unprotected exact multiplication, and unprotected approximate multiplication. It is demonstrated that the proposed architecture enables a multiplication with a reliability level close to the multipliers protected by TMR utilizing 63.54% less area and having 39.06% lower power-delay product compared to the exact multiplier.
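The triple modular redundancy (TMR) baseline that AdAM is compared against can be sketched in a few lines: three copies of the multiplier run in parallel and a bitwise majority vote masks any fault confined to one copy. The function names and the 16-bit product width below are illustrative assumptions, not details from the paper.

```python
# Toy sketch of TMR-protected multiplication with a single injected bit-flip.

def faulty_mul(a, b, flip_bit=None):
    """Exact 8x8-bit multiply, optionally with one bit-flip in the product."""
    p = (a * b) & 0xFFFF
    if flip_bit is not None:
        p ^= 1 << flip_bit           # inject a single-bit transient fault
    return p

def tmr_mul(a, b, faulty_copy=None, flip_bit=None):
    """Triple modular redundancy: three multipliers + bitwise majority vote."""
    results = []
    for copy in range(3):
        bit = flip_bit if copy == faulty_copy else None
        results.append(faulty_mul(a, b, bit))
    r0, r1, r2 = results
    # Majority vote per bit: a fault in any single copy is outvoted.
    return (r0 & r1) | (r0 & r2) | (r1 & r2)

a, b = 173, 201
golden = (a * b) & 0xFFFF
assert faulty_mul(a, b, flip_bit=5) != golden              # unprotected: corrupted
assert tmr_mul(a, b, faulty_copy=1, flip_bit=5) == golden  # TMR: fault masked
```

The cost of this protection is roughly three multipliers plus a voter, which is the area/power overhead AdAM's adaptive fault detection is designed to undercut.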
  •  
32.
  • Taheri, M., et al. (författare)
  • SAFFIRA : a Framework for Assessing the Reliability of Systolic-Array-Based DNN Accelerators
  • 2024
  • Ingår i: 2024 27th International Symposium on Design & Diagnostics of Electronic Circuits & Systems (DDECS). - : Institute of Electrical and Electronics Engineers Inc.. - 9798350359343 ; , s. 19-24
  • Konferensbidrag (refereegranskat)abstract
    • The systolic array has emerged as a prominent architecture for Deep Neural Network (DNN) hardware accelerators, providing the high-throughput and low-latency performance essential for deploying DNNs across diverse applications. However, when used in safety-critical applications, reliability assessment is mandatory to guarantee the correct behavior of DNN accelerators. While fault injection stands out as a well-established, practical and robust method for reliability assessment, it is still a very time-consuming process. This paper addresses the time efficiency issue by introducing a novel hierarchical software-based hardware-aware fault injection strategy tailored for systolic array-based DNN accelerators. A Uniform Recurrent Equations system is used for software modeling of the systolic-array core of the DNN accelerators. The approach demonstrates a reduction of the fault injection time of up to 3× compared to state-of-the-art hybrid (software/hardware) hardware-aware fault injection frameworks, and more than 2000× compared to RT-level fault injection frameworks - without compromising accuracy. Additionally, we propose and evaluate a new reliability metric through experimental assessment. The performance of the framework is studied on state-of-the-art DNN benchmarks.
  •  
33.
  • van Kampen, A. M., et al. (författare)
  • Boson-jet and jet-jet azimuthal correlations at high transverse momenta
  • 2022
  • Ingår i: Proceedings of Science. ; 414
  • Konferensbidrag (refereegranskat)abstract
    • We discuss our recent results on azimuthal distributions in vector boson + jets and multi-jet production at the LHC, obtained from the matching of next-to-leading order (NLO) perturbative matrix elements with transverse momentum dependent (TMD) parton branching. We present a comparative analysis of boson-jet and jet-jet correlations in the back-to-back region, and a study of the theoretical systematic uncertainties associated with the matching scale in the cases of TMD and collinear parton showers.
  •  
34.
  • Yang, H., et al. (författare)
  • Back-to-back azimuthal correlations in Z + jet events at high transverse momentum in the TMD parton branching method at next-to-leading order
  • 2022
  • Ingår i: European Physical Journal C. - : Springer Science and Business Media LLC. - 1434-6044 .- 1434-6052. ; 82:8
  • Tidskriftsartikel (refereegranskat)abstract
    • Azimuthal correlations in Z + jet production at large transverse momenta are computed by matching Parton-Branching (PB) TMD parton distributions and showers with NLO calculations via MCatNLO. The predictions are compared with those for dijet production in the same kinematic range. The azimuthal correlations Δ ϕ between the Z boson and the leading jet are steeper compared to those in dijet production at transverse momenta O(100) GeV , while they become similar for very high transverse momenta O(1000) GeV . The different patterns of Z + jet and dijet azimuthal correlations can be used to search for potential factorization-breaking effects in the back-to-back region, which depend on the different color and spin structure of the final states and their interferences with the initial states. In order to investigate these effects experimentally, we propose to measure the ratio of the distributions in Δ ϕ for Z + jet- and multijet production at low and at high transverse momenta, and compare the results to predictions obtained assuming factorization. We examine the role of theoretical uncertainties by performing variations of the factorization scale, renormalization scale and matching scale. In particular, we present a comparative study of matching scale uncertainties in the cases of PB-TMD and collinear parton showers.
  •  
35.
  • Al-Dulaimy, Auday, et al. (författare)
  • MultiScaler : A Multi-Loop Auto-Scaling Approach for Cloud-Based Applications
  • 2020
  • Ingår i: IEEE Transactions on Cloud Computing. - : Institute of Electrical and Electronics Engineers (IEEE). - 2168-7161.
  • Tidskriftsartikel (refereegranskat)abstract
    • Cloud computing offers a wide range of services through a pool of heterogeneous Physical Machines (PMs) hosted on cloud data centers, where each PM can host several Virtual Machines (VMs). Resource sharing among VMs comes with major benefits, but it can create technical challenges that have a detrimental effect on the performance. To ensure a specific service level requested by the cloud-based applications, there is a need for an approach to assign adequate resources to each VM. To this end, we present our novel Multi-Loop Control approach, called MultiScaler, to allocate resources to VMs based on the Service Level Agreement (SLA) requirements and the run-time conditions. MultiScaler is mainly composed of three different levels working closely with each other to achieve an optimal resource allocation. We propose a set of tailor-made controllers to monitor VMs and take actions accordingly to regulate contention among collocated VMs, to reallocate resources if required, and to migrate VMs from one PM to another. The evaluation in a VMware cluster has shown that the MultiScaler approach can meet applications' performance goals and guarantee the SLA by assigning the exact resources that the applications require. Compared with sophisticated baselines, MultiScaler produces significantly better reaction to changes in workloads even under the presence of noisy neighbors.
  •  
36.
  • Aslanpour, M. S., et al. (författare)
  • AutoScaleSim : A simulation toolkit for auto-scaling Web applications in clouds
  • 2021
  • Ingår i: Simulation Modelling Practice and Theory. - : Elsevier. - 1569-190X .- 1878-1462. ; 108
  • Tidskriftsartikel (refereegranskat)abstract
    • Auto-scaling of Web applications is an extensively investigated issue in cloud computing. To evaluate auto-scaling mechanisms, the cloud community faces considerable challenges on either real cloud platforms or custom test-beds. Challenges include – but are not limited to – deployment impediments, the complexity of setting parameters, and most importantly, the cost of hosting and testing Web applications on a massive scale. Hence, simulation is presently one of the most popular evaluation solutions to overcome these obstacles. Existing simulators, however, fail to provide support for hosting, deploying and subsequently auto-scaling Web applications. In this paper, we introduce AutoScaleSim, which extends the existing CloudSim simulator to support auto-scaling of Web applications in cloud environments in a customizable, extendable and scalable manner. Using AutoScaleSim, the cloud community can freely implement/evaluate policies for all four phases of auto-scaling mechanisms, that is, Monitoring, Analysis, Planning and Execution. AutoScaleSim can also be used for evaluating load balancing algorithms similarly. We conducted a set of experiments to validate and carefully evaluate the performance of AutoScaleSim in a real cloud platform, with a wide range of performance metrics.
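The four-phase Monitoring-Analysis-Planning-Execution (MAPE) loop named in this abstract can be sketched as a single threshold-based control step. The thresholds, VM counts and function name below are invented for illustration; they are not values or interfaces from AutoScaleSim.

```python
# Minimal sketch of one MAPE auto-scaling step. All parameters are
# illustrative assumptions, not defaults from the simulator.

def mape_step(cpu_utilization, vms, upper=0.8, lower=0.3, min_vms=1):
    # Monitor: cpu_utilization is the observed average across the VM pool.
    # Analyse: compare against scale-out / scale-in thresholds.
    if cpu_utilization > upper:
        decision = "scale-out"
    elif cpu_utilization < lower and vms > min_vms:
        decision = "scale-in"
    else:
        decision = "no-op"
    # Plan + Execute: adjust the pool by one instance per step.
    if decision == "scale-out":
        vms += 1
    elif decision == "scale-in":
        vms -= 1
    return decision, vms

assert mape_step(0.92, vms=2) == ("scale-out", 3)
assert mape_step(0.10, vms=2) == ("scale-in", 1)
assert mape_step(0.50, vms=2) == ("no-op", 2)
```

Simulators of this kind let users swap in their own policy for each of the four phases; the value of the toolkit is that each phase is a pluggable component rather than a fixed rule like the one above.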
  •  
37.
  • Calvo, J.C., et al. (författare)
  • A Method to Improve the Accuracy of Protein Torsion Angles
  • 2011
  • Ingår i: International Conference on Bioinformatics Models, Methods and Algorithms (Bioinformatics-2011). - Rome, Italy : SciTePress. - 9789898425362 ; , s. 297-300
  • Konferensbidrag (refereegranskat)
  •  
38.
  • Galletta, A., et al. (författare)
  • On the applicability of secret share algorithms for saving data on iot, edge and cloud devices
  • 2019
  • Ingår i: Proceedings - 2019 IEEE International Congress on Cybermatics. - : IEEE. - 9781728129808 ; , s. 14-21
  • Konferensbidrag (refereegranskat)abstract
    • A common practice to store data is to use remote Cloud-based storage systems. However, storing files in remote services can raise privacy and security issues; for example, such services can be attacked or even discontinued. A possible solution to this problem is to split files into chunks and add redundancy by means of Secret Share techniques. When it comes to Internet of Things (IoT), Edge and Cloud environments, these techniques have not been evaluated for the purpose of storing files. This work aims to address this issue by evaluating two of the most common Secret Share algorithms in order to identify their suitability for different environments, while considering the size of the file and the availability of resources. In particular, we analysed Shamir's Secret Share schema and the Redundant Residue Number System (RRNS) to gauge their efficiency regarding storage requirement and execution time. We made our experiments for different file sizes (from 1kB up to 500MB), number of parallel threads (1 to 4) and data redundancy (0 to 7) in all aforementioned environments. Results were promising and showed that, for example, to have seven degrees of redundancy, Shamir uses eight times more storage than RRNS; or, Shamir is faster than RRNS for small files (up to 20 kB). We also discovered that the environment on which the computation should be performed depends on both file size and algorithm. For instance, when employing RRNS, files up to 500kB can be processed on the IoT, up to 50MB on the Edge, and beyond that on the Cloud; whereas, in Shamir's schema, the threshold to move the computation from the IoT to the Edge is about 50kB, and from the Edge to the Cloud is about 500kB.
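Shamir's scheme, one of the two algorithms evaluated in this entry, splits a secret into n shares such that any k of them reconstruct it, via a random degree-(k-1) polynomial over a prime field. The sketch below is a toy illustration with an arbitrarily chosen prime, not the implementation benchmarked in the paper.

```python
import random

# Minimal (k, n) Shamir secret sharing over a prime field (toy sketch).
PRIME = 2**127 - 1  # a Mersenne prime, large enough for small secrets

def make_shares(secret, k, n, prime=PRIME):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(prime) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):       # Horner evaluation mod prime
            y = (y * x + c) % prime
        shares.append((x, y))
    return shares

def reconstruct(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % prime
                den = (den * (xi - xj)) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

shares = make_shares(20240315, k=3, n=5)
assert reconstruct(shares[:3]) == 20240315    # any 3 of the 5 shares suffice
assert reconstruct(shares[1:4]) == 20240315
```

The storage-overhead finding in the abstract follows directly from the construction: every Shamir share is as large as the secret itself, so n shares cost n times the original size, whereas RRNS residues are each smaller than the input.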
  •  
39.
  • Geerts, Jaason M., et al. (författare)
  • Guidance for Health Care Leaders During the Recovery Stage of the COVID-19 Pandemic A Consensus Statement
  • 2021
  • Ingår i: JAMA Network Open. - : American Medical Association. - 2574-3805. ; 4:7
  • Tidskriftsartikel (refereegranskat)abstract
    • IMPORTANCE: The COVID-19 pandemic is the greatest global test of health leadership of our generation. There is an urgent need to provide guidance for leaders at all levels during the unprecedented preresolution recovery stage.OBJECTIVE: To create an evidence- and expertise-informed framework of leadership imperatives to serve as a resource to guide health and public health leaders during the postemergency stage of the pandemic.EVIDENCE REVIEW: A literature search in PubMed, MEDLINE, and Embase revealed 10 910 articles published between 2000 and 2021 that included the terms leadership and variations of emergency, crisis, disaster, pandemic, COVID-19, or public health. Using the Standards for Quality Improvement Reporting Excellence reporting guideline for consensus statement development, this assessment adopted a 6-round modified Delphi approach involving 32 expert coauthors from 17 countries who participated in creating and validating a framework outlining essential leadership imperatives.FINDINGS: The 10 imperatives in the framework are: (1) acknowledge staff and celebrate successes; (2) provide support for staff well-being; (3) develop a clear understanding of the current local and global context, along with informed projections; (4) prepare for future emergencies (personnel, resources, protocols, contingency plans, coalitions, and training); (5) reassess priorities explicitly and regularly and provide purpose, meaning, and direction; (6) maximize team, organizational, and system performance and discuss enhancements; (7) manage the backlog of paused services and consider improvements while avoiding burnout and moral distress; (8) sustain learning, innovations, and collaborations, and imagine future possibilities; (9) provide regular communication and engender trust; and (10) in consultation with public health and fellow leaders, provide safety information and recommendations to government, other organizations, staff, and the community to improve equitable 
and integrated care and emergency preparedness systemwide.CONCLUSIONS AND RELEVANCE: Leaders who most effectively implement these imperatives are ideally positioned to address urgent needs and inequalities in health systems and to cocreate with their organizations a future that best serves stakeholders and communities.
  •  
40.
  • Hart, James L., et al. (författare)
  • Electron-beam-induced ferroelectric domain behavior in the transmission electron microscope : Toward deterministic domain patterning
  • 2016
  • Ingår i: Physical Review B. - 2469-9950 .- 2469-9969. ; 94:17
  • Tidskriftsartikel (refereegranskat)abstract
    • We report on transmission electron microscope beam-induced ferroelectric domain nucleation and motion. While previous observations of this phenomenon have been reported, a consistent theory explaining induced domain response is lacking, and little control over domain behavior has been demonstrated. We identify positive sample charging, a result of Auger and secondary electron emission, as the underlying mechanism driving domain behavior. By converging the electron beam to a focused probe, we demonstrate controlled nucleation of nanoscale domains. Molecular dynamics simulations performed are consistent with experimental results, confirming positive sample charging and reproducing the result of controlled domain nucleation. Furthermore, we discuss the effects of sample geometry and electron irradiation conditions on induced domain response. These findings elucidate past reports of electron beam-induced domain behavior in the transmission electron microscope and provide a path towards more predictive, deterministic domain patterning through electron irradiation.
  •  
41.
  • Hoseiny Farahabady, M. Reza, et al. (författare)
  • Data-Intensive Workload Consolidation in Serverless (Lambda/FaaS) Platforms
  • 2021
  • Ingår i: 2021 IEEE 20TH INTERNATIONAL SYMPOSIUM ON NETWORK COMPUTING AND APPLICATIONS (NCA). - : Institute of Electrical and Electronics Engineers (IEEE). - 9781665495509 ; , s. 1-8
  • Konferensbidrag (refereegranskat)abstract
    • A significant amount of research in the past years has been devoted to developing efficient mechanisms to control the level of degradation among consolidated workloads in a shared platform. Workload consolidation is a promising feature employed by most service providers to reduce total operating costs in traditional computing systems [1]-[3]. The serverless paradigm - also known as Function as a Service, FaaS, and Lambda - recently emerged as a new virtualization run-time model that disentangles the traditional state of applications' users from the burden of provisioning physical computing resources, leaving the difficulty of providing adequate resource capacity on the service provider's side. This paper focuses on a number of challenges associated with workload consolidation when a serverless platform is expected to execute several data-intensive functional units. Each functional unit is considered to be the atomic component that reacts to a stream of input data. A serverless application in the proposed model is composed of a series of functional units. Through a systematic approach, we highlight the main challenges for devising an efficient workload consolidation process in a data-intensive serverless platform. To this end, we first study the performance interference among multiple workloads competing for the capacity of the last level cache (LLC). We show how such contention among workloads can lead to a significant throughput degradation on a single physical server. We expand our investigation into a general case with the aim of preventing the total throughput from falling below a predefined utilization level. Based on the empirical results, we develop a consolidation model and then design a computationally efficient controller to optimize the throughput degradation across a platform consisting of multiple machines. 
The performance evaluation is conducted using modern workloads inspired by data management services and data analytic benchmark tools on our in-house four-node platform, showing the efficiency of the proposed solution in mitigating the QoS violation rate for high-priority applications by 90% while enhancing the normalized throughput usage of disk devices by 39%.
  •  
42.
  • Hoseinyfarahabady, M. R., et al. (författare)
  • A Dynamic Resource Controller for Resolving Quality of Service Issues in Modern Streaming Processing Engines
  • 2020
  • Ingår i: 2020 IEEE 19th International Symposium on Network Computing and Applications, NCA 2020. - : Institute of Electrical and Electronics Engineers (IEEE). - 9781728183268
  • Konferensbidrag (refereegranskat)abstract
    • Devising an elastic resource allocation controller for data analytical applications in virtualized data-centers has received great attention recently, mainly due to the fact that even a slight performance improvement can translate to huge monetary savings in practical large-scale execution. Apache Flink is among the modern streamed data processing run-times that can provide both low-latency and high-throughput computation to execute processing pipelines over high-volume and high-velocity data items under tight latency constraints. However, a yet to be answered challenge in a large-scale platform with tens of worker nodes is how to resolve run-time violations of the quality of service (QoS) level in multi-tenant data streaming platforms, particularly when the amount of workload generated by different users fluctuates. Studies showed that the static resource allocation algorithm (round-robin), which is used by default in Apache Flink, suffers from a lack of responsiveness to sudden traffic surges happening unpredictably during run-time. In this paper, we address the problem of resource management in a Flink platform for ensuring different QoS enforcement levels in a platform with shared computing resources. The proposed solution applies theoretical principles borrowed from closed-loop control theory to design a CPU and memory adjustment mechanism with the primary goal of fulfilling the different QoS levels requested by submitted applications, while resource interference is considered the critical performance-limiting factor. The performance evaluation is carried out by comparing the proposed resource allocation mechanism with two static heuristics (round robin and class-based weighted fair queuing) in an 80-core cluster under multiple traffic patterns resembling sudden changes in the incoming workloads of low-priority streaming applications. 
The experimental results confirm the stability of the proposed controller in regulating the underlying platform resources to smoothly follow the target values (QoS violation rates). In particular, the proposed solution achieves higher efficiency compared to the other heuristics by reducing the response time of high-priority applications by 53% while maintaining the enforced QoS levels during burst traffic periods.
  •  
43.
  • Hoseinyfarahabady, M. R., et al. (författare)
  • Auto-tuning of large-scale iterative operations on modern streaming platforms
  • 2020
  • Ingår i: CoNEXT 2020 - Proceedings of the 16th International Conference on Emerging Networking EXperiments and Technologies. - New York, NY, USA : Association for Computing Machinery (ACM). - 9781450379489 ; , s. 554-555
  • Konferensbidrag (refereegranskat)abstract
    • As more analytical applications today require real-time processing over high-volume data streams, finding optimal implementations of traditional algorithms that involve iterative computations is gaining popularity and becoming crucial in most commercial contexts, particularly in edge processing and cloud applications. In this work, we propose an auto-tuning mechanism for enhancing the run-time performance of real-world iterative and cyclic stream processing applications (with Multi-Join Operation as the study case) that correctly adjusts the right performance bounds for workloads with different characteristics and data sizes running on a modern streaming data processing platform.
  •  
44.
  • HoseinyFarahabady, M. R., et al. (författare)
  • Graceful Performance Degradation in Apache Storm
  • 2021
  • Ingår i: Parallel and Distributed Computing, Applications and Technologies. - Cham : Springer Science+Business Media B.V.. - 9783030692438 ; , s. 389-400
  • Konferensbidrag (refereegranskat)abstract
    • Stream data processing is becoming challenging in most business sectors, where organisations try to improve their operational efficiency by deriving valuable information from unstructured yet continuously generated high-volume raw data within expected time spans. A modern streamlined data processing platform is required to execute analytical pipelines over a continuous flow of data items that might arrive at a high rate. In most cases, the platform is also expected to adapt dynamically to the characteristics of the incoming traffic rates and the ever-changing condition of underlying computational resources, while fulfilling the tight latency constraints imposed by the end-users. Apache Storm has emerged as an important open source technology for performing stream processing with very tight latency constraints over a cluster of computing nodes. To increase the overall resource utilization, however, the service provider might be tempted to use a consolidation strategy to pack as many applications as possible into a (cloud-centric) cluster with a limited number of working nodes. However, collocated applications compete with each other for the capacity of shared resources in a shared platform, which in turn may lead to severe performance degradation among all running applications. The main objective of this work is to develop an elastic solution in a modern stream processing ecosystem for addressing the shared resource contention problem among collocated applications. We propose a mechanism, based on the design principles of Model Predictive Control theory, for coping with the extreme conditions in which the collocated analytical applications have different quality of service (QoS) levels while shared-resource interference is considered a key performance-limiting parameter. 
Experimental results confirm that the proposed controller can successfully enhance the p-99 latency of high-priority applications by 67%, compared to the default round-robin resource allocation strategy in Storm, during high traffic load, while maintaining the requested quality of service levels.
  •  
45.
  • HoseinyFarahabady, M. Reza, et al. (författare)
  • Low Latency Execution Guarantee Under Uncertainty in Serverless Platforms
  • 2022
  • Ingår i: Parallel and Distributed Computing, Applications and Technologies. PDCAT 2021. - Cham : Springer. - 9783030967727 - 9783030967710 ; , s. 324-335
  • Konferensbidrag (refereegranskat)abstract
    • Serverless computing recently emerged as a new run-time paradigm to disentangle the client from the burden of provisioning physical computing resources, leaving such difficulty on the service provider's side. However, an unsolved problem in such an environment is how to cope with the challenges of executing several co-running applications while fulfilling the requested Quality of Service (QoS) level requested by all application owners. In practice, developing an efficient mechanism to reach the requested performance level (such as p-99 latency and throughput) is limited to the awareness (resource availability, performance interference among consolidation workloads, etc.) of the controller about the dynamics of the underlying platforms. In this paper, we develop an adaptive feedback controller for coping with the buffer instability of serverless platforms when several collocated applications are run in a shared environment. The goal is to support a low-latency execution by managing the arrival event rate of each application when shared resource contention causes a significant throughput degradation among workloads with different priorities. The key component of the proposed architecture is a continues management of server-side internal buffers for each application to provide a low-latency feedback control mechanism based on the requested QoS level of each application (e.g., buffer information) and the worker nodes throughput. The empirical results confirm the response stability for high priority workloads when a dynamic condition is caused by low priority applications. We evaluate the performance of the proposed solution with respect to the response time and the QoS violation rate for high priority applications in a serverless platform with four worker nodes set up in our in-house virtualized cluster. 
We compare the proposed architecture against the default resource management policy in Apache OpenWhisk, which is extensively used in commercial serverless platforms. The results show that our approach incurs very low overhead (less than 0.7%) while improving the p-99 latency of high-priority applications by 64%, on average, in the presence of dynamic high-traffic conditions.
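The core idea of the abstract above — throttling each application's admitted event rate when its server-side buffer drifts away from a QoS-derived target — can be sketched as a simple proportional feedback loop. This is a minimal illustration, not the authors' implementation; the class name, gains, and constants are all assumptions.

```python
class AdmissionController:
    """Hypothetical per-application admission controller: one control
    step adjusts the admitted event rate in proportion to how far the
    observed buffer occupancy is from the QoS-derived target."""

    def __init__(self, target_occupancy, gain=0.5, min_rate=1.0):
        self.target = target_occupancy   # desired buffer fill level (events)
        self.gain = gain                 # proportional gain of the loop
        self.min_rate = min_rate         # never throttle below this rate
        self.rate = 100.0                # current admitted events/sec

    def update(self, observed_occupancy):
        # Positive error: buffer fuller than desired, so back off;
        # negative error: headroom available, so admit more.
        error = observed_occupancy - self.target
        self.rate = max(self.min_rate, self.rate - self.gain * error)
        return self.rate
```

In a real deployment one controller instance per application would be driven by periodic buffer measurements, with higher-priority applications given larger targets or smaller gains so they are throttled last.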
  •  
46.
  • HoseinyFarahabady, M. Reza, et al. (författare)
  • QSpark : Distributed Execution of Batch & Streaming Analytics in Spark Platform
  • 2021
  • Ingår i: 2021 IEEE 20TH INTERNATIONAL SYMPOSIUM ON NETWORK COMPUTING AND APPLICATIONS (NCA). - : IEEE. - 9781665495509
  • Konferensbidrag (refereegranskat)abstract
    • A significant portion of research work in the past decade has been devoted to developing resource allocation and task scheduling solutions for large-scale data processing platforms. Such algorithms are designed to facilitate the deployment of data analytic applications across either conventional cluster computing systems or modern virtualized data centers. The main reason for this research effort stems from the fact that even a slight improvement in the performance of such platforms can bring considerable monetary savings for vendors, especially for modern data processing engines designed solely to perform high-throughput and/or low-latency computations over massive-scale batch or streaming data. A challenging question yet to be answered in this context is how to design an effective resource allocation solution that prevents low resource utilization while meeting the enforced performance level (such as the 99th latency percentile) in circumstances where contention among applications for the capacity of shared resources is a non-negligible performance-limiting parameter. This paper proposes a resource controller system, called QSpark, to cope with the problems of (i) low performance (i.e., resource utilization in the batch mode and p-99 response time in the streaming mode), and (ii) shared resource interference among collocated applications in a multi-tenant modern Spark platform. The proposed solution leverages a set of controlling mechanisms for dynamically partitioning the allocation of computing resources, so that it can fulfill the QoS requirements of latency-critical data processing applications while enhancing the throughput of all worker nodes without reaching their saturation points.
Through extensive experiments in our in-house Spark cluster, we compared the achieved performance of the proposed solution against the default Spark resource allocation policy for a variety of Machine Learning (ML), Artificial Intelligence (AI), and Deep Learning (DL) applications. Experimental results show the effectiveness of the proposed solution: it reduces the p-99 latency of high-priority applications by 32% during burst traffic periods (for both batch and stream modes), while enhancing the QoS satisfaction level by 65% for applications with the highest priority (compared with the default Spark resource allocation strategy).
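The "dynamic partitioning" mechanism described above can be illustrated, in a deliberately simplified form, as dividing a node's cores among collocated applications in proportion to priority weights. This is a sketch under assumed semantics, not QSpark's actual policy; the function and weights are hypothetical.

```python
def partition_cores(total_cores, apps):
    """Split a node's cores among collocated apps in proportion to
    their priority weights. `apps` is a list of (name, weight) pairs.
    Purely illustrative -- not QSpark's real allocation algorithm."""
    total_weight = sum(weight for _, weight in apps)
    return {name: total_cores * weight / total_weight
            for name, weight in apps}
```

A controller would re-run such a partitioning step whenever measured p-99 latencies or queue lengths indicate that a latency-critical application is falling behind its QoS target.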
  •  
47.
  • Hoseinyfarahabady, M. R., et al. (författare)
  • Spark-Tuner : An elastic auto-tuner for apache spark streaming
  • 2020
  • Ingår i: IEEE International Conference on Cloud Computing, CLOUD. - : IEEE Computer Society. - 9781728187808 ; , s. 544-548
  • Konferensbidrag (refereegranskat)abstract
    • Spark has emerged as one of the most widely and successfully used data analytics engines for large-scale enterprises, mainly due to its unique characteristics that facilitate scaling computations out in a distributed environment. This paper deals with the performance degradation caused by resource contention among collocated analytical applications with different priorities and dissimilar intrinsic characteristics in a shared Spark platform. We propose an auto-tuning strategy for computing resources in a distributed Spark platform to handle scenarios in which submitted analytical applications have different quality of service (QoS) requirements (e.g., latency constraints), while interference over computing resources is considered a key performance-limiting parameter. We compared Spark-Tuner to two widely used resource allocation heuristics in a large-scale Spark cluster through extensive experiments across several traffic patterns with uncertain rates and application types. Experimental results show that with Spark-Tuner, the Spark engine can decrease the p-99 latency of high-priority applications by 43% during high-rate traffic periods, while maintaining the same level of CPU throughput across the cluster.
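An elastic auto-tuner of the kind described above can be reduced, for illustration, to a rule that scales an application's executor count up when its observed p-99 latency exceeds its target and down when there is ample headroom. This sketch assumes hypothetical thresholds and is not Spark-Tuner's actual control law.

```python
def tune_executors(current, p99_ms, target_ms, max_exec=32, min_exec=1):
    """One step of a hypothetical elastic auto-tuner: add an executor
    when the observed p-99 latency misses the target, release one when
    latency is well under target (here, below half), else hold steady."""
    if p99_ms > target_ms:
        return min(max_exec, current + 1)  # scale out under pressure
    if p99_ms < 0.5 * target_ms:
        return max(min_exec, current - 1)  # reclaim idle capacity
    return current
```

The dead band between the two thresholds keeps the tuner from oscillating when latency hovers near the target.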
  •  
48.
  • Hoseinyfarahabady, M. Reza, et al. (författare)
  • Toward designing a dynamic CPU cap manager for timely dataflow platforms
  • 2018
  • Ingår i: HPC '18 Proceedings of the High Performance Computing Symposium. - : Association for Computing Machinery (ACM). - 9781510860162 ; , s. 60-70
  • Konferensbidrag (refereegranskat)abstract
    • In this work, we propose a control-based solution for the problem of CPU resource allocation in data-flow platforms that accounts for the performance degradation caused by running concurrent data-flow processes. Our aim is to cut QoS violation incidents for applications belonging to the highest QoS class. The performance of the proposed solution is benchmarked against the well-known round-robin algorithm. The experimental results confirm that the proposed algorithm can decrease the latency of processing data records by 48% compared to the round-robin policy.
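A dynamic CPU cap manager of the kind outlined above can be sketched as a control step that shifts CPU quota from the lowest-priority application toward any higher-priority application currently violating its latency target. The function, step size, and floor below are illustrative assumptions, not the paper's algorithm.

```python
def rebalance_caps(caps, violations, step=5, floor=5):
    """One control step of a hypothetical CPU-cap manager.
    `caps` maps app name -> CPU cap (%), listed in descending priority;
    `violations` maps app name -> True if its latency target is missed.
    Moves `step` points of quota from the lowest-priority app to each
    violating higher-priority app, never pushing the donor below `floor`."""
    names = list(caps)
    donor = names[-1]  # lowest-priority app donates quota
    for app in names[:-1]:
        if violations.get(app) and caps[donor] - step >= floor:
            caps[donor] -= step
            caps[app] += step
    return caps
```

Run periodically, such a loop converges quota toward the highest QoS class while the floor keeps low-priority work from starving entirely.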
  •  
49.
  • Reza Hoseinyfarahabady, M., et al. (författare)
  • Q-Flink : A QoS-Aware Controller for Apache Flink
  • 2020
  • Ingår i: Proceedings - 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing, CCGRID 2020. - : Institute of Electrical and Electronics Engineers (IEEE). - 9781728160955 ; , s. 629-638
  • Konferensbidrag (refereegranskat)abstract
    • Modern stream-data processing platforms are required to execute processing pipelines over high-volume, high-velocity datasets under tight latency constraints. Apache Flink has emerged as an important new large-scale platform technology that can distribute processing over a large number of computing nodes in a cluster (i.e., scale-out processing). Flink allows application developers to design and execute queries over continuous raw inputs to analyze a large amount of streaming data in a parallel and distributed fashion. To increase the throughput of computing resources in stream processing platforms, a service provider might be tempted to use a consolidation strategy that packs as many processing applications as possible onto the worker nodes, in the hope of increasing total revenue by improving overall resource utilization. However, there is a hidden trap in pursuing such higher throughput solely by relying on an interference-oblivious consolidation strategy. In practice, collocated applications in a shared platform can fiercely compete with each other for the capacity of shared resources (e.g., cache and memory bandwidth), which in turn can lead to severe performance degradation for all consolidated workloads. This paper addresses the shared resource contention problem associated with the auto-resource controlling mechanism of the Apache Flink engine running across a distributed cluster. A controlling strategy is proposed to handle scenarios in which stream processing applications may have different quality of service (QoS) requirements, while resource interference is considered the key performance-limiting parameter. The performance evaluation is carried out by comparing the proposed controller with the default Flink resource allocation strategy in a testbed cluster with 32 Intel Xeon cores in total, under different workload traffic with up to 4000 streaming applications chosen from various benchmarking tools.
Experimental results demonstrate that the proposed controller can successfully decrease the average latency of high-priority applications by 223% during burst traffic while maintaining the requested QoS enforcement levels.
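The "hidden trap" of interference-oblivious consolidation described above suggests a simple admission rule: collocate a new application on a worker only if the summed demand on the contended shared resource still leaves headroom. This is an illustrative sketch under assumed normalized units, not Q-Flink's controller.

```python
def can_collocate(node_load, app_demand, capacity=1.0, headroom=0.1):
    """Interference-aware placement check (hypothetical): admit a new
    streaming app onto a worker only if the total demand on the shared
    resource (e.g., memory bandwidth, normalized so the node's capacity
    is 1.0) stays below capacity minus a safety headroom."""
    return sum(node_load) + app_demand <= capacity - headroom
```

An interference-oblivious packer would instead compare only against `capacity`, which is exactly the regime in which all consolidated workloads degrade together.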
  •  
50.
  • Sotoodeh, M. S., et al. (författare)
  • Preserved action recognition in children with autism spectrum disorders: Evidence from an EEG and eye-tracking study
  • 2021
  • Ingår i: Psychophysiology. - : Wiley. - 0048-5772 .- 1469-8986. ; 58:3
  • Tidskriftsartikel (refereegranskat)abstract
    • Individuals with Autism Spectrum Disorder (ASD) have difficulties recognizing and understanding others' actions. The goal of the present study was to determine whether children with and without ASD show differences in the way they process stimuli depicting Biological Motion (BM). Thirty-two children aged 7-16 (16 with ASD and 16 typically developing (TD) controls) participated in two experiments. In the first experiment, electroencephalography (EEG) was used to record low (8-10 Hz) and high (10-13 Hz) mu and beta (15-25 Hz) bands during the observation of three different Point Light Displays (PLDs) of action. In the second experiment, participants completed action-recognition tests, and their accuracy and response times were recorded together with their eye movements. There were no group differences in the EEG data (first experiment), indicating that children with and without ASD do not differ in their mu suppression (8-13 Hz) and beta activity (15-25 Hz). However, behavioral data from the second experiment revealed that children with ASD were less accurate and slower than TD children in their responses to an action-recognition task. In addition, eye-tracking data indicated that children with ASD paid less attention to the body compared to the background when watching PLD stimuli. Our results indicate that the more the participants focused on the PLDs, the more mu suppression they displayed. These results could challenge the results of previous studies that did not control for visual attention and found a possible deficit in MNS functions of individuals with ASD. We discuss possible mechanisms and interpretations.
  •  
Typ av publikation
tidskriftsartikel (36)
konferensbidrag (16)
Typ av innehåll
refereegranskat (51)
övrigt vetenskapligt/konstnärligt (1)
Författare/redaktör
Taheri, M. (18)
Arabloo, J (16)
Momtazmanesh, S. (16)
Ahmad, S. (15)
Rezaei, N (15)
Fischer, F (15)
Hosseinzadeh, M (15)
Mestrovic, T (15)
Rawaf, S (15)
Saddik, B (15)
Sahebkar, A (15)
Elhadi, M (15)
Rahman, M (15)
Azadnajafabad, S. (15)
Dadras, O. (15)
Ekholuenetale, M. (15)
Goleij, P. (15)
Joseph, N. (15)
Khajuria, H. (15)
Golechha, M (14)
Monasta, L (14)
Almustanyir, S. (14)
Keykhaei, M. (14)
Gupta, S. (13)
Abbasi-Kangevari, M (13)
Foroutan, M (13)
Holla, R (13)
Mohammed, S (13)
Oancea, B (13)
Taheri, Javid (13)
Abidi, H. (13)
Barrow, A. (13)
Halwani, R. (13)
Kandel, H. (13)
Alahdab, F (12)
Alipour, V (12)
Bhardwaj, P (12)
Hayat, K (12)
Joukar, F (12)
Kabir, A (12)
Kalhor, R (12)
Landires, I (12)
Naghavi, M (12)
Sathian, B (12)
Yonemoto, N (12)
Ahmadi, S (12)
Al Hamad, H. (12)
Heidari, M. (12)
Mirmoeeni, S. (12)
Mubarik, S. (12)
Lärosäte
Karolinska Institutet (16)
Karlstads universitet (13)
Göteborgs universitet (12)
Mälardalens universitet (5)
Lunds universitet (5)
Umeå universitet (2)
Kungliga Tekniska Högskolan (1)
Uppsala universitet (1)
Högskolan Väst (1)
Örebro universitet (1)
Chalmers tekniska högskola (1)
Högskolan Dalarna (1)
Språk
Engelska (52)
Forskningsämne (UKÄ/SCB)
Naturvetenskap (21)
Medicin och hälsovetenskap (15)
Teknik (5)
Humaniora (1)
