SwePub

Search results for "WFRF:(Vlassov Vladimir)"


  • Results 1-50 of 178
1.
  • Lozano, Rafael, et al. (author)
  • Measuring progress from 1990 to 2017 and projecting attainment to 2030 of the health-related Sustainable Development Goals for 195 countries and territories: a systematic analysis for the Global Burden of Disease Study 2017
  • 2018
  • In: The Lancet. - : Elsevier. - 1474-547X .- 0140-6736. ; 392:10159, pp. 2091-2138
  • Journal article (peer-reviewed), abstract:
    • Background: Efforts to establish the 2015 baseline and monitor early implementation of the UN Sustainable Development Goals (SDGs) highlight both great potential for and threats to improving health by 2030. To fully deliver on the SDG aim of “leaving no one behind”, it is increasingly important to examine the health-related SDGs beyond national-level estimates. As part of the Global Burden of Diseases, Injuries, and Risk Factors Study 2017 (GBD 2017), we measured progress on 41 of 52 health-related SDG indicators and estimated the health-related SDG index for 195 countries and territories for the period 1990–2017, projected indicators to 2030, and analysed global attainment. Methods: We measured progress on 41 health-related SDG indicators from 1990 to 2017, an increase of four indicators since GBD 2016 (new indicators were health worker density, sexual violence by non-intimate partners, population census status, and prevalence of physical and sexual violence [reported separately]). We also improved the measurement of several previously reported indicators. We constructed national-level estimates and, for a subset of health-related SDGs, examined indicator-level differences by sex and Socio-demographic Index (SDI) quintile. We also did subnational assessments of performance for selected countries. To construct the health-related SDG index, we transformed the value for each indicator on a scale of 0–100, with 0 as the 2·5th percentile and 100 as the 97·5th percentile of 1000 draws calculated from 1990 to 2030, and took the geometric mean of the scaled indicators by target. To generate projections through 2030, we used a forecasting framework that drew estimates from the broader GBD study and used weighted averages of indicator-specific and country-specific annualised rates of change from 1990 to 2017 to inform future estimates. We assessed attainment of indicators with defined targets in two ways: first, using mean values projected for 2030, and then using the probability of attainment in 2030 calculated from 1000 draws. We also did a global attainment analysis of the feasibility of attaining SDG targets on the basis of past trends. Using 2015 global averages of indicators with defined SDG targets, we calculated the global annualised rates of change required from 2015 to 2030 to meet these targets, and then identified in what percentiles the required global annualised rates of change fell in the distribution of country-level rates of change from 1990 to 2015. We took the mean of these global percentile values across indicators and applied the past rate of change at this mean global percentile to all health-related SDG indicators, irrespective of target definition, to estimate the equivalent 2030 global average value and percentage change from 2015 to 2030 for each indicator. Findings: The global median health-related SDG index in 2017 was 59·4 (IQR 35·4–67·3), ranging from a low of 11·6 (95% uncertainty interval 9·6–14·0) to a high of 84·9 (83·1–86·7). SDG index values in countries assessed at the subnational level varied substantially, particularly in China and India, although scores in Japan and the UK were more homogeneous. Indicators also varied by SDI quintile and sex, with males having worse outcomes than females for non-communicable disease (NCD) mortality, alcohol use, and smoking, among others. Most countries were projected to have a higher health-related SDG index in 2030 than in 2017, while country-level probabilities of attainment by 2030 varied widely by indicator. 
Under-5 mortality, neonatal mortality, maternal mortality ratio, and malaria indicators had the most countries with at least 95% probability of target attainment. Other indicators, including NCD mortality and suicide mortality, had no countries projected to meet corresponding SDG targets on the basis of projected mean values for 2030 but showed some probability of attainment by 2030. For some indicators, including child malnutrition, several infectious diseases, and most violence measures, the annualised rates of change required to meet SDG targets far exceeded the pace of progress achieved by any country in the recent past. We found that applying the mean global annualised rate of change to indicators without defined targets would equate to about 19% and 22% reductions in global smoking and alcohol consumption, respectively; a 47% decline in adolescent birth rates; and a more than 85% increase in health worker density per 1000 population by 2030. Interpretation: The GBD study offers a unique, robust platform for monitoring the health-related SDGs across demographic and geographic dimensions. Our findings underscore the importance of increased collection and analysis of disaggregated data and highlight where more deliberate design or targeting of interventions could accelerate progress in attaining the SDGs. Current projections show that many health-related SDG indicators, NCDs, NCD-related risks, and violence-related indicators will require a concerted shift away from what might have driven past gains—curative interventions in the case of NCDs—towards multisectoral, prevention-oriented policy action and investments to achieve SDG aims. Notably, several targets, if they are to be met by 2030, demand a pace of progress that no country has achieved in the recent past. The future is fundamentally uncertain, and no model can fully predict what breakthroughs or events might alter the course of the SDGs. What is clear is that our actions—or inaction—today will ultimately dictate how close the world, collectively, can get to leaving no one behind by 2030.
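A minimal sketch (not from the study; the indicator values and reference draws below are hypothetical) of the index construction described above: each indicator is rescaled to 0-100 between its 2.5th and 97.5th percentiles, and the scaled indicators are combined with a geometric mean.

```python
import numpy as np

def scale_indicator(value, draws):
    """Rescale an indicator to 0-100, where 0 is the 2.5th and
    100 is the 97.5th percentile of the reference draws."""
    lo, hi = np.percentile(draws, [2.5, 97.5])
    return float(np.clip((value - lo) / (hi - lo) * 100.0, 0.0, 100.0))

def health_sdg_index(scaled_indicators):
    """Combine scaled indicators (each 0-100) with a geometric mean."""
    arr = np.asarray(scaled_indicators, dtype=float)
    return float(np.exp(np.mean(np.log(np.maximum(arr, 1e-6)))))

# Hypothetical example: three indicators for one country-year.
draws = np.random.default_rng(0).normal(50, 20, size=1000)  # stand-in for 1000 draws
scaled = [scale_indicator(v, draws) for v in (35.0, 62.0, 80.0)]
print(round(health_sdg_index(scaled), 1))
```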
2.
  • Murray, Christopher J. L., et al. (author)
  • Population and fertility by age and sex for 195 countries and territories, 1950–2017: a systematic analysis for the Global Burden of Disease Study 2017
  • 2018
  • In: The Lancet. - 1474-547X .- 0140-6736. ; 392:10159, pp. 1995-2051
  • Journal article (peer-reviewed), abstract:
    • Background: Population estimates underpin demographic and epidemiological research and are used to track progress on numerous international indicators of health and development. To date, internationally available estimates of population and fertility, although useful, have not been produced with transparent and replicable methods and do not use standardised estimates of mortality. We present single-calendar year and single-year of age estimates of fertility and population by sex with standardised and replicable methods. Methods: We estimated population in 195 locations by single year of age and single calendar year from 1950 to 2017 with standardised and replicable methods. We based the estimates on the demographic balancing equation, with inputs of fertility, mortality, population, and migration data. Fertility data came from 7817 location-years of vital registration data, 429 surveys reporting complete birth histories, and 977 surveys and censuses reporting summary birth histories. We estimated age-specific fertility rates (ASFRs; the annual number of livebirths to women of a specified age group per 1000 women in that age group) by use of spatiotemporal Gaussian process regression and used the ASFRs to estimate total fertility rates (TFRs; the average number of children a woman would bear if she survived through the end of the reproductive age span [age 10–54 years] and experienced at each age a particular set of ASFRs observed in the year of interest). Because of sparse data, fertility at ages 10–14 years and 50–54 years was estimated from data on fertility in women aged 15–19 years and 45–49 years, through use of linear regression. Age-specific mortality data came from the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2017 estimates. Data on population came from 1257 censuses and 761 population registry location-years and were adjusted for underenumeration and age misreporting with standard demographic methods. Migration was estimated with the GBD Bayesian demographic balancing model, after incorporating information about refugee migration into the model prior. Final population estimates used the cohort-component method of population projection, with inputs of fertility, mortality, and migration data. Population uncertainty was estimated by use of out-of-sample predictive validity testing. With these data, we estimated the trends in population by age and sex and in fertility by age between 1950 and 2017 in 195 countries and territories. Findings: From 1950 to 2017, TFRs decreased by 49·4% (95% uncertainty interval [UI] 46·4–52·0). The TFR decreased from 4·7 livebirths (4·5–4·9) to 2·4 livebirths (2·2–2·5), and the ASFR of mothers aged 10–19 years decreased from 37 livebirths (34–40) to 22 livebirths (19–24) per 1000 women. Despite reductions in the TFR, the global population has been increasing by an average of 83·8 million people per year since 1985. The global population increased by 197·2% (193·3–200·8) since 1950, from 2·6 billion (2·5–2·6) to 7·6 billion (7·4–7·9) people in 2017; much of this increase was in the proportion of the global population in south Asia and sub-Saharan Africa. The global annual rate of population growth increased between 1950 and 1964, when it peaked at 2·0%; this rate then remained nearly constant until 1970 and then decreased to 1·1% in 2017. 
Population growth rates in the southeast Asia, east Asia, and Oceania GBD super-region decreased from 2·5% in 1963 to 0·7% in 2017, whereas in sub-Saharan Africa, population growth rates were almost at the highest reported levels ever in 2017, when they were at 2·7%. The global average age increased from 26·6 years in 1950 to 32·1 years in 2017, and the proportion of the population that is of working age (age 15–64 years) increased from 59·9% to 65·3%. At the national level, the TFR decreased in all countries and territories between 1950 and 2017; in 2017, TFRs ranged from a low of 1·0 livebirths (95% UI 0·9–1·2) in Cyprus to a high of 7·1 livebirths (6·8–7·4) in Niger. The TFR under age 25 years (TFU25; number of livebirths expected by age 25 years for a hypothetical woman who survived the age group and was exposed to current ASFRs) in 2017 ranged from 0·08 livebirths (0·07–0·09) in South Korea to 2·4 livebirths (2·2–2·6) in Niger, and the TFR over age 30 years (TFO30; number of livebirths expected for a hypothetical woman ageing from 30 to 54 years who survived the age group and was exposed to current ASFRs) ranged from a low of 0·3 livebirths (0·3–0·4) in Puerto Rico to a high of 3·1 livebirths (3·0–3·2) in Niger. TFO30 was higher than TFU25 in 145 countries and territories in 2017. 33 countries had a negative population growth rate from 2010 to 2017, most of which were located in central, eastern, and western Europe, whereas population growth rates of more than 2·0% were seen in 33 of 46 countries in sub-Saharan Africa. In 2017, less than 65% of the national population was of working age in 12 of 34 high-income countries, and less than 50% of the national population was of working age in Mali, Chad, and Niger. Interpretation: Population trends create demographic dividends and headwinds (ie, economic benefits and detriments) that affect national economies and determine national planning needs. Although TFRs are decreasing, the global population continues to grow as mortality declines, with diverse patterns at the national level and across age groups. To our knowledge, this is the first study to provide transparent and replicable estimates of population and fertility, which can be used to inform decision making and to monitor progress. Funding: Bill & Melinda Gates Foundation.
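A small worked sketch (with made-up rates) of how a total fertility rate follows from the age-specific fertility rates defined above: each ASFR is livebirths per 1000 women per year in a 5-year age group, so the TFR is the sum of ASFR × 5 / 1000 over the reproductive age span.

```python
# Hypothetical ASFRs (livebirths per 1000 women per year) for 5-year age groups 10-14 ... 50-54.
asfr_per_1000 = {
    "10-14": 2, "15-19": 22, "20-24": 110, "25-29": 140, "30-34": 105,
    "35-39": 55, "40-44": 18, "45-49": 3, "50-54": 1,
}

# TFR: expected livebirths for a woman surviving the whole span at these rates.
tfr = sum(rate * 5 / 1000 for rate in asfr_per_1000.values())
print(round(tfr, 2))  # 2.28 with these made-up rates
```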
3.
  • Stanaway, Jeffrey D., et al. (author)
  • Global, regional, and national comparative risk assessment of 84 behavioural, environmental and occupational, and metabolic risks or clusters of risks for 195 countries and territories, 1990-2017: A systematic analysis for the Global Burden of Disease Study 2017
  • 2018
  • In: The Lancet. - 1474-547X .- 0140-6736. ; 392:10159, pp. 1923-1994
  • Journal article (peer-reviewed), abstract:
    • Background The Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2017 comparative risk assessment (CRA) is a comprehensive approach to risk factor quantification that offers a useful tool for synthesising evidence on risks and risk-outcome associations. With each annual GBD study, we update the GBD CRA to incorporate improved methods, new risks and risk-outcome pairs, and new data on risk exposure levels and risk- outcome associations. Methods We used the CRA framework developed for previous iterations of GBD to estimate levels and trends in exposure, attributable deaths, and attributable disability-adjusted life-years (DALYs), by age group, sex, year, and location for 84 behavioural, environmental and occupational, and metabolic risks or groups of risks from 1990 to 2017. This study included 476 risk-outcome pairs that met the GBD study criteria for convincing or probable evidence of causation. We extracted relative risk and exposure estimates from 46 749 randomised controlled trials, cohort studies, household surveys, census data, satellite data, and other sources. We used statistical models to pool data, adjust for bias, and incorporate covariates. Using the counterfactual scenario of theoretical minimum risk exposure level (TMREL), we estimated the portion of deaths and DALYs that could be attributed to a given risk. We explored the relationship between development and risk exposure by modelling the relationship between the Socio-demographic Index (SDI) and risk-weighted exposure prevalence and estimated expected levels of exposure and risk-attributable burden by SDI. Finally, we explored temporal changes in risk-attributable DALYs by decomposing those changes into six main component drivers of change as follows: (1) population growth; (2) changes in population age structures; (3) changes in exposure to environmental and occupational risks; (4) changes in exposure to behavioural risks; (5) changes in exposure to metabolic risks; and (6) changes due to all other factors, approximated as the risk-deleted death and DALY rates, where the risk-deleted rate is the rate that would be observed had we reduced the exposure levels to the TMREL for all risk factors included in GBD 2017.
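A hedged sketch (not GBD code; the exposure shares and relative risks are invented) of the counterfactual logic described above: the population attributable fraction compares observed risk-weighted exposure with the theoretical minimum risk exposure level (TMREL), and the attributable burden is that fraction of total deaths or DALYs.

```python
def paf(exposure_share, relative_risk, rr_at_tmrel=1.0):
    """Population attributable fraction for a categorical exposure:
    compares risk-weighted exposure with the TMREL counterfactual."""
    observed = sum(p * rr for p, rr in zip(exposure_share, relative_risk))
    return (observed - rr_at_tmrel) / observed

# Invented example: three exposure categories (unexposed, moderate, high).
shares = [0.6, 0.3, 0.1]
rrs = [1.0, 1.4, 2.5]
fraction = paf(shares, rrs)
attributable_dalys = fraction * 120_000  # hypothetical total DALYs for one cause
print(round(fraction, 3), int(attributable_dalys))
```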
4.
  • Abbas, Zainab, et al. (author)
  • Evaluation of the use of streaming graph processing algorithms for road congestion detection
  • 2018
  • In: Proceedings - 16th IEEE International Symposium on Parallel and Distributed Processing with Applications, 17th IEEE International Conference on Ubiquitous Computing and Communications, 8th IEEE International Conference on Big Data and Cloud Computing, 11th IEEE International Conference on Social Computing and Networking and 8th IEEE International Conference on Sustainable Computing and Communications, ISPA/IUCC/BDCloud/SocialCom/SustainCom 2018. - : Institute of Electrical and Electronics Engineers Inc.. - 9781728111414 ; , pp. 1017-1025
  • Conference paper (peer-reviewed), abstract:
    • Real-time road congestion detection allows improving traffic safety and route planning. In this work, we propose to use streaming graph processing algorithms for road congestion detection and evaluate their accuracy and performance. We represent road infrastructure sensors in the form of a directed weighted graph and adapt the Connected Components algorithm and some existing graph processing algorithms, originally used for community detection in social network graphs, for the task of road congestion detection. In our approach, we detect Connected Components or communities of sensors with similarly weighted edges that reflect different states in the traffic, e.g., free flow or congested state, in regions covered by detected sensor groups. We have adapted and implemented the Connected Components and community detection algorithms for detecting groups in the weighted sensor graphs in batch and streaming manner. We evaluate our approach by building and processing the road infrastructure sensor graph for Stockholm's highways using real-world data from the Motorway Control System operated by the Swedish traffic authority. Our results indicate that the Connected Components and DenGraph community detection algorithms can detect congestion with accuracy up to ≈94% for Connected Components and up to ≈88% for DenGraph. The Louvain Modularity algorithm for community detection fails to detect congestion regions for sparsely connected graphs, representing roads that we have considered in this study. The Hierarchical Clustering algorithm using speed and density readings is able to detect congestion without details, such as shockwaves.
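A simplified sketch (not the authors' implementation; the threshold and sensor IDs are made up) of the connected-components idea above: keep only edges between adjacent sensors whose readings indicate congestion, then each connected component is a candidate congested region.

```python
import networkx as nx

# Hypothetical road-sensor graph flattened to sensor pairs:
# (upstream_sensor, downstream_sensor, average speed in km/h on that stretch).
edges = [
    ("s1", "s2", 32.0), ("s2", "s3", 28.0), ("s3", "s4", 95.0),
    ("s4", "s5", 30.5), ("s5", "s6", 27.0), ("s6", "s7", 90.0),
]

CONGESTION_SPEED = 40.0  # made-up threshold: below this, a stretch counts as congested

g = nx.Graph()
g.add_edges_from((u, v) for u, v, speed in edges if speed < CONGESTION_SPEED)

# Each connected component of "slow" edges is a candidate congested region.
for region in nx.connected_components(g):
    print(sorted(region))
```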
5.
  • Abbas, Zainab (author)
  • Scalable Streaming Graph and Time Series Analysis Using Partitioning and Machine Learning
  • 2021
  • Doctoral thesis (other academic/artistic), abstract:
    • Recent years have witnessed a massive increase in the amount of data generated by the Internet of Things (IoT) and social media. Processing huge amounts of this data poses non-trivial challenges in terms of the hardware and performance requirements of modern-day applications. The data we are dealing with today is of massive scale and high intensity, and comes in various forms. MapReduce was a popular and clever choice for handling big data using a distributed programming model, which made the processing of huge volumes of data possible using clusters of commodity machines. However, MapReduce was not a good fit for performing complex tasks, such as graph processing, iterative programs and machine learning. Modern data processing frameworks, which are popularly used to process complex data and perform complex analysis tasks, overcome the shortcomings of MapReduce. Some of these popular frameworks include Apache Spark for batch and stream processing, Apache Flink for stream processing and TensorFlow for machine learning. In this thesis, we deal with complex analytics on data modeled as time series, graphs and streams. Time series are commonly used to represent temporal data generated by IoT sensors. Analysing and forecasting time series, i.e. extracting useful characteristics and statistics of data and predicting data, is useful for many fields, including neurophysiology, economics, environmental studies, transportation, etc. Another useful data representation we work with is graphs. Graphs are complex data structures used to represent relational data in the form of vertices and edges. Graphs are present in various application domains, such as recommendation systems, road traffic analytics, web analysis, and social media analysis. Due to the increasing size of graph data, a single machine is often not sufficient to process the complete graph. Therefore, the computation, as well as the data, must be distributed. Graph partitioning, the process of dividing graphs into subgraphs, is an essential step in distributed graph processing of large-scale graphs because it enables parallel and distributed processing. The majority of data generated from IoT and social media originates as a continuous stream, such as series of events from a social media network, time series generated from sensors, financial transactions, etc. The stream processing paradigm refers to the processing of data streams that are continuous and possibly unbounded. Combining both graphs and streams leads to an interesting and rather challenging domain of streaming graph analytics. Graph streams refer to data that is modelled as a stream of edges or vertices with adjacency lists representing relations between entities of continuously evolving data generated by a single or multiple data sources. Streaming graph analytics is an emerging research field with great potential due to its capabilities of processing large graph streams with limited amounts of memory and low latency. In this dissertation, we present graph partitioning techniques for scalable streaming graph and time series analysis. First, we present and evaluate the use of data partitioning to enable data parallelism in order to address the challenge of scale in large spatial time series forecasting. We propose a graph partitioning technique for large-scale spatial time series forecasting, with road traffic as a use-case.
Our experimental results on traffic density prediction for a real-world sensor dataset using Long Short-Term Memory Neural Networks show that the partitioning-based models take 12x lower training time when run in parallel compared to the unpartitioned model of the entire road infrastructure. Furthermore, the partitioning-based models have 2x lower prediction error (RMSE) compared to the entire road model. Second, we showcase the practical usefulness of streaming graph analytics for large spatial time series analysis with the real-world task of traffic jam detection and reduction. We propose to apply streaming graph analytics by performing useful analytics on traffic data streams at scale with high throughput and low latency. Third, we study, evaluate, and compare the existing state-of-the-art streaming graph partitioning algorithms. We propose a uniform analysis framework built using Apache Flink to evaluate and compare partitioning features and characteristics of streaming graph partitioning methods. Finally, we present GCNSplit, a novel ML-driven streaming graph partitioning solution that uses a small and constant in-memory state (bounded state) to partition (possibly unbounded) graph streams. Our results demonstrate that GCNSplit provides high-throughput partitioning and can leverage data parallelism to sustain input rates of 100K edges/s. GCNSplit exhibits a partitioning quality, in terms of graph cuts and load balance, that matches that of the state-of-the-art HDRF (High Degree Replicated First) algorithm while storing a partitioning state that is three orders of magnitude smaller.
6.
  • Abbas, Zainab, et al. (author)
  • Scaling Deep Learning Models for Large Spatial Time-Series Forecasting
  • 2019
  • In: Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019. - : Institute of Electrical and Electronics Engineers Inc.. - 9781728108582 ; , pp. 1587-1594
  • Conference paper (peer-reviewed), abstract:
    • Neural networks are used for different machine learning tasks, such as spatial time-series forecasting. Accurate modelling of a large and complex system requires large datasets to train a deep neural network, which causes a challenge of scale, as training the network and serving the model are computationally and memory intensive. One example of a complex system that produces a large number of spatial time-series is a large road sensor infrastructure deployed for traffic monitoring. The goal of this work is twofold: 1) To model a large amount of spatial time-series from road sensors; 2) To address the scalability problem in a real-life task of large-scale road traffic prediction, which is an important part of an Intelligent Transportation System. We propose a partitioning technique to tackle the scalability problem that enables parallelism in both training and prediction: 1) We represent the sensor system as a directed weighted graph based on the road structure, which reflects dependencies between sensor readings, and weighted by sensor readings and inter-sensor distances; 2) We propose an algorithm to automatically partition the graph taking into account dependencies between spatial time-series from sensors; 3) We use the generated sensor graph partitions to train a prediction model per partition. Our experimental results on traffic density prediction using Long Short-Term Memory (LSTM) Neural Networks show that the partitioning-based models take 2x (if run sequentially) and 12x (if run in parallel) less training time, and 20x less prediction time, compared to the unpartitioned model of the entire road infrastructure. The partitioning-based models take 100x less total sequential training time compared to single sensor models, i.e., one model per sensor. Furthermore, the partitioning-based models have 2x less prediction error (RMSE) compared to both the single sensor models and the entire road model.
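A schematic sketch (the partitioning and the per-partition model are placeholders, not the paper's algorithm) of the idea of training one model per sensor-graph partition in parallel, which is where the reported training-time reduction comes from.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def train_partition_model(partition):
    """Placeholder for per-partition training; in the paper each partition
    gets its own LSTM, here we just fit a trivial mean predictor."""
    pid, series = partition
    return {"partition": pid, "mean_density": float(np.mean(series))}

if __name__ == "__main__":
    # Hypothetical partitions: id -> time series of traffic density from that partition's sensors.
    rng = np.random.default_rng(1)
    partitions = [(pid, rng.random(1_000)) for pid in range(4)]

    # Train one model per partition in parallel.
    with ProcessPoolExecutor() as pool:
        models = list(pool.map(train_partition_model, partitions))
    print(models)
```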
7.
  • Abbas, Zainab, 1991-, et al. (author)
  • Short-Term Traffic Prediction Using Long Short-Term Memory Neural Networks
  • 2018
  • In: Proceedings - 2018 IEEE International Congress on Big Data, BigData Congress 2018 - Part of the 2018 IEEE World Congress on Services. - : Institute of Electrical and Electronics Engineers Inc.. - 9781538672327 ; , pp. 57-65
  • Conference paper (peer-reviewed), abstract:
    • Short-term traffic prediction allows Intelligent Transport Systems to proactively respond to events before they happen. With the rapid increase in the amount, quality, and detail of traffic data, new techniques are required that can exploit the information in the data in order to provide better results while being able to scale and cope with increasing amounts of data and growing cities. We propose and compare three models for short-term road traffic density prediction based on Long Short-Term Memory (LSTM) neural networks. We have trained the models using real traffic data collected by the Motorway Control System in Stockholm, which monitors highways and collects flow and speed data per lane every minute from radar sensors. In order to deal with the challenge of scale and to improve prediction accuracy, we propose to partition the road network into road stretches and junctions, and to model each of the partitions with one or more LSTM neural networks. Our evaluation results show that partitioning of roads improves the prediction accuracy by reducing the root mean square error by a factor of 5. We show that we can reduce the complexity of the LSTM network by limiting the number of input sensors, on average to 35% of the original number, without compromising the prediction accuracy.
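A minimal sketch (assuming Keras; the window length, layer sizes, and data are placeholders, not the paper's configuration) of an LSTM regressor for short-term density prediction from a sliding window of recent sensor readings.

```python
import numpy as np
from tensorflow import keras

WINDOW, FEATURES = 10, 3  # placeholder: 10 past minutes of (flow, speed, density) per sample

# Hypothetical training data: sliding windows of sensor readings -> next-minute density.
rng = np.random.default_rng(0)
x_train = rng.random((512, WINDOW, FEATURES)).astype("float32")
y_train = rng.random((512, 1)).astype("float32")

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, FEATURES)),
    keras.layers.LSTM(32),   # a single LSTM layer; the paper compares several model variants
    keras.layers.Dense(1),   # predicted density for the next time step
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=2, batch_size=32, verbose=0)
print(model.predict(x_train[:1], verbose=0))
```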
8.
  • Abbas, Zainab, 1991-, et al. (author)
  • Streaming Graph Partitioning: An Experimental Study
  • 2018
  • In: Proceedings of the VLDB Endowment. - : ACM Digital Library. - 2150-8097. ; 11:11, pp. 1590-1603
  • Journal article (peer-reviewed), abstract:
    • Graph partitioning is an essential yet challenging task for massive graph analysis in distributed computing. Common graph partitioning methods scan the complete graph to obtain structural characteristics offline, before partitioning. However, the emerging need for low-latency, continuous graph analysis has led to the development of online partitioning methods. Online methods ingest edges or vertices as a stream, making partitioning decisions on the fly based on partial knowledge of the graph. Prior studies have compared offline graph partitioning techniques across different systems. Yet, little effort has been put into investigating the characteristics of online graph partitioning strategies. In this work, we describe and categorize online graph partitioning techniques based on their assumptions, objectives and costs. Furthermore, we perform an experimental comparison across different applications and datasets, using a unified distributed runtime based on Apache Flink. Our experimental results showcase that model-dependent online partitioning techniques such as low-cut algorithms offer better performance for communication-intensive applications such as bulk synchronous iterative algorithms, albeit at higher partitioning costs. Otherwise, model-agnostic techniques trade off data locality for lower partitioning costs and balanced workloads, which is beneficial when executing data-parallel single-pass graph algorithms.
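An illustrative sketch (a simplified greedy heuristic, not any specific algorithm from the study) of how an online partitioner makes a per-edge decision with only partial knowledge of the graph: prefer partitions that already hold an endpoint, and break ties by current load.

```python
from collections import defaultdict

def stream_partition(edge_stream, k):
    """Greedy online edge partitioning: assign each edge as it arrives,
    favouring partitions that already contain one of its endpoints."""
    loads = [0] * k
    placed = defaultdict(set)  # vertex -> partitions where its edges have been placed

    for u, v in edge_stream:
        candidates = placed[u] | placed[v] or set(range(k))
        target = min(candidates, key=lambda p: loads[p])  # least-loaded candidate
        loads[target] += 1
        placed[u].add(target)
        placed[v].add(target)
        yield (u, v), target

edges = [(1, 2), (2, 3), (3, 1), (4, 5), (5, 6), (2, 5)]
for edge, part in stream_partition(edges, k=2):
    print(edge, "->", part)
```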
9.
  • Ahlberg, Michael, et al. (author)
  • Router placement in wireless sensor networks
  • 2006
  • In: 2006 IEEE International Conference on Mobile Adhoc and Sensor Systems, Vols 1 and 2. - : IEEE. - 9781424405060 ; , pp. 498-501
  • Conference paper (peer-reviewed), abstract:
    • In this paper we propose and evaluate algorithms for placement of routers in a wireless sensor network. There are two major requirements on router placement. First, a placement must guarantee connectivity, i.e. every sensor must be able to communicate through routers with a predefined computer-connected gateway node. Second, a placement must provide robust communication in the case of router failures. This is achieved by placing redundant routers that increase the number of possible routes. Both requirements should be met by placing as few routers as possible. The proposed algorithms compute a placement in an efficient and reasonably fast way.
10.
  • Al-Shishtawy, Ahmad, et al. (author)
  • A design methodology for self-management in distributed environments
  • 2009
  • In: IEEE International conference on Computational Science and Engineering. - 9780769538235 ; , pp. 430-436
  • Conference paper (peer-reviewed), abstract:
    • Autonomic computing is a paradigm that aims at reducing administrative overhead by providing autonomic managers to make applications self-managing. In order to better deal with dynamic environments, for improved performance and scalability, we advocate for distribution of management functions among several cooperative managers that coordinate their activities in order to achieve management objectives. We present a methodology for designing the management part of a distributed self-managing application in a distributed manner. We define design steps that include partitioning of management functions and orchestration of multiple autonomic managers. We illustrate the proposed design methodology by applying it to the design and development of a distributed storage service as a case study. The storage service prototype has been developed using the distributed component management system Niche. Distribution of autonomic managers allows distributing the management overhead and increases management performance due to concurrency and better locality.
11.
  • Al-Shishtawy, Ahmad, 1978-, et al. (author)
  • Achieving Robust Self-Management for Large-Scale Distributed Applications
  • 2010
  • In: Self-Adaptive and Self-Organizing Systems (SASO), 2010 4th IEEE International Conference on. - : IEEE Computer Society. - 9781424485376 ; , pp. 31-40
  • Conference paper (peer-reviewed), abstract:
    • Achieving self-management can be challenging, particularly in dynamic environments with resource churn (joins/leaves/failures). Dealing with the effect of churn on management increases the complexity of the management logic and thus makes its development time consuming and error prone. We propose the abstraction of robust management elements (RMEs), which are able to heal themselves under continuous churn. Using RMEs allows the developer to separate the issue of dealing with the effect of churn on management from the management logic. This facilitates the development of robust management by making the developer focus on managing the application while relying on the platform to provide the robustness of management. RMEs can be implemented as fault-tolerant long-living services. We present a generic approach and an associated algorithm to achieve fault-tolerant long-living services. Our approach is based on replicating a service using finite state machine replication with a reconfigurable replica set. Our algorithm automates the reconfiguration (migration) of the replica set in order to tolerate continuous churn. The algorithm uses P2P replica placement schemes to place replicas and uses the P2P overlay to monitor them. The replicated state machine is extended to analyze monitoring data in order to decide on when and where to migrate. We describe how to use our approach to achieve robust management elements. We present a simulation-based evaluation of our approach which shows its feasibility.
12.
  • Al-Shishtawy, Ahmad, et al. (author)
  • Achieving Robust Self-Management for Large-Scale Distributed Applications
  • 2010. - 7
  • Report (other academic/artistic), abstract:
    • Autonomic managers are the main architectural building blocks for constructing self-management capabilities of computing systems and applications. One of the major challenges in developing self-managing applications is robustness of management elements which form autonomic managers. We believe that transparent handling of the effects of resource churn (joins/leaves/failures) on management should be an essential feature of a platform for self-managing large-scale dynamic distributed applications, because it facilitates the development of robust autonomic managers and hence improves robustness of self-managing applications. This feature can be achieved by providing a robust management element abstraction that hides churn from the programmer. In this paper, we present a generic approach to achieve robust services that is based on finite state machine replication with dynamic reconfiguration of replica sets. We contribute a decentralized algorithm that maintains the set of nodes hosting service replicas in the presence of churn. We use this approach to implement robust management elements as robust services that can operate despite of churn. Our proposed decentralized algorithm uses peer-to-peer replica placement schemes to automate replicated state machine migration in order to tolerate churn. Our algorithm exploits lookup and failure detection facilities of a structured overlay network for managing the set of active replicas. Using the proposed approach, we can achieve a long running and highly available service, without human intervention, in the presence of resource churn. In order to validate and evaluate our approach, we have implemented a prototype that includes the proposed algorithm.
13.
  • Al-Shishtawy, Ahmad, 1978-, et al. (author)
  • Distributed Control Loop Patterns for Managing Distributed Applications
  • 2008
  • In: SASOW 2008. - LOS ALAMITOS : IEEE Computer Society. - 9781424434688 ; , pp. 260-265
  • Conference paper (peer-reviewed), abstract:
    • In this paper we discuss various control loop patterns for managing distributed applications with multiple control loops. We introduce a high-level framework, called DCMS, for developing, deploying and managing component-based distributed applications in dynamic environments. The control loops, and interactions among them, are illustrated in the context of a distributed self-managing storage service implemented using DCMS to achieve various self-* properties. Different control loops are used for different self-* behaviours, which illustrates one way to divide application management, which makes for both ease of development and for better scalability and robustness when managers are distributed. As the multiple control loops are not completely independent, we demonstrate different patterns to deal with the interaction and potential conflict between multiple managers.
14.
  • Al-Shishtawy, Ahmad, 1978-, et al. (author)
  • ElastMan : Autonomic Elasticity Manager for Cloud-Based Key-Value Stores
  • 2012
  • Report (other academic/artistic), abstract:
    • The increasing spread of elastic Cloud services, together with the pay-as-you-go pricing model of Cloud computing, has led to the need for an elasticity controller. The controller automatically resizes an elastic service, in response to changes in workload, in order to meet Service Level Objectives (SLOs) at a reduced cost. However, variable performance of Cloud virtual machines and nonlinearities in Cloud services, such as the diminishing reward of adding a service instance as the scale increases, complicate the controller design. We present the design and evaluation of ElastMan, an elasticity controller for Cloud-based elastic key-value stores. ElastMan combines feedforward and feedback control. Feedforward control is used to respond to spikes in the workload by quickly resizing the service to meet SLOs at a minimal cost. Feedback control is used to correct modeling errors and to handle diurnal workload. To address nonlinearities, our design of ElastMan leverages the near-linear scalability of elastic Cloud services in order to build a scale-independent model of the service. Our design, based on combining feedforward and feedback control, allows us to efficiently handle both diurnal and rapid changes in workload in order to meet SLOs at a minimal cost. Our evaluation shows the feasibility of our approach to automation of Cloud service elasticity.
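A schematic sketch (not ElastMan itself; the capacity model, SLO, and gains are invented) of combining feedforward control, which sizes the service directly from the observed workload, with feedback control, which nudges the size based on the SLO error.

```python
import math

def feedforward_replicas(request_rate, capacity_per_replica=1000.0):
    """Feedforward: size the store directly from the observed workload,
    using an assumed (scale-independent) per-replica capacity model."""
    return max(1, math.ceil(request_rate / capacity_per_replica))

def feedback_adjustment(measured_latency_ms, slo_latency_ms, gain=0.02):
    """Feedback: proportional correction based on how far the measured
    latency is from the SLO, compensating for model errors."""
    return gain * (measured_latency_ms - slo_latency_ms)

def desired_replicas(request_rate, measured_latency_ms, slo_latency_ms=50.0):
    base = feedforward_replicas(request_rate)
    correction = feedback_adjustment(measured_latency_ms, slo_latency_ms)
    return max(1, round(base + correction))

# Hypothetical control step: a workload spike plus latency above the SLO.
print(desired_replicas(request_rate=7500, measured_latency_ms=120))
```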
15.
  • Al-Shishtawy, Ahmad, 1978-, et al. (author)
  • ElastMan : Autonomic elasticity manager for cloud-based key-value stores
  • 2013
  • In: HPDC 2013 - Proceedings of the 22nd ACM International Symposium on High-Performance Parallel and Distributed Computing. - New York, NY, USA : ACM. - 9781450319102 ; , pp. 115-116
  • Conference paper (peer-reviewed), abstract:
    • The increasing spread of elastic Cloud services, together with the pay-as-you-go pricing model of Cloud computing, has led to the need of an elasticity controller. The controller automatically resizes an elastic service in response to changes in workload, in order to meet Service Level Objectives (SLOs) at a reduced cost. However, variable performance of Cloud virtual machines and nonlinearities in Cloud services complicates the controller design. We present the design and evaluation of ElastMan, an elasticity controller for Cloud-based elastic key-value stores. ElastMan combines feedforward and feedback control. Feedforward control is used to respond to spikes in the workload by quickly resizing the service to meet SLOs at a minimal cost. Feedback control is used to correct modeling errors and to handle diurnal workload. We have implemented and evaluated ElastMan using the Voldemort key-value store running in a Cloud environment based on OpenStack. Our evaluation shows the feasibility and effectiveness of our approach to automation of Cloud service elasticity.
16.
  • Al-Shishtawy, Ahmad, et al. (author)
  • ElastMan : Elasticity manager for elastic key-value stores in the cloud
  • 2013
  • In: ACM International Conference Proceeding Series. - New York, NY, USA : ACM. - 9781450321723
  • Conference paper (peer-reviewed), abstract:
    • The increasing spread of elastic Cloud services, together with the pay-as-you-go pricing model of Cloud computing, has led to the need of an elasticity controller. The controller automatically resizes an elastic service in response to changes in workload, in order to meet Service Level Objectives (SLOs) at a reduced cost. However, variable performance of Cloud Virtual Machines and nonlinearities in Cloud services, such as the diminishing reward of adding a service instance with increasing the scale, complicates the controller design. We present the design and evaluation of ElastMan, an elasticity controller for Cloud-based elastic key-value stores. ElastMan combines feedforward and feedback control. Feedforward control is used to respond to spikes in the workload by quickly resizing the service to meet SLOs at a minimal cost. Feedback control is used to correct modeling errors and to handle diurnal workload. To address nonlinearities, our design of ElastMan leverages the near-linear scalability of elastic Cloud services in order to build a scale-independent model of the service. We have implemented and evaluated ElastMan using the Voldemort key-value store running in an OpenStack Cloud environment. Our evaluation shows the feasibility and effectiveness of our approach to automation of Cloud service elasticity.
17.
  • Al-Shishtawy, Ahmad, 1978- (author)
  • Enabling and Achieving Self-Management for Large Scale Distributed Systems : Platform and Design Methodology for Self-Management
  • 2010
  • Licentiate thesis (other academic/artistic), abstract:
    • Autonomic computing is a paradigm that aims at reducing administrative overhead by using autonomic managers to make applications self-managing. To better deal with large-scale dynamic environments, and to improve scalability, robustness, and performance, we advocate for distribution of management functions among several cooperative autonomic managers that coordinate their activities in order to achieve management objectives. Programming autonomic management in turn requires programming environment support and higher level abstractions to become feasible. In this thesis we present an introductory part and a number of papers that summarize our work in the area of autonomic computing. We focus on enabling and achieving self-management for large scale and/or dynamic distributed applications. We start by presenting our platform, called Niche, for programming self-managing component-based distributed applications. Niche supports a network-transparent view of the system architecture, simplifying the design of application self-* code. Niche provides a concise and expressive API for self-* code. The implementation of the framework relies on scalability and robustness of structured overlay networks. We have also developed a distributed file storage service, called YASS, to illustrate and evaluate Niche. After introducing Niche we proceed by presenting a methodology and design space for designing the management part of a distributed self-managing application in a distributed manner. We define design steps that include partitioning of management functions and orchestration of multiple autonomic managers. We illustrate the proposed design methodology by applying it to the design and development of an improved version of our distributed storage service YASS as a case study. We continue by presenting a generic policy-based management framework which has been integrated into Niche. Policies are sets of rules that govern the system behaviors and reflect the business goals or system management objectives. Policy-based management is introduced to simplify management and reduce overhead by setting up policies to govern system behaviors. A prototype of the framework is presented and two generic policy languages (policy engines and corresponding APIs), namely SPL and XACML, are evaluated using our self-managing file storage application YASS as a case study. Finally, we present a generic approach to achieve robust services that is based on finite state machine replication with dynamic reconfiguration of replica sets. We contribute a decentralized algorithm that maintains the set of resources hosting service replicas in the presence of churn. We use this approach to implement robust management elements as robust services that can operate despite churn.
18.
  • Al-Shishtawy, Ahmad, et al. (author)
  • Enabling Self-Management Of Component Based Distributed Applications
  • 2008
  • In: FROM GRIDS TO SERVICE AND PERVASIVE COMPUTING. - Boston, MA : Springer-Verlag New York. - 9780387094557 ; , pp. 163-174
  • Conference paper (peer-reviewed), abstract:
    • Deploying and managing distributed applications in dynamic Grid environments requires a high degree of autonomous management. Programming autonomous management in turn requires programming environment support and higher level abstractions to become feasible. We present a framework for programming self-managing component-based distributed applications. The framework enables the separation of application’s functional and non-functional (self-*) parts. The framework extends the Fractal component model by the component group abstraction and one-to-any and one-to-all bindings between components and groups. The framework supports a network-transparent view of system architecture simplifying designing application self-* code. The framework provides a concise and expressive API for self-* code. The implementation of the framework relies on scalability and robustness of the Niche structured p2p overlay network. We have also developed a distributed file storage service to illustrate and evaluate our framework.
19.
  • Al-Shishtawy, Ahmad, et al. (author)
  • Policy based self-management in distributed environments
  • 2010
  • In: 2010 Fourth IEEE International Conference on Self-Adaptive and Self-Organizing Systems Workshop (SASOW). - : IEEE Computer Society Digital Library. - 9781424486847 ; , pp. 256-260
  • Conference paper (peer-reviewed), abstract:
    • Currently, increasing costs and escalating complexities are primary issues in distributed system management. Policy-based management is introduced to simplify management and reduce overhead by setting up policies to govern system behaviors. Policies are sets of rules that govern the system behaviors and reflect the business goals or system management objectives. This paper presents a generic policy-based management framework which has been integrated into an existing distributed component management system, called Niche, that enables and supports self-management. In this framework, programmers can set up more than one Policy-Manager-Group to avoid centralized policy decision making, which could become a performance bottleneck. Furthermore, the size of a Policy-Manager-Group, i.e. the number of Policy-Managers in the group, depends on their load, i.e. the number of requests per time unit. In order to achieve good load balancing, a policy request is delivered to one of the policy managers in the group, chosen at random on the fly. A prototype of the framework is presented and two generic policy languages (policy engines and corresponding APIs), namely SPL and XACML, are evaluated using a self-managing file storage application as a case study.
20.
  • Al-Shishtawy, Ahmad, et al. (author)
  • Robust Fault-Tolerant Majority-Based Key-Value Store Supporting Multiple Consistency Levels
  • 2011
  • In: 2011 IEEE 17TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS). - 9780769545769 ; , pp. 589-596
  • Conference paper (peer-reviewed), abstract:
    • The wide spread of Web 2.0 applications with rapidly growing amounts of user-generated data, such as wikis, social networks, and media sharing, has posed new challenges on the supporting infrastructure, in particular, on storage systems. In order to meet these challenges, Web 2.0 applications have to trade off between the high availability and the consistency of their data. Another important issue is the privacy of user-generated data, which may be put at risk by the organizations that own and control the datacenters where user data are stored. We propose a large-scale, robust and fault-tolerant key-value object store that is based on a peer-to-peer network owned and controlled by a community of users. To meet the demands of Web 2.0 applications, the store supports an API consisting of different read and write operations with various data consistency guarantees, from which a wide range of web applications can choose operations according to their data consistency, performance and availability requirements. For evaluation, simulation has been carried out to test the system availability, scalability and fault-tolerance in a dynamic, Internet-wide environment.
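A toy sketch (not the proposed store; in-memory objects stand in for peers) of majority-quorum reads and writes, the basic mechanism behind offering operations with different consistency guarantees: with N replicas, requiring W write acks and R read responses such that R + W > N lets reads observe the latest majority-committed write.

```python
class Replica:
    def __init__(self):
        self.version, self.value = 0, None

    def write(self, version, value):
        if version > self.version:
            self.version, self.value = version, value
        return True  # ack

    def read(self):
        return self.version, self.value

N = 5
replicas = [Replica() for _ in range(N)]

def quorum_write(version, value, w=N // 2 + 1):
    # Toy: contact the first w replicas; a real store picks peers over the P2P overlay.
    acks = sum(r.write(version, value) for r in replicas[:w])
    return acks >= w

def quorum_read(r=N // 2 + 1):
    # Toy: contact the first r replicas; the newest (version, value) pair wins.
    responses = [rep.read() for rep in replicas[:r]]
    return max(responses)

quorum_write(1, "profile-v1")
quorum_write(2, "profile-v2")
print(quorum_read())  # (2, 'profile-v2')
```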
21.
  • Apolonia, Nuno, et al. (author)
  • Gossip-based service monitoring platform for wireless edge cloud computing
  • 2017
  • In: Proceedings IEEE 14th International Conference on Networking, Sensing and Control (ICNSC). - : Institute of Electrical and Electronics Engineers (IEEE).
  • Conference paper (peer-reviewed), abstract:
    • Edge cloud computing proposes to support shared services by using the infrastructure at the network's edge. An important problem is the monitoring and management of services across the edge environment. In such an environment, dissemination and gathering of data are not straightforward, differing from the classic cloud infrastructure. In this paper, we consider the environment of community networks for edge cloud computing, in which the monitoring of cloud services is required. We propose a monitoring platform to collect near real-time data about the services offered in the community network using a gossip-enabled network. We analyze and apply this gossip-enabled network to perform service discovery and information sharing, enabling data dissemination among the community. We implemented our solution as a prototype and used it for collecting service monitoring data from a real operational community network cloud, as a feasible deployment of our solution. By means of emulation and simulation, we analyze the behavior of the gossip overlay solution in different scenarios and obtain average results regarding information propagation and consistency needs; for example, in high-latency situations, data convergence occurs within minutes.
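A small sketch (the node states are invented; a real deployment gossips over the network rather than an in-process dictionary) of the gossip step used for monitoring data: each round a node exchanges its view with a random peer and both keep the freshest record per service key.

```python
import random

def merge(view_a, view_b):
    """Keep the freshest (highest-timestamp) monitoring record per service key."""
    merged = dict(view_a)
    for key, (ts, value) in view_b.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged

def gossip_round(views):
    """One push-pull round: every node exchanges and merges views with a random peer."""
    nodes = list(views)
    for node in nodes:
        peer = random.choice([n for n in nodes if n != node])
        merged = merge(views[node], views[peer])
        views[node] = dict(merged)
        views[peer] = dict(merged)

# Hypothetical per-node views: service -> (timestamp, status).
views = {
    "n1": {"web": (10, "up")},
    "n2": {"db": (12, "up")},
    "n3": {"web": (15, "degraded")},
}
for _ in range(3):
    gossip_round(views)
print(views["n1"])
```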
22.
  • Apolonia, Nuno, 1984- (author)
  • On Service Optimization in Community Network Micro-Clouds
  • 2018
  • Doctoral thesis (other academic/artistic), abstract:
    • Internet coverage in the world is still weak, and local communities are required to come together and build their own network infrastructures. People collaborate for the common goal of accessing the Internet and cloud services by building Community networks (CNs). The use of Internet cloud services has grown over the last decade. Community network cloud infrastructures (i.e. micro-clouds) have been introduced to run services inside the network, without the need to consume them from the Internet. CN micro-clouds aim to provide not only improved service performance, but also an entry point for an alternative to Internet cloud services in CNs. However, the adaptation of services to CN micro-clouds has its own challenges, since the use of low-capacity devices and wireless connections without central management is predominant in CNs. Further, the large and irregular topology of the network, high software and hardware diversity, and differing service requirements in CNs make CN micro-clouds a challenging environment in which to run local services and to achieve service performance and quality similar to Internet cloud services. In this thesis, our main objective is the optimization of services (performance, quality) in CN micro-clouds, facilitating entrance for other services and motivating members to make use of CN micro-cloud services as an alternative to Internet services. We present an approach to handling services in CN micro-cloud environments that improves service performance and quality to a level comparable with Internet services, while also giving the community motivation to use CN micro-cloud services. Furthermore, we break the problem into different levels (resource, service and middleware), propose a model that provides improvements for each level, and contribute with information that helps to support the improvements (in terms of service performance and quality) at the other levels. At the resource level, we facilitate the use of community devices by utilizing virtualization techniques that isolate and manage CN micro-cloud services in order to have a multi-purpose environment that fosters services in the CN micro-cloud environment. At the service level, we build a monitoring tool tailored for CN micro-clouds that helps us to analyze service behavior and performance in CN micro-clouds. Subsequently, the information gathered enables adaptation of the services to the environment in order to improve their quality and performance in CN environments. At the middleware level, we build overlay networks as the main communication system according to the social information, in order to improve paths and routes of the nodes and to improve transmission of data across the network by utilizing the relationships already established in the social network or community of practice related to the CNs. Therefore, service performance in CN micro-clouds can become more stable with respect to resource usage, performance and user-perceived quality.
23.
  • Arman, Ala, et al. (author)
  • Elasticity controller for Cloud-based key-value stores
  • 2012
  • In: Parallel and Distributed Systems (ICPADS), 2012 IEEE 18th International Conference on. - : IEEE. - 9780769549033 ; , pp. 268-275
  • Conference paper (peer-reviewed), abstract:
    • Clouds provide an illusion of an infinite amount of resources and enable elastic services and applications that are capable of scaling up and down (growing and shrinking by requesting and releasing resources) in response to changes in their environment, workload, and Quality of Service (QoS) requirements. Elasticity allows achieving the required QoS at a minimal cost in a Cloud environment with its pay-as-you-go pricing model. In this paper, we present our experience in designing a feedback elasticity controller for a key-value store. The goal of our research is to investigate the feasibility of the control-theoretic approach to the automation of elasticity of Cloud-based key-value stores. We describe design steps necessary to build a feedback controller for a real system, namely Voldemort, which we use as a case study in this work. The design steps include defining touchpoints (sensors and actuators), system identification, and controller design. We have designed, developed, and implemented a prototype of the feedback elasticity controller for Voldemort. Our initial evaluation results show the feasibility of using feedback control to automate elasticity of distributed key-value stores.
24.
  • Arsalan, Muhammad, et al. (author)
  • Energy-Efficient Privacy-Preserving Time-Series Forecasting on User Health Data Streams
  • 2022
  • In: Proceedings - 2022 IEEE 21st International Conference on Trust, Security and Privacy in Computing and Communications, TrustCom 2022. - : Institute of Electrical and Electronics Engineers (IEEE). ; , pp. 541-546
  • Conference paper (peer-reviewed), abstract:
    • Health monitoring devices are gaining popularity both as wellness tools and as a source of information for healthcare decisions. In this work, we use Spiking Neural Networks (SNNs) for time-series forecasting due to their proven energy-saving capabilities. Thanks to their design that closely mimics the natural nervous system, SNNs are energy-efficient in contrast to classic Artificial Neural Networks (ANNs). We design and implement an energy-efficient privacy-preserving forecasting system on real-world health data streams using SNNs and compare it to a state-of-the-art system with a Long Short-Term Memory (LSTM) based prediction model. Our evaluation shows that SNNs trade off accuracy (2.2x greater error) for a smaller model (19% fewer parameters and 77% less memory consumption) and 43% less training time. Our model is estimated to consume 3.36 μJ of energy, which is significantly less than traditional ANNs. Finally, we apply ε-differential privacy for enhanced privacy guarantees on our federated learning-based models. With differential privacy of ε = 0.1, our experiments report an increase in the measured average error (RMSE) of only 25%.
25.
  • Asratyan, Albert, et al. (author)
  • A Parallel Chain Mail Approach for Scalable Spatial Data Interpolation
  • 2021
  • In: 2021 IEEE International Conference on Big Data (Big Data). - : Institute of Electrical and Electronics Engineers (IEEE). ; , pp. 306-314
  • Conference paper (peer-reviewed), abstract:
    • Deteriorating air quality is a growing concern that has been linked to many health-related issues. Its monitoring is a good first step to understanding the problem. However, it is not always possible to collect air quality data from every location. Various data interpolation techniques are used to assist with populating sparse maps with more context, but many of these algorithms are computationally expensive. This work introduces a three-step Chain Mail algorithm that uses kriging (without any modifications to the base algorithm) and achieves up to ×100 execution time improvement with minimal accuracy loss (relative RMSE of 3%) by running concurrent interpolation executions. This approach can be described as a multiple-step parallel interpolation algorithm that includes specific regional border data manipulation for achieving greater accuracy. It does so by interpolating geographically defined data chunks in parallel and sharing the results with their neighboring nodes to provide context and compensate for lack of knowledge of the surrounding areas. Combined with a serverless cloud architecture, this approach opens doors to interpolating large data sets in a matter of minutes while remaining cost-efficient. The effectiveness of the three-step Chain Mail approach depends on the equal point distribution among all nodes and the resolution of the parallel configuration. In general, it offers a good balance between execution speed and accuracy.
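A simplified sketch (ordinary inverse-distance weighting stands in for kriging, and the tiling is one-dimensional; names and numbers are placeholders) of the idea above: interpolate regional chunks in parallel while lending each chunk its neighbours' border points for context.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def idw(points, query_x, power=2.0):
    """Inverse-distance weighting as a stand-in for kriging on one tile."""
    xs = np.array([p[0] for p in points])
    vs = np.array([p[1] for p in points])
    w = 1.0 / np.maximum(np.abs(xs - query_x), 1e-9) ** power
    return float(np.sum(w * vs) / np.sum(w))

def interpolate_tile(args):
    tile_points, border_points, queries = args
    context = tile_points + border_points  # neighbours' border data compensate at the edges
    return [idw(context, q) for q in queries]

if __name__ == "__main__":
    # Hypothetical 1-D sensor readings split into two geographic tiles.
    tile_a = [(0.0, 10.0), (1.0, 12.0), (2.0, 14.0)]
    tile_b = [(3.0, 20.0), (4.0, 22.0), (5.0, 24.0)]
    jobs = [
        (tile_a, tile_b[:1], [0.5, 1.5, 2.5]),   # tile A borrows B's nearest border point
        (tile_b, tile_a[-1:], [2.5, 3.5, 4.5]),  # tile B borrows A's nearest border point
    ]
    with ProcessPoolExecutor(max_workers=2) as pool:
        print(list(pool.map(interpolate_tile, jobs)))
```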
  •  
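Below is a simplified sketch of the tile-based idea behind the parallel interpolation described in the entry above: geographically defined chunks are interpolated in parallel, with each tile receiving border points from its neighbours for context. Inverse-distance weighting stands in for kriging here, and the function and argument names are illustrative assumptions, not the paper's implementation.

```python
# Parallel, tile-based interpolation sketch (IDW used instead of kriging for brevity).
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def idw(known_xy, known_z, query_xy, power=2.0):
    """Inverse-distance-weighted estimate at each query point."""
    d = np.linalg.norm(query_xy[:, None, :] - known_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w @ known_z) / w.sum(axis=1)

def interpolate_tile(args):
    """Interpolate one tile, augmented with border points shared by neighbours."""
    tile_pts, tile_vals, halo_pts, halo_vals, grid = args
    pts = np.vstack([tile_pts, halo_pts])           # neighbours' border data
    vals = np.concatenate([tile_vals, halo_vals])   # provide surrounding context
    return idw(pts, vals, grid)

def interpolate_parallel(tiles):
    """tiles: list of (tile_pts, tile_vals, halo_pts, halo_vals, grid) tuples."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(interpolate_tile, tiles))
```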
26.
  • Attieh, Joseph, 1998-, et al. (författare)
  • Optimizing the Performance of Text Classification Models by Improving the Isotropy of the Embeddings Using a Joint Loss Function
  • 2023
  • Ingår i: Document Analysis and Recognition. - Cham : Springer Nature. ; , s. 121-136
  • Konferensbidrag (refereegranskat)abstract
    • Recent studies show that the spatial distribution of the sentence representations generated by pre-trained language models is highly anisotropic. This results in a degradation of the performance of the models on the downstream task. Most methods improve the isotropy of the sentence embeddings by refining the corresponding contextual word representations and then deriving the sentence embeddings from these refined representations. In this study, we propose to improve the quality of the sentence embeddings extracted from the [CLS] token of the pre-trained language models by improving the isotropy of the embeddings. We add one feed-forward layer between the model and the downstream task layers, and we train it using a novel joint loss function. The proposed approach results in embeddings with better isotropy that generalize better on the downstream task. Experimental results on three GLUE datasets with classification as the downstream task show that our proposed method is on par with the state of the art, as it achieves performance gains of around 2–3% on the downstream tasks compared to the baseline. (A hedged sketch of such a joint loss follows this entry.)
  •  
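The sketch below illustrates the general structure described in the entry above: one feed-forward layer inserted between the [CLS] embedding and the classifier, trained with a joint loss that combines task cross-entropy with an isotropy-encouraging term. The covariance-based isotropy penalty and all names here are illustrative assumptions, not the loss function proposed in the paper.

```python
# Illustrative feed-forward head with a joint (task + isotropy) loss; not the paper's loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IsotropicHead(nn.Module):
    def __init__(self, hidden: int, num_classes: int):
        super().__init__()
        self.ff = nn.Linear(hidden, hidden)          # extra feed-forward layer
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, cls_emb):                      # cls_emb: (batch, hidden)
        z = torch.tanh(self.ff(cls_emb))
        return z, self.classifier(z)

def joint_loss(z, logits, labels, lam: float = 0.1):
    """Cross-entropy plus a penalty on off-diagonal covariance of the embeddings."""
    ce = F.cross_entropy(logits, labels)
    zc = z - z.mean(dim=0, keepdim=True)
    cov = (zc.T @ zc) / max(z.size(0) - 1, 1)
    off_diag = cov - torch.diag(torch.diagonal(cov))
    iso = (off_diag ** 2).sum() / z.size(1)          # encourage decorrelated dimensions
    return ce + lam * iso
```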
27.
  • Awan, Ahsan Javed, 1988-, et al. (författare)
  • Architectural Impact on Performance of In-memory Data Analytics: Apache Spark Case Study
  • Annan publikation (övrigt vetenskapligt/konstnärligt)abstract
    • While cluster computing frameworks are continuously evolving to provide real-time data analysis capabilities, Apache Spark has managed to be at the forefront of big data analytics for being a unified framework for both batch and stream data processing. However, recent studies on the micro-architectural characterization of in-memory data analytics are limited to only batch processing workloads. We compare the micro-architectural performance of batch processing and stream processing workloads in Apache Spark using hardware performance counters on a dual socket server. In our evaluation experiments, we have found that batch processing and stream processing workloads have similar micro-architectural characteristics and are bounded by the latency of frequent data accesses to DRAM. For data accesses, we have found that simultaneous multi-threading is effective in hiding the data latencies. We have also observed that (i) data locality on NUMA nodes can improve the performance by 10% on average, (ii) disabling next-line L1-D prefetchers can reduce the execution time by up to 14%, and (iii) multiple small executors can provide up to 36% speedup over a single large executor.
  •  
28.
  • Awan, Ahsan Javed, 1988-, et al. (författare)
  • How Data Volume Affects Spark Based Data Analytics on a Scale-up Server
  • 2015
  • Ingår i: Big Data Benchmarks, Performance Optimization, and Emerging Hardware. - Cham : Springer. - 9783319290058 ; , s. 81-92
  • Konferensbidrag (refereegranskat)abstract
    • The sheer increase in the volume of data over the last decade has triggered research in cluster computing frameworks that enable web enterprises to extract big insights from big data. While Apache Spark is gaining popularity for exhibiting superior scale-out performance on commodity machines, the impact of data volume on the performance of Spark-based data analytics in a scale-up configuration is not well understood. We present a deep-dive analysis of Spark-based applications on a large scale-up server machine. Our analysis reveals that Spark-based data analytics are DRAM bound and do not benefit from using more than 12 cores for an executor. By enlarging the input data size, application performance degrades significantly due to a substantial increase in wait time during I/O operations and garbage collection, despite a 10% better instruction retirement rate (due to lower L1 cache misses and higher core utilization). We match memory behaviour with the garbage collector to improve the performance of applications by between 1.6x and 3x.
  •  
29.
  • Awan, Ahsan Javed, 1988-, et al. (författare)
  • Micro-architectural Characterization of Apache Spark on Batch and Stream Processing Workloads
  • 2016
  • Konferensbidrag (refereegranskat)abstract
    • While cluster computing frameworks are continuously evolving to provide real-time data analysis capabilities, Apache Spark has managed to be at the forefront of big data analytics for being a unified framework for both batch and stream data processing. However, recent studies on the micro-architectural characterization of in-memory data analytics are limited to only batch processing workloads. We compare the micro-architectural performance of batch processing and stream processing workloads in Apache Spark using hardware performance counters on a dual socket server. In our evaluation experiments, we have found that batch processing and stream processing have the same micro-architectural behavior in Spark if the difference between the two implementations is micro-batching only. If the input data rates are small, stream processing workloads are front-end bound. However, the front-end bound stalls are reduced at larger input data rates and instruction retirement is improved. Moreover, Spark workloads using DataFrames have improved instruction retirement over workloads using RDDs.
  •  
30.
  • Awan, Ahsan Javed, 1988-, et al. (författare)
  • Node architecture implications for in-memory data analytics on scale-in clusters
  • 2016
  • Konferensbidrag (refereegranskat)abstract
    • While cluster computing frameworks are continuously evolving to provide real-time data analysis capabilities, Apache Spark has managed to be at the forefront of big data analytics. Recent studies propose scale-in clusters with in-storage processing devices to process big data analytics with Spark. However, the proposal is based solely on the memory bandwidth characterization of in-memory data analytics and does not shed light on the specification of the host CPU and memory. Through empirical evaluation of in-memory data analytics with Apache Spark on an Ivy Bridge dual socket server, we have found that (i) simultaneous multi-threading is effective up to 6 cores, (ii) data locality on NUMA nodes can improve the performance by 10% on average, (iii) disabling next-line L1-D prefetchers can reduce the execution time by up to 14%, (iv) DDR3 operating at 1333 MT/s is sufficient, and (v) multiple small executors can provide up to 36% speedup over a single large executor.
  •  
31.
  • Awan, Ahsan Javed, 1988- (författare)
  • Performance Characterization and Optimization of In-Memory Data Analytics on a Scale-up Server
  • 2017
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • The sheer increase in the volume of data over the last decade has triggered research in cluster computing frameworks that enable web enterprises to extract big insights from big data. While Apache Spark defines the state of the art in big data analytics platforms for (i) exploiting data-flow and in-memory computing and (ii) exhibiting superior scale-out performance on commodity machines, little effort has been devoted to understanding the performance of in-memory data analytics with Spark on modern scale-up servers. This thesis characterizes the performance of in-memory data analytics with Spark on scale-up servers. Through empirical evaluation of representative benchmark workloads on a dual socket server, we have found that in-memory data analytics with Spark exhibit poor multi-core scalability beyond 12 cores due to thread-level load imbalance and work-time inflation (the additional CPU time spent by threads in a multi-threaded computation beyond the CPU time required to perform the same work in a sequential computation). We have also found that workloads are bound by the latency of frequent data accesses to the memory. By enlarging the input data size, application performance degrades significantly due to the substantial increase in wait time during I/O operations and garbage collection, despite a 10% better instruction retirement rate (due to lower L1 cache misses and higher core utilization). For data accesses, we have found that simultaneous multi-threading is effective in hiding the data latencies. We have also observed that (i) data locality on NUMA nodes can improve the performance by 10% on average, and (ii) disabling next-line L1-D prefetchers can reduce the execution time by up to 14%. For garbage collection impact, we match memory behavior with the garbage collector to improve the performance of applications by between 1.6x and 3x, and recommend using multiple small Spark executors, which can provide up to a 36% reduction in execution time over a single large executor. Based on the characteristics of the workloads, the thesis envisions near-memory and near-storage hardware acceleration to improve the single-node performance of scale-out frameworks like Apache Spark. Using modeling techniques, it estimates a speed-up of 4x for Apache Spark on scale-up servers augmented with near-data accelerators.
  •  
32.
  • Awan, Ahsan Javed, 1988- (författare)
  • Performance Characterization of In-Memory Data Analytics on a Scale-up Server
  • 2016
  • Licentiatavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • The sheer increase in the volume of data over the last decade has triggered research in cluster computing frameworks that enable web enterprises to extract big insights from big data. While Apache Spark defines the state of the art in big data analytics platforms for (i) exploiting data-flow and in-memory computing and (ii) exhibiting superior scale-out performance on commodity machines, little effort has been devoted to understanding the performance of in-memory data analytics with Spark on modern scale-up servers. This thesis characterizes the performance of in-memory data analytics with Spark on scale-up servers. Through empirical evaluation of representative benchmark workloads on a dual socket server, we have found that in-memory data analytics with Spark exhibit poor multi-core scalability beyond 12 cores due to thread-level load imbalance and work-time inflation. We have also found that workloads are bound by the latency of frequent data accesses to DRAM. By enlarging the input data size, application performance degrades significantly due to a substantial increase in wait time during I/O operations and garbage collection, despite a 10% better instruction retirement rate (due to lower L1 cache misses and higher core utilization). For data accesses, we have found that simultaneous multi-threading is effective in hiding the data latencies. We have also observed that (i) data locality on NUMA nodes can improve the performance by 10% on average, and (ii) disabling next-line L1-D prefetchers can reduce the execution time by up to 14%. For GC impact, we match memory behaviour with the garbage collector to improve the performance of applications by between 1.6x and 3x, and recommend using multiple small executors, which can provide up to a 36% speedup over a single large executor.
  •  
33.
  • Baig, Roger, et al. (författare)
  • Cloud-based community services in community networks
  • 2016
  • Ingår i: 2016 International Conference on Computing, Networking and Communications, ICNC 2016. - : IEEE conference proceedings. - 9781467385794 ; , s. 1-5
  • Konferensbidrag (refereegranskat)abstract
    • Wireless networks have been shown to be a cost-effective solution for an IP-based communication infrastructure in under-served areas. Services and applications, if deployed within these wireless networks, add value for the users. This paper shows how cloud infrastructures have been made operational in a community wireless network, as a particular case of a community cloud, developed according to the specific requirements and conditions of the community. We describe the conditions and requirements of such a community cloud and explain our technical choices and experience in its deployment in the community network. The user take-up has started, and our case supports the tendency of cloud computing moving towards the network edge.
  •  
34.
  • Baig, Roger, et al. (författare)
  • Community clouds at the edge deployed in Guifi.net
  • 2015
  • Konferensbidrag (refereegranskat)abstract
    • Community clouds are a cloud deployment model in which the cloud infrastructure is built with specific features for a community of users with shared concerns, goals, and interests. Commercial community clouds already operate in several application areas, such as finance, government and health, fulfilling community-specific requirements. In this demo, a community cloud for citizens is presented. It is formed by devices at the edge of the network, contributed by the members of a community network and brought together into a distributed community cloud system through the Cloudy distribution. The demonstration shows the audience, through live access, the deployed community cloud from the perspective of the user: accessing a Cloudy node, inspecting the services available in the community cloud, and showing the usage of some of its services.
  •  
35.
  • Baig, Roger, et al. (författare)
  • Community network clouds as a case for the IEEE Intercloud standardization
  • 2015
  • Ingår i: 2015 IEEE Conference on Standards for Communications and Networking, CSCN 2015. - 9781479989287 ; , s. 269-274
  • Konferensbidrag (refereegranskat)abstract
    • The IEEE P2302 Intercloud WG has, since 2011, conducted work on the project Standard for Intercloud Interoperability and Federation, with the goal of defining a standard architecture and building components for large-scale interoperability of independent cloud providers. While the standardization process has achieved fine-grained definitions of several Intercloud components, a deployment of the Intercloud that demonstrates the architectural feasibility is not yet operational. In this paper, we describe a deployed community network cloud and show how it matches the vision of the Intercloud in several aspects. Similar to the Intercloud, the community network cloud consists of many small cloud providers, which use a set of common services for interoperability. In this sense, the community network cloud is a real use case for elements that the Intercloud standardization WG envisions, and it can feed back to and even become part of the Intercloud. In fact, a study on commercial services provided by Small or Medium Enterprises (SMEs) in the community network cloud indicates the importance of the success of the Intercloud standardization initiative for SMEs.
  •  
36.
  • Baig, Roger, et al. (författare)
  • Deploying Clouds in the Guifi Community Network
  • 2015
  • Ingår i: Proceedings of the 2015 IFIP/IEEE International Symposium on Integrated Network Management, IM 2015. - : IEEE. ; , s. 1020-1025
  • Konferensbidrag (refereegranskat)abstract
    • This paper describes an operational, geographically distributed and heterogeneous cloud infrastructure with services and applications deployed in the Guifi community network. The presented cloud is a particular case of a community cloud, developed according to the specific needs and conditions of community networks. We describe the concept of this community cloud, explain our technical choices for building it, and report our experience with the deployment of this cloud. We review our solutions and experience in offering the different service models of cloud computing (IaaS, PaaS and SaaS) in community networks. The deployed cloud infrastructure aims to provide stable and attractive cloud services in order to encourage community network users to use, keep and extend it with new services and applications.
  •  
37.
  • Baig, Roger, et al. (författare)
  • Experiences in Building Micro-Cloud Provider Federation in the Guifi Community Network
  • 2015
  • Ingår i: 2015 IEEE/ACM 8TH INTERNATIONAL CONFERENCE ON UTILITY AND CLOUD COMPUTING (UCC). - 9780769556970 ; , s. 516-521
  • Konferensbidrag (refereegranskat)abstract
    • Cloud federation is foreseen to happen among large cloud providers. The resulting interoperability of cloud services among these providers will then further increase the elasticity of cloud services. The cloud provisioning targeted by this scenario is mainly one that combines the cloud services offered by large enterprises. Cloud computing, however, has started moving to the edge. We increasingly see the tendency to fulfil cloud computing requirements with multiple levels and different kinds of infrastructures, where the Fog Computing paradigm has started to play its role. For this edge computing scenario, we show in this paper the case of the federation of multiple independent micro-cloud providers within a community network, where providers pool their resources and services into a community cloud. Federation happens here primarily at the service level, and the domain of trust is the community of practice. While we can already report this case today in the context of community networks, IPv6 deployment in the Internet will in principle allow micro-cloud providers to appear everywhere, requiring cloud federation mechanisms. We describe for a real case how this micro-cloud provider federation has been built and argue why micro-cloud providers should be considered for integration in cloud federations.
  •  
38.
  • Baig, Roger, et al. (författare)
  • The Cloudy Distribution in Community Network Clouds in Guifi.net
  • 2015
  • Konferensbidrag (refereegranskat)abstract
    • This demo paper presents Cloudy, a Debian-based distribution to build and deploy clouds in community networks. The demonstration covers the following aspects: installation of Cloudy, the Cloudy GUI for usage and administration by end users, and demonstration of Cloudy nodes and services deployed in the Guifi community network.
  •  
39.
  • Brand, Per, et al. (författare)
  • The Role of Overlay Services In a Self-Managing Framework for Dynamic Virtual Organizations
  • 2008
  • Ingår i: Making Grids Work. - Boston, MA : Springer-Verlag New York. - 9780387784489 ; , s. 153-164
  • Konferensbidrag (refereegranskat)abstract
    • We combine and extend recent results in autonomic computing and structured peer-to-peer systems to build an infrastructure for constructing and managing dynamic virtual organizations. The paper focuses on the middle layer of the proposed infrastructure, in between the Niche overlay system on the bottom and an architecture-based management system based on Jade on the top. The middle layer, the overlay services, is responsible for all sensing and actuation carried out by the VO management. We describe in detail the API of the resource and component overlay services, both on the management node and on the nodes hosting resources. We present a simple use case demonstrating resource discovery, initial deployment, self-configuration as a result of resource availability change, self-healing, self-tuning and self-protection. The advantages of the design are that 1) the overlay services are themselves self-managing, and the sensor/actuation services they provide are robust, 2) management can be dealt with declaratively and at a high level, and 3) the overlay services provide good scalability in dynamic VOs.
  •  
40.
  • Carbone, Paris, et al. (författare)
  • Auto-Scoring of Personalised News in the Real-Time Web : Challenges, Overview and Evaluation of the State-of-the-Art Solutions
  • 2015
  • Konferensbidrag (refereegranskat)abstract
    • The problem of automated personalised news recommendation, often referred to as auto-scoring, has attracted substantial research throughout the last decade in multiple domains such as data mining and machine learning, computer systems, e-commerce and sociology. A typical "recommender systems" approach to solving this problem usually adopts content-based scoring, collaborative filtering or, more often, a hybrid approach. Due to their special nature, news articles introduce further challenges and constraints to conventional item recommendation problems, characterised by short lifetimes and rapid popularity trends. In this survey, we provide an overview of the challenges and current solutions in news personalisation and ranking from both an algorithmic and a system design perspective, and present our evaluation of the most representative scoring algorithms while also exploring the benefits of using a hybrid approach. Our evaluation is based on a real-life case study in news recommendations.
  •  
41.
  • Chen, J., et al. (författare)
  • Message from BDCloud Chairs
  • 2015
  • Ingår i: Proceedings - 4th IEEE International Conference on Big Data and Cloud Computing, BDCloud 2014 with the 7th IEEE International Conference on Social Computing and Networking, SocialCom 2014 and the 4th International Conference on Sustainable Computing and Communications, SustainCom 2014. ; , s. xv-xvi
  • Konferensbidrag (refereegranskat)
  •  
42.
  • Chikafa, Gibson, 1993-, et al. (författare)
  • Cloud-native RStudio on Kubernetes for Hopsworks
  • 2023
  • Annan publikation (övrigt vetenskapligt/konstnärligt)abstract
    • In order to fully benefit from cloud computing, services are designed following the "multi-tenant" architectural model, which aims to maximize resource sharing among users. However, multi-tenancy introduces challenges of security, performance isolation, scaling, and customization. RStudio Server is an open-source Integrated Development Environment (IDE) for the R programming language, accessible through a web browser. We present the design and implementation of a multi-user distributed system on Hopsworks, a data-intensive AI platform, following the multi-tenant model that provides RStudio as Software as a Service (SaaS). We use the most popular cloud-native technologies, Docker and Kubernetes, to solve the problems of performance isolation, security, and scaling that are present in a multi-tenant environment. We further enable secure data sharing in RStudio Server instances to provide data privacy and allow collaboration among RStudio users. We integrate our system with Apache Spark, which can scale and handle Big Data processing workloads. We also provide a UI where users can supply custom configurations and have full control of their own RStudio Server instances. Our system was tested on a Google Cloud Platform cluster with four worker nodes, each with 30 GB of RAM. The tests on this cluster showed that 44 RStudio servers, each with 2 GB of RAM, can run concurrently. Our system can scale out to potentially support hundreds of concurrently running RStudio servers by adding more resources (CPUs and RAM) to the cluster.
  •  
43.
  • Danniswara, Ken, et al. (författare)
  • Stream Processing in Community Network Clouds
  • 2015
  • Ingår i: Future Internet of Things and Cloud (FiCloud), 2015 3rd International Conference on. - : IEEE conference proceedings. ; , s. 800-805
  • Konferensbidrag (refereegranskat)abstract
    • Community Network Cloud is an emerging distributed cloud infrastructure that is built on top of a community network. The infrastructure consists of a number of geographically distributed compute and storage resources, contributed by community members, that are linked together through the community network. Stream processing is an important enabling technology that, if provided in a Community Network Cloud, would enable a new class of applications, such as social analysis, anomaly detection, and smart home power management. However, modern stream processing engines are designed to be used inside a data center, where servers communicate over a fast and reliable network. In this work, we evaluate the Apache Storm stream processing framework in an emulated Community Network Cloud in order to identify the challenges and bottlenecks that exist in the current implementation. The community network emulation was performed using data collected from the Guifi.net community network, Spain. Our evaluation results show that, with proper configuration of the heartbeats, it is possible to run Apache Storm in a Community Network Cloud. The performance is sensitive to the placement of the Storm components in the network. The deployment of management components on well-connected nodes improves the Storm topology scheduling time, fault tolerance, and recovery time. Our evaluation also indicates that the Storm scheduler and the stream groupings need to be aware of the network topology and location of stream sources in order to optimally place Storm spouts and bolts to improve performance.
  •  
44.
  • de la Rua Martinez, Javier, et al. (författare)
  • The Hopsworks Feature Store for Machine Learning
  • 2024
  • Ingår i: SIGMOD-Companion 2024 - Companion of the 2024 International Conference on Management of Data. - : Association for Computing Machinery (ACM). ; , s. 135-147
  • Konferensbidrag (refereegranskat)abstract
    • Data management is the most challenging aspect of building Machine Learning (ML) systems. ML systems can read large volumes of historical data when training models, but inference workloads are more varied, depending on whether it is a batch or online ML system. The feature store for ML has recently emerged as a single data platform for managing ML data throughout the ML lifecycle, from feature engineering to model training to inference. In this paper, we present the Hopsworks feature store for machine learning as a highly available platform for managing feature data with API support for columnar, row-oriented, and similarity search query workloads. We introduce and address challenges solved by the feature stores related to feature reuse, how to organize data transformations, and how to ensure correct and consistent data between feature engineering, model training, and model inference. We present the engineering challenges in building high-performance query services for a feature store and show how Hopsworks outperforms existing cloud feature stores for training and online inference query workloads.
  •  
45.
  • de Palma, Noel, et al. (författare)
  • Tools for Architecture Based Autonomic Systems
  • 2009
  • Ingår i: ICAS. - : IEEE Communications Society. - 9781424436842 ; , s. 313-320
  • Konferensbidrag (refereegranskat)abstract
    • Recent years have seen a growing interest in autonomic computing, an approach to providing systems with self-managing properties. Autonomic computing aims to address the increasing complexity of the administration of large systems. The contribution of this paper is a generic tool that eases the development of autonomic managers. Using this tool, an administrator provides a set of alternative architectures and specifies conditions that are used by autonomic managers to update architectures at runtime. Software changes are computed as architectural differences in terms of component model artifacts (components, attributes, bindings, etc.). These differences are then used to migrate to the next architecture by reconfiguring only the required part of the running system.
  •  
46.
  • De Palma, Noel, et al. (författare)
  • Tools for Autonomic Computing
  • 2009. - 10
  • Ingår i: 5th International Conference on Autonomic and Autonomous Systems (ICAS 2009). - : IEEE Computer Society. ; , s. 313-320
  • Konferensbidrag (refereegranskat)
  •  
47.
  • Dhariwal, Sumeet, 1989-, et al. (författare)
  • Clothing Classification using Unsupervised Pre-Training
  • 2020
  • Ingår i: 2020 Fourth International Conference on Multimedia Computing, Networking and Applications (MCNA). - : Institute of Electrical and Electronics Engineers (IEEE). ; , s. 82-89
  • Konferensbidrag (refereegranskat)abstract
    • Deep Learning has changed the way computer vision tasks are solved in recent times. Deep Learning based approaches have achieved outstanding results in computer vision tasks including image classification, segmentation, and object detection. Most of this success has been achieved by training deep neural networks on labelled data. In general, the more labelled data is fed to a deep learning model, the more accurate the model will be. However, labelling is time-consuming and sometimes even impossible. Fashion and e-commerce are domains where a large amount of unlabelled data is available, and there is a strong need to leverage these data without labels. The aim of this paper is to explore and evaluate the possibility and effectiveness of using massive amounts of unlabelled data to build deep learning models. We compare the performance of these models with the performance of models built with labelled data. Specifically, we compare fully supervised deep learning with two deep learning methods that use unsupervised pre-training: one based on clustering of features (DeepCluster) and one using rotation prediction as a self-supervision task. The comparison is performed on the DeepFashion dataset. Our experimental results have shown that using unsupervised pre-training can attain comparable classification accuracy (~1-4% difference) on image classification compared to fully supervised models. Furthermore, we have shown that our models use five times less labelled data during the fine-tuning phase and still achieve comparable accuracy (~3-4% difference) compared to fully supervised models. These results demonstrate the potential of unsupervised pre-training approaches for achieving results comparable to fully supervised models. (A minimal sketch of the rotation pretext task follows this entry.)
  •  
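As a hedged illustration of the rotation self-supervision task mentioned in the entry above, the sketch below pre-trains a backbone to predict which of four rotations was applied to unlabelled images; the backbone, optimizer and hyperparameters are illustrative choices, not the paper's setup.

```python
# Rotation-prediction pretext task for unsupervised pre-training (illustrative setup).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

def rotate_batch(images):
    """Return each image under 4 rotations plus the rotation label (0..3)."""
    rots = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    x = torch.cat(rots, dim=0)
    y = torch.arange(4).repeat_interleave(images.size(0))
    return x, y

backbone = resnet18(num_classes=4)                 # 4-way rotation classifier
opt = torch.optim.SGD(backbone.parameters(), lr=0.05, momentum=0.9)

def pretrain_step(unlabelled_images):
    """One self-supervised step on a batch of unlabelled images (N, C, H, W)."""
    x, y = rotate_batch(unlabelled_images)
    loss = F.cross_entropy(backbone(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

After pre-training, the backbone would typically be fine-tuned on the smaller labelled clothing dataset by replacing the 4-way rotation head with the task classifier.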
48.
  • Doroshenko, Anatoly, et al. (författare)
  • Coordination models and facilities could be parallel software accelerators
  • 1999
  • Ingår i: HIGH-PERFORMANCE COMPUTING AND NETWORKING, PROCEEDINGS. - Berlin : Springer Berlin/Heidelberg. - 9783540658214 - 3540658211 ; , s. 1219-1222
  • Konferensbidrag (refereegranskat)abstract
    • A new coordination model is constructed for distributed shared memory parallel programs. It exploits typing of shared resources and formal specification of a priori known synchronization constraints.
  •  
49.
  • Fedeli, Stefano, et al. (författare)
  • Privacy Preserving Survival Prediction
  • 2021
  • Ingår i: 2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA). - : Institute of Electrical and Electronics Engineers (IEEE). ; , s. 4600-4608
  • Konferensbidrag (refereegranskat)abstract
    • Predictive modeling has the potential to improve the risk stratification of cancer patients and thereby contribute to optimized treatment strategies and better outcomes for patients in clinical practice. To develop robust predictive models for decision-making in healthcare, sensitive patient-level data is often required when training the models. Consequently, data privacy is an important aspect to consider when building these predictive models and in the subsequent communication of the results. In this study we have used Graph Neural Networks for survival prediction and compared their accuracy to state-of-the-art prediction models after applying Differential Privacy and k-Anonymity, i.e. two privacy-preservation solutions. Using two different data sources, we demonstrated that Graph Neural Networks and Survival Forests are the two best-performing survival prediction methods when used in combination with privacy-preservation solutions. Furthermore, when the predictive model was built using clinical expertise in the specific area of interest, the prediction accuracy of the proposed knowledge-based graph model drops by at most 10% when used with privacy-preservation solutions. Our proposed knowledge-based graph is therefore more suitable to be used in combination with privacy-preservation solutions than other graph models. (A minimal k-anonymity check sketch follows this entry.)
  •  
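As a minimal illustration of the k-anonymity side mentioned in the entry above, the sketch below checks whether every combination of quasi-identifier values occurs at least k times before data is released for model training. The column names are hypothetical, and this generic check is not the anonymisation pipeline used in the study.

```python
# Generic k-anonymity check over quasi-identifiers (illustrative only).
import pandas as pd

def is_k_anonymous(df: pd.DataFrame, quasi_identifiers: list, k: int = 5) -> bool:
    """True if every combination of quasi-identifier values occurs in at least
    k records, i.e. no patient is distinguishable within a group smaller than k."""
    group_sizes = df.groupby(quasi_identifiers).size()
    return bool((group_sizes >= k).all())

# Example usage with hypothetical clinical columns:
# ok = is_k_anonymous(patients, ["age_band", "sex", "tumour_stage"], k=10)
```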
50.
  • Feigin, Valery L, et al. (författare)
  • Global, Regional, and Country-Specific Lifetime Risks of Stroke, 1990 and 2016.
  • 2018
  • Ingår i: The New England journal of medicine. - 1533-4406 .- 0028-4793. ; 379:25, s. 2429-2437
  • Tidskriftsartikel (refereegranskat)abstract
    • The lifetime risk of stroke has been calculated in a limited number of selected populations. We sought to estimate the lifetime risk of stroke at the regional, country, and global level using data from a comprehensive study of the prevalence of major diseases. We used the Global Burden of Disease (GBD) Study 2016 estimates of stroke incidence and the competing risks of death from any cause other than stroke to calculate the cumulative lifetime risks of first stroke, ischemic stroke, or hemorrhagic stroke among adults 25 years of age or older. Estimates of the lifetime risks in the years 1990 and 2016 were compared. Countries were categorized into quintiles of the sociodemographic index (SDI) used in the GBD Study, and the risks were compared across quintiles. Comparisons were made with the use of point estimates and uncertainty intervals representing the 2.5th and 97.5th percentiles around the estimate. The estimated global lifetime risk of stroke from the age of 25 years onward was 24.9% (95% uncertainty interval, 23.5 to 26.2); the risk among men was 24.7% (95% uncertainty interval, 23.3 to 26.0), and the risk among women was 25.1% (95% uncertainty interval, 23.7 to 26.5). The risk of ischemic stroke was 18.3%, and the risk of hemorrhagic stroke was 8.2%. In high-SDI, high-middle-SDI, and low-SDI countries, the estimated lifetime risk of stroke was 23.5%, 31.1% (highest risk), and 13.2% (lowest risk), respectively; the 95% uncertainty intervals did not overlap between these categories. The highest estimated lifetime risks of stroke according to GBD region were in East Asia (38.8%), Central Europe (31.7%), and Eastern Europe (31.6%), and the lowest risk was in eastern sub-Saharan Africa (11.8%). The mean global lifetime risk of stroke increased from 22.8% in 1990 to 24.9% in 2016, a relative increase of 8.9% (95% uncertainty interval, 6.2 to 11.5); the competing risk of death from any cause other than stroke was considered in this calculation. In 2016, the global lifetime risk of stroke from the age of 25 years onward was approximately 25% among both men and women. There was geographic variation in the lifetime risk of stroke, with the highest risks in East Asia, Central Europe, and Eastern Europe. (Funded by the Bill and Melinda Gates Foundation.)
  •  
Typ av publikation
konferensbidrag (127)
tidskriftsartikel (21)
doktorsavhandling (14)
annan publikation (6)
licentiatavhandling (6)
rapport (2)
bokkapitel (2)
Typ av innehåll
refereegranskat (150)
övrigt vetenskapligt/konstnärligt (28)
Författare/redaktör
Vlassov, Vladimir (83)
Vlassov, Vladimir, 1 ... (77)
Popov, Konstantin (20)
Liu, Ying (19)
Navarro, Leandro (15)
Al-Shishtawy, Ahmad (14)
Brand, Per (13)
Brorsson, Mats (11)
Payberah, Amir H., 1 ... (10)
Haridi, Seif (9)
Freitag, Felix (9)
Dowling, Jim (9)
Ayani, Rassul (9)
Al-Shishtawy, Ahmad, ... (8)
Vlassov, Vladimir, A ... (8)
Imtiaz, Sana (8)
Thorelli, Lars-Erik (8)
Wang, Tianze (8)
Haridi, Seif, 1953- (7)
Sheikholeslami, Sina ... (7)
Ayguade, Eduard (7)
Baig, Roger (7)
Peiro Sajjad, Hooman (7)
Abbas, Zainab (6)
Kalavri, Vasiliki (6)
Hay, Simon I. (6)
Feigin, Valery L. (6)
Jonas, Jost B. (6)
Kasaeian, Amir (6)
Lorkowski, Stefan (6)
Malekzadeh, Reza (6)
Mokdad, Ali H. (6)
Naghavi, Mohsen (6)
Qorbani, Mostafa (6)
Sepanlou, Sadaf G. (6)
Vos, Theo (6)
Yonemoto, Naohiro (6)
Murray, Christopher ... (6)
Bennett, Derrick A. (6)
Tabares-Seisdedos, R ... (6)
Yano, Yuichiro (6)
Venketasubramanian, ... (6)
Gupta, Rajeev (6)
Awan, Ahsan Javed, 1 ... (6)
Moll, Agusti (6)
Pueyo, Roger (6)
Hachinski, Vladimir (6)
Meretoja, Atte (6)
Fischer, Florian (6)
Barker-Collo, Suzann ... (6)
Lärosäte
Kungliga Tekniska Högskolan (168)
RISE (19)
Karolinska Institutet (6)
Umeå universitet (5)
Chalmers tekniska högskola (5)
Högskolan Dalarna (4)
Lunds universitet (3)
Uppsala universitet (2)
Göteborgs universitet (1)
Södertörns högskola (1)
Språk
Engelska (178)
Forskningsämne (UKÄ/SCB)
Naturvetenskap (114)
Teknik (73)
Medicin och hälsovetenskap (7)
Samhällsvetenskap (1)
