SwePub

Search results for "WFRF:(Zomaya Albert)"

  • Results 1-50 of 87
2.
  • Ahammed, Farhan, et al. (authors)
  • Finding lower bounds of localization with noisy measurements using genetic algorithms
  • 2011
  • In: Proceedings of the first ACM international symposium on Design and analysis of intelligent vehicular networks and applications (DIVANet '11). - Miami, Florida, USA : Association for Computing Machinery (ACM). - 9781450309042 ; pp. 47-54
  • Conference paper (peer-reviewed), abstract:
    • Vehicular Ad-Hoc Networks (VANETs) are wireless networks of mobile nodes (vehicles) that connect in an ad-hoc manner. Many vehicles use the Global Positioning System (GPS) to determine their locations. However, the inaccuracy of GPS devices leads some vehicles to incorrectly assume they are located at different positions, and sometimes on different roads. VANETs can be used to increase the accuracy of each vehicle's computed location by allowing vehicles to share information on the measured distances to neighbouring vehicles. This paper investigates how much improvement can be made given the erroneous measurements present in the system. An evolutionary algorithm is used to evolve instances of the parameters used by the VLOCI2 algorithm, also presented in this paper, to find instances that minimise the inaccuracy in computed locations. Simulation results show a definite improvement in location accuracy, and lower bounds on how much improvement is possible are inferred.
8.
  • Anwar, Adnan, et al. (authors)
  • HPC-Based Intelligent Volt/VAr Control of Unbalanced Distribution Smart Grid in the Presence of Noise
  • 2017
  • In: IEEE Transactions on Smart Grid. - : IEEE. - 1949-3053 .- 1949-3061. ; 8:3, pp. 1446-1459
  • Journal article (peer-reviewed), abstract:
    • The performance of Volt/VAr optimization has been significantly improved by the integration of measurement data obtained from the advanced metering infrastructure of a smart grid. However, most existing works lack: 1) realistic unbalanced multi-phase distribution system modeling; 2) scalability of the Volt/VAr algorithm to larger test systems; and 3) the ability to handle gross errors and noise in data processing. In this paper, we consider realistic distribution system models that include unbalanced loadings and multi-phase feeders, the presence of gross errors such as communication errors and device malfunctions, as well as random noise. At the core of the optimization process is an intelligent particle swarm optimization-based technique, parallelized using high performance computing techniques, that solves the Volt/VAr-based power loss minimization problem. Extensive experiments covering the different aspects of the proposed framework show significant improvement over existing Volt/VAr approaches in terms of both accuracy and scalability on the IEEE 123-node and the larger IEEE 8500-node benchmark test systems.
12.
  • Calvo, J.C., et al. (authors)
  • A Method to Improve the Accuracy of Protein Torsion Angles
  • 2011
  • In: International Conference on Bioinformatics Models, Methods and Algorithms (Bioinformatics-2011). - Rome, Italy : SciTePress. - 9789898425362 ; pp. 297-300
  • Conference paper (peer-reviewed)
13.
  • Casas, Israel, et al. (authors)
  • A balanced scheduler with data reuse and replication for scientific workflows in cloud computing systems
  • 2017
  • In: Future generations computer systems. - : Elsevier. - 0167-739X .- 1872-7115. ; 74, pp. 168-178
  • Journal article (peer-reviewed), abstract:
    • Cloud computing provides substantial opportunities to researchers who demand pay-as-you-go computing systems. Although cloud providers (e.g., Amazon Web Services) and application providers (e.g., biologists, physicists, and online gaming companies) both have specific performance requirements (e.g., application response time), it is the cloud scheduler's responsibility to map the application to the underlying cloud resources. This article presents a Balanced and file Reuse-Replication Scheduling (BaRRS) algorithm for cloud computing environments to optimally schedule scientific application workflows. BaRRS splits scientific workflows into multiple sub-workflows to balance system utilization via parallelization. It also exploits data reuse and replication techniques to optimize the amount of data that needs to be transferred among tasks at run-time. BaRRS analyzes the key application features of scientific workflows (e.g., task execution times, dependency patterns, and file sizes) to adapt existing data reuse and replication techniques to cloud systems. Further, BaRRS performs a trade-off analysis to select the optimal solution based on two optimization constraints: execution time and the monetary cost of running scientific workflows. BaRRS is compared with a state-of-the-art scheduling approach; experiments demonstrate its superior performance. Experiments include four well-known scientific workflows with different dependency patterns and data file sizes. Results were promising and also highlighted the most critical factors affecting the execution of scientific applications on clouds.
15.
  • Casas, Israel, et al. (authors)
  • PSO-DS: a scheduling engine for scientific workflow managers
  • 2017
  • In: Journal of Supercomputing. - : Springer Science and Business Media LLC. - 0920-8542 .- 1573-0484. ; 73:9, pp. 3924-3947
  • Journal article (peer-reviewed), abstract:
    • Cloud computing, an important source of computing power for the scientific community, requires enhanced tools for efficient use of resources. Current solutions for workflow execution lack frameworks that deeply analyze applications and consider realistic execution times as well as computation costs. In this study, we propose cloud user-provider affiliation (CUPA) to guide workflow owners in identifying the tools required to run their applications. Additionally, we develop PSO-DS, a specialized scheduling algorithm based on particle swarm optimization. CUPA encompasses the interaction of cloud resources, the workflow manager system, and the scheduling algorithm. Its featured scheduler, PSO-DS, converges on a strategic distribution of tasks among resources that efficiently optimizes makespan and monetary cost. We compared PSO-DS against four well-known scientific workflow schedulers. In a test bed based on VMware vSphere, the schedulers mapped five up-to-date benchmarks representing different scientific areas. PSO-DS proved its efficiency by reducing the makespan and monetary cost of the tested workflows by 75% and 78%, respectively, compared with the other algorithms. CUPA, with its featured PSO-DS, opens the path to developing a full system in which scientific cloud users can run their computationally expensive experiments.
16.
  • Cho, Daewoong, et al. (authors)
  • Big Data helps SDN to optimize its controllers
  • 2018. - 1
  • In: Big Data and Software Defined Networks. - London : IET Digital Library. - 9781785613043 - 9781785613050 ; pp. 389-408
  • Book chapter (peer-reviewed), abstract:
    • In this chapter, we first discuss the basic features and recent issues of the SDN control plane, notably the controller element. Then, we present feasible ideas for addressing SDN controller-related problems using Big Data analytics techniques. Accordingly, we propose that Big Data can help various aspects of the SDN controller, addressing its scalability and resiliency problems. Furthermore, we propose six applicable scenarios for optimizing the SDN controller using Big Data analytics: (i) controller scale-up/out against network traffic concentration, (ii) controller scale-in for reduced energy usage, (iii) backup controller placement for fault tolerance and high availability, (iv) creating backup paths to improve fault tolerance, (v) controller placement for low latency between controllers and switches, and (vi) flow rule aggregation to reduce the SDN controller's traffic. Although real-world practices on optimizing SDN controllers using Big Data are absent from the literature, we expect the scenarios highlighted in this chapter to be highly applicable to optimizing the SDN controller in the future.
17.
  • Cho, Daewoong, et al. (authors)
  • Virtual Network Function Placement: Towards Minimizing Network Latency and Lead Time
  • 2017
  • In: 2017 IEEE International Conference on Cloud Computing Technology and Science (CloudCom). - Piscataway : IEEE. - 9781538619933 - 9781538619940 ; pp. 90-97
  • Conference paper (peer-reviewed), abstract:
    • Network Function Virtualization (NFV) is an emerging network architecture that increases flexibility and agility within operators' networks by placing virtualized services on demand in Cloud data centers (CDCs). One of the main challenges for the NFV environment is how to minimize network latency in rapidly changing network environments. Although many researchers have studied Virtual Machine (VM) migration and Virtual Network Function (VNF) placement for efficient resource management in CDCs, the VNF migration problem for low network latency among VNFs has, to the best of our knowledge, not yet been studied. To address this issue, in this article we i) formulate the VNF migration problem and ii) develop a novel VNF migration algorithm, VNF Real-time Migration (VNF-RM), for lower network latency under dynamically changing resource availability. Experiments demonstrate the effectiveness of our algorithm, which reduces network latency by up to 70.90% after latency-aware VNF migrations.
18.
  • Demirbaga, Umit, et al. (authors)
  • AutoDiagn: An Automated Real-time Diagnosis Framework for Big Data Systems
  • 2022
  • In: IEEE Transactions on Computers. - USA : IEEE. - 0018-9340 .- 1557-9956. ; 71:5, pp. 1035-1048
  • Journal article (peer-reviewed), abstract:
    • Big data processing systems, such as Hadoop and Spark, usually work in large-scale, highly concurrent, and multi-tenant environments that can easily cause hardware and software malfunctions or failures, thereby leading to performance degradation. Several systems and methods exist to detect performance degradation in big data processing systems, perform root-cause analysis, and even overcome the issues causing such degradation. However, these solutions focus on specific problems, such as stragglers and inefficient resource utilization. There is a lack of a generic and extensible framework to support the real-time diagnosis of big data systems. In this paper, we propose, develop, and validate AutoDiagn. This generic and flexible framework provides holistic monitoring of a big data system while detecting performance degradation and enabling root-cause analysis. We present the implementation and evaluation of AutoDiagn, which interacts with a Hadoop cluster deployed on a public cloud and is tested with real-world benchmark applications. Experimental results show that AutoDiagn has a small resource footprint, high throughput, and low latency.
19.
  • Deng, Shuiguang, et al. (authors)
  • Composition-Driven IoT Service Provisioning in Distributed Edges
  • 2018
  • In: IEEE Access. - : IEEE. - 2169-3536. ; 6, pp. 54258-54269
  • Journal article (peer-reviewed), abstract:
    • The increasing number of Internet of Things (IoT) devices and services makes it convenient for people to sense the real world, make optimal decisions, and complete complex tasks. However, the latency brought by unstable wireless networks and the computation failures caused by constrained resources limit the development of IoT. A popular approach to this problem is to establish an IoT service provisioning system based on a mobile edge computing (MEC) model. In the MEC model, many edge servers are placed at access points via wireless networks. With the help of services cached on edge servers, latency can be reduced and computation can be offloaded. The cached services must be carefully selected so that many requests can be satisfied without overloading the resources of the edge servers. This paper proposes an optimized service cache policy that takes advantage of the composability of services to improve the performance of service provision systems. We conduct a series of experiments to evaluate the performance of our approach. The results show that our approach can improve the average response time of these IoT services.
21.
  • Deng, Shuiguang, et al. (authors)
  • Cost Performance Driven Service Mashup: A Developer Perspective
  • 2016
  • In: IEEE Transactions on Parallel and Distributed Systems. - Los Alamitos, CA 90720-1314, USA : IEEE Computer Society. - 1045-9219 .- 1558-2183. ; 27:8, pp. 2234-2247
  • Journal article (peer-reviewed), abstract:
    • Service mashups are applications created by combining single-functional services (or APIs) dispersed over the web. With the development of cloud computing and web technologies, service mashups are becoming more and more widely used, and a large number of mashup platforms have been produced. However, due to the proliferation of services on the web, how to select component services to create mashups has become a challenging issue. Most developers pay particular attention to the QoS (quality of service) and cost of services. Besides service selection, mashup deployment is another pivotal process, as the platform can significantly affect the quality of mashups. In this paper, we focus on creating service mashups from the perspective of developers. A genetic algorithm-based method, GA4MC (genetic algorithm for mashup creation), is proposed to select component services and deployment platforms in order to create service mashups with optimal cost performance. A series of experiments is conducted to evaluate the performance of GA4MC. The results show that GA4MC can achieve mashups whose cost performance is extremely close to the optimum. Moreover, the execution time of GA4MC is of a low order of magnitude, and the algorithm exhibits good scalability as the experimental scale increases.
22.
  • Deng, Shuiguang, et al. (authors)
  • Dynamical Resource Allocation in Edge for Trustable Internet-of-Things Systems: A Reinforcement Learning Method
  • 2020
  • In: IEEE Transactions on Industrial Informatics. - : Institute of Electrical and Electronics Engineers (IEEE). - 1551-3203 .- 1941-0050. ; 16:9, pp. 6103-6113
  • Journal article (peer-reviewed), abstract:
    • Edge computing (EC) is now emerging as a key paradigm to handle the increasing number of Internet-of-Things (IoT) devices connected to the edge of the network. By using the services deployed on the service provisioning system, which is made up of nearby edge servers, these IoT devices can fulfill complex tasks effectively. Nevertheless, this also brings challenges in trustworthiness management. The volatile environment makes it difficult to comply with the service-level agreement (SLA), which is an important index of the trustworthiness declared by these IoT services. In this article, denoting the trustworthiness gain by how well the SLA is complied with, we first encode the state of the service provisioning system and the resource allocation scheme, and model the adjustment of allocated resources for services as a Markov decision process (MDP). Based on this, we obtain a trained resource allocation policy with the help of reinforcement learning (RL). The trained policy maximizes the services' trustworthiness gain by dynamically generating appropriate resource allocation schemes according to the system state. Through a series of experiments on the YouTube request dataset, we show that an edge service provisioning system using our approach performs at least 21.72% better than the baselines.
23.
  • Deng, Shuiguang, et al. (authors)
  • Mobility-Aware Service Composition in Mobile Communities
  • 2016
  • In: IEEE Transactions on Systems, Man & Cybernetics. Systems. - 2168-2216 .- 2168-2232. ; 47:3, pp. 555-568
  • Journal article (peer-reviewed), abstract:
    • The advances in mobile technologies enable mobile devices to perform tasks that are traditionally run by personal computers, as well as to provide services to others. Mobile users can form a service sharing community within an area by using their mobile devices. This paper highlights several challenges involved in building such service compositions in mobile communities when both service requesters and providers are mobile. To deal with these challenges, we first propose a mobile service provisioning architecture named the mobile service sharing community, and then propose a service composition approach utilizing the Krill-Herd algorithm. To evaluate the effectiveness and efficiency of our approach, we built a simulation tool. The experimental results demonstrate that our approach obtains superior solutions compared with current standard composition methods in mobile environments. It can yield near-optimal solutions and has nearly linear complexity with respect to problem size.
24.
  • Deng, Shuiguang, et al. (authors)
  • Optimal Application Deployment in Resource Constrained Distributed Edges
  • 2021
  • In: IEEE Transactions on Mobile Computing. - : IEEE Computer Society. - 1536-1233 .- 1558-0660. ; 20:5, pp. 1907-1923
  • Journal article (peer-reviewed), abstract:
    • The dramatic increase in mobile applications makes it convenient for users to complete complex tasks on their mobile devices. However, the latency brought by unstable wireless networks and the computation failures caused by constrained resources limit the development of mobile computing. A popular approach to this problem is to establish a mobile service provisioning system based on a mobile edge computing (MEC) paradigm. In the MEC paradigm, many machines are placed at the edge of the network so that the performance of applications can be optimized using the microservice instances deployed on them. In this paper, we explore the deployment problem of microservice-based applications in the MEC environment and propose an approach that helps optimize the cost of application deployment under resource constraints and performance requirements. We conduct a series of experiments to evaluate the performance of our approach. The results show that our approach can improve the average response time of mobile services.
26.
  • Dillon, Tharam, et al. (authors)
  • Message from U-Science 2014 general chairs
  • 2014
  • In: 2014 IEEE 12th International Conference on Dependable, Autonomic and Secure Computing. - Piscataway : IEEE. - 9781479950799 - 9781479950782
  • Conference paper (popular science, debate, etc.), abstract:
    • Presents the introductory welcome message from the conference proceedings. May include the conference officers' congratulations to all involved with the conference event and publication of the proceedings record. © 2014 IEEE
28.
  • Handbook of Integration of Cloud Computing, Cyber Physical Systems and Internet of Things
  • 2020
  • Edited volume (editorship) (peer-reviewed), abstract:
    • This handbook covers recent advances in the integration of three areas, namely cloud computing, cyber-physical systems, and the Internet of Things, which is expected to have a tremendous impact on our daily lives. It contains thirteen peer-reviewed and edited chapters. The book covers topics such as context-aware cyber-physical systems, sustainable cloud computing, fog computing, and cloud monitoring; both the theoretical and practical aspects of these topics are discussed. All chapters also discuss open research challenges in the areas mentioned above. Finally, the handbook presents three use cases, on healthcare, smart buildings, and disaster management, to help the audience understand how to develop next-generation IoT- and cloud-enabled cyber-physical systems. This timely handbook is edited for students, researchers, and professionals interested in the rapidly growing fields of cloud computing, cyber-physical systems, and the Internet of Things.
29.
  • Hoseiny Farahabady, Mohammad Reza, et al. (authors)
  • A Dynamic Resource Controller for a Lambda Architecture
  • 2017
  • In: 2017 46th International Conference on Parallel Processing (ICPP). - Piscataway : IEEE. - 9781538610428 - 9781538610435 ; pp. 332-341
  • Conference paper (peer-reviewed), abstract:
    • Lambda architecture is a novel event-driven serverless paradigm that allows companies to build scalable and reliable enterprise applications. As an attractive alternative to traditional service-oriented architecture (SOA), Lambda architecture can be used in many use cases, including BI tools, in-memory graph databases, OLAP, and streaming data processing. In practice, an important aim of Lambda service providers is devising an efficient way to co-locate multiple Lambda functions with different attributes on a set of available computing resources. However, previous studies showed that consolidated workloads can compete fiercely for shared resources, resulting in severe performance variability and degradation. This paper proposes a resource allocation mechanism for a Lambda platform based on the model predictive control framework. Performance evaluation is carried out by comparing the proposed solution with multiple resource allocation heuristics, namely enhanced versions of spread and binpack, and best-effort approaches. Results confirm that the proposed controller increases overall resource utilization by 37% on average and achieves a significant improvement in preventing QoS violation incidents compared to the others.
30.
  • Hoseiny Farahabady, M. Reza, et al. (authors)
  • Data-Intensive Workload Consolidation in Serverless (Lambda/FaaS) Platforms
  • 2021
  • In: 2021 IEEE 20th International Symposium on Network Computing and Applications (NCA). - : Institute of Electrical and Electronics Engineers (IEEE). - 9781665495509 ; pp. 1-8
  • Conference paper (peer-reviewed), abstract:
    • A significant amount of research in past years has been devoted to developing efficient mechanisms to control the level of degradation among consolidated workloads on a shared platform. Workload consolidation is a promising feature employed by most service providers to reduce total operating costs in traditional computing systems [1]-[3]. The serverless paradigm, also known as Function as a Service (FaaS) or Lambda, recently emerged as a new virtualization run-time model that relieves application users of the burden of provisioning physical computing resources, leaving the difficulty of providing adequate resource capacity on the service provider's side. This paper focuses on a number of challenges associated with workload consolidation when a serverless platform is expected to execute several data-intensive functional units. Each functional unit is considered the atomic component that reacts to a stream of input data; a serverless application in the proposed model is composed of a series of functional units. Through a systematic approach, we highlight the main challenges in devising an efficient workload consolidation process for a data-intensive serverless platform. To this end, we first study the performance interference among multiple workloads competing for the capacity of the last level cache (LLC). We show how such contention among workloads can lead to significant throughput degradation on a single physical server. We then expand our investigation to the general case, with the aim of preventing the total throughput from falling below a predefined utilization level. Based on the empirical results, we develop a consolidation model and then design a computationally efficient controller to optimize the throughput degradation on a platform consisting of multiple machines. The performance evaluation is conducted using modern workloads inspired by data management services and data analytic benchmark tools on our in-house four-node platform, showing that the proposed solution mitigates the QoS violation rate for high priority applications by 90% while enhancing the normalized throughput usage of disk devices by 39%.
31.
  • HoseinyFarahabady, MohammadReza, et al. (authors)
  • Dynamic Control of CPU Cap Allocations in Stream Processing and Data-Flow Platforms
  • 2019
  • In: 2019 IEEE 18th International Symposium on Network Computing and Applications (NCA). - : IEEE. - 9781728125220 ; pp. 339-346
  • Conference paper (peer-reviewed), abstract:
    • This paper focuses on the Timely dataflow programming model for processing streams of data. We propose a technique to set CPU resource allocations (i.e., CPU capping) with the goal of improving response-time latency in such applications with different quality of service (QoS) levels, as they run concurrently on a shared multi-core computing system with unknown and volatile demand. The proposed solution predicts the expected performance of the underlying platform using an online approach based on queuing theory and adjusts the corrections required in CPU allocation to achieve the best performance. The experimental results confirm that the measured performance of the proposed model is highly accurate while taking into account percentiles of the QoS metrics. The theoretical model used for elastic allocation of CPU shares on the target platform takes advantage of design principles from model predictive control theory and dynamic programming to solve an optimization problem. While the prediction module in the proposed algorithm tries to predict the temporal changes in the arrival rate of each data flow, the optimization module uses a system model to estimate the interference among collocated applications by continuously monitoring the available CPU utilization in individual nodes along with the number of outstanding messages in every intermediate buffer of all TDF applications. The optimization module eventually performs a cost-benefit analysis to mitigate the total amount of QoS violation incidents when assigning the limited CPU shares among collocated applications. The proposed algorithm is robust (i.e., its worst-case output is guaranteed for arbitrarily volatile incoming demand from different data streams), and if the demand volatility is not large, the output is optimal, too. It is implemented using the TDF framework in Rust for distributed and shared-memory architectures. The experimental results show that the proposed algorithm reduces the average and p99 latency of delay-sensitive applications by 21% and 31.8%, respectively, while reducing QoS violation incidents by 98% on average.
32.
  • HoseinyFarahabady, MohammadReza, et al. (authors)
  • Energy efficient resource controller for Apache Storm
  • 2023
  • In: Concurrency and Computation. - : John Wiley & Sons. - 1532-0626 .- 1532-0634. ; 35:17
  • Journal article (peer-reviewed), abstract:
    • Apache Storm is a distributed processing engine that can reliably process unbounded streams of data for real-time applications. While recent research has mostly focused on devising resource allocation and task scheduling algorithms to satisfy the high performance or low latency requirements of Storm applications across distributed multi-core systems, finding a solution that optimizes the energy consumption of running applications remains an important research question to be further explored. In this article, we present a CPU-throttling control strategy that continuously optimizes the energy consumption of a Storm platform by adjusting the voltage and frequency of the CPU cores while running the assigned tasks under latency constraints defined by the end-users. Experimental results from a Storm cluster with 4 physical nodes (24 cores in total) validate the effectiveness of the proposed solution when running multiple compute-intensive operations. In particular, the proposed controller can keep the latency of analytic tasks, in terms of the 99th latency percentile, within the quality of service requirement specified by the end-user while reducing total energy consumption by 18% on average across the entire Storm platform.
33.
  • HoseinyFarahabady, MohammadReza, et al. (authors)
  • Enhancing disk input output performance in consolidated virtualized cloud platforms using a randomized approximation scheme
  • 2022
  • In: Concurrency and Computation. - : John Wiley & Sons. - 1532-0626 .- 1532-0634. ; 34:2
  • Journal article (peer-reviewed), abstract:
    • In a virtualized computer system with shared resources, consolidated virtual services (VSs) compete fiercely with each other for the required resource capacity, causing significant performance degradation. The performance of input/output (I/O)-bound applications running inside their own VS is mainly determined by the total time required to schedule every read/write request, plus the actual time needed by the device driver to complete the request. To achieve proper performance isolation of shared resources (e.g., the last level cache, memory bandwidth, and the disk buffer), it is essential to limit the level of performance degradation among collocated applications, as several I/O operations, perhaps with different priorities, are requested simultaneously by VSs. This article proposes a resource allocation controller that uses a fully polynomial-time randomized approximation scheme to enable performance isolation of concurrent I/O requests in a shared system with multiple consolidated VSs. The controller uses a Monte Carlo sampling approach to measure and estimate the unknown attributes of operational requests originating from each VS. This is formalized as an optimization problem with the aim of minimizing the total quality of service (QoS) violation incidents on the entire platform. We associate with every working machine a reward function that represents the degree to which the QoS metric is fulfilled across all running VSs. A comprehensive set of experiments showed that the proposed algorithm reduces QoS violation incidents by 32%, compared with the result obtained using the default resource allocation policy embedded in the existing Linux container layer.
34.
  • HoseinyFarahabady, M. Reza, et al. (författare)
  • Low Latency Execution Guarantee Under Uncertainty in Serverless Platforms
  • 2022
  • Ingår i: Parallel and Distributed Computing, Applications and Technologies. PDCAT 2021. - Cham : Springer. - 9783030967727 - 9783030967710 ; , s. 324-335
  • Konferensbidrag (refereegranskat)abstract
    • Serverless computing recently emerged as a new run-time paradigm that disentangles the client from the burden of provisioning physical computing resources, leaving that difficulty on the service provider's side. However, an unsolved problem in such an environment is how to execute several co-running applications while fulfilling the Quality of Service (QoS) level requested by all application owners. In practice, developing an efficient mechanism to reach the requested performance level (such as p-99 latency and throughput) is limited by the controller's awareness of the dynamics of the underlying platforms (resource availability, performance interference among consolidated workloads, etc.). In this paper, we develop an adaptive feedback controller for coping with the buffer instability of serverless platforms when several collocated applications run in a shared environment. The goal is to support low-latency execution by managing the arrival event rate of each application when shared resource contention causes significant throughput degradation among workloads with different priorities. The key component of the proposed architecture is continuous management of server-side internal buffers for each application, providing a low-latency feedback control mechanism based on the requested QoS level of each application (e.g., buffer information) and the worker nodes' throughput. The empirical results confirm the response stability for high-priority workloads when a dynamic condition is caused by low-priority applications. We evaluate the performance of the proposed solution with respect to the response time and the QoS violation rate for high-priority applications in a serverless platform with four worker nodes set up in our in-house virtualized cluster.
We compare the proposed architecture against the default resource management policy in Apache OpenWhisk, which is extensively used in commercial serverless platforms. The results show that our approach incurs very low overhead (less than 0.7%) while improving the p-99 latency of high-priority applications by 64%, on average, in the presence of dynamic high-traffic conditions.
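The paper itself includes no code; a minimal sketch of the kind of buffer-driven admission control the abstract describes might look as follows. The function name, the proportional gain, and the clamping bounds are illustrative assumptions, not values from the paper:

```python
def next_admission_rate(current_rate, buffer_len, buffer_target,
                        max_rate, gain=0.5):
    """One proportional feedback step: shrink the admitted event rate of an
    application when its server-side buffer grows past the target occupancy,
    and grow the rate again as the buffer drains."""
    error = (buffer_target - buffer_len) / buffer_target  # normalized buffer error
    new_rate = current_rate * (1.0 + gain * error)
    return max(1.0, min(new_rate, max_rate))  # clamp to the operating range

# Buffer above target: admit fewer events per second.
print(next_admission_rate(100.0, 150, 100, 200.0))  # 75.0
# Buffer below target: admit more, up to the cap.
print(next_admission_rate(100.0, 50, 100, 200.0))   # 125.0
```

A per-application instance of such a loop, driven by buffer occupancy reported by the worker nodes, approximates the low-latency feedback path sketched in the abstract.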
  •  
35.
  • HoseinyFarahabady, M. Reza, et al. (author)
  • QSpark : Distributed Execution of Batch & Streaming Analytics in Spark Platform
  • 2021
  • In: 2021 IEEE 20TH INTERNATIONAL SYMPOSIUM ON NETWORK COMPUTING AND APPLICATIONS (NCA). - : IEEE. - 9781665495509
  • Conference paper (peer-reviewed) abstract
    • A significant portion of research work in the past decade has been devoted to developing resource allocation and task scheduling solutions for large-scale data processing platforms. Such algorithms are designed to facilitate the deployment of data analytic applications across either conventional cluster computing systems or modern virtualized data-centers. The main reason for such a huge research effort stems from the fact that even a slight improvement in the performance of such platforms can bring considerable monetary savings for vendors, especially for modern data processing engines designed solely to perform high-throughput and/or low-latency computations over massive-scale batch or streaming data. A challenging question yet to be answered in such a context is how to design an effective resource allocation solution that can prevent low resource utilization while meeting the enforced performance level (such as the 99-th latency percentile) in circumstances where contention among applications for the capacity of shared resources is a non-negligible performance-limiting parameter. This paper proposes a resource controller system, called QSpark, to cope with the problems of (i) low performance (i.e., resource utilization in the batch mode and p-99 response time in the streaming mode), and (ii) shared resource interference among collocated applications in a multi-tenant modern Spark platform. The proposed solution leverages a set of controlling mechanisms for dynamic partitioning of the allocation of computing resources, in a way that fulfills the QoS requirements of latency-critical data processing applications while enhancing the throughput of all worker nodes without reaching their saturation points.
Through extensive experiments in our in-house Spark cluster, we compared the achieved performance of the proposed solution against the default Spark resource allocation policy for a variety of Machine Learning (ML), Artificial Intelligence (AI), and Deep Learning (DL) applications. Experimental results show the effectiveness of the proposed solution by reducing the p-99 latency of high-priority applications by 32% during burst traffic periods (for both batch and stream modes), while enhancing the QoS satisfaction level by 65% for applications with the highest priority (compared with the results of the default Spark resource allocation strategy).
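To make the p-99-driven partitioning idea concrete, here is a hedged sketch of one control step: measure the 99th-percentile latency of the high-priority pool and shift CPU share between priority pools accordingly. The nearest-rank percentile, the 5% step size, and the 0.8 slack factor are our own illustrative choices, not QSpark's actual mechanism:

```python
import math

def p99(latencies):
    """Nearest-rank 99th-percentile latency over a window of samples."""
    s = sorted(latencies)
    return s[math.ceil(0.99 * len(s)) - 1]

def rebalance_shares(shares, p99_ms, slo_ms, step=0.05):
    """Shift CPU share from the low- to the high-priority pool when the
    high-priority p-99 latency violates its SLO; give share back when
    there is comfortable slack."""
    hi, lo = shares
    if p99_ms > slo_ms and lo > step:
        hi, lo = hi + step, lo - step
    elif p99_ms < 0.8 * slo_ms and hi > step:
        hi, lo = hi - step, lo + step
    return round(hi, 2), round(lo, 2)

print(p99(list(range(1, 101))))                # 99
print(rebalance_shares((0.6, 0.4), 120, 100))  # (0.65, 0.35)
```

Run periodically per worker node, such a loop throttles low-priority tenants only while the SLO is actually at risk, rather than statically partitioning the cluster.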
  •  
36.
  • Hoseinyfarahabady, M. Reza, et al. (author)
  • Toward designing a dynamic CPU cap manager for timely dataflow platforms
  • 2018
  • In: HPC '18 Proceedings of the High Performance Computing Symposium. - : Association for Computing Machinery (ACM). - 9781510860162 ; pp. 60-70
  • Conference paper (peer-reviewed) abstract
    • In this work, we propose a control-based solution to the problem of CPU resource allocation in a data-flow platform that considers the performance degradation caused by running concurrent data-flow processes. Our aim is to cut QoS violation incidents for applications belonging to the highest QoS class. The performance of the proposed solution is benchmarked against the well-known round-robin algorithm. The experimental results confirm that the proposed algorithm can decrease the latency of processing data records for applications by 48% compared to the round-robin policy.
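As a contrast to the round-robin baseline the abstract mentions, a toy sketch of priority-first CPU cap assignment is shown below. The names and the greedy policy are purely illustrative; the paper's actual manager is a feedback controller, not this static heuristic:

```python
def allocate_cpu_caps(total_cap, demands, priority):
    """Greedy cap assignment: processes in the highest QoS class are
    satisfied first; whatever capacity remains trickles down to lower
    classes, unlike round-robin's equal treatment of all processes."""
    caps, remaining = {}, total_cap
    for app in sorted(demands, key=lambda a: -priority[a]):
        caps[app] = min(demands[app], remaining)
        remaining -= caps[app]
    return caps

# Demands sum to 120 CPU units but only 100 are available;
# the class-3 process 'a' is served in full, class-1 'c' is squeezed.
print(allocate_cpu_caps(100, {'a': 50, 'b': 40, 'c': 30},
                        {'a': 3, 'b': 2, 'c': 1}))
# {'a': 50, 'b': 40, 'c': 10}
```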
  •  
38.
  • Lee, Young Choon, et al. (author)
  • A Parallel Metaheuristic Framework Based on Harmony Search for Scheduling in Distributed Computing Systems
  • 2012
  • In: International Journal of Foundations of Computer Science. - : WORLD SCIENTIFIC PUBL CO PTE LTD. - 0129-0541. ; 23:2, pp. 445-464
  • Journal article (peer-reviewed) abstract
    • A large number of optimization problems have been identified as computationally challenging and/or intractable to solve within a reasonable amount of time. Due to the NP-hard nature of these problems, in practice, heuristics account for the majority of existing algorithms. Metaheuristics are one very popular type of heuristic used for many of these optimization problems. In this paper, we present a novel parallel-metaheuristic framework, which enables the design of parallel metaheuristics, particularly from heterogeneous metaheuristics. The core component of the proposed framework is its harmony-search-based coordinator. Harmony search is a recent breed of metaheuristic that mimics the improvisation process of musicians. The coordinator helps the heterogeneous metaheuristics (forming a parallel metaheuristic) escape local optima. Specifically, the best solutions generated by these worker metaheuristics are maintained in the harmony memory of the coordinator, and they are used to form new, possibly better, harmonies (solutions) before actual solution sharing between workers occurs; hence, their solutions are harmonized with each other. For the applicability validation and the performance evaluation, we have implemented a parallel hybrid metaheuristic using the framework for the task scheduling problem on multiprocessor computing systems (e.g., computer clusters). Experimental results verify that the proposed framework is a compelling approach to parallelizing heterogeneous metaheuristics.
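To make the coordinator's role concrete, a hedged sketch of the two classic harmony-search steps the abstract alludes to follows: improvising a new task-to-processor assignment from the memory of stored solutions, and elitist replacement of the worst stored harmony. The parameter values and function names are textbook harmony-search conventions, not details taken from the paper:

```python
import random

def improvise(memory, n_tasks, n_procs, hmcr=0.9, par=0.3):
    """Build one new harmony (a task -> processor list): each task inherits
    its processor from a randomly chosen stored solution with probability
    `hmcr`, optionally pitch-adjusted to a neighbouring processor with
    probability `par`; otherwise it is assigned a random processor."""
    harmony = []
    for t in range(n_tasks):
        if random.random() < hmcr:
            proc = random.choice(memory)[t]
            if random.random() < par:
                proc = (proc + random.choice((-1, 1))) % n_procs
        else:
            proc = random.randrange(n_procs)
        harmony.append(proc)
    return harmony

def update_memory(memory, candidate, makespan):
    """Keep the harmony memory elitist: the candidate replaces the worst
    stored solution only if it improves on it."""
    worst = max(range(len(memory)), key=lambda i: makespan(memory[i]))
    if makespan(candidate) < makespan(memory[worst]):
        memory[worst] = candidate
    return memory
```

In the framework described above, candidates would also arrive from the heterogeneous worker metaheuristics, not only from `improvise`, so the memory harmonizes solutions across workers before they are shared back.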
  •  
40.
  • Li, Zheng, et al. (author)
  • Towards Understanding the Runtime Configuration Management of Do-It-Yourself Content Delivery Network Applications over Public Clouds
  • 2014
  • In: Future generations computer systems. - : Elsevier BV. - 0167-739X .- 1872-7115. ; 37, pp. 297-308
  • Journal article (peer-reviewed) abstract
    • Cloud computing is a new paradigm that enables applications and related content (audio, video, text, images, etc.) to be provisioned in an on-demand manner and made accessible to anyone, anywhere in the world, without the need to own expensive computing and storage infrastructure. Interactive multimedia content-driven applications in the domains of healthcare, aged care, and education have emerged as one of the new classes of big data applications. This new generation of applications needs to support complex content operations including production, deployment, consumption, personalisation, and distribution. However, to efficiently provision these applications on Cloud data centres, there is a need to understand their run-time resource configurations, for example: (i) where to store and distribute the content to and from, driven by end-user Service Level Agreements (SLAs)? (ii) how many content distribution servers to provision? and (iii) what Cloud VM configuration (number of instances, types, speed, etc.) to provision? In this paper, we present concepts and factors related to engineering such content-driven applications over public Clouds. Based on these concepts and factors, we propose a performance evaluation methodology for quantifying and understanding the runtime configuration of these classes of applications. Finally, we conduct several benchmark-driven experiments to validate the feasibility of the proposed methodology.
  •  
42.
  • Mendes, Reginaldo, et al. (author)
  • Using Semantic Web to Build and Execute Ad-Hoc Processes
  • 2011
  • In: IEEE/ACS International Conference on Computer Systems and Applications (AICCSA-2011). - : IEEE Press. - 9781457704765 ; pp. 233-240
  • Conference paper (peer-reviewed)
  •  
45.
  • Oljira, Dejene Boru, et al. (author)
  • Energy-Efficient Data Replication in Cloud Computing Datacenters
  • 2013
  • In: 2013 IEEE GLOBECOM WORKSHOPS (GC WKSHPS). - : IEEE. - 9781479928514 ; pp. 446-451
  • Conference paper (peer-reviewed) abstract
    • Cloud computing is an emerging paradigm that provides computing resources as a service over a network. Communication resources often become a bottleneck in service provisioning for many cloud applications. Therefore, data replication, which brings data (e.g., databases) closer to data consumers (e.g., cloud applications), is seen as a promising solution; it allows minimizing network delays and bandwidth usage. In this paper, we study data replication in cloud computing data centers. Unlike other approaches available in the literature, we consider both the energy efficiency and the bandwidth consumption of the system, in addition to the improved Quality of Service (QoS) resulting from reduced communication delays. The evaluation results obtained during extensive simulations help to unveil performance and energy efficiency tradeoffs and guide the design of future data replication solutions.
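A minimal sketch of the joint delay/energy tradeoff this line of work evaluates, choosing a replica site by a weighted cost, is given below. The site fields, weights, and numbers are our own illustrative assumptions, not values or the model from the paper:

```python
def place_replica(sites, alpha=1.0, beta=1.0):
    """Pick the datacenter site minimizing a weighted sum of the access
    delay seen by consuming applications and the per-request energy cost;
    alpha/beta trade QoS against energy efficiency."""
    return min(sites,
               key=lambda s: alpha * s['delay_ms'] + beta * s['energy_j'])['name']

sites = [
    {'name': 'core', 'delay_ms': 5, 'energy_j': 3},  # central, efficient storage
    {'name': 'rack', 'delay_ms': 1, 'energy_j': 4},  # close to consumers, costlier
]
print(place_replica(sites))             # 'rack'  (delay dominates)
print(place_replica(sites, beta=10.0))  # 'core'  (energy dominates)
```

Sweeping alpha and beta over such a cost function is one simple way to expose the performance/energy tradeoff curve the simulations unveil.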
  •  
46.
  • Oljira, Dejene Boru, et al. (author)
  • Models for Efficient Data Replication in Cloud Computing Datacenters
  • 2015
  • In: 2015 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC). - : IEEE. - 9781467364324 ; pp. 6056-6061
  • Conference paper (peer-reviewed) abstract
    • Cloud computing is a computing model where users access ICT services and resources without regard to where the services are hosted. Communication resources often become a bottleneck in service provisioning for many cloud applications. Therefore, data replication, which brings data (e.g., databases) closer to data consumers (e.g., cloud applications), is seen as a promising solution. In this paper, we present models for the energy consumption and bandwidth demand of database access in a cloud computing datacenter. In addition, we propose an energy-efficient replication strategy based on the proposed models, which results in improved Quality of Service (QoS) with reduced communication delays. The evaluation results obtained with extensive simulations help to unveil performance and energy efficiency tradeoffs and guide the design of future data replication solutions.
  •  
47.
  • Ramezani, Fahimeh, et al. (author)
  • A Multi-Objective Load Balancing System for Cloud Environments
  • 2017
  • In: Computer journal. - UK : Oxford University Press. - 0010-4620 .- 1460-2067. ; 60:9, pp. 1316-1337
  • Journal article (peer-reviewed) abstract
    • Virtual machine (VM) live migration has been applied to system load balancing in cloud environments for the purpose of minimizing VM downtime and maximizing resource utilization. However, the migration process is both time- and cost-consuming, as it requires the transfer of large files or memory pages and consumes a huge amount of power and memory on the origin and destination physical machines (PMs), especially for storage VM migration. This process also leads to VM downtime or slowdown. To deal with these shortcomings, we develop a Multi-objective Load Balancing (MO-LB) system that avoids VM migration and achieves system load balancing by transferring extra workload from a set of VMs allocated on an overloaded PM to other compatible VMs in the cluster with greater capacity. To reduce the time factor even further and optimize load balancing over a cloud cluster, MO-LB contains a CPU Usage Prediction (CUP) sub-system. The CUP not only predicts the performance of the VMs but also determines a set of appropriate VMs with the potential to execute the extra workload imposed on the VMs of an overloaded PM. We also design a Multi-Objective Task Scheduling optimization model using Particle Swarm Optimization to migrate the extra workload to the compatible VMs. The proposed method is evaluated using a VMware-vSphere-based private cloud in contrast to the VM migration technique applied by vMotion. The evaluation results show that the MO-LB system dramatically increases VM performance while reducing service response time, memory usage, job makespan, power consumption and the time taken for the load balancing process.
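A hedged sketch of the migration-avoiding offload decision described above follows, using predicted CPU usage to find compatible target VMs. The field names and the 80% safety threshold are our assumptions, and this greedy pass only illustrates the idea; the paper's actual scheduler is a PSO-based multi-objective optimization model:

```python
def select_target_vms(vms, extra_load, threshold=0.8):
    """Spread `extra_load` CPU units from an overloaded PM onto VMs whose
    predicted usage leaves headroom below `threshold` of their capacity,
    least-loaded first, instead of live-migrating a VM."""
    plan = []
    for vm in sorted(vms, key=lambda v: v['predicted_cpu']):
        headroom = threshold * vm['capacity'] - vm['predicted_cpu']
        if headroom <= 0:
            continue  # this VM is not a compatible target
        take = min(headroom, extra_load)
        plan.append((vm['name'], take))
        extra_load -= take
        if extra_load <= 0:
            break
    return plan, extra_load  # leftover > 0 would force a real migration

vms = [{'name': 'vm1', 'capacity': 100, 'predicted_cpu': 70},
       {'name': 'vm2', 'capacity': 100, 'predicted_cpu': 20}]
print(select_target_vms(vms, 50))  # ([('vm2', 50)], 0)
```

The returned leftover makes the fallback explicit: only when prediction finds no compatible capacity would the system have to fall back to a costly vMotion-style migration.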
  •  