SwePub
Search the SwePub database


Result list for search "L773:2168 7161 OR L773:2372 0018"


  • Result 1-10 of 25
1.
  • Carlsson, Niklas, 1977-, et al. (author)
  • Optimized Dynamic Cache Instantiation and Accurate LRU Approximations under Time-varying Request Volume
  • 2023
  • In: IEEE Transactions on Cloud Computing. - : Institute of Electrical and Electronics Engineers (IEEE). - 2168-7161 .- 2372-0018. ; 11:1, pp. 779-797
  • Journal article (peer-reviewed), abstract:
    • Content-delivery applications can achieve scalability and reduce wide-area network traffic using geographically distributed caches. However, each deployed cache has an associated cost, and under time-varying request rates (e.g., a daily cycle) there may be long periods when the request rate from the local region is not high enough to justify this cost. Cloud computing offers a solution to problems of this kind by supporting dynamic allocation and release of resources. In this paper, we analyze the potential benefits of dynamically instantiating caches using resources from cloud service providers. We develop novel analytic caching models that accommodate time-varying request rates, transient behavior as a cache fills following instantiation, and selective cache insertion policies. Within the context of a simple cost model, we then develop bounds and compare policies with optimized parameter selections to obtain insights into key cost/performance tradeoffs. We find that dynamic cache instantiation can provide substantial cost reductions, that the potential reductions depend strongly on the object popularity skew, and that selective cache insertion can be even more beneficial in this context than with conventional edge caches. Finally, our contributions also include accurate and easy-to-compute approximations that are shown to be applicable to LRU caches under time-varying workloads.
  •  
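As background for the LRU policy whose behavior the paper approximates, here is a minimal, self-contained sketch of an LRU cache (illustrative only; the class and names are ours, not the authors' analytic models):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used object when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None                  # cache miss
        self.store.move_to_end(key)      # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" becomes most recently used
cache.put("c", 3)      # evicts "b", the least recently used
print(cache.get("b"))  # None (miss)
print(cache.get("a"))  # 1 (hit)
```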
2.
  • Grambow, M., et al. (author)
  • Using Microbenchmark Suites to Detect Application Performance Changes
  • 2023
  • In: IEEE Transactions on Cloud Computing. - : Institute of Electrical and Electronics Engineers (IEEE). - 2168-7161 .- 2372-0018. ; 11:3, pp. 2575-2590
  • Journal article (peer-reviewed), abstract:
    • Software performance changes are costly and often hard to detect pre-release. Similar to software testing frameworks, either application benchmarks or microbenchmarks can be integrated into quality assurance pipelines to detect performance changes before releasing a new application version. Unfortunately, extensive benchmarking studies usually take several hours, which is problematic when examining dozens of daily code changes in detail; hence, trade-offs have to be made. Optimized microbenchmark suites, which only include a small subset of the full suite, are a potential solution for this problem, given that they still reliably detect the majority of the application performance changes such as an increased request latency. It is, however, unclear whether microbenchmarks and application benchmarks detect the same performance problems and whether one can be a proxy for the other. In this paper, we explore whether microbenchmark suites can detect the same application performance changes as an application benchmark. For this, we run extensive benchmark experiments with both the complete and the optimized microbenchmark suites of the two time-series database systems InfluxDB and VictoriaMetrics and compare their results to the results of corresponding application benchmarks. We do this for 70 and 110 commits, respectively. Our results show that it is possible to detect application performance changes using an optimized microbenchmark suite if frequent false-positive alarms can be tolerated.
  •  
3.
  • Al-Dulaimy, Auday, et al. (author)
  • MultiScaler: A Multi-Loop Auto-Scaling Approach for Cloud-Based Applications
  • 2022
  • In: IEEE Transactions on Cloud Computing. - : Institute of Electrical and Electronics Engineers (IEEE). - 2168-7161. ; 10:4, pp. 2769-2786
  • Journal article (peer-reviewed), abstract:
    • Cloud computing offers a wide range of services through a pool of heterogeneous Physical Machines (PMs) hosted on cloud data centers, where each PM can host several Virtual Machines (VMs). Resource sharing among VMs comes with major benefits, but it can create technical challenges that have a detrimental effect on performance. To ensure a specific service level requested by the cloud-based applications, there is a need for an approach to assign adequate resources to each VM. To this end, we present our novel Multi-Loop Control approach, called MultiScaler, to allocate resources to VMs based on the Service Level Agreement (SLA) requirements and the run-time conditions. MultiScaler is mainly composed of three different levels working closely with each other to achieve an optimal resource allocation. We propose a set of tailor-made controllers to monitor VMs and take actions accordingly to regulate contention among collocated VMs, to reallocate resources if required, and to migrate VMs from one PM to another. The evaluation in a VMware cluster has shown that the MultiScaler approach can meet applications' performance goals and guarantee the SLA by assigning exactly the resources that the applications require. Compared with sophisticated baselines, MultiScaler reacts significantly better to changes in workloads, even in the presence of noisy neighbors.
  •  
4.
  • Alhamazani, Khalid, et al. (author)
  • Cross-Layer Multi-Cloud Real-Time Application QoS Monitoring and Benchmarking As-a-Service Framework
  • 2019
  • In: IEEE Transactions on Cloud Computing. - Los Alamitos : IEEE. - 2168-7161. ; 7:1, pp. 48-61
  • Journal article (peer-reviewed), abstract:
    • Cloud computing provides on-demand access to affordable hardware (e.g., multi-core CPUs, GPUs, disks, and networking equipment) and software (e.g., databases, application servers and data processing frameworks) platforms with features such as elasticity, pay-per-use, low upfront investment and low time to market. This has led to the proliferation of business critical applications that leverage various cloud platforms. Such applications hosted on single/multiple cloud provider platforms have diverse characteristics requiring extensive monitoring and benchmarking mechanisms to ensure run-time Quality of Service (QoS) (e.g., latency and throughput). This paper proposes, develops and validates CLAMBS—Cross-Layer Multi Cloud Application Monitoring and Benchmarking as-a-Service for efficient QoS monitoring and benchmarking of cloud applications hosted in multi-cloud environments. The major highlight of CLAMBS is its capability of monitoring and benchmarking individual application components such as databases and web servers, distributed across cloud layers (*-aaS), spread among multiple cloud providers. We validate CLAMBS using a prototype implementation and extensive experimentation, and show that CLAMBS efficiently monitors and benchmarks application components on multi-cloud platforms including Amazon EC2 and Microsoft Azure.
  •  
5.
  • Cai, H., et al. (author)
  • Model-Driven Development Patterns for Mobile Services in Cloud of Things
  • 2018
  • In: IEEE Transactions on Cloud Computing. - : IEEE. - 2168-7161. ; 6:3, pp. 771-784
  • Journal article (peer-reviewed), abstract:
    • Cloud of Things (CoT) is an integration of the Internet of Things (IoT) and cloud computing for intelligent and smart applications, especially in mobile environments. Model Driven Architecture (MDA) is used to develop Software as a Service (SaaS) so as to facilitate mobile application development by relieving developers from technical details. However, traditional service composition or mashup approaches are often inapplicable due to complex relations and heterogeneous deployment environments. For the purpose of building cloud-enabled mobile applications in a configurable and adaptive way, Model-Driven Development Patterns based on a semantic reasoning mechanism are provided for CoT application development. Firstly, a meta-model covering both multi-view business elements and service components is provided for model transformation. Then, based on a formal representation of the models, three patterns from different tiers of the Model-View-Controller (MVC) framework are used to transform business models into a service component system so as to configure cloud services rapidly. Lastly, a related software platform is also provided for verification. The results show that the platform is applicable for rapid system development by means of various service integration patterns.
  •  
6.
  • Champati, Jaya Prakash, et al. (author)
  • Delay and Cost Optimization in Computational Offloading Systems with Unknown Task Processing Times
  • 2021
  • In: IEEE Transactions on Cloud Computing. - : Institute of Electrical and Electronics Engineers (IEEE). - 2168-7161. ; 9:4, pp. 1422-1438
  • Journal article (peer-reviewed), abstract:
    • Computational offloading systems, where computational tasks can be processed locally or offloaded to a remote cloud, have become prevalent since the advent of cloud computing. The task scheduler in a computational offloading system decides both the selection of tasks to be offloaded to the remote cloud and the scheduling of tasks on the local processors. In this work, we consider the problem of minimizing a weighted sum of the makespan of the tasks and the offloading cost at the remote cloud. In contrast to prior works, we do not assume that the task processing times are known a priori. We show that the original problem can be solved by algorithms designed toward minimizing the maximum between the makespan and the weighted offloading cost, only with doubling of the competitive ratio. Furthermore, when the remote cloud is much faster than the local processors, the latter problem can be equivalently transformed into a makespan minimization problem with unrelated processors. For this case, we propose a Greedy-One-Restart (GOR) algorithm based on online estimation of the unknown processing times, and one-time cancellation and rescheduling of tasks that turn out to require long processing times. Given m local processors, we show that GOR has an O(√m) competitive ratio, which is a substantial improvement over the best known algorithms in the literature. For the general case of arbitrary speed at the remote cloud, we extend GOR to a Greedy-Two-Restart (GTR) algorithm and show that it is O(√m)-competitive. Furthermore, when tasks arrive dynamically with unknown arrival times, we extend GOR and GTR to Dynamic-GOR (DGOR) and Dynamic-GTR (DGTR), respectively, and find their competitive ratios. Finally, we discuss how GOR can be extended to accommodate multiple remote processors.
In addition to performance bounding by competitive ratios, our simulation results demonstrate that the proposed algorithms are favorable also in terms of average performance, in comparison with the well-known list scheduling algorithm and other alternatives.
  •  
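The well-known list scheduling baseline mentioned in this abstract can be sketched as follows (an illustrative greedy makespan scheduler for identical processors; the paper's GOR/GTR algorithms additionally cope with unknown processing times and offloading costs):

```python
import heapq

def list_schedule(tasks, m):
    """Greedy list scheduling: assign each task to the processor that
    becomes free earliest; returns the resulting makespan. This classic
    baseline is (2 - 1/m)-competitive for makespan minimization on
    identical processors."""
    heap = [0.0] * m  # current completion time of each processor
    heapq.heapify(heap)
    for t in tasks:
        earliest = heapq.heappop(heap)  # processor that frees up first
        heapq.heappush(heap, earliest + t)
    return max(heap)

print(list_schedule([3, 1, 4, 1, 5], m=2))  # 9
```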
7.
  • Espling, Daniel, 1983-, et al. (author)
  • Modeling and Placement of Cloud Services with Internal Structure
  • 2016
  • In: IEEE Transactions on Cloud Computing. - : IEEE Computer Society. - 2168-7161. ; 4:4, pp. 429-439
  • Journal article (peer-reviewed), abstract:
    • Virtual machine placement is the process of mapping virtual machines to available physical hosts within a datacenter or on a remote datacenter in a cloud federation. Normally, service owners cannot influence the placement of service components beyond choosing the datacenter provider and deployment zone at that provider. For some services, however, this lack of influence is a hindrance to cloud adoption, for example for services that require specific geographical deployment (e.g., due to legislation) or that require redundancy through avoiding co-located placement of critical components. We present an approach for service owners to influence placement of their service components by explicitly specifying service structure, component relationships, and placement constraints between components. We show how the structure and constraints can be expressed and subsequently formulated as constraints that can be used in placement of virtual machines in the cloud. We use an integer linear programming scheduling approach to illustrate the approach, show the corresponding mathematical formulation of the model, and evaluate it using a large set of simulated input. Our experimental evaluation confirms the feasibility of the model and shows how varying amounts of placement constraints and data center background load affect the possibility for a solver to find a solution satisfying all constraints within a certain time-frame. Our experiments indicate that the number of constraints affects the ability of finding a solution to a higher degree than background load, and that for a high number of hosts with low capacity, component affinity is the dominating factor affecting the possibility to find a solution.
  •  
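To illustrate the kind of placement constraints the paper encodes, here is a toy brute-force feasibility check for capacity and anti-affinity constraints (our own simplification; the authors formulate this as an integer linear program and use a solver rather than enumeration):

```python
from itertools import product

def feasible_placements(components, hosts, capacity, anti_affinity):
    """Enumerate all placements of components onto hosts that respect
    per-host capacity and pairwise anti-affinity (no co-location)."""
    results = []
    for assignment in product(hosts, repeat=len(components)):
        placement = dict(zip(components, assignment))
        # capacity: no host receives more components than it can hold
        if any(assignment.count(h) > capacity[h] for h in hosts):
            continue
        # anti-affinity: constrained pairs must land on different hosts
        if any(placement[a] == placement[b] for a, b in anti_affinity):
            continue
        results.append(placement)
    return results

placements = feasible_placements(
    components=["web", "db", "db-replica"],
    hosts=["h1", "h2"],
    capacity={"h1": 2, "h2": 2},
    anti_affinity=[("db", "db-replica")],  # replicas must not co-locate
)
print(len(placements))  # 4 feasible placements
```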
8.
  • Gokan Khan, Michel, 1989-, et al. (author)
  • PerfSim: A Performance Simulator for Cloud Native Microservice Chains
  • 2023
  • In: IEEE Transactions on Cloud Computing. - : IEEE. - 2168-7161. ; :2, pp. 1395-1413
  • Journal article (peer-reviewed), abstract:
    • The cloud native computing paradigm allows microservice-based applications to take advantage of cloud infrastructure in a scalable, reusable, and interoperable way. However, in a cloud native system, the vast number of configuration parameters and highly granular resource allocation policies can significantly impact the performance and deployment cost of such applications. For understanding and analyzing these implications in an easy, quick, and cost-effective way, we present PerfSim, a discrete-event simulator for approximating and predicting the performance of cloud native service chains in user-defined scenarios. To this end, we propose a systematic approach for modeling the performance of microservice endpoint functions by collecting and analyzing their performance and network traces. With a combination of the extracted models and user-defined scenarios, PerfSim can simulate the performance behavior of service chains over a given period and provides an approximation for system KPIs, such as requests' average response time. Using the processing power of a single laptop, we evaluated both the simulation accuracy and speed of PerfSim in 104 prevalent scenarios and compared the simulation results with an identical deployment in a real Kubernetes cluster. We achieved ~81-99% simulation accuracy in approximating the average response time of incoming requests and a ~16-1200 times speed-up factor for the simulation.
  •  
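A minimal sketch of how a discrete-event-style simulation can approximate the average end-to-end response time of a serial service chain (our illustration of the general technique only; PerfSim's actual performance models are far richer):

```python
def simulate_chain(arrivals, service_times):
    """Tiny simulation of a serial microservice chain with one
    single-threaded FIFO server per stage. Returns the average
    end-to-end response time over all requests."""
    n_stages = len(service_times)
    free_at = [0.0] * n_stages  # time each stage's server becomes free
    total = 0.0
    for arrival in arrivals:
        t = arrival
        for s in range(n_stages):
            start = max(t, free_at[s])    # queue if the stage is busy
            t = start + service_times[s]  # finish processing stage s
            free_at[s] = t
        total += t - arrival              # end-to-end response time
    return total / len(arrivals)

# Two-stage chain, requests arriving every 1.0 s, stages taking 0.4 s
# and 0.7 s: average response time ≈ 1.1 s (no queueing builds up).
print(simulate_chain(arrivals=[0.0, 1.0, 2.0], service_times=[0.4, 0.7]))
```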
9.
  • Grambow, Martin, et al. (author)
  • Using Microbenchmark Suites to Detect Application Performance Changes
  • 2023
  • In: IEEE Transactions on Cloud Computing. - 2168-7161. ; 11:3, pp. 2575-2590
  • Journal article (peer-reviewed), abstract:
    • Software performance changes are costly and often hard to detect pre-release. Similar to software testing frameworks, either application benchmarks or microbenchmarks can be integrated into quality assurance pipelines to detect performance changes before releasing a new application version. Unfortunately, extensive benchmarking studies usually take several hours, which is problematic when examining dozens of daily code changes in detail; hence, trade-offs have to be made. Optimized microbenchmark suites, which only include a small subset of the full suite, are a potential solution for this problem, given that they still reliably detect the majority of the application performance changes such as an increased request latency. It is, however, unclear whether microbenchmarks and application benchmarks detect the same performance problems and whether one can be a proxy for the other. In this paper, we explore whether microbenchmark suites can detect the same application performance changes as an application benchmark. For this, we run extensive benchmark experiments with both the complete and the optimized microbenchmark suites of two time-series database systems, i.e., InfluxDB and VictoriaMetrics, and compare their results to the results of corresponding application benchmarks. We do this for 70 and 110 commits, respectively. Our results show that it is not trivial to detect application performance changes using an optimized microbenchmark suite. The detection (i) is only possible if the optimized microbenchmark suite covers all application-relevant code sections, (ii) is prone to false alarms, and (iii) cannot precisely quantify the impact on application performance. For certain software projects, an optimized microbenchmark suite can, thus, provide fast performance feedback to developers (e.g., as part of a local build process), help estimate the impact of code changes on application performance, and support a detailed analysis while a daily application benchmark detects major performance problems. Thus, although the regular application benchmark cannot be substituted for either studied system, our results motivate further studies to validate and optimize microbenchmark suites.
  •  
10.
  • Islam, Mohammad A., et al. (author)
  • Water-Constrained Geographic Load Balancing in Data Centers
  • 2017
  • In: IEEE Transactions on Cloud Computing. - : IEEE. - 2168-7161. ; 5:2, pp. 208-220
  • Journal article (peer-reviewed), abstract:
    • Spreading across many parts of the world and currently hitting California hard, extended droughts could even potentially threaten reliable electricity production and local water supplies, both of which are critical for data center operation. While numerous efforts have been dedicated to reducing data centers' energy consumption, the enormity of data centers' water footprints is largely neglected and, if still left unchecked, may handicap service availability during droughts. In this paper, we propose a water-aware workload management algorithm, called WATCH (WATer-constrained workload sCHeduling in data centers), which caps data centers' long-term water consumption by exploiting spatio-temporal diversities of water efficiency and dynamically dispatching workloads among distributed data centers. We demonstrate the effectiveness of WATCH both analytically and empirically using simulations: based on only online information, WATCH can result in a provably-low operational cost while successfully capping water consumption under a desired level. Our results also show that WATCH can cut water consumption by 20 percent while only incurring a negligible cost increase, even compared to a state-of-the-art cost-minimizing but water-oblivious solution. Sensitivity studies are conducted to validate WATCH under various settings.
  •  
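The idea of exploiting spatial diversity in water efficiency can be illustrated with a toy greedy dispatcher (our own simplification; WATCH itself is an online algorithm with provable cost bounds, and the data-center names and WUE figures below are made up):

```python
def dispatch(load, centers):
    """Greedily route workload to the data centers with the best
    (lowest) water usage effectiveness (WUE) first, respecting each
    center's capacity. Returns the per-center allocation and the
    total water consumed."""
    allocation = {}
    water = 0.0
    # each center: (name, WUE in liters per unit of workload, capacity)
    for name, wue, cap in sorted(centers, key=lambda c: c[1]):
        served = min(load, cap)
        if served > 0:
            allocation[name] = served
            water += served * wue
            load -= served
    return allocation, water

alloc, water = dispatch(
    load=100,
    centers=[("dc-east", 1.8, 60), ("dc-west", 1.2, 50), ("dc-north", 2.5, 80)],
)
print(alloc)   # dc-west filled first (lowest WUE), then dc-east
print(water)   # 150.0 liters in total
```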
Type of publication
journal article (25)
Type of content
peer-reviewed (25)
Author/Editor
Vasilakos, Athanasio ... (4)
Taheri, Javid (3)
Kassler, Andreas, 19 ... (2)
Champati, Jaya Praka ... (2)
Al-Dulaimy, Auday (2)
Dán, György (2)
Liang, Ben (2)
Leitner, Philipp, 19 ... (2)
Tomas, Luis (2)
Carlsson, Niklas, 19 ... (1)
Zhou, J. (1)
Gehrmann, Christian (1)
Cai, H. (1)
Zomaya, Albert (1)
Xu, B (1)
Gu, Y. (1)
Ngai, Edith C.-H. (1)
HoseinyFarahabady, M ... (1)
Deng, Shuiguang (1)
Shu, Lei (1)
Vasilakos, Athanasio ... (1)
Khodaei, Mohammad (1)
Alhamazani, Khalid (1)
Ranjan, Rajiv (1)
Mitra, Karan (1)
Rabhi, Fethi (1)
Wang, Lizhe (1)
Jayaraman, Prem (1)
Liu, Chang (1)
Georgakopoulos, Dimi ... (1)
Tordsson, Johan, 198 ... (1)
Elmroth, Erik, 1964- (1)
Tordsson, Johan (1)
Elmroth, Erik (1)
Schelén, Olov (1)
Espling, Daniel, 198 ... (1)
Paladi, Nicolae (1)
Michalas, Antonis (1)
Papadimitratos, Pano ... (1)
Varvarigou, Theodora (1)
Bermbach, David (1)
Yan, Zheng (1)
Eager, Derek L. (1)
Li, He (1)
Larsson, Lars, 1983- (1)
Li, Wubin, 1983- (1)
Shakir, Muhammad Zee ... (1)
Gokan Khan, Michel, ... (1)
Grambow, M. (1)
Laaber, C. (1)
University
Luleå University of Technology (7)
Royal Institute of Technology (5)
Umeå University (4)
Karlstad University (3)
RISE (2)
University of Gothenburg (1)
Uppsala University (1)
Mälardalen University (1)
Linköping University (1)
Chalmers University of Technology (1)
Language
English (25)
Research subject (UKÄ/SCB)
Natural sciences (20)
Engineering and Technology (9)
