SwePub
Search the SwePub database


Result list for search "WFRF:(Mehta Amardeep)"

Search: WFRF:(Mehta Amardeep)

  • Results 1-10 of 11
1.
  • Ali-Eldin, Ahmed, 1985-, et al. (author)
  • How will your workload look like in 6 years? : Analyzing Wikimedia's workload
  • 2014
  • In: Proceedings of the 2014 IEEE International Conference on Cloud Engineering (IC2E 2014). - : IEEE Computer Society. - 9781479937660 ; pp. 349-354
  • Conference paper (peer-reviewed), abstract:
    • Accurate understanding of workloads is key to efficient cloud resource management as well as to the design of large-scale applications. We analyze and model the workload of Wikipedia, one of the world's largest web sites. With descriptive statistics, time-series analysis, and polynomial splines, we study the trend and seasonality of the workload, its evolution over the years, and also investigate patterns in page popularity. Our results indicate that the workload is highly predictable with a strong seasonality. Our short term prediction algorithm is able to predict the workload with a Mean Absolute Percentage Error of around 2%.
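The abstract above reports a Mean Absolute Percentage Error (MAPE) of around 2% for short-term workload prediction. Below is a minimal Python sketch of how MAPE is computed; the request counts and predictions are invented for illustration and are not data from the paper.

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical hourly request counts and a model's short-term predictions.
actual = [1000, 1200, 1150, 1300, 1250]
predicted = [980, 1230, 1120, 1320, 1260]
print(f"MAPE: {mape(actual, predicted):.2f}%")  # a value near 2% would match the paper's reported accuracy
```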
2.
  • Mehta, Amardeep, et al. (author)
  • Calvin Constrained : A Framework for IoT Applications in Heterogeneous Environments
  • 2017
  • In: 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS 2017). - : IEEE Computer Society. - 9781538617915 - 9781538617922 - 9781538617939 ; pp. 1063-1073
  • Conference paper (peer-reviewed), abstract:
    • Calvin is an IoT framework for application development, deployment, and execution in heterogeneous environments that include clouds, edge resources, and embedded or constrained resources. Inside Calvin, all the distributed resources are viewed as one environment by the application. The framework provides multi-tenancy and simplifies development of IoT applications, which are represented using a dataflow of application components (named actors) and their communication. The idea behind Calvin resembles the serverless architecture and can be seen as Actor as a Service instead of Function as a Service. This makes Calvin very powerful, as it not only scales actors quickly but also provides an easy actor migration capability. In this work, we propose Calvin Constrained, an extension to the Calvin framework covering resource-constrained devices. Due to the limited memory and processing power of embedded devices, the constrained side of the framework can only support a limited subset of the Calvin features. The current implementation of Calvin Constrained supports actors implemented in C as well as Python, where the support for Python actors is enabled by using MicroPython as a statically allocated library; this enables the automatic management of state variables and enhances code re-usability. As would be expected, Python-coded actors demand more resources than C-coded ones. We show that the extra resources needed are manageable on current off-the-shelf micro-controller-equipped devices when using the Calvin framework.
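The entry above describes applications as dataflows of actors that exchange tokens and can be migrated across cloud, edge, and constrained runtimes. The sketch below illustrates that dataflow-actor idea in plain Python only; it does not use Calvin's actual actor API, and the Sensor/Threshold actors and their wiring are invented for illustration.

```python
class Actor:
    """Minimal dataflow actor: consumes tokens from an inbox, emits to connected actors."""
    def __init__(self):
        self.inbox = []
        self.outputs = []  # downstream actors

    def connect(self, other):
        self.outputs.append(other)

    def send(self, token):
        for actor in self.outputs:
            actor.inbox.append(token)

    def fire(self):
        """Process one pending token, if any."""
        if self.inbox:
            self.on_token(self.inbox.pop(0))


class Sensor(Actor):
    def on_token(self, token):
        self.send(token)  # forward the raw reading downstream


class Threshold(Actor):
    def __init__(self, limit):
        super().__init__()
        self.limit = limit

    def on_token(self, token):
        if token > self.limit:
            print(f"alert: {token} exceeds {self.limit}")


# Wire a two-actor application and feed it three readings.
sensor, alarm = Sensor(), Threshold(limit=30)
sensor.connect(alarm)
for reading in (21, 28, 35):
    sensor.inbox.append(reading)
    sensor.fire()
    alarm.fire()
```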
3.
  • Mehta, Amardeep, 1985-, et al. (author)
  • Distributed Cost-Optimized Placement for Latency-Critical Applications in Heterogeneous Environments
  • 2018
  • In: Proceedings of the IEEE 15th International Conference on Autonomic Computing (ICAC). - : IEEE Computer Society. - 9781538651391 ; pp. 121-130
  • Conference paper (peer-reviewed), abstract:
    • Mobile Edge Clouds (MECs) with 5G will create new opportunities to develop latency-critical applications in domains such as intelligent transportation systems, process automation, and smart grids. However, it is not clear how one can cost-efficiently deploy and manage a large number of such applications given the heterogeneity of devices, application performance requirements, and workloads. This work explores cost and performance dynamics for IoT applications, and proposes distributed algorithms for automatic deployment of IoT applications in heterogeneous environments. Placement algorithms were evaluated with respect to metrics including the number of required runtimes, applications’ slowdown, and the number of iterations used to place an application. Iterative search-based distributed algorithms such as Size Interval Actor Assignment in Groups (SIAA-G) outperformed random and bin packing algorithms, and are therefore recommended for this purpose. The Size Interval Actor Assignment in Groups at Least Utilized Runtime (SIAA-G LUR) algorithm is also recommended when minimizing the number of iterations is important. The tradeoff of using SIAA-G algorithms is a few extra runtimes compared to bin packing algorithms.
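The abstract above compares iterative search-based placement (SIAA-G) against random and bin packing baselines. As a point of reference only, the sketch below shows a first-fit bin-packing placement of actors onto runtimes; it is a generic baseline, not the paper's SIAA-G algorithm, and the load and capacity figures are invented.

```python
def first_fit_placement(actor_loads, runtime_capacity):
    """Place each actor on the first runtime with spare capacity,
    opening a new runtime when none fits (first-fit bin packing)."""
    runtimes = []      # remaining capacity per runtime
    placement = {}     # actor index -> runtime index
    for i, load in enumerate(actor_loads):
        for r, free in enumerate(runtimes):
            if load <= free:
                runtimes[r] -= load
                placement[i] = r
                break
        else:
            runtimes.append(runtime_capacity - load)
            placement[i] = len(runtimes) - 1
    return placement, len(runtimes)

# Hypothetical actor CPU demands placed on runtimes of capacity 1.0 each.
placement, n = first_fit_placement([0.4, 0.7, 0.2, 0.5, 0.3], 1.0)
print(placement, "runtimes used:", n)
```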
4.
  • Mehta, Amardeep, 1985-, et al. (author)
  • How beneficial are intermediate layer Data Centers in Mobile Edge Networks?
  • 2016
  • In: Workshops on Fog and Mobile Edge Computing at Foundations and Applications of Self* Systems. - 9781509036516 ; pp. 222-229
  • Conference paper (peer-reviewed), abstract:
    • To reduce the congestion caused by future bandwidth-hungry applications in domains such as health care and the Internet of Things (IoT), we study the benefit of introducing additional Data Centers (DCs) closer to the network edge for optimal application placement. Our study shows that edge-layer DCs in a Mobile Edge Network (MEN) infrastructure are cost-beneficial for bandwidth-hungry applications with strong demand locality, and in scenarios where large capacity is deployed at the edge-layer DCs. The cost savings for such applications can reach 67%. Additional intermediate-layer DCs close to the root DC can be marginally cost-beneficial for compute-intensive applications with medium or low demand locality. Hence, a Telecom Network Operator should start by building an edge DC with a capacity of up to hundreds of servers at the network edge to cater to the emerging bandwidth-hungry applications and to minimize its operational cost.
5.
  • Mehta, Amardeep, et al. (author)
  • Online Spike Detection in Cloud Workloads
  • 2015
  • In: [Host publication title missing]. - New York : IEEE Computer Society. ; pp. 446-451
  • Conference paper (peer-reviewed), abstract:
    • We investigate methods for detecting rapid workload increases (load spikes) in cloud workloads. Such rapid and unexpected workload spikes are a main cause of poor performance or even crashing applications, as the allocated cloud resources become insufficient. Detecting the spikes early is fundamental for performing corrective management actions, such as allocating additional resources, before the spikes become large enough to cause problems. For this, we propose a number of methods for early spike detection, based on established techniques from adaptive signal processing. A comparative evaluation shows, for example, to what extent the different methods manage to detect the spikes, how early the detection is made, and how frequently they falsely report spikes.
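The paper above proposes several early spike-detection methods based on adaptive signal processing. The snippet below is a generic illustration of the idea, flagging samples that exceed an exponentially weighted moving-average forecast; it is not one of the paper's specific detectors, and the traffic trace is invented.

```python
def detect_spikes(load, alpha=0.3, threshold=1.5):
    """Flag time steps where the load exceeds an exponentially weighted
    moving average (EWMA) forecast by more than `threshold` times."""
    spikes, ewma = [], load[0]
    for t, x in enumerate(load[1:], start=1):
        if x > threshold * ewma:
            spikes.append(t)
        ewma = alpha * x + (1 - alpha) * ewma  # adapt the forecast to the new sample
    return spikes

# Hypothetical request rates with a sudden spike starting at t=5.
print(detect_spikes([100, 105, 98, 110, 102, 400, 390, 120]))  # -> [5, 6]
```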
6.
  • Mehta, Amardeep, 1985- (author)
  • Resource allocation for Mobile Edge Clouds
  • 2018
  • Doctoral thesis (other academic/artistic), abstract:
    • Recent advances in Internet technologies have led to the proliferation of new distributed applications in the transportation, healthcare, mining, security, and entertainment sectors. The emerging applications have characteristics such as being bandwidth-hungry and latency-critical, have user populations contained within a limited geographical area, and require high availability, low jitter, and security. One way of addressing the challenges arising from these emerging applications is to move computing capabilities closer to the end-users, at the logical edge of a network, in order to improve the performance, operating cost, and reliability of applications and services. These distributed new resources and software stacks, situated on the path between today's centralized data centers and devices in close proximity to the last mile network, are known as Mobile Edge Clouds (MECs). Distributed MECs provide new opportunities for the management of compute resources and the allocation of applications to those resources in order to minimize the overall cost of application deployment while satisfying end-user demands in terms of application performance. However, these opportunities also present three significant challenges. The first challenge is where, and how much, computing resources to deploy along the path between today's centralized data centers and devices for cost-optimal operations. The second challenge is where, and how much, resources should be allocated to which applications to meet the applications' performance requirements while minimizing operational costs. The third challenge is how to provide a framework for application deployment on resource-constrained IoT devices in heterogeneous environments. This thesis addresses the above challenges by proposing several models, algorithms, and simulation and software frameworks. In the first part, we investigate methods for early detection of short-lived and significant increases in demand for computing resources (also called spikes), which may cause significant degradation in the performance of a distributed application. We make use of adaptive signal processing techniques for early detection of spikes. We then consider trade-offs between parameters such as the time taken to detect a spike and the number of false spikes that are detected. In the second part, we study the resource planning problem, where we study the cost benefits of adding new compute resources based on performance requirements for emerging applications. In the third part, we study the problem of allocating resources to applications by formulating it as an optimization problem, where the objective is to minimize overall operational cost while meeting the performance targets of applications. We also propose a hierarchical scheduling framework and policies for allocating resources to applications based on performance metrics of both applications and compute resources. In the last part, we propose a framework, Calvin Constrained, for resource-constrained devices, which is an extension of the Calvin framework and supports a limited but essential subset of the features of the reference framework, taking into account the limited memory and processing power of resource-constrained IoT devices.
7.
  • Mehta, Amardeep, 1985-, et al. (author)
  • Utility-based Allocation of Industrial IoT Applications in Mobile Edge Clouds
  • 2018
  • In: 2018 IEEE 37th International Performance Computing and Communications Conference (IPCCC). - Umeå : Umeå universitet. - 9781538668085 - 9781538668078 - 9781538668092
  • Report (other academic/artistic), abstract:
    • Mobile Edge Clouds (MECs) create new opportunities and challenges in terms of scheduling and running applications that have a wide range of latency requirements, such as intelligent transportation systems, process automation, and smart grids. We propose a two-tier scheduler for allocating runtime resources to Industrial Internet of Things (IIoT) applications in MECs. The higher-level scheduler runs periodically, monitoring system state and the performance of applications, and decides whether to admit new applications and migrate existing applications. In contrast, the lower-level scheduler decides which application will get the runtime resource next. We use performance-based metrics that tell the extent to which the runtimes are meeting the Service Level Objectives (SLOs) of the hosted applications. The Application Happiness metric is based on a single application's performance and SLOs. The Runtime Happiness metric is based on the Application Happiness of the applications the runtime is hosting. These metrics may be used for decision-making by the scheduler, rather than runtime utilization, for example. We evaluate four scheduling policies for the high-level scheduler and five for the low-level scheduler. The objective for the schedulers is to minimize cost while meeting the SLO of each application. The policies are evaluated with respect to the number of runtimes, the impact on the performance of applications, and the utilization of the runtimes. The results of our evaluation show that the high-level policy based on Runtime Happiness combined with the low-level policy based on Application Happiness outperforms other policies for the schedulers, including the bin packing and random strategies. In particular, our combined policy requires up to 30% fewer runtimes than the simple bin packing strategy and increases runtime utilization by up to 40% for the Edge Data Center (DC) in the scenarios we evaluated.
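The report above bases scheduling decisions on an Application Happiness metric (how well a single application meets its SLO) and a Runtime Happiness metric (aggregated over the applications a runtime hosts). The exact formulas are not given in the abstract, so the sketch below uses an assumed ratio-based definition purely to illustrate how such metrics could feed a scheduler.

```python
def application_happiness(slo_latency_ms, observed_latency_ms):
    """Assumed definition: 1.0 when the SLO is met, decreasing as the
    observed latency exceeds the SLO (the paper's exact formula may differ)."""
    return min(1.0, slo_latency_ms / observed_latency_ms)

def runtime_happiness(apps):
    """Aggregate the happiness of all applications hosted on one runtime."""
    scores = [application_happiness(slo, obs) for slo, obs in apps]
    return sum(scores) / len(scores)

# Hypothetical runtime hosting three applications: (SLO, observed) latency in ms.
hosted = [(50, 40), (20, 25), (100, 90)]
print(f"runtime happiness: {runtime_happiness(hosted):.2f}")
```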
8.
  • Nguyen, Chanh Le Tan, 1985-, et al. (author)
  • Why Cloud Applications Are not Ready for the Edge (yet)
  • 2019
  • In: Proceedings of the 4th ACM/IEEE Symposium on Edge Computing. - New York, NY, USA : IEEE. - 9781450367332 ; pp. 250-263
  • Conference paper (other academic/artistic), abstract:
    • Mobile Edge Clouds (MECs) are distributed platforms in which distant data centers are complemented with computing and storage capacity located at the edge of the network. Their wide resource distribution enables MECs to fulfill the need for low latency and high bandwidth to offer an improved user experience. As modern cloud applications are increasingly architected as collections of small, independently deployable services, they can be flexibly deployed in various configurations that combine resources from both centralized datacenters and edge locations. In principle, such applications should therefore be well-placed to exploit the advantages of MECs so as to reduce service response times. In this paper, we quantify the benefits of deploying such cloud micro-service applications on MECs. Using two popular benchmarks, we show that, against conventional wisdom, end-to-end latency does not improve significantly even when most application services are deployed in the edge location. We developed a profiler to better understand this phenomenon, allowing us to develop recommendations for adapting applications to MECs. Further, by quantifying the gains of those recommendations, we show that the performance of an application can be made to reach the ideal scenario, in which the latency between an edge datacenter and a remote datacenter has no impact on the application performance. This work thus presents ways of adapting cloud-native applications to take advantage of MECs and provides guidance for developing MEC-native applications. We believe that both of these elements are necessary to drive MEC adoption.
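One intuition behind the finding above, that moving most services to the edge does not necessarily reduce end-to-end latency, is that a sequential request path pays the full round-trip for every call that still crosses to the remote data center. The sketch below illustrates this with invented round-trip times and an invented five-call chain.

```python
EDGE_RTT_MS, REMOTE_RTT_MS = 2, 40  # assumed round-trip times to each location

def end_to_end_latency(call_chain):
    """Sum network round-trips for a sequential chain of service calls,
    where each element tells whether the callee runs at the edge."""
    return sum(EDGE_RTT_MS if at_edge else REMOTE_RTT_MS for at_edge in call_chain)

# Five sequential calls; all but one service moved to the edge.
mostly_edge = [True, True, False, True, True]
all_edge = [True] * 5
print(end_to_end_latency(mostly_edge), "ms vs", end_to_end_latency(all_edge), "ms")
# -> 48 ms vs 10 ms: the single remaining remote call dominates the response time.
```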
9.
  • Tärneberg, William, et al. (author)
  • Distributed Approach to the Holistic Resource Management of a Mobile Cloud Network
  • 2017
  • In: International Conference of Fog and Edge Computing. - 9781509030477
  • Conference paper (peer-reviewed), abstract:
    • The Mobile Cloud Network is an emerging cost- and capacity-heterogeneous distributed cloud topological paradigm that aims to remedy the application performance constraints imposed by centralised cloud infrastructures. A centralised cloud infrastructure and the adjoining Telecom network will struggle to accommodate the exploding amount of traffic generated by forthcoming highly interactive applications. Cost-effectively managing a Mobile Cloud Network computing infrastructure while meeting individual applications' performance goals is non-trivial and is at the core of our contribution. Due to the scale of a Mobile Cloud Network, a centralised approach is infeasible. Therefore, in this paper a distributed algorithm that addresses these challenges is presented. The presented approach works towards meeting individual applications' performance objectives, constricting system-wide operational cost, and mitigating resource usage skewness. The presented distributed algorithm does so by iteratively and independently acting on the objectives of each component with a common heuristic objective function. Systematic evaluations reveal that the presented algorithm quickly converges and performs near-optimally in terms of system-wide operational cost and application performance, and significantly outperforms similar naïve and random methods.
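The distributed algorithm described above iterates on a common heuristic objective that balances application performance, system-wide operational cost, and resource-usage skewness. The weighted objective below is an invented illustration of that kind of trade-off, not the paper's actual objective function.

```python
def objective(latency_violation, operational_cost, utilisation):
    """Lower is better: penalise unmet latency targets, running cost,
    and skewed (uneven) utilisation across data centers."""
    mean = sum(utilisation) / len(utilisation)
    skewness = sum(abs(u - mean) for u in utilisation) / len(utilisation)
    return 1.0 * latency_violation + 0.5 * operational_cost + 2.0 * skewness

# Compare two candidate placements of the same application set (invented numbers).
balanced = objective(latency_violation=0.0, operational_cost=10, utilisation=[0.5, 0.6, 0.55])
overloaded = objective(latency_violation=0.3, operational_cost=9, utilisation=[0.9, 0.2, 0.1])
print(balanced, overloaded)  # the balanced placement scores lower (better)
```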
10.
  • Tärneberg, William, et al. (author)
  • Dynamic application placement in the Mobile Cloud Network
  • 2017
  • In: Future Generation Computer Systems. - : Elsevier BV. - 0167-739X .- 1872-7115. ; 70, pp. 163-177
  • Journal article (peer-reviewed), abstract:
    • To meet the challenges of consistent performance, low communication latency, and a high degree of user mobility, cloud and Telecom infrastructure vendors and operators foresee a Mobile Cloud Network that incorporates public cloud infrastructures with cloud-augmented Telecom nodes in forthcoming mobile access networks. A Mobile Cloud Network is composed of distributed cost- and capacity-heterogeneous resources that host applications which in turn are subject to a spatially and quantitatively rapidly changing demand. Such an infrastructure requires a holistic management approach that ensures that the resident applications' performance requirements are met while being sustainably supported by the underlying infrastructure. The contribution of this paper is three-fold. Firstly, this paper contributes a model that captures the cost- and capacity-heterogeneity of a Mobile Cloud Network infrastructure. The model bridges the Mobile Edge Computing and Distributed Cloud paradigms by modelling multiple tiers of resources across the network and serves not just mobile devices but any client beyond and within the network. A set of resource management challenges is presented based on this model. Secondly, an algorithm that holistically and optimally solves these challenges is proposed. The algorithm is formulated as an application placement method that incorporates aspects of network link capacity, desired user latency and user mobility, as well as data centre resource utilisation and server provisioning costs. Thirdly, to address scalability, a tractable locally optimal algorithm is presented. The evaluation demonstrates that the placement algorithm significantly improves latency and resource utilisation skewness while minimising the operational cost of the system. Additionally, the proposed model and evaluation method demonstrate the viability of dynamic resource management of the Mobile Cloud Network and the need for accommodating rapidly mobile demand in a holistic manner.
