SwePub

Result list for search "WFRF:(Brandic Ivona)"


  • Results 1-16 of 16
1.
  • Abraham, Erika, et al. (author)
  • Preparing HPC Applications for Exascale : Challenges and Recommendations
  • 2015
  • In: Proceedings. - : IEEE conference proceedings. - 9781479999415 ; pp. 401-406
  • Conference paper (peer-reviewed), abstract:
    • While the HPC community is working towards the development of the first Exaflop computer (expected around 2020), even after reaching the Petaflop milestone in 2008 still only a few HPC applications are able to fully exploit the capabilities of Petaflop systems. In this paper we argue that efforts for preparing HPC applications for Exascale should start before such systems become available. We identify challenges that need to be addressed and recommend solutions in key areas of interest, including formal modeling, static analysis and optimization, runtime analysis and optimization, and autonomic computing. Furthermore, we outline a conceptual framework for porting HPC applications to future Exascale computing systems and propose steps for its implementation.
2.
  • Ahmad, Sabtain, et al. (author)
  • Sustainable environmental monitoring via energy and information efficient multi-node placement
  • 2023
  • In: IEEE Internet of Things Journal. - : IEEE. - 2327-4662. ; 10:24, pp. 22065-22079
  • Journal article (peer-reviewed), abstract:
    • The Internet of Things is gaining traction for sensing and monitoring outdoor environments such as water bodies, forests, or agricultural lands. Sustainable deployment of sensors for environmental sampling is a challenging task because of the spatial and temporal variation of the environmental attributes to be monitored, the lack of the infrastructure to power the sensors for uninterrupted monitoring, and the large continuous target environment despite the sparse and limited sampling locations. In this paper, we present an environment monitoring framework that deploys a network of sensors and gateways connected through low-power, long-range networking to perform reliable data collection. The three objectives correspond to the optimization of information quality, communication capacity, and sustainability. Therefore, the proposed environment monitoring framework consists of three main components: (i) to maximize the information collected, we propose an optimal sensor placement method based on QR decomposition that deploys sensors at information- and communication-critical locations; (ii) to facilitate the transfer of big streaming data and alleviate the network bottleneck caused by low bandwidth, we develop a gateway configuration method with the aim to reduce the deployment and communication costs; and (iii) to allow sustainable environmental monitoring, an energy-aware optimization component is introduced. We validate our method by presenting a case study for monitoring the water quality of the Ergene River in Turkey. Detailed experiments subject to real-world data show that the proposed method is both accurate and efficient in monitoring a large environment and catching up with dynamic changes. (A minimal code sketch of the QR-based placement step follows this entry.)
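The QR-decomposition-based placement in entry 2 can be illustrated with a minimal, hedged sketch: column-pivoted QR applied to a low-rank basis of historical measurements ranks candidate locations by how much information they contribute. The function name, the use of an SVD basis, and the synthetic data are assumptions for illustration, not the paper's exact formulation.

import numpy as np
from scipy.linalg import qr, svd

def select_sensor_locations(snapshots, n_sensors):
    """snapshots: (n_locations, n_times) array of historical readings.
    Returns indices of the candidate locations ranked most informative."""
    # Low-rank spatial basis (SVD/POD modes) of the snapshot matrix.
    U, _, _ = svd(snapshots, full_matrices=False)
    basis = U[:, :n_sensors]                      # (n_locations, n_sensors)
    # Column-pivoted QR of the transposed basis: the pivot order ranks locations.
    _, _, pivots = qr(basis.T, pivoting=True)
    return pivots[:n_sensors]

# Toy usage with synthetic data standing in for river measurements.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 50))                 # 200 candidate sites, 50 time steps
print(select_sensor_locations(data, n_sensors=5))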
3.
  • Brandic, Ivona, et al. (author)
  • An approach for the high-level specification of QoS-aware grid workflows considering location affinity
  • 2006
  • In: Scientific Programming. - : Hindawi Limited. - 1058-9244 .- 1875-919X. ; 14:3-4, pp. 231-250
  • Journal article (peer-reviewed), abstract:
    • Many important scientific and engineering problems may be solved by combining multiple applications in the form of a Grid workflow. We consider that for the wide acceptance of Grid technology it is important that the user has the possibility to express requirements on Quality of Service (QoS) at workflow specification time. However, most of the existing workflow languages lack constructs for QoS specification. In this paper we present an approach for high-level workflow specification that considers a comprehensive set of QoS requirements. Besides performance-related QoS, it includes economical, legal and security aspects. For instance, for security or legal reasons the user may express the location affinity regarding Grid resources on which certain workflow tasks may be executed. Our QoS-aware workflow system provides support for the whole workflow life cycle from specification to execution. The workflow is specified graphically, in an intuitive manner, based on a standard visual modeling language. A set of QoS-aware service-oriented components is provided for workflow planning to support automatic constraint-based service negotiation and workflow optimization. For reducing the complexity of workflow planning, we introduce a QoS-aware workflow reduction technique. We illustrate our approach with a real-world workflow for maxillofacial surgery simulation.
4.
5.
  • Brandic, Ivona, et al. (author)
  • Specification, Planning, and Execution of QoS-aware Grid Workflows within the Amadeus Environment
  • 2008
  • In: Concurrency and Computation. - : John Wiley & Sons. - 1532-0626 .- 1532-0634. ; 20:4, pp. 331-345
  • Journal article (peer-reviewed), abstract:
    • Commonly, at a high level of abstraction Grid applications are specified based on the workflow paradigm. However, the majority of Grid workflow systems either do not support Quality of Service (QoS), or provide only partial QoS support for certain phases of the workflow lifecycle. In this paper we present Amadeus, which is a holistic service-oriented environment for QoS-aware Grid workflows. Amadeus considers user requirements, in terms of QoS constraints, during workflow specification, planning, and execution. Within the Amadeus environment workflows and the associated QoS constraints are specified at a high level using an intuitive graphical notation. A distinguishing feature of our system is the support of a comprehensive set of QoS requirements, which considers in addition to performance and economical aspects also legal and security aspects. A set of QoS-aware service-oriented components is provided for workflow planning to support automatic constraint-based service negotiation and workflow optimization. For improving the efficiency of workflow planning we introduce a QoS-aware workflow reduction technique. Furthermore, we present our static and dynamic planning strategies for workflow execution in accordance with user-specified requirements. For each phase of the workflow lifecycle we experimentally evaluate the corresponding Amadeus components. (A minimal sketch of constraint-based service selection follows this entry.)
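To make the constraint-based service selection mentioned in entries 3 and 5 concrete, here is a minimal, hedged sketch: for each workflow task, pick the cheapest candidate Grid service that respects a location-affinity constraint and keeps the accumulated execution time within the deadline. The data model and the greedy strategy are illustrative assumptions, not the Amadeus planner.

from typing import Dict, List, Optional, Set

Candidate = Dict[str, object]  # e.g. {"name": "simA", "time_s": 300.0, "price": 10.0, "location": "EU"}

def plan_workflow(tasks: List[List[Candidate]], deadline_s: float,
                  allowed_locations: Optional[Set[str]] = None) -> Optional[List[Candidate]]:
    """Greedily select one service per task; return None if no feasible plan is found."""
    plan, total_time = [], 0.0
    for candidates in tasks:
        # Location affinity: e.g. a legal constraint on where data may be processed.
        feasible = [c for c in candidates
                    if allowed_locations is None or c["location"] in allowed_locations]
        # Cheapest service first, ties broken by execution time.
        feasible.sort(key=lambda c: (c["price"], c["time_s"]))
        chosen = next((c for c in feasible
                       if total_time + c["time_s"] <= deadline_s), None)
        if chosen is None:
            return None
        plan.append(chosen)
        total_time += chosen["time_s"]
    return plan

tasks = [
    [{"name": "meshA", "time_s": 120.0, "price": 3.0, "location": "EU"},
     {"name": "meshB", "time_s": 60.0, "price": 8.0, "location": "US"}],
    [{"name": "simA", "time_s": 300.0, "price": 10.0, "location": "EU"}],
]
print(plan_workflow(tasks, deadline_s=600.0, allowed_locations={"EU"}))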
6.
  • Catalfamo, Alessio, et al. (author)
  • Machine learning workflows in the computing continuum for environmental monitoring
  • 2024
  • In: Computational science – ICCS 2024. - Cham : Springer. - 9783031637742 - 9783031637759 ; pp. 368-382
  • Conference paper (peer-reviewed), abstract:
    • Cloud-Edge Continuum is an innovative approach that exploits the strengths of the two paradigms: Cloud and Edge computing. This new approach gives us a holistic vision of this environment, enabling new kinds of applications that can exploit both the Edge computing advantages (e.g., real-time response, data security, and so on) and the powerful Cloud computing infrastructure for high computational requirements. This paper proposes a Cloud-Edge computing Workflow solution for Machine Learning (ML) inference in a hydrogeological use case. Our solution is designed in a Cloud-Edge Continuum environment thanks to Pegasus Workflow Management System Tools that we use for the implementation phase. The proposed work splits the inference tasks, transparently distributing the computation performed by each layer between Cloud and Edge infrastructure. We use two models to implement a proof-of-concept of the proposed solution.
7.
  • Faragardi, Hamid Reza, 1987- (author)
  • Optimizing Timing-Critical Cloud Resources in a Smart Factory
  • 2018
  • Doctoral thesis (other academic/artistic), abstract:
    • This thesis addresses the topic of resource efficiency in the context of timing-critical components that are used in the realization of a Smart Factory. The concept of the smart factory is a recent paradigm to build future production systems in a way that is both smarter and more flexible. When it comes to realization of a smart factory, three principal elements play a significant role, namely Embedded Systems, Internet of Things (IoT) and Cloud Computing. In a smart factory, efficient use of computing and communication resources is a prerequisite not only to obtain a desirable performance for running industrial applications, but also to minimize the deployment cost of the system in terms of the size and number of resources that are required to run industrial applications with an acceptable level of performance. Most industrial applications that are involved in smart factories, e.g., automation and manufacturing applications, are subject to a set of strict timing constraints that must be met for the applications to operate properly. Such applications, including underlying hardware and software components that are used to run the application, constitute a real-time system. In real-time systems, the first and major concern of the system designer is to provide a solution where all timing constraints are met. To do so we need a time-predictable IoT/Cloud Computing framework to deal with the real-time constraints that are inherent in industrial applications running in a smart factory. Afterwards, with respect to the time-predictable framework, the number of required computing and communication resources can and should be optimized such that the deployed system is cost-efficient. In this thesis, to investigate and present solutions that provide and improve the resource efficiency of computing and communication resources in a smart factory, we conduct research following three themes: (i) multi-core embedded processors, which are the key element in terms of computing components embedded in the machinery of a smart factory, (ii) cloud computing data centers, as the supplier of massive data storage and large computational power, and (iii) IoT, for providing the interconnection of computing components embedded in the objects of a smart factory. Each of these themes is targeted separately to optimize resource efficiency. For each theme, we identify key challenges when it comes to achieving a resource-efficient design of the system. We then formulate the problem and propose solutions to optimize the resource efficiency of the system, while satisfying all timing constraints reflected in the model. We then propose a comprehensive resource allocation mechanism to optimize the resource efficiency in the whole system while considering the characteristics of each of these research themes. The experimental results indicate a clear improvement when it comes to timing-critical IoT/Cloud Computing resources in a smart factory. At the level of multi-core embedded devices, the total CPU usage of a quad-core processor is shown to be improved by 11.2%. At the level of Cloud Computing, the number of cloud servers that are required to execute a given set of real-time applications is shown to be reduced by 25.5%. In terms of network components that are used to collect sensor data, our proposed approach reduces the total deployment cost of the system by 24%. In summary, these results all contribute towards the realization of a future smart factory.
8.
  • Farokhi, Soodeh, et al. (author)
  • A hybrid cloud controller for vertical memory elasticity : a control-theoretic approach
  • 2016
  • In: Future generations computer systems. - : Elsevier. - 0167-739X .- 1872-7115. ; 65, pp. 57-72
  • Journal article (peer-reviewed), abstract:
    • Web-facing applications are expected to provide certain performance guarantees despite dynamic and continuous workload changes. As a result, application owners are using cloud computing as it offers the ability to dynamically provision computing resources (e.g., memory, CPU) in response to changes in workload demands to meet performance targets and eliminates upfront costs. Horizontal, vertical, and the combination of the two are the possible dimensions in which a cloud application can be scaled in terms of allocated resources. In vertical elasticity, the focus of this work, the size of virtual machines (VMs) can be adjusted in terms of allocated computing resources according to the runtime workload. A commonly used vertical resource elasticity approach decides based on resource utilization (capacity-based), while a newer trend is to use the application performance as the decision-making criterion (performance-based). This paper discusses these two approaches and proposes a novel hybrid elasticity approach that takes into account both the application performance and the resource utilization to leverage the benefits of both approaches. The proposed approach is used in realizing vertical elasticity of memory (named vertical memory elasticity), where the allocated memory of the VM is auto-scaled at runtime. To this aim, we use control theory to synthesize a feedback controller that meets the application performance constraints by auto-scaling the allocated memory, i.e., applying vertical memory elasticity. Different from the existing vertical resource elasticity approaches, the novelty of our work lies in utilizing both the memory utilization and application response time as decision-making criteria. To verify the resource efficiency and the ability of the controller in handling unexpected workloads, we have implemented the controller on top of the Xen hypervisor and performed a series of experiments using the RUBBoS interactive benchmark application, under synthetic and real workloads including Wikipedia and FIFA. The results reveal that the hybrid controller meets the application performance target with better performance stability (i.e., lower standard deviation of response time), while achieving a high memory utilization (close to 83%), and allocating less memory compared to all other baseline controllers. (A minimal sketch of such a feedback loop follows this entry.)
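As a rough illustration of the control-theoretic idea in entry 8, the sketch below implements a simple integral-style feedback loop that grows or shrinks a VM's memory allocation based on the deviation of the measured response time from its target. The gain, bounds, and allocation quantum are illustrative assumptions; the paper's actual controller synthesis differs.

from dataclasses import dataclass

@dataclass
class MemoryElasticityController:
    target_rt: float          # desired response time (seconds)
    gain: float = 0.2         # adjustment strength per unit of relative error
    step_mb: int = 256        # quantum in which memory is (de)allocated
    min_mb: int = 512
    max_mb: int = 8192

    def next_allocation(self, current_mb: int, measured_rt: float) -> int:
        # Relative error: positive when the application is too slow.
        error = (measured_rt - self.target_rt) / self.target_rt
        delta = self.gain * error * current_mb
        # Quantize to the allocation step and clamp to the allowed range.
        quantized = int(round(delta / self.step_mb)) * self.step_mb
        return max(self.min_mb, min(self.max_mb, current_mb + quantized))

ctrl = MemoryElasticityController(target_rt=0.5)
alloc = 2048
for rt in [0.9, 0.7, 0.55, 0.45, 0.4]:       # observed response times per control interval
    alloc = ctrl.next_allocation(alloc, rt)
    print(alloc)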
9.
  • Farokhi, Soodeh, et al. (author)
  • Coordinating CPU and Memory Elasticity Controllers to Meet Service Response Time Constraints
  • 2015
  • In: 2015 International Conference on Cloud and Autonomic Computing (ICCAC). - 9781467395663 ; pp. 69-80
  • Conference paper (peer-reviewed), abstract:
    • Vertical elasticity is recognized as a key enabler for efficient resource utilization of cloud infrastructure through fine-grained resource provisioning, e.g., allowing CPU cycles to be leased for as short as a few seconds. However, little research has been done to support vertical elasticity, and existing work focuses mostly on a single resource, either CPU or memory, while an application may need arbitrary combinations of these resources at different stages of its execution. Nonetheless, the existing techniques cannot be readily used as-is without proper orchestration since they may lead to either under- or over-provisioning of resources and consequently result in undesirable behaviors such as performance disparity. The contribution of this paper is the design of an autonomic resource controller using a fuzzy control approach as a coordination technique. The novel controller dynamically adjusts the right amount of CPU and memory required to meet the performance objective of an application, namely its response time. We perform a thorough experimental evaluation using three different interactive benchmark applications, RUBiS, RUBBoS, and Olio, under workload traces generated based on open and closed system models. The results show that the coordination of memory and CPU elasticity controllers using the proposed fuzzy control provisions the right amount of resources to meet the response time target without over-committing any of the resource types. In contrast, with no coordination between controllers, the behaviour of the system is unpredictable, e.g., the application performance target may be met but at the expense of over-provisioning one of the resources, or the application may crash due to a severe resource shortage resulting from conflicting decisions. (A minimal sketch of such a coordination rule base follows this entry.)
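A loose, hedged sketch of how fuzzy-style rules can coordinate CPU and memory scaling decisions, in the spirit of entry 9: the degree to which the response-time target is violated is combined with per-resource pressure to decide which resource to scale. The membership breakpoints and weights are invented for illustration and are not the paper's rule base.

def membership_high(x: float, lo: float, hi: float) -> float:
    """Degree in [0, 1] to which x counts as 'high' between lo and hi."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def coordinate(rt_error: float, cpu_util: float, mem_util: float) -> dict:
    """rt_error > 0 means the measured response time exceeds its target.
    Returns scaling factors for CPU and memory in the range [-1, 1]."""
    slow = membership_high(rt_error, 0.0, 0.5)          # how strongly the RT target is violated
    cpu_pressure = membership_high(cpu_util, 0.6, 0.9)  # CPU looks like the bottleneck
    mem_pressure = membership_high(mem_util, 0.6, 0.9)  # memory looks like the bottleneck
    # Scale up the resource whose pressure explains the slowdown; release idle capacity otherwise.
    cpu_action = slow * cpu_pressure - (1.0 - slow) * (1.0 - cpu_pressure) * 0.5
    mem_action = slow * mem_pressure - (1.0 - slow) * (1.0 - mem_pressure) * 0.5
    return {"cpu": cpu_action, "memory": mem_action}

print(coordinate(rt_error=0.4, cpu_util=0.85, mem_util=0.55))   # scale CPU up, slightly release memory
print(coordinate(rt_error=-0.1, cpu_util=0.30, mem_util=0.40))  # release some of both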
10.
  • Luger, Daniel, et al. (author)
  • Cost-aware neural network splitting and dynamic rescheduling for edge intelligence
  • 2023
  • In: EdgeSys '23. - : ACM Digital Library. - 9798400700828 ; pp. 42-47
  • Conference paper (peer-reviewed), abstract:
    • With the rise of IoT devices and the necessity of intelligent applications, inference tasks are often offloaded to the cloud due to the computational limitations of the end devices. Yet, requests to the cloud are costly in terms of latency, and therefore a shift of the computation from the cloud to the network's edge is unavoidable. This shift is called edge intelligence and promises lower latency, among other advantages. However, some algorithms, like deep neural networks, are computationally intensive, even for local edge servers (ES). To keep latency low, such DNNs can be split into two parts and distributed between the ES and the cloud. We present a dynamic scheduling algorithm that takes real-time parameters like the clock speed of the ES, bandwidth, and latency into account and predicts the optimal splitting point regarding latency. Furthermore, we estimate the overall costs for the ES and cloud during run-time and integrate them into our prediction and decision models. We present a cost-aware prediction of the splitting point, which can be tuned with a parameter toward faster response or lower costs. (A minimal sketch of the split-point search follows this entry.)
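Entry 10's split-point selection can be sketched as a small search over candidate cut layers that minimizes estimated end-to-end latency plus an optional cloud-cost term. The layer timings, activation sizes, and cost weighting below are illustrative assumptions, not measurements or the paper's decision model.

from typing import List, Tuple

def best_split(edge_ms: List[float], cloud_ms: List[float], out_mbytes: List[float],
               input_mbytes: float, bandwidth_mbps: float, rtt_ms: float,
               cloud_cost_per_ms: float = 0.0, cost_weight: float = 0.0) -> Tuple[int, float]:
    """Layers [0, split) run on the edge server, layers [split, n) in the cloud.
    split == n keeps everything on the edge; returns (split, objective)."""
    n = len(edge_ms)
    best_choice, best_objective = n, sum(edge_ms)   # default: fully on the edge
    for split in range(n + 1):
        edge_part = sum(edge_ms[:split])
        cloud_part = sum(cloud_ms[split:])
        if split < n:
            # Transfer the input (split == 0) or the last edge layer's activation.
            size_mb = input_mbytes if split == 0 else out_mbytes[split - 1]
            transfer = rtt_ms + 1000.0 * size_mb * 8.0 / bandwidth_mbps
        else:
            transfer = 0.0
        latency = edge_part + transfer + cloud_part
        objective = latency + cost_weight * cloud_cost_per_ms * cloud_part
        if objective < best_objective:
            best_choice, best_objective = split, objective
    return best_choice, best_objective

# Toy 4-layer model: the edge is slower per layer, the cloud adds transfer overhead.
print(best_split(edge_ms=[8, 12, 30, 25], cloud_ms=[2, 3, 6, 5],
                 out_mbytes=[1.5, 0.8, 0.3, 0.01], input_mbytes=2.0,
                 bandwidth_mbps=50.0, rtt_ms=20.0))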
11.
  • Memeti, Suejb, et al. (author)
  • A Review of Machine Learning and Meta-heuristic Methods for Scheduling Parallel Computing Systems
  • 2018
  • In: Proceedings of the International Conference on Learning and Optimization Algorithms: Theory and Applications LOPAL 2018. - New York, NY, USA : Association for Computing Machinery (ACM). - 9781450353045
  • Conference paper (peer-reviewed), abstract:
    • Optimized software execution on parallel computing systems demands consideration of many parameters at run-time. Determining the optimal set of parameters in a given execution context is a complex task, and therefore to address this issue researchers have proposed different approaches that use heuristic search or machine learning. In this paper, we undertake a systematic literature review to aggregate, analyze and classify the existing software optimization methods for parallel computing systems. We review approaches that use machine learning or meta-heuristics for scheduling parallel computing systems. Additionally, we discuss challenges and future research directions. The results of this study may help to better understand the state-of-the-art techniques that use machine learning and meta-heuristics to deal with the complexity of scheduling parallel computing systems. Furthermore, it may aid in understanding the limitations of existing approaches and identification of areas for improvement.
12.
  • Memeti, Suejb, et al. (author)
  • Using meta-heuristics and machine learning for software optimization of parallel computing systems : a systematic literature review
  • 2019
  • In: Computing. - : Springer. - 0010-485X .- 1436-5057. ; 101:8, pp. 893-936
  • Journal article (peer-reviewed), abstract:
    • While modern parallel computing systems offer high performance, utilizing these powerful computing resources to the highest possible extent demands advanced knowledge of various hardware architectures and parallel programming models. Furthermore, optimized software execution on parallel computing systems demands consideration of many parameters at compile-time and run-time. Determining the optimal set of parameters in a given execution context is a complex task, and therefore to address this issue researchers have proposed different approaches that use heuristic search or machine learning. In this paper, we undertake a systematic literature review to aggregate, analyze and classify the existing software optimization methods for parallel computing systems. We review approaches that use machine learning or meta-heuristics for software optimization at compile-time and run-time. Additionally, we discuss challenges and future research directions. The results of this study may help to better understand the state-of-the-art techniques that use machine learning and meta-heuristics to deal with the complexity of software optimization for parallel computing systems. Furthermore, it may aid in understanding the limitations of existing approaches and identification of areas for improvement.
13.
  • Pllana, Sabri, et al. (author)
  • A Survey of the State of the Art in Performance Modeling and Prediction of Parallel and Distributed Computing Systems
  • 2008
  • In: International Journal of Computational Intelligence Research (IJCIR). - 0973-1873. ; 4:1, pp. 17-26
  • Journal article (peer-reviewed), abstract:
    • Performance is one of the key features of parallel and distributed computing systems. Therefore, in the past a significant research effort was invested in the development of approaches for performance modeling and prediction of parallel and distributed computing systems. In this paper we identify the trends, contributions, and drawbacks of the state of the art approaches. We describe a wide range of the performance modeling approaches that spans from the high-level mathematical modeling to the detailed instruction-level simulation. For each approach we describe how the program and machine are modeled and estimate the model development and evaluation effort, the efficiency, and the accuracy. Furthermore, we present an overall evaluation of the described approaches.
14.
  • Struhar, Vaclav (author)
  • Improving Soft Real-time Performance of Fog Computing
  • 2021
  • Licentiate thesis (other academic/artistic), abstract:
    • Fog computing is a distributed computing paradigm that brings data processing from remote cloud data centers into the vicinity of the edge of the network. The computation is performed closer to the source of the data, and thus it decreases the time unpredictability of cloud computing that stems from (i) the computation in shared multi-tenant remote data centers, and (ii) long-distance data transfers between the source of the data and the data centers. The computation in fog computing provides fast response times and enables latency-sensitive applications. However, industrial systems require time-bounded response times, also denoted as RT. The correctness of such systems depends not only on the logical results of the computations but also on the physical time instant at which these results are produced. Time-bounded responses in fog computing are attributed to two main aspects: computation and communication. In this thesis, we explore both aspects targeting soft RT applications in fog computing in which the usefulness of the produced computational results degrades with violations of real-time requirements. With regard to the computation, we provide a systematic literature survey on a novel lightweight RT container-based virtualization that ensures spatial and temporal isolation of co-located applications. Subsequently, we utilize a mechanism enabling RT container-based virtualization and propose a solution for orchestrating RT containers in a distributed environment. Concerning the communication aspect, we propose a solution for a dynamic bandwidth distribution in virtualized networks.
15.
  • Toczé, Klervie, 1992-, et al. (author)
  • Edge Workload Trace Gathering and Analysis for Benchmarking
  • 2022
  • In: 6th IEEE International Conference on Fog and Edge Computing (ICFEC 2022). - 9781665495240 - 9781665495257 ; pp. 34-41
  • Conference paper (peer-reviewed), abstract:
    • The emerging field of edge computing is suffering from a lack of representative data to evaluate rapidly introduced new algorithms or techniques. That is a critical issue as this complex paradigm has numerous different use cases which translate into a highly diverse set of workload types. In this work, within the context of the edge computing activity of SPEC RG Cloud, we continue working towards an edge benchmark by defining high-level workload classes as well as collecting and analyzing traces for three real-world edge applications, which, according to the existing literature, are the representatives of those classes. Moreover, we propose a practical and generic methodology for workload definition and gathering. The traces and gathering tool are provided open-source. In the analysis of the collected workloads, we detect discrepancies between the literature and the traces obtained, thus highlighting the need for a continuing effort into gathering and providing data from real applications, which can be done using the proposed trace gathering methodology. Additionally, we discuss various insights and future directions that rise to the surface through our analysis.
16.
  • Toczé, Klervie, 1992-, et al. (author)
  • Towards Edge Benchmarking : A Methodology for Characterizing Edge Workloads
  • 2019
  • In: Proceedings of 4th International Workshop on Foundations and Applications of Self* Systems (FAS*W). - : IEEE. - 9781728124063 ; pp. 70-71
  • Conference paper (peer-reviewed), abstract:
    • The edge computing paradigm has recently attracted research efforts coming from different application domains. However, evaluating an edge platform or algorithm is impeded by the lack of suitable benchmarks. We propose a methodology for characterizing edge workloads from different application domains. It is a first step towards defining workloads to be included in a future edge benchmarking suite. We evaluate the methodology on three use cases and find that defining a common and standard set of workloads is plausible.