SwePub
Search the SwePub database


Result list for search "WFRF:(Prodan Radu) srt2:(2020-2024)"


  • Results 1-12 of 12
1.
  • Bakhshi Valojerdi, Zeinab, 1986-, et al. (author)
  • Evaluation of Storage Placement in Computing Continuum for a Robotic Application : A Simulation-Based Performance Analysis
  • 2024
  • In: Journal of Grid Computing. Springer Science+Business Media B.V. ISSN 1570-7873, 1572-9184; 22:2
  • Journal article (peer-reviewed), abstract:
    • This paper analyzes the timing performance of a persistent storage designed for distributed container-based architectures in industrial control applications. The timing performance analysis is conducted using an in-house simulator, which mirrors our testbed specifications. The storage ensures data availability and consistency even in the presence of faults. The analysis considers four aspects: (1) placement strategy, (2) design options, (3) data size, and (4) evaluation under faulty conditions. Experimental results considering the timing constraints of industrial applications indicate that the storage solution can meet critical deadlines, particularly under specific failure patterns. Comparison results also reveal that, while the method may underperform current centralized solutions in fault-free conditions, it outperforms them in failure scenarios. Moreover, the evaluation method used is applicable for assessing other container-based critical applications with timing constraints that require persistent storage.
2.
  • Bakhshi Valojerdi, Zeinab, 1986- (author)
  • Persistent Fault-Tolerant Storage at the Fog Layer
  • 2021
  • Licentiate thesis (other academic/artistic), abstract:
    • Clouds are powerful computer centers that provide computing and storage facilities that can be remotely accessed. The flexibility and cost-efficiency offered by clouds have made them very popular for business and web applications. The use of clouds is now being extended to safety-critical applications such as factories. However, cloud services do not provide time predictability, which is a problem for such time-sensitive applications. Moreover, delays in the data communication between clouds and the devices they control are unpredictable. Therefore, to increase predictability, an intermediate layer between the devices and the cloud is introduced. This layer, the fog layer, aims to provide computational resources closer to the edge of the network. However, the fog computing paradigm relies on resource-constrained nodes, creating new potential challenges in resource management, scalability, and reliability. Solutions such as lightweight virtualization technologies can be leveraged to resolve the dichotomy between performance and reliability in fog computing. In this context, container-based virtualization is a key technology providing lightweight virtualization for cloud computing that can be applied in fog computing as well. Such container-based technologies provide fault-tolerance mechanisms that improve the reliability and availability of application execution. Through the study of a robotic use case, we realized that persistent data storage for stateful applications at the fog layer is particularly important. In addition, we identified the need to enhance current container orchestration solutions to fit fog applications executing in container-based architectures. In this thesis, we identify open challenges in achieving dependable fog platforms. Among these, we focus particularly on scalable, lightweight virtualization, auto-recovery, and re-integration solutions after failures in fog applications and nodes.
We implement a testbed to deploy our use case on a container-based fog platform and investigate the fulfillment of key dependability requirements. We enhance the architecture and identify the lack of persistent storage for stateful applications as an important impediment to the execution of control applications. We propose a solution for persistent fault-tolerant storage at the fog layer, which decouples storage from applications to reduce application load and separate the concern of distributed storage. Our solution includes a replicated data structure supported by a consensus protocol that ensures distributed data consistency and fault tolerance in case of node failures. Finally, we use the UPPAAL verification tool to model and verify the fault tolerance and consistency of our solution.
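The consensus-backed, fault-tolerant replicated storage summarized in this thesis abstract can be illustrated with a minimal quorum-based sketch. The classes, node names, and versioning scheme below are invented for illustration and are not the thesis implementation:

```python
# Minimal sketch (not the thesis implementation): a replicated key-value
# store that stays available despite minority node failures by requiring
# a majority quorum, in the spirit of consensus-backed fog storage.

class Replica:
    def __init__(self, name):
        self.name = name
        self.data = {}      # key -> (version, value)
        self.alive = True

    def write(self, key, version, value):
        if not self.alive:
            return False
        cur = self.data.get(key, (0, None))
        if version > cur[0]:
            self.data[key] = (version, value)
        return True

    def read(self, key):
        return self.data.get(key) if self.alive else None

class QuorumStore:
    """Writes/reads succeed only when a majority of replicas answer."""
    def __init__(self, replicas):
        self.replicas = replicas
        self.majority = len(replicas) // 2 + 1
        self.version = 0

    def put(self, key, value):
        self.version += 1
        acks = sum(r.write(key, self.version, value) for r in self.replicas)
        return acks >= self.majority   # durable despite minority failures

    def get(self, key):
        answers = [a for a in (r.read(key) for r in self.replicas) if a]
        if len(answers) < self.majority:
            return None                # too few nodes: refuse stale reads
        return max(answers)[1]         # newest version wins

store = QuorumStore([Replica(f"fog-{i}") for i in range(3)])
store.put("robot/pose", "x=1,y=2")
store.replicas[0].alive = False        # one node fails
print(store.get("robot/pose"))         # still readable via the majority
```

A real design would add leader election and log replication (as in Raft or Paxos); the quorum rule above only illustrates why a minority of failed nodes cannot block reads or writes.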
3.
  • Khan, Akif Quddus, et al. (author)
  • Cloud storage cost: a taxonomy and survey
  • 2024
  • In: World Wide Web (Bussum). Springer. ISSN 1386-145X, 1573-1413; 27:4
  • Journal article (peer-reviewed), abstract:
    • Cloud service providers offer application providers virtually infinite storage and computing resources, while providing cost-efficiency and various other quality of service (QoS) properties through a storage-as-a-service (StaaS) approach. Organizations also use multi-cloud or hybrid solutions, combining multiple public and/or private cloud service providers, to avoid vendor lock-in, achieve high availability and performance, and optimise cost. Indeed, cost is one of the important factors for organizations adopting cloud storage; however, cloud storage providers offer complex pricing policies, including the actual storage cost and the cost of additional services (e.g., network usage cost). In this article, we provide a detailed taxonomy of cloud storage cost and a taxonomy of other QoS elements, such as network performance, availability, and reliability. We also discuss various cost trade-offs, including storage and computation, storage and cache, and storage and network. Finally, we provide a cost comparison across different storage providers under different contexts and a set of user scenarios to demonstrate the complexity of the cost structure, and we discuss the existing literature on cloud storage selection and cost optimization. We hope that the work presented in this article will give decision-makers and researchers focusing on cloud storage selection for data placement, cost modelling, and cost optimization a better understanding of, and insights into, the elements contributing to storage cost in this complex problem domain.
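The cost components and trade-offs surveyed in this article can be made concrete with a toy cost model combining storage, request, and egress charges. All provider names and prices below are made-up placeholders, not real price lists:

```python
# Toy sketch of the kind of cost comparison the survey discusses:
# total monthly cost = storage + request + egress cost.
# Providers and prices are hypothetical, for illustration only.

PRICES = {
    # provider: ($/GB-month storage, $/1k requests, $/GB egress)
    "provider_a": (0.023, 0.0005, 0.09),
    "provider_b": (0.020, 0.0004, 0.12),
}

def monthly_cost(provider, gb_stored, requests, gb_egress):
    s, r, e = PRICES[provider]
    return gb_stored * s + (requests / 1000) * r + gb_egress * e

# An egress-heavy workload shifts the balance: the provider with the
# cheapest storage is not necessarily the cheapest overall.
for p in PRICES:
    print(p, round(monthly_cost(p, 500, 2_000_000, 300), 2))
```

Even this tiny model shows why a single "price per GB" figure is misleading: the cheaper-storage provider loses once network usage dominates, which is exactly the kind of trade-off the taxonomy organizes.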
4.
  • Khan, Akif Quddus, et al. (author)
  • Smart Data Placement for Big Data Pipelines : An Approach based on the Storage-as-a-Service Model
  • 2022
  • In: 2022 IEEE/ACM 15th International Conference on Utility and Cloud Computing (UCC). Institute of Electrical and Electronics Engineers (IEEE); pp. 317-320
  • Conference paper (peer-reviewed), abstract:
    • The development of big data pipelines is a challenging task, especially when data storage is considered as part of the pipeline. Local storage is expensive, hard to maintain, and comes with several challenges (e.g., data availability, data security, and backup). The use of cloud storage, i.e., Storage-as-a-Service (StaaS), instead of local storage has the potential to provide more flexibility in terms of scalability, fault tolerance, and availability. In this paper, we propose a generic approach to integrate StaaS with data pipelines, i.e., computation on an on-premises server or on a specific cloud with storage integrated through StaaS, and develop a ranking method for available storage options based on five key parameters: cost, proximity, network performance, the impact of server-side encryption, and user weights. The evaluation carried out demonstrates the effectiveness of the proposed approach in terms of data transfer performance and the feasibility of dynamically selecting a storage option based on four primary user scenarios.
5.
  • Khan, Akif Quddus, et al. (author)
  • Smart Data Placement Using Storage-as-a-Service Model for Big Data Pipelines
  • 2023
  • In: Sensors. MDPI AG. ISSN 1424-8220; 23:2
  • Journal article (peer-reviewed), abstract:
    • Big data pipelines are developed to process data characterized by one or more of the three big data features, commonly known as the three Vs (volume, velocity, and variety), through a series of steps (e.g., extract, transform, and move), laying the groundwork for the use of advanced analytics and ML/AI techniques. The computing continuum (i.e., cloud/fog/edge) provides access to a virtually infinite amount of resources, where data pipelines can be executed at scale; however, implementing data pipelines on the continuum is a complex task that needs to take computing resources, data transmission channels, triggers, data transfer methods, integration of message queues, etc. into account. The task becomes even more challenging when data storage is considered as part of the pipeline. Local storage is expensive, hard to maintain, and comes with several challenges (e.g., data availability, data security, and backup). The use of cloud storage, i.e., storage-as-a-service (StaaS), instead of local storage has the potential to provide more flexibility in terms of scalability, fault tolerance, and availability. In this article, we propose a generic approach to integrate StaaS with data pipelines, i.e., computation on an on-premises server or on a specific cloud with storage integrated through StaaS, and develop a ranking method for available storage options based on five key parameters: cost, proximity, network performance, server-side encryption, and user weights/preferences. The evaluation carried out demonstrates the effectiveness of the proposed approach in terms of data transfer performance, the utility of the individual parameters, and the feasibility of dynamically selecting a storage option based on four primary user scenarios.
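The five-parameter ranking described in this abstract (and in the conference version above) can be sketched as a simple weighted score over normalized criteria. The option names, scores, and normalization scheme are illustrative assumptions, not the papers' exact model:

```python
# Hedged sketch of a weighted ranking over storage options. Criteria are
# assumed pre-normalized to [0, 1] with higher = better; real criteria
# like cost or latency would first be inverted and scaled.

def rank_options(options, weights):
    """options: {name: {criterion: score in [0,1]}}.
    Returns option names sorted by weighted score, best first."""
    def score(metrics):
        return sum(weights[c] * metrics[c] for c in weights)
    return sorted(options, key=lambda name: score(options[name]), reverse=True)

options = {
    "cloud_x": {"cost": 0.9, "proximity": 0.3, "network": 0.6, "encryption": 0.8},
    "cloud_y": {"cost": 0.5, "proximity": 0.9, "network": 0.9, "encryption": 0.7},
}
# A latency-sensitive user weights proximity and network performance higher:
weights = {"cost": 0.2, "proximity": 0.35, "network": 0.35, "encryption": 0.1}
print(rank_options(options, weights))  # this profile favors cloud_y
```

Changing the weight vector models the "user scenarios" idea: a cost-driven profile would rank the same options differently without touching the measured scores.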
6.
  • Khan, Akif Quddus, et al. (author)
  • Towards Cloud Storage Tier Optimization with Rule-Based Classification
  • 2023
  • In: Service-Oriented and Cloud Computing. Springer Nature; pp. 205-216
  • Conference paper (peer-reviewed), abstract:
    • Cloud storage adoption has increased over the years as ever more data is produced, with particularly high demand for fast processing and low latency. To meet users’ demands and provide a cost-effective solution, cloud service providers (CSPs) offer tiered storage; however, keeping all data in a single tier is not cost-effective. Hence, several two-tiered approaches have been developed to classify storage objects into the most suitable tier. In this respect, this paper explores a rule-based classification approach to optimize cloud storage cost by migrating data between storage tiers. Instead of two, four distinct storage tiers are considered: premium, hot, cold, and archive. The viability and potential of the approach are demonstrated by comparing the cost savings achieved when data is moved between tiers versus when it remains static. The results indicate that the proposed approach has the potential to significantly reduce cloud storage cost, providing valuable insights for organizations seeking to optimize their cloud storage strategies. Finally, the limitations of the proposed approach are discussed along with potential directions for future work, particularly the use of game theory to incorporate a feedback loop that extends and improves the approach.
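A rule-based classifier over the four tiers named in this abstract could look like the following sketch. The access-frequency and idle-time thresholds are invented for illustration; the paper's actual rules may differ:

```python
# Illustrative rule-based tier classification across the four tiers
# discussed above (premium, hot, cold, archive). Thresholds are
# hypothetical, not taken from the paper.

def classify(accesses_per_month, days_since_last_access):
    if accesses_per_month >= 100:
        return "premium"   # latency-critical, frequently read
    if accesses_per_month >= 10:
        return "hot"
    if days_since_last_access <= 90:
        return "cold"      # rarely read but recently touched
    return "archive"       # long-idle data, cheapest to hold

print(classify(500, 1))    # premium
print(classify(2, 200))    # archive
```

In practice such rules would be re-evaluated periodically so objects migrate as their access pattern changes, which is where the paper's cost-savings comparison against static placement comes in.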
7.
  • Khan, Akif Quddus, et al. (author)
  • Towards Graph-based Cloud Cost Modelling and Optimisation
  • 2023
  • In: Proceedings. Institute of Electrical and Electronics Engineers (IEEE); pp. 1337-1342
  • Conference paper (peer-reviewed), abstract:
    • Cloud computing has become an increasingly popular choice for businesses and individuals due to its flexibility, scalability, and convenience; however, the rising cost of cloud resources has become a significant concern for many. The pay-per-use model used in cloud computing means that costs can accumulate quickly, and the lack of visibility and control can result in unexpected expenses. The cost structure becomes even more complicated when dealing with hybrid or multi-cloud environments. For businesses, the cost of cloud computing can be a significant portion of their IT budget, and any savings can lead to better financial stability and competitiveness. In this respect, it is essential to manage cloud costs effectively. This requires a deep understanding of current resource utilization, forecasting future needs, and optimising resource utilization to control costs. To address this challenge, new tools and techniques are being developed to provide more visibility and control over cloud computing costs. Specifically, this paper explores a graph-based solution for modelling cost elements and cloud resources, and potential ways to solve the resulting constraint problem of cost optimisation. We primarily consider utilization, cost, performance, and availability in this context. Such an approach will eventually help organizations make informed decisions about cloud resource placement and manage the costs of software applications and data workflows deployed in single, hybrid, or multi-cloud environments.
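The graph-based idea in this abstract can be sketched by treating cloud resources as nodes with cost and availability attributes and data transfers as weighted edges, then minimizing total cost under a constraint. All resource names, prices, and availability figures below are assumptions for illustration, not the paper's model:

```python
# Toy graph-based cost model: nodes carry (compute cost, availability),
# edges carry data-transfer cost, and we pick the cheapest feasible
# storage/compute placement. Numbers are hypothetical.

nodes = {  # compute resource: (monthly compute cost, availability)
    "vm_small": (10, 0.99),
    "vm_large": (25, 0.999),
}
edges = {  # (storage, compute): data-transfer cost between them
    ("bucket_eu", "vm_small"): 4,
    ("bucket_eu", "vm_large"): 2,
    ("bucket_us", "vm_small"): 8,
    ("bucket_us", "vm_large"): 4,
}
storage_cost = {"bucket_eu": 6, "bucket_us": 3}

def best_placement(min_availability):
    """Return (total cost, storage, compute) minimizing cost subject to
    the availability constraint - a tiny instance of the constraint
    problem the paper describes."""
    candidates = []
    for (st, vm), transfer in edges.items():
        compute, avail = nodes[vm]
        if avail >= min_availability:
            candidates.append((storage_cost[st] + compute + transfer, st, vm))
    return min(candidates)

print(best_placement(0.99))   # cheapest feasible (cost, storage, vm) triple
```

Tightening the availability constraint prunes cheap-but-fragile nodes from the graph, so the optimum shifts to a costlier placement; at realistic scale this becomes a constraint-optimization problem over a much larger graph.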
8.
  • Kimovski, Dragi, et al. (author)
  • Beyond von Neumann in the computing continuum : architectures, applications, and future directions
  • 2023
  • In: IEEE Internet Computing. IEEE Computer Society. ISSN 1089-7801, 1941-0131; pp. 1-11
  • Journal article (peer-reviewed), abstract:
    • The article discusses emerging non-von Neumann computer architectures and their integration into the computing continuum to support modern distributed applications, including artificial intelligence, big data, and scientific computing. It provides a detailed summary of the available and emerging non-von Neumann architectures, which range from power-efficient single-board accelerators to quantum and neuromorphic computers. Furthermore, it explores their potential benefits for revolutionizing data processing and analysis in various societal, scientific, and industrial fields. The paper provides a detailed analysis of the most widely used class of distributed applications and discusses the difficulties in their execution over the computing continuum, including communication, interoperability, orchestration, and sustainability issues.
9.
  • Nikolov, Nikolay, et al. (author)
  • Container-Based Data Pipelines on the Computing Continuum for Remote Patient Monitoring
  • 2023
  • In: Computer. Institute of Electrical and Electronics Engineers (IEEE). ISSN 0018-9162, 1558-0814; 56:10, pp. 40-48
  • Journal article (peer-reviewed), abstract:
    • The emerging concept of big data pipelines provides relevant solutions and is one of the main enablers of telemedicine. We present a data pipeline for remote patient monitoring and show a real-world example of how data pipelines help address the stringent requirements of telemedicine.
10.
  • Roman, Dumitru, et al. (author)
  • Big Data Pipelines on the Computing Continuum : Tapping the Dark Data
  • 2022
  • In: Computer. Institute of Electrical and Electronics Engineers (IEEE). ISSN 0018-9162, 1558-0814; 55:11, pp. 74-84
  • Journal article (peer-reviewed), abstract:
    • The computing continuum enables new opportunities for managing big data pipelines, concerning the efficient management of heterogeneous and untrustworthy resources. We discuss the big data pipeline lifecycle on the computing continuum and its associated challenges, and we outline a future research agenda in this area.
