SwePub
Search the SwePub database


Result list for the search "WFRF:(Rodriguez Navas Guillermo) srt2:(2020-2024)"


  • Result 1-9 of 9
1.
  • Bakhshi Valojerdi, Zeinab, 1986-, et al. (author)
  • A preliminary roadmap for dependability research in fog computing
  • 2020
  • In: ACM SIGBED Review. - Association for Computing Machinery. - ISSN 1551-3688. ; 16:4, pp. 14-19
  • Journal article (peer-reviewed). Abstract:
    • Fog computing aims to support novel real-time applications by extending cloud resources to the network edge. This technology is highly heterogeneous and comprises a wide variety of devices interconnected through the so-called fog layer. Compared to traditional cloud infrastructure, fog presents more varied reliability challenges, due to its constrained resources and mobility of nodes. This paper summarizes current research efforts on fault tolerance and dependability in fog computing and identifies less investigated open problems, which constitute interesting research directions to make fogs more dependable. 
2.
  • Bakhshi Valojerdi, Zeinab, 1986-, et al. (author)
  • Evaluation of Storage Placement in Computing Continuum for a Robotic Application : A Simulation-Based Performance Analysis
  • 2024
  • In: Journal of Grid Computing. - Springer Science+Business Media B.V. - ISSN 1570-7873, 1572-9184. ; 22:2
  • Journal article (peer-reviewed). Abstract:
    • This paper analyzes the timing performance of a persistent storage designed for distributed container-based architectures in industrial control applications. The timing performance analysis is conducted using an in-house simulator, which mirrors our testbed specifications. The storage ensures data availability and consistency even in the presence of faults. The analysis considers four aspects: 1. placement strategy, 2. design options, 3. data size, and 4. evaluation under faulty conditions. Experimental results considering the timing constraints in industrial applications indicate that the storage solution can meet critical deadlines, particularly under specific failure patterns. Comparison results also reveal that, while the method may underperform current centralized solutions in fault-free conditions, it outperforms them in failure scenarios. Moreover, the evaluation method used is applicable to assessing other container-based critical applications with timing constraints that require persistent storage.
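The entry above evaluates storage placement by simulating write latencies under failures and comparing them against industrial deadlines. The following is a minimal, hypothetical sketch of that kind of analysis, not the paper's in-house simulator; the placements, per-replica latencies, failure probability and deadline below are invented for illustration.

```python
# Illustrative sketch (not the paper's simulator): estimate how often a
# replicated write meets an assumed industrial deadline for two placements.
import random
import statistics

DEADLINE_MS = 50.0      # assumed control-loop deadline
NODE_FAIL_PROB = 0.05   # assumed per-request probability that a replica is down

# Hypothetical per-replica round-trip latencies (ms) for two placement strategies.
PLACEMENTS = {
    "all_edge":   [3.0, 4.0, 5.0],    # three replicas on edge/fog nodes
    "edge_cloud": [3.0, 4.0, 40.0],   # one replica offloaded to the cloud
}

def simulate_write(latencies):
    """A write commits once a majority of replicas acknowledge it."""
    acks = []
    for base in latencies:
        if random.random() < NODE_FAIL_PROB:
            continue                              # failed replica never acknowledges
        acks.append(base * random.uniform(0.8, 1.5))  # latency jitter
    majority = len(latencies) // 2 + 1
    if len(acks) < majority:
        return None                               # write cannot commit this round
    return sorted(acks)[majority - 1]             # commit time = majority-th ack

def evaluate(name, latencies, runs=100_000):
    times = [t for _ in range(runs) if (t := simulate_write(latencies)) is not None]
    misses = sum(t > DEADLINE_MS for t in times) + (runs - len(times))
    print(f"{name:10s}  median={statistics.median(times):5.1f} ms  "
          f"deadline misses={misses / runs:.2%}")

if __name__ == "__main__":
    random.seed(1)
    for name, lat in PLACEMENTS.items():
        evaluate(name, lat)
```

In a sketch like this, offloading a replica to the cloud mainly shows up as a longer tail on the majority-acknowledgement time, which is the quantity checked against the deadline.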
3.
  • Bakhshi Valojerdi, Zeinab, 1986-, et al. (author)
  • Fault-tolerant Permanent Storage for Container-based Fog Architectures
  • 2021
  • In: Proceedings of the 2021 22nd IEEE International Conference on Industrial Technology (ICIT), pp. 722-729
  • Conference paper (peer-reviewed). Abstract:
    • Container-based architectures are widely used for cloud computing and can have an important role in the implementation of fog computing infrastructures. However, there are some crucial dependability aspects that must be addressed to make containerization suitable for critical fog applications, e.g., in automation and robotics. This paper discusses challenges in applying containerization at the fog layer and focuses on one of those challenges: provision of fault-tolerant permanent storage. The paper also presents a container-based fog architecture utilizing so-called storage containers, which combine built-in fault-tolerance mechanisms of containers with a distributed consensus protocol to achieve data consistency.
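As a rough illustration of the storage-container idea described in the entry above (application state kept in replicated storage containers rather than inside the application container, with a write considered durable only once a majority of replicas store it), here is a small self-contained sketch. All names and the majority rule shown are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: an application writes through a storage client that
# forwards the value to replicated "storage containers" and reports success
# only after a majority of replicas have stored it.
from dataclasses import dataclass, field

@dataclass
class StorageReplica:
    name: str
    alive: bool = True
    store: dict = field(default_factory=dict)

    def put(self, key, value):
        if not self.alive:
            return False          # a crashed replica never acknowledges
        self.store[key] = value
        return True

class StorageClient:
    """Front-end the application container talks to instead of local disk."""
    def __init__(self, replicas):
        self.replicas = replicas

    def write(self, key, value):
        acks = sum(r.put(key, value) for r in self.replicas)
        return acks >= len(self.replicas) // 2 + 1   # majority => durable

    def read(self, key):
        for r in self.replicas:
            if r.alive and key in r.store:
                return r.store[key]
        return None

if __name__ == "__main__":
    replicas = [StorageReplica(f"storage-{i}") for i in range(3)]
    client = StorageClient(replicas)
    replicas[2].alive = False                            # one fog node fails
    assert client.write("robot/pose", (1.0, 2.0, 0.5))   # still durable (2 of 3 acks)
    print(client.read("robot/pose"))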
4.
  • Bakhshi Valojerdi, Zeinab, 1986- (author)
  • Persistent Fault-Tolerant Storage at the Fog Layer
  • 2021
  • Licentiate thesis (other academic/artistic). Abstract:
    • Clouds are powerful computer centers that provide computing and storage facilities that can be remotely accessed. The flexibility and cost-efficiency offered by clouds have made them very popular for business and web applications. The use of clouds is now being extended to safety-critical applications such as factories. However, cloud services do not provide time predictability, which is problematic for such time-sensitive applications. Moreover, delays in the data communication between clouds and the devices the clouds control are unpredictable. Therefore, to increase predictability, an intermediate layer between devices and the cloud is introduced. This layer, the Fog layer, aims to provide computational resources closer to the edge of the network. However, the fog computing paradigm relies on resource-constrained nodes, creating new potential challenges in resource management, scalability, and reliability. Solutions such as lightweight virtualization technologies can be leveraged for solving the dichotomy between performance and reliability in fog computing. In this context, container-based virtualization is a key technology providing lightweight virtualization for cloud computing that can be applied in fog computing as well. Such container-based technologies provide fault tolerance mechanisms that improve the reliability and availability of application execution. Through the study of a robotic use-case, we have realized that persistent data storage for stateful applications at the fog layer is particularly important. In addition, we identified the need to enhance the current container orchestration solution to fit fog applications executing in container-based architectures. In this thesis, we identify open challenges in achieving dependable fog platforms. Among these, we focus particularly on scalable, lightweight virtualization, auto-recovery, and re-integration solutions after failures in fog applications and nodes. We implement a testbed to deploy our use-case on a container-based fog platform and investigate the fulfillment of key dependability requirements. We enhance the architecture and identify the lack of persistent storage for stateful applications as an important impediment to the execution of control applications. We propose a solution for persistent fault-tolerant storage at the fog layer, which dissociates storage from applications to reduce application load and separates the concern of distributed storage. Our solution includes a replicated data structure supported by a consensus protocol that ensures distributed data consistency and fault tolerance in case of node failures. Finally, we use the UPPAAL verification tool to model and verify the fault tolerance and consistency of our solution.
5.
  • Bakhshi Valojerdi, Zeinab, 1986-, et al. (author)
  • Using UPPAAL to Verify Recovery in a Fault-tolerant Mechanism Providing Persistent State at the Edge
  • 2021
  • In: 26th IEEE International Conference on Emerging Technologies and Factory Automation, ETFA 2021. - Västerås: Institute of Electrical and Electronics Engineers (IEEE). - ISBN 9781728129891
  • Conference paper (peer-reviewed). Abstract:
    • In our previous work, we proposed fault-tolerant persistent storage for a container-based fog architecture. We leveraged containerization to provide storage as a containerized application working alongside other containers. As a fault-tolerance mechanism, we introduced a replicated data structure, and to solve consistency issues between the replicas distributed in the cluster of nodes, we used the RAFT consensus protocol. In this paper, we verify our proposed solution using the UPPAAL model checker. We explain how our solution is modeled in UPPAAL and present a formal verification of key properties related to persistent storage and data consistency between nodes.
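The verification described in the entry above is carried out on a network of timed automata in UPPAAL. As an untimed, toy stand-in for that style of exhaustive checking, the sketch below enumerates every reachable state of a three-replica recovery model and asserts a simple consistency invariant; the model, transitions and invariant are illustrative assumptions, not the paper's UPPAAL model.

```python
# Toy exhaustive state-space check in the spirit of model checking (untimed,
# hypothetical model): a leader replicates log entries to two followers, an
# entry is committed once a majority stores it, and a follower may crash,
# losing its local copy, before recovering and re-syncing.
from collections import deque

MAX_LOG = 2   # bound the leader log to keep the toy state space finite

def successors(state):
    """state = ((log length of n0, n1, n2), committed index, crashed node or None).
    n0 is the leader and never crashes in this toy model."""
    logs, committed, crashed = state
    logs = list(logs)
    # The leader appends a new entry.
    if logs[0] < MAX_LOG:
        yield ((logs[0] + 1, logs[1], logs[2]), committed, crashed)
    # A live follower replicates one missing entry from the leader.
    for i in (1, 2):
        if crashed != i and logs[i] < logs[0]:
            nxt = logs.copy(); nxt[i] += 1
            yield (tuple(nxt), committed, crashed)
    # An entry counts as committed once a majority (2 of 3) stores it.
    majority_len = sorted(logs)[1]
    if majority_len > committed:
        yield (tuple(logs), majority_len, crashed)
    # At most one follower is down at a time; a crash wipes its local copy,
    # and recovery brings it back empty so it must re-sync from the leader.
    if crashed is None:
        for i in (1, 2):
            nxt = logs.copy(); nxt[i] = 0
            yield (tuple(nxt), committed, i)
    else:
        yield (tuple(logs), committed, None)

def check():
    initial = ((0, 0, 0), 0, None)
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        logs, committed, _ = state
        # Safety invariant: committed data is never lost from all nodes at once.
        assert committed <= max(logs), f"committed entry lost in {state}"
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    print(f"explored {len(seen)} states; invariant holds in all of them")

if __name__ == "__main__":
    check()
```

UPPAAL additionally handles clocks and timing constraints, which this untimed toy deliberately leaves out.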
6.
  • Bakhshi Valojerdi, Zeinab, 1986-, et al. (author)
  • Verifying the timing of a persistent storage for stateful fog applications
  • 2022
  • In: 6th International Conference on Computer, Software and Modeling (ICCSM). - Institute of Electrical and Electronics Engineers Inc., pp. 1-8
  • Conference paper (peer-reviewed). Abstract:
    • In this paper, we analyze the failure semantics of a persistent fault-tolerant storage solution for stateful fog applications. This storage system is a container-based solution that provides data availability and consistency in a distributed container-based fog architecture. We evaluate the behavior of this storage system with a formal model that includes all the important time parameters and temporal aspects of the solution. This allows us to verify data consistency and other fault-tolerance properties of our system model while considering application startup latency, together with synchronization intervals and delays. We prove that the solution can tolerate failures at the application, node, communication and storage levels, automatically recover from them, and provide data consistency within the synchronization delay, defined as t time units, which we can calculate for a given system configuration.
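The entry above bounds the time after which replicas are guaranteed consistent again by a value t computed from the system configuration. The sketch below is only a back-of-the-envelope illustration of such a calculation; the parameter names, values and additive formula are assumptions, not the paper's verified model.

```python
# Hypothetical configuration-driven bound on the consistency window t (ms).
CONFIG_MS = {
    "sync_interval":       100,   # assumed periodic replica synchronization period
    "network_delay_max":    10,   # assumed worst-case message delay between fog nodes
    "failure_detection":    50,   # assumed time to detect a crashed storage container
    "container_restart":   200,   # assumed time to restart/re-integrate the container
    "app_startup_latency":  30,   # application startup latency, mentioned in the entry
}

def consistency_window_ms(cfg):
    """Worst case assumed here: a failure right after a sync, followed by
    detection, restart, application startup and one full resync."""
    return (cfg["failure_detection"] + cfg["container_restart"]
            + cfg["app_startup_latency"] + cfg["sync_interval"]
            + cfg["network_delay_max"])

if __name__ == "__main__":
    print(f"t = {consistency_window_ms(CONFIG_MS)} ms for this hypothetical configuration")
```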
7.
  • Bakhshi, Zeinab, 1986-, et al. (author)
  • Analyzing the performance of persistent storage for fault-tolerant stateful fog applications
  • 2023
  • In: Journal of Systems Architecture. - ISSN 1383-7621, 1873-6165. ; 144
  • Journal article (peer-reviewed). Abstract:
    • In this paper, we analyze the scalability and performance of a persistent, fault-tolerant storage approach that provides data availability and consistency in a distributed container-based architecture with intended use in industrial control applications. We use simulation to evaluate the performance of this storage system with respect to scalability and failures. As the industrial applications considered have timing constraints, the simulation results show that for certain failure patterns, it is possible to determine whether the storage solution can meet critical deadlines. The presented approach is also applicable for evaluating the timing constraints of other container-based critical applications that require persistent storage.
8.
  • Bakhshi, Zeinab, 1986- (author)
  • Lightweight Persistent Storage for Industrial Applications
  • 2023
  • Doctoral thesis (other academic/artistic). Abstract:
    • Clouds are large computer centers that offer remote access to computing and storage resources, making them popular for business and web applications. They are now being considered for use in safety-critical applications such as factories, but lack sufficient time predictability, which makes it challenging to use them in these time-sensitive applications. To overcome this limitation, an intermediate layer, the fog layer, is introduced to provide computational resources closer to the network edge. However, this new computing paradigm faces its own challenges in resource management, scalability, and reliability due to resource-constrained nodes. Lightweight virtualization technologies like containerization can solve the performance-reliability dichotomy in fog computing and provide built-in fault tolerance mechanisms. By studying a robotic use-case, we realized the critical importance of persistent data storage for stateful applications, such as many control applications. However, container-based solutions lack fault-tolerant persistent storage. In this thesis, we identify new challenges associated with leveraging container-based architectures, particularly the importance of persistent storage for stateful applications. We investigate the design possibilities for persistent fault-tolerant storage and propose a solution adapted to container-based fog architectures and tailored for stateful applications. The solution provides scalability, auto-recovery, and re-integration after failures at application and node levels. Key elements are a replicated data structure and a storage container, using a consensus protocol for distributed data consistency and fault tolerance in case of node failures. The fault tolerance and consistency of the solution are modeled and verified, and its timing requirements evaluated. We use simulation to evaluate the timing performance of our solution in larger set-ups. The results of our study show that although adding a consistency protocol introduces a timing overhead, the solution still meets timing requirements for the studied use-case even in the presence of a set of relevant faults. By leveraging a four-dimensional approach, we also conduct a comparative analysis of our solution with other approaches from various perspectives, indicating that our solution can be applied in a broader context than initially intended.
9.
  • Pozo Pérez, Francisco Manuel, et al. (author)
  • Self-Healing Protocol: Repairing Schedules Online after Link Failures in Time-Triggered Networks
  • 2021
  • In: 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks, DSN 2021. - Institute of Electrical and Electronics Engineers (IEEE). - ISBN 9781665435727
  • Conference paper (peer-reviewed). Abstract:
    • Switched networks following the time-triggered paradigm rely on static schedules that determine the communication pattern over each link. In order to tolerate link failures, methods based on spatial redundancy and on resynthesis and replacement of schedules have been proposed. These methods, however, do not scale to larger networks, which may be needed, e.g., for future large-scale cyber-physical systems. We propose a distributed Self-Healing Protocol (SHP) that, instead of recomputing the whole schedule, repairs the existing schedule at runtime. To do so, it relies on coordination among the nodes of the network to redefine the repair problem as a number of local synthesis problems of significantly smaller size, which are solved in parallel by the nodes that need to reroute the frames affected by link failures. SHP exhibits a high success rate compared to full rescheduling, as well as remarkable scalability; it repairs the schedule in milliseconds, whereas rescheduling may require minutes for large networks.
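The protocol summarized above repairs a time-triggered schedule locally instead of recomputing it. The sketch below illustrates that general idea on a made-up four-node topology: a frame that used the failed link is rerouted and placed into free slots on the detour links only, leaving the rest of the schedule untouched. It is not the SHP algorithm itself; the topology, slot table and frame are invented for illustration.

```python
# Illustrative local schedule repair (hypothetical example, not SHP): reroute
# only the affected frame around a failed link and book free slots hop by hop.
from collections import deque

CYCLE_SLOTS = 8
LINKS = {("A", "B"), ("B", "C"), ("A", "D"), ("D", "C"), ("B", "D")}
# Existing schedule: directed link -> occupied slots in the cycle.
schedule = {link: set() for link in LINKS}
schedule[("A", "B")] = {0}
schedule[("B", "C")] = {1}          # frame f1 currently goes A->B->C in slots 0, 1
schedule[("A", "D")] = {3}          # unrelated traffic

def route(src, dst, links):
    """Shortest-hop path over the surviving directed links (simple BFS)."""
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for a, b in links:
            if a == path[-1] and b not in seen:
                seen.add(b)
                frontier.append(path + [b])
    return None

def repair(frame_slots_used, src, dst, failed_link):
    surviving = LINKS - {failed_link}
    path = route(src, dst, surviving)
    if path is None:
        return None
    # Free the frame's old reservations, then greedily pick increasing free
    # slots along the detour (the frame is forwarded hop by hop).
    for link, slot in frame_slots_used.items():
        schedule[link].discard(slot)
    new_slots, earliest = {}, 0
    for hop in zip(path, path[1:]):
        free = next((s for s in range(earliest, CYCLE_SLOTS)
                     if s not in schedule[hop]), None)
        if free is None:
            return None                  # no local repair possible on this detour
        schedule[hop].add(free)
        new_slots[hop] = free
        earliest = free + 1
    return path, new_slots

if __name__ == "__main__":
    print(repair({("A", "B"): 0, ("B", "C"): 1}, "A", "C", failed_link=("B", "C")))
```

The point mirrored here is that only the links on the detour are touched, which is what keeps the repair problem small enough to solve online.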