SwePub
Search the SwePub database


Result list for search "WFRF:(Punnekkat Sasikumar Professor)"

Search: WFRF:(Punnekkat Sasikumar Professor)

  • Results 1-10 of 13
1.
  • Aysan, Hüseyin, 1982- (author)
  • Fault-Tolerance Strategies and Probabilistic Guarantees for Real-Time Systems
  • 2012
  • Doctoral thesis (other academic/artistic), abstract:
    • Ubiquitous deployment of embedded systems is having a substantial impact on our society, since they interact with our lives in many critical real-time applications. Typically, embedded systems used in safety or mission critical applications (e.g., aerospace, avionics, automotive or nuclear domains) work in harsh environments where they are exposed to frequent transient faults such as power supply jitter, network noise and radiation. They are also susceptible to errors originating from design and production faults. Hence, they have the design objective to maintain the properties of timeliness and functional correctness even under error occurrences. Fault-tolerance plays a crucial role towards achieving dependability, and the fundamental requirement for the design of effective and efficient fault-tolerance mechanisms is a realistic and applicable model of potential faults and their manifestations. An important factor to be considered in this context is the random nature of faults and errors, which, if addressed in the timing analysis by assuming a rigid worst-case occurrence scenario, may lead to inaccurate results. It is also important that the power, weight, space and cost constraints of embedded systems are addressed by efficiently using the available resources for fault-tolerance. This thesis presents a framework for designing predictably dependable embedded real-time systems by jointly addressing the timeliness and the reliability properties. It proposes a spectrum of fault-tolerance strategies particularly targeting embedded real-time systems. Efficient resource usage is attained by considering the diverse criticality levels of the systems' building blocks. The fault-tolerance strategies are complemented with the proposed probabilistic schedulability analysis techniques, which are based on a comprehensive stochastic fault and error model.
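The joint timing-and-reliability analysis summarised above can be illustrated by a classic fault-tolerant response-time recurrence, here assuming fixed-priority scheduling, re-execution as the recovery action, and a minimum inter-arrival time t_f between errors. The task parameters below are made up, and this is a simplified deterministic variant for illustration, not necessarily the thesis's own formulation.

```python
import math

# Hypothetical task set: (C, T) with deadline = period, highest priority first.
TASKS = [(1, 5), (2, 10), (3, 20)]

def ft_response_time(tasks, i, t_f):
    """Worst-case response time of task i under fixed-priority scheduling
    when errors strike at most once every t_f time units and recovery is
    re-executing the longest affected computation (a common simplification)."""
    c_i, d_i = tasks[i]
    r = c_i
    while r <= d_i:
        interference = sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
        recovery = math.ceil(r / t_f) * max(c for c, _ in tasks[:i + 1])
        r_next = c_i + interference + recovery
        if r_next == r:
            return r          # fixed point reached: deadline met under faults
        r = r_next
    return None               # response time exceeds the deadline

print([ft_response_time(TASKS, i, t_f=25) for i in range(len(TASKS))])
# -> [2, 5, 10] with these made-up numbers: all deadlines met
```

A probabilistic analysis in the spirit of the abstract replaces the fixed t_f with a stochastic error model and asks for the probability that the recurrence converges below the deadline.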
2.
  • Bakhshi Valojerdi, Zeinab, 1986- (author)
  • Persistent Fault-Tolerant Storage at the Fog Layer
  • 2021
  • Licentiate thesis (other academic/artistic), abstract:
    • Clouds are powerful computer centers that provide computing and storage facilities that can be remotely accessed. The flexibility and cost-efficiency offered by clouds have made them very popular for business and web applications. The use of clouds is now being extended to safety-critical applications such as factories. However, cloud services do not provide time predictability, which is problematic for such time-sensitive applications. Moreover, delays in the data communication between clouds and the devices the clouds control are unpredictable. Therefore, to increase predictability, an intermediate layer between devices and the cloud is introduced. This layer, the Fog layer, aims to provide computational resources closer to the edge of the network. However, the fog computing paradigm relies on resource-constrained nodes, creating new potential challenges in resource management, scalability, and reliability. Solutions such as lightweight virtualization technologies can be leveraged for solving the dichotomy between performance and reliability in fog computing. In this context, container-based virtualization is a key technology providing lightweight virtualization for cloud computing that can be applied in fog computing as well. Such container-based technologies provide fault tolerance mechanisms that improve the reliability and availability of application execution. Through the study of a robotic use-case, we have realized that persistent data storage for stateful applications at the fog layer is particularly important. In addition, we identified the need to enhance the current container orchestration solution to fit fog applications executing in container-based architectures. In this thesis, we identify open challenges in achieving dependable fog platforms. Among these, we focus particularly on scalable, lightweight virtualization, auto-recovery, and re-integration solutions after failures in fog applications and nodes. We implement a testbed to deploy our use-case on a container-based fog platform and investigate the fulfillment of key dependability requirements. We enhance the architecture and identify the lack of persistent storage for stateful applications as an important impediment to the execution of control applications. We propose a solution for persistent fault-tolerant storage at the fog layer, which dissociates storage from applications to reduce application load and separates the concern of distributed storage. Our solution includes a replicated data structure supported by a consensus protocol that ensures distributed data consistency and fault tolerance in case of node failures. Finally, we use the UPPAAL verification tool to model and verify the fault tolerance and consistency of our solution.
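The replicated data structure with consensus-backed consistency mentioned in the abstract can be pictured, in very reduced form, as majority-quorum replication: a write survives node failures only if more than half of the replicas acknowledge it, so any later majority read intersects at least one replica holding the value. The sketch below is illustrative Python with invented names, not the protocol developed and UPPAAL-verified in the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Replica:
    store: dict = field(default_factory=dict)
    alive: bool = True

def quorum_write(replicas, key, value):
    """Majority-quorum write: durable only if more than half of the
    replicas acknowledge it. Simplified sketch -- a real consensus
    protocol (e.g. Raft) also orders writes and handles recovery."""
    acks = 0
    for r in replicas:
        if r.alive:
            r.store[key] = value
            acks += 1
    return acks > len(replicas) // 2

replicas = [Replica() for _ in range(3)]
replicas[2].alive = False          # one fog node has failed
print(quorum_write(replicas, "robot_pose", (1.0, 2.5)))  # True: 2 of 3 acked
```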
3.
  • Thekkilakattil, Abhilash (author)
  • Limited Preemptive Scheduling in Real-time Systems
  • 2016
  • Doctoral thesis (other academic/artistic), abstract:
    • Preemptive and non-preemptive scheduling paradigms typically introduce undesirable side effects when scheduling real-time tasks, mainly in the form of preemption overheads and blocking, that potentially compromise timeliness guarantees. The high preemption overheads in preemptive real-time scheduling may imply high resource utilization, often requiring significant over-provisioning, e.g., pessimistic Worst Case Execution Time (WCET) approximations. Non-preemptive scheduling, on the other hand, can be infeasible even for tasksets with very low utilization, due to the blocking imposed on higher priority tasks, e.g., when one or more tasks have WCETs greater than the shortest deadline. Limited preemptive scheduling facilitates the reduction of both preemption-related overheads and blocking by deferring preemptions to favorable locations in the task code. In this thesis, we investigate the feasibility of limited preemptive scheduling of real-time tasks on uniprocessor and multiprocessor platforms. We derive schedulability tests for global limited preemptive scheduling under both Earliest Deadline First (EDF) and Fixed Priority Scheduling (FPS) paradigms. The tests are derived in the context of two major mechanisms for enforcing limited preemptions, viz., deferring preemption for a specified duration (i.e., Floating Non-Preemptive Regions) and deferring preemption to the next specified location in the task code (i.e., Fixed Preemption Points). Moreover, two major preemption approaches are considered, viz., waiting for the lowest priority job to become preemptable (i.e., a Lazy Preemption Approach (LPA)) and preempting the first executing lower priority job that becomes preemptable (i.e., an Eager Preemption Approach (EPA)). Evaluations using synthetically generated tasksets indicate that adopting an eager preemption approach is beneficial in terms of schedulability in the context of global FPS. Further evaluations simulating different global limited preemptive scheduling algorithms expose runtime anomalies with respect to the observed number of preemptions, indicating that limited preemptive scheduling may not necessarily reduce the number of preemptions in multiprocessor systems. We then theoretically quantify the sub-optimality (the worst-case performance) of limited preemptive scheduling on uniprocessor and multiprocessor platforms using resource augmentation, e.g., processor speed-up factors required to achieve optimality. Finally, we propose a sensitivity-analysis-based methodology to control the preemptive behavior of real-time tasks using processor speed-up, in order to satisfy multiple preemption-behavior-related constraints. The results presented in this thesis facilitate the analysis of limited-preemptively scheduled real-time tasks on uniprocessor and multiprocessor platforms.
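The deferred-preemption idea can be made concrete with a small uniprocessor calculation: how long may a job run non-preemptively before some shorter-deadline task could miss under EDF? The sketch below computes this blocking tolerance as the minimum slack of the demand bound function. It is a simplified single-processor relative of the multiprocessor tests derived in the thesis, and the task set is hypothetical.

```python
import math

def dbf(tasks, t):
    """EDF demand bound function for sporadic tasks given as (C, T, D)."""
    return sum((math.floor((t - d) / p) + 1) * c
               for c, p, d in tasks if t >= d)

def max_nonpreemptive_region(tasks, d_i):
    """Longest interval a job with relative deadline d_i may run without
    preemption before a shorter-deadline task could miss under EDF:
    the minimum slack t - dbf(t) over deadlines t earlier than d_i."""
    points = []
    for c, p, d in tasks:
        t = d
        while t < d_i:
            points.append(t)
            t += p
    slacks = [t - dbf(tasks, t) for t in sorted(set(points))]
    return min(slacks) if slacks else float("inf")

# Hypothetical implicit-deadline task set (C, T, D):
tasks = [(1, 4, 4), (2, 6, 6), (3, 12, 12)]
print(max_nonpreemptive_region(tasks, 12))  # -> 3 time units
```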
4.
  • Čaušević, Adnan, 1981- (author)
  • Quality of Test Design in Test Driven Development
  • 2013
  • Doctoral thesis (other academic/artistic), abstract:
    • One of the most emphasised software testing activities in an Agile environment is the usage of the Test Driven Development (TDD) approach. TDD is a development activity where test cases are created by developers before writing the code, all for the purpose of guiding the actual development process. In other words, test cases created when following TDD could be considered a by-product of software development. However, TDD is not fully adopted by industry, as indicated by respondents from our industrial survey who pointed out that TDD is the most preferred but least practised activity. Our further research identified seven potentially limiting factors for industrial adoption of TDD, one of the most prominent being a lack of developers’ testing skills. We subsequently defined and categorised appropriate quality attributes which describe the quality of test case design when following TDD. Through a number of empirical studies, we have clearly established the effect of “positive test bias”, where the participants focused mainly on the functionality while generating test cases. In other words, there were fewer “negative test cases” exercising the system beyond the specified functionality, which is an important requirement for high-reliability systems. On average, in our studies, around 70% of test cases created by the participants were positive while only 30% were negative. However, when measuring the defect-detecting ability of those sets of test cases, an opposite ratio was observed. The defect-detecting ability of negative test cases was above 70%, while positive test cases contributed only 30%. We propose the TDDHQ concept as an approach for achieving higher quality testing in TDD by using combinations of quality improvement aspects and test design techniques to facilitate consideration of unspecified requirements during development and thus minimise the impact of the potentially inherent positive test bias in TDD. This way developers do not necessarily focus only on verifying functionality; they can as well increase security, robustness, performance and many other quality improvement aspects of the given software product. An additional empirical study evaluating this method showed a noticeable improvement in the quality of test cases created by developers utilising the TDDHQ concept. Our research findings are expected to pave the way for further enhancements to the way of performing TDD, eventually resulting in better adoption of it by industry.
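The positive/negative distinction the studies quantify is easy to see in code. A minimal, hypothetical example (the function parse_age and both tests are invented for illustration): the first test verifies specified functionality, the second exercises the system beyond it.

```python
import unittest

def parse_age(text: str) -> int:
    """Hypothetical unit under test: parse a non-negative human age."""
    value = int(text)          # raises ValueError on non-numeric input
    if not 0 <= value <= 150:
        raise ValueError(f"age out of range: {value}")
    return value

class AgeTests(unittest.TestCase):
    def test_valid_age(self):
        # Positive test: exercises the specified functionality.
        self.assertEqual(parse_age("42"), 42)

    def test_negative_age_rejected(self):
        # Negative test: pushes the system beyond the specification --
        # the kind of case the studies found under-represented in TDD.
        with self.assertRaises(ValueError):
            parse_age("-7")

if __name__ == "__main__":
    unittest.main()
```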
5.
  • Desai, Nitin, 1986- (author)
  • Designing safe and adaptive time-critical fog-based systems
  • 2021
  • Licentiate thesis (other academic/artistic), abstract:
    • Safety-critical systems in industrial automation, avionics, or automotive domains demand correct, timely and predictable performance under all (including faulty) operating conditions. Fault-tolerance plays an important role in ensuring seamless system function even in the presence of failures. Typically, such systems run hard real-time applications, and hence timing violations can result in hazards. Fog computing is an adaptive paradigm which distributes computation and communication along the cloud-IoT continuum to reduce communication latencies, making it more conducive to executing real-time applications. This requires enhancements to the network connecting various sub-systems to support timely delivery of safety-critical messages. Traditionally, safety-critical systems are designed offline and are not re-configured during runtime. The inherent adaptive properties of fog computing systems make them susceptible to timeliness violations and can be a hindrance to safety guarantees. At the same time, adaptivity in terms of migrating computation and communication to different devices in the fog-cloud continuum can be used to make the system more fault-tolerant through suitable design approaches. In this work we provide design approaches geared towards achieving safety and predictability of critical applications that run on adaptive fog computing platforms. To this end, we start by performing a survey of safety considerations in a fog computing system and identifying key safety challenges. We then propose a design approach to improve predictability in an autonomous mobile robot use-case in a factory setting designed using the fog computing paradigm. We narrow our attention to time-sensitive networking (TSN) and propose a temporal redundancy-based fault tolerance approach for time-sensitive messages. Furthermore, we study the 802.1CB TSN protocol and suggest improvements to reduce network congestion owing to replicated frames. As future work, we intend to also include the wireless aspects in the evaluation of timeliness guarantees for safety-critical applications. The emphasis will be on run-time failure scenarios and self-healing mechanisms based on online decisions taken in concert with offline guarantees.
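The temporal-redundancy mechanism studied here, frame replication and elimination as standardised in IEEE 802.1CB (FRER), can be sketched as a receiver that delivers the first copy of each sequence-numbered frame and discards later replicas arriving over redundant paths. The Python below is a deliberately reduced illustration; the real recovery function also bounds its history window and handles sequence-number wrap-around.

```python
class DuplicateEliminator:
    """Simplified sketch of 802.1CB-style sequence recovery: replicated
    frames carry a sequence number; the first copy to arrive over any
    redundant path is delivered, later copies are discarded."""
    def __init__(self):
        self.seen = set()

    def receive(self, seq_num, payload):
        if seq_num in self.seen:
            return None            # duplicate replica: eliminated
        self.seen.add(seq_num)
        return payload             # first arrival: pass up the stack

elim = DuplicateEliminator()
for seq, path in [(1, "A"), (1, "B"), (2, "B"), (2, "A")]:
    delivered = elim.receive(seq, f"frame {seq} via path {path}")
    if delivered:
        print(delivered)   # frame 1 via path A, frame 2 via path B
```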
6.
  • Eldh, Sigrid, 1961- (author)
  • On Evaluating Test Techniques in an Industrial Setting
  • 2007
  • Licentiate thesis (other academic/artistic), abstract:
    • Testing is a costly and important activity in the software industry today. Systems are becoming more complex and the amount of code is constantly increasing. The majority of systems rely on testing to show that they work, are reliable, and perform according to user expectations and specifications. Testing is performed in a multitude of ways, using different test approaches. How testing is conducted becomes essential when time is limited, since exhaustive testing is not an option in large complex systems. Therefore, the design of the individual test case, and what part and aspect of the system it exercises, is the main focus of testing. Not only do we need to create and execute test cases efficiently, but we also want them to expose important faults in the system. This main topic of testing has long been a focus of practitioners in industry, and there exist over 70 test techniques that aim to describe how to design a test case. Unfortunately, despite the industrial needs, research on test techniques is seldom performed in large complex systems. The main purpose of this licentiate thesis is to create an environment and framework where it is possible to evaluate test techniques. Our overall goal is to investigate suitable test techniques for different levels (e.g. component, integration and system level) and to provide guidelines to industry on what is effective, efficient and applicable to test, based on knowledge of failure-fault distribution in a particular domain. In this thesis, our research is described through four papers that start from a broad overview of typical industrial systems and arrive at a specific focus on how to set up a controlled experiment in an industrial environment. Our initial paper stated the status of testing in industry, aided in identifying specific issues, and underlined the need for further research. We then experimented with component test improvements through simple utilization of known approaches (e.g. static analysis, code reviews and statement coverage). This resulted in a substantial cost reduction and increased quality, and provided us with a better understanding of the difficulties in deploying known test techniques in reality, as described in our second paper. This work led us to our third paper, which describes the framework and process for evaluating test techniques. The first sub-process in this framework deals with how to prepare the experiment with a known set of faults. We aimed to investigate fault classifications to get a useful set of faults of different types to inject. In addition, we investigated real faults reported in an industrial system and performed controlled experiments, the results of which were published in our fourth paper. The main contributions of this licentiate thesis are the valuable insights into the evaluation of test techniques, specifically the problems of creating a useful experiment in an industrial setting, in addition to the survey of the state of practice of software testing in industry. We want to better understand what needs to be done to create efficient evaluations of test techniques, and, secondly, the relation between faults/failures and test techniques. Though our experiments have not yet been able to create ‘the ultimate’ classification for such an aim, the results indicate the appropriateness of this approach. With these valuable insights, we believe that we will be able to direct our future research to make better evaluations that have a greater potential to generalize and scale.
7.
  • Eldh, Sigrid, 1961- (author)
  • On Test Design
  • 2011
  • Doctoral thesis (other academic/artistic), abstract:
    • Testing is the dominating method for quality assurance of industrial software. Despite its importance and the vast amount of resources invested, there are surprisingly limited efforts spent on testing research, and the few industrially applicable results that emerge are rarely adopted by industry. At the same time, the software industry is in dire need of better support for testing its software within the limited time available. Our aim is to provide a better understanding of how test cases are created and applied, and what factors really impact the quality of the actual test. The plethora of test design techniques (TDTs) available makes decisions on how to test a difficult choice. Which techniques should be chosen and where in the software should they be applied? Are there any particular benefits of using a specific TDT? Which techniques are effective? Which can you automate? What is the most beneficial way to do a systematic test of a system? This thesis attempts to answer some of these questions by providing a set of guidelines for test design, including concrete suggestions for how to improve testing of industrial software systems, thereby contributing to an improved overall system quality. The guidelines are based on ten studies on the understanding and use of TDTs. The studies have been performed in a variety of system domains and consider several different aspects of software test. For example, we have investigated some of the common mistakes in creating test cases that can lead to poor and costly testing. We have also compared the effectiveness of different TDTs for different types of systems. One of the key factors for these comparisons is a profound understanding of faults and their propagation in different systems. Furthermore, we introduce a taxonomy for TDTs based on their effectiveness (fault finding ability), efficiency (fault finding rate), and applicability. Our goal is to provide an improved basis for making well-founded decisions regarding software testing, together with a better understanding of the complex process of test design and test case writing. Our guidelines are expected to lead to improvements in testing of complex industrial software, as well as to higher product quality and shorter time to market.
8.
  • Leander, Björn, 1978- (author)
  • Dynamic Access Control for Industrial Systems
  • 2023
  • Doctoral thesis (other academic/artistic), abstract:
    • Industrial automation and control systems (IACS) are taking care of our most important infrastructures, providing electricity and clean water, producing medicine and food, along with many other services and products we take for granted. The continuous, safe, and secure operation of such systems is obviously of great importance. Future iterations of IACS will look quite different from the ones we use today. Modular and flexible systems are emerging, powered by technical advances in areas such as artificial intelligence and cloud computing, and motivated by fluctuating market demands and faster innovation cycles. Design strategies for dynamic manufacturing are increasingly being adopted. These advances have a fundamental impact on industrial systems at the component as well as the architectural level. As a consequence of the changing operational requirements, the methods used for protection of industrial systems must be revisited and strengthened. This for example includes access control, which is one of the fundamental cybersecurity mechanisms and is hugely affected by current developments within IACS. The methods currently used are static and coarse-grained and therefore not well suited for dynamic and flexible industrial systems. A transition in security model is required, from implicit trust towards zero-trust, supporting dynamic and fine-grained access control. This PhD thesis discusses access control for IACS in the age of Industry 4.0, focusing on dynamic and flexible manufacturing systems. The solutions presented are applicable to machine-to-machine as well as human-to-machine interactions, using a zero-trust strategy. An investigation of the current state of practice for industrial access control is provided as a starting point for the work. Dynamic systems require equally dynamic access control policies, which is why several approaches to achieving dynamic access control in industrial systems are developed and evaluated, covering strategies for policy formulations as well as mechanisms for authorization enforcement.
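The shift from static, coarse-grained access control to dynamic, fine-grained decisions can be illustrated with an attribute-based check evaluated on every request, in the zero-trust spirit the thesis advocates. All attribute names below are hypothetical, and this is a sketch of the general attribute-based access control idea rather than the policy formulations developed in the thesis.

```python
from dataclasses import dataclass

@dataclass
class Request:
    subject: dict    # attributes of the caller (human or machine)
    action: str
    resource: dict   # attributes of the target device or service
    context: dict    # runtime attributes: production mode, time, ...

def policy_allows(req: Request) -> bool:
    """Attribute-based rule evaluated on every request (zero-trust style):
    nothing is trusted on the basis of network location or a static role
    table alone. Attributes invented for a flexible manufacturing cell."""
    return (
        req.subject.get("role") == "batch_controller"
        and req.action == "write_setpoint"
        and req.resource.get("cell") == req.subject.get("assigned_cell")
        and req.context.get("production_mode") == "running"
    )

req = Request(
    subject={"role": "batch_controller", "assigned_cell": "cell-7"},
    action="write_setpoint",
    resource={"cell": "cell-7"},
    context={"production_mode": "running"},
)
print(policy_allows(req))  # True; flips to False when any attribute changes
```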
9.
  • Skoglund, Martin (author)
  • Towards an assessment of safety and security interplay in automated driving systems.
  • 2022
  • Licentiate thesis (other academic/artistic), abstract:
    • We are currently in the midst of significant changes in the road transport system, including the transformation to fossil-free propulsion and the shift to higher levels of automation. The next level in automation is soon upon us and is encompassed by the broader term Connected, Cooperative and Automated Mobility (CCAM), which is relevant for the entire transportation system. The introduction of CCAM has the potential to contribute significantly to crucial UN Sustainable Development Goals. For the automotive domain, the term Automated Driving Systems (ADS) is often used for highly automated vehicles. Notwithstanding the expected positive effects and the extraordinary efforts, highly automated driving systems are still not publicly available except in pilot programs. The increased complexity in the higher automation levels can be ascribed to the shift from fail-safe operator support to fail-operational systems that assume the operator's role, utilising new sensors and algorithms for perception and relying on connectivity to solve the problem task. Here the solution is also the problem, i.e. complex systems. The complexity of the systems and difficulties in capturing a complete practical description of the environment where the systems are intended to operate pose difficulties in defining validation procedures for the safety, security, and trustworthiness of ADS technologies. Parallel to traditional safety issues, there is now a need to consider the quality of cybersecurity, e.g. due to external communication and environmental sensors being susceptible to remote attacks. A security problem may enable a hacker to incapacitate or fool an ADS, resulting in unsafe behaviour. In addition to malicious misuse, the development of environment sensing has to consider functional insufficiencies of the employed sensor technologies. Therefore, both safety and security and their interplay must be addressed in developing the solutions. The first step in gaining public confidence in the technologies involved is to raise user awareness. Therefore, there is a need to be transparent and explicit about the evaluation targets and the associated supporting evidence of safe and secure ADS. An assessment of safety and security properties performed by an independent organisation can be an essential step towards establishing trust in ADS solutions, bridging the gap between the marketing portrayal and the actual performance of such systems in operating conditions. This licentiate thesis contributes towards the overall goal of improving the assessment target and the associated supporting evidence of a safe and secure ADS in the automotive domain by (1) assessing requirements for safety, security and their interplay on key enabling technologies, (2) introducing an argument pattern enabling safety, security and their interaction overlap to be jointly addressed, and (3) proposing a method that enables assessment of security-informed safety by an independent agency.
10.
  • Svenningsson, Rickard (author)
  • Model-Implemented Fault Injection for Robustness Assessment
  • 2011
  • Licentiate thesis (other academic/artistic), abstract:
    • The complexity of safety-related embedded computer systems is steadily increasing. Besides verifying that such systems implement the correct functionality, it is essential to verify that they also present an acceptable level of robustness. Robustness is in this thesis defined as the resilience of hardware, software or systems against errors that occur during runtime. One way of performing robustness assessment is to carry out fault injection, also known as fault insertion testing in certain safety standards. The idea behind fault injection is to accelerate the occurrence of faults in the system to evaluate its behavior under the influence of anticipated faults, and to evaluate error handling mechanisms. Model-based development is becoming more and more common for the development of safety-related software. Thus, in this thesis we investigate how we can benefit from conducting fault injection experiments on behavior models of software. This is defined as model-implemented fault injection, since additional model artifacts are added to support the injection of faults that are activated during simulation. In particular, this thesis addresses injection of hardware fault effects (e.g. bit-level errors in microcontrollers) into Simulink® models. To evaluate the method, a fault injection tool (called MODIFI) has been developed that is able to perform fault injection into Simulink behavior models. MODIFI imports tailored fault libraries that define the effects of faults according to an XML schema. The fault libraries are converted into executable model blocks that are added to behavior models and activated during runtime to emulate the effect of faults. Further, we use a method called minimal cut sets generation to increase the usefulness of the tool. During the work within MOGENTES, an EU 7th framework programme project that focused on model-based generation of test cases for dependable embedded systems, fault injection experiments were performed on safety-related models with the MODIFI tool. Experiments were also performed using traditional fault injection methods, in particular hardware-implemented fault injection, to evaluate the correlation between the methods. The results reveal that fault injection on software models is efficient and useful for robustness assessment, and that results produced with MODIFI appear to be representative of the results obtained with other fault injection methods. However, a software model suppresses implementation details, thus leading to fewer locations where faults can be injected. Therefore it cannot entirely replace traditional fault injection methods, but by performing model-implemented fault injection in early design phases an overview of the robustness of a model can be obtained, given these limitations. It can also be useful for testing of error handling mechanisms that are implemented in the behavior model.
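The bit-level fault effects that MODIFI injects into Simulink models can be mimicked, at sketch level, by a fault "block" that flips one bit of a signal during a simulated run and compares the outcome against a golden run. The controller and input trace below are invented stand-ins for illustration, not MODIFI's actual fault library format.

```python
def flip_bit(signal_value: int, bit: int) -> int:
    """Emulate a single-event upset by flipping one bit of an integer
    signal, a model-level stand-in for a microcontroller bit error."""
    return signal_value ^ (1 << bit)

def saturating_controller(sensor: int) -> int:
    """Hypothetical behavior model under test: clamp, then scale."""
    return max(0, min(sensor, 100)) * 2

# Golden run vs. fault-injected run (fault activated at sample 3, bit 6):
trace = [10, 20, 30, 40, 50]
for i, sample in enumerate(trace):
    faulty = flip_bit(sample, 6) if i == 3 else sample
    print(i, saturating_controller(sample), saturating_controller(faulty))
# Sample 3 becomes 104 after the bit flip, exercising the clamp: 80 vs 200.
```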
