SwePub
Search the SwePub database

Hit list for the search "WFRF:(Mallozzi Piergiuseppe 1990)"

Search: WFRF:(Mallozzi Piergiuseppe 1990)

  • Results 1-10 of 12
1.
  • Mallozzi, Piergiuseppe, 1990, et al. (author)
  • A runtime monitoring framework to enforce invariants on reinforcement learning agents exploring complex environments
  • 2019
  • In: RoSE 2019, IEEE/ACM 2nd International Workshop on Robotics Software Engineering, pp. 5-12. - IEEE. - 9781728122496
  • Conference paper (peer-reviewed). Abstract:
    • © 2019 IEEE. Without prior knowledge of the environment, a software agent can learn to achieve a goal using machine learning. Model-free Reinforcement Learning (RL) can be used to make the agent explore the environment and learn to achieve its goal by trial and error. Discovering effective policies to achieve the goal in a complex environment is a major challenge for RL. Furthermore, in safety-critical applications, such as robotics, an unsafe action may cause catastrophic consequences to the agent or to the environment. In this paper, we present an approach that uses runtime monitoring to prevent the reinforcement learning agent from performing 'wrong' actions and to exploit prior knowledge to smartly explore the environment. Each monitor is defined by a property that we want to enforce on the agent and a context. The monitors are orchestrated by a meta-monitor that activates and deactivates them dynamically according to the context in which the agent is learning. We have evaluated our approach by training the agent in randomly generated learning environments. Our results show that our approach blocks the agent from performing dangerous and safety-critical actions in all the generated environments. In addition, our approach helps the agent achieve its goal faster by providing feedback and shaping its reward during learning.
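
A minimal sketch of the monitor/meta-monitor mechanism this abstract describes, assuming a discrete state/action interface; the names (Monitor, MetaMonitor, context, allows) are illustrative and not the paper's actual implementation:

    # Hypothetical sketch of context-dependent monitors orchestrated by a
    # meta-monitor. Each monitor pairs an invariant over (state, action)
    # with the context in which it applies.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Monitor:
        context: Callable[[dict], bool]      # where the property applies
        allows: Callable[[dict, int], bool]  # the invariant to enforce
        penalty: float                       # reward shaping on a violation

    class MetaMonitor:
        def __init__(self, monitors):
            self.monitors = monitors

        def filter(self, state, action):
            """Return (allowed, penalty): block the action if any monitor
            active in the current context forbids it."""
            for m in self.monitors:
                if m.context(state) and not m.allows(state, action):
                    return False, m.penalty
            return True, 0.0

Before each environment step the agent would call filter; a blocked action is replaced by a safe default and the penalty is added to the learning signal, which is the reward-shaping effect mentioned in the abstract.
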
2.
  • Mallozzi, Piergiuseppe, 1990, et al. (author)
  • Autonomous vehicles: state of the art, future trends, and challenges
  • 2019
  • In: Automotive Systems and Software Engineering: State of the Art and Future Trends. - Cham: Springer International Publishing. - 9783030121570 ; pp. 347-367
  • Book chapter (other academic/artistic). Abstract:
    • Autonomous vehicles are considered to be the next big thing. Several companies are racing to put self-driving vehicles on the road by 2020. Regulations and standards are not ready for such a change. New technologies, such as the intensive use of machine learning, are bringing new solutions but also opening new challenges. This paper reports the state of the art, future trends, and challenges of autonomous vehicles, with a special focus on software. One of the major challenges we further elaborate on is using machine learning techniques in order to deal with the uncertainties that characterize the environments in which autonomous vehicles will need to operate while guaranteeing safety properties.
3.
  • Mallozzi, Piergiuseppe, 1990 (author)
  • Combining machine-learning with invariants assurance techniques for autonomous systems
  • 2017
  • In: Proceedings - 2017 IEEE/ACM 39th International Conference on Software Engineering Companion, ICSE-C 2017. - 9781538615898 ; pp. 485-486
  • Conference paper (peer-reviewed). Abstract:
    • Autonomous systems are systems situated in some environment that are able to take decisions autonomously. The environment is not precisely known at design-time and it might be full of unforeseeable events that the autonomous system has to deal with at run-time. This brings two main problems to be addressed. One is that the uncertainty of the environment makes it difficult to model at design-time all the behaviours that the autonomous system might have. A second problem is that, especially for safety-critical systems, maintaining the safety requirements is fundamental despite the system's adaptations. We address these problems by shifting some of the assurance tasks to run-time. We propose a method for delegating part of the decision making to agent-based algorithms using machine learning techniques. We then monitor at run-time that the decisions do not violate the autonomous system's safety-critical requirements, and by doing so we also send feedback to the decision-making process so that it can learn. We plan to implement this approach using reinforcement learning for decision making and predictive monitoring for checking at run-time the preservation and/or violation of invariant properties of the system. We also plan to validate it using ROS as software middleware and both miniaturized and real vehicles as hardware.
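
A hedged sketch of the run-time feedback loop this abstract outlines: the agent acts, a monitor checks safety-critical invariants on the resulting state, and violations are fed back as negative reward so the learner can adapt. The function and parameter names (monitored_step, violation_reward) and the agent/environment interface are assumptions for illustration:

    # Illustrative only: one monitored interaction step in which invariant
    # violations become negative reward feedback for the learning agent.
    def monitored_step(env, agent, state, invariants, violation_reward=-10.0):
        action = agent.act(state)
        next_state, reward, done, info = env.step(action)
        if any(not inv(next_state) for inv in invariants):  # inv: state -> bool
            reward += violation_reward  # feedback to the decision-making process
            done = True                 # end the episode on a safety violation
        agent.learn(state, action, reward, next_state, done)
        return next_state, done
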
4.
  • Mallozzi, Piergiuseppe, 1990, et al. (author)
  • CROME: Contract-Based Robotic Mission Specification
  • 2020
  • In: 2020 18th ACM-IEEE International Conference on Formal Methods and Models for System Design, MEMOCODE 2020. - IEEE.
  • Conference paper (peer-reviewed). Abstract:
    • We address the problem of automatically constructing a formal robotic mission specification in a logic language with precise semantics, starting from an informal description of the mission requirements. We present CROME (Contract-based RObotic Mission spEcification), a framework that allows capturing mission requirements in terms of goals by using specification patterns, and automatically building linear temporal logic mission specifications conforming with the requirements. CROME leverages a new formal model, termed Contract-based Goal Graph (CGG), which enables organizing the requirements in a modular way with a rigorous compositional semantics. By relying on the CGG, it is then possible to automatically: i) check the feasibility of the overall mission, ii) further refine it from a library of pre-defined goals, and iii) synthesize multiple controllers that implement different parts of the mission at different abstraction levels, when the specification is realizable. If the overall mission is not realizable, CROME identifies mission scenarios, i.e., sub-missions that can be realized. We illustrate the effectiveness of our methodology and supporting tool on a case study.
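
To make the contract-based flavor of this entry concrete, here is a minimal sketch of how goals might be captured as assume/guarantee pairs over LTL formulas (plain strings) and conjoined. The Goal class, the visit-pattern helper, and the composition rule are illustrative assumptions, not CROME's actual API:

    # Sketch: goals as assume/guarantee contracts over LTL strings.
    from dataclasses import dataclass

    @dataclass
    class Goal:
        name: str
        assumption: str  # LTL over the environment
        guarantee: str   # LTL over the robot's behaviour

    def visit(locations):
        """'Visit' specification pattern: eventually reach every location."""
        return " & ".join(f"F({loc})" for loc in locations)

    def conjoin(goals):
        """Contract conjunction: disjoin assumptions, conjoin guarantees."""
        a = " | ".join(f"({x.assumption})" for x in goals)
        g = " & ".join(f"({x.guarantee})" for x in goals)
        return Goal("mission", a, g)

    patrol = Goal("patrol", "true", visit(["room_a", "room_b"]))
    greet = Goal("greet", "G(F(person_detected))",
                 "G(person_detected -> F(say_hello))")
    mission = conjoin([patrol, greet])  # candidate LTL mission specification

In the paper, specifications of this kind are then checked for realizability and, when realizable, handed to a synthesis procedure that produces the controllers.
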
5.
  • Mallozzi, Piergiuseppe, 1990 (author)
  • Designing Trustworthy Autonomous Systems
  • 2021
  • Doctoral thesis (other academic/artistic). Abstract:
    • The design of autonomous systems is challenging, and ensuring their trustworthiness can have different meanings, such as: i) ensuring consistency and completeness of the requirements through a correct elicitation and formalization process; ii) ensuring that requirements are correctly mapped to system implementations so that the system's behaviors never violate its requirements; iii) maximizing the reuse of available components and subsystems in order to cope with design complexity; and iv) ensuring correct coordination of the system with its environment. Several techniques have been proposed over the years to cope with specific problems. However, a holistic design framework that, leveraging existing tools and methodologies, practically helps the analysis and design of autonomous systems is still missing. This thesis explores the problem of building trustworthy autonomous systems from different angles. We have analyzed how current approaches to formal verification can provide assurances: 1) to the requirements corpus itself, by formalizing requirements with assume/guarantee contracts to detect incompleteness and conflicts; 2) to the reward function used to train the system, so that the requirements do not get misinterpreted; 3) to the execution of the system, by run-time monitoring and enforcement of certain invariants; 4) to the coordination of the system with other external entities in a system-of-systems scenario; and 5) to system behaviors, by automatically synthesizing a correct policy.
6.
  • Mallozzi, Piergiuseppe, 1990 (author)
  • Engineering Trustworthy Self-Adaptive Autonomous Systems
  • 2018
  • Licentiate thesis (other academic/artistic). Abstract:
    • Autonomous Systems (AS) are becoming ubiquitous in our society. Some examples are autonomous vehicles, unmanned aerial vehicles (UAVs), autonomous trading systems, self-managing telecom networks, and smart factories. Autonomous systems are based on a continuous interaction with the environment in which they are deployed, and more often than not this environment can be dynamic and partially unknown. AS must be able to take decisions autonomously at run-time, even in the presence of uncertainty. Software is the main enabler of AS, allowing them to self-adapt in response to changes in the environment and to evolve via the deployment of new features. Traditional software development techniques are based on a complete description, at design time, of how the system must behave in different environmental conditions. This is no longer effective, since the system has to be able to explore and learn from the environment in which it is operating even after its deployment. Reinforcement learning (RL) algorithms discover policies that can lead AS to achieve their goals in a dynamic and unknown environment. The developer no longer specifies how the system should act in each possible situation; rather, the RL algorithm can achieve an optimal behaviour by trial and error. Once trained, the AS will be capable of taking decisions and performing actions autonomously while still learning from the environment. These systems are becoming increasingly powerful, yet this flexibility comes at a cost: the learned policy does not necessarily guarantee safety or the achievement of the goals. This thesis explores the problem of building trustworthy autonomous systems from different angles. Firstly, we have identified the state of the art and the challenges of building autonomous systems, with a particular focus on autonomous vehicles. Then, we have analysed how current approaches to formal verification can provide assurances in a System of Systems scenario. Finally, we have proposed methods that combine formal verification with reinforcement learning agents to address two major challenges: how to trust that an autonomous system will be able to achieve its goals, and how to ensure that the behaviour of AS is safe.
7.
  • Mallozzi, Piergiuseppe, 1990, et al. (author)
  • Formal verification of the on-the-fly vehicle platooning protocol
  • 2016
  • In: Lecture Notes in Computer Science - Proceedings of the International Workshop on Software Engineering for Resilient Systems (SERENE 2016). - Cham: Springer International Publishing. - 0302-9743, 1611-3349. - 9783319458915
  • Conference paper (peer-reviewed). Abstract:
    • Future transportation systems are expected to be Systems of Systems (SoSs) composed of vehicles, pedestrians, roads, signs, and other parts of the infrastructure. The boundaries of such systems change frequently and unpredictably, and they have to cope with different degrees of uncertainty. At the same time, these systems are expected to function correctly and reliably. This is why designing for resilience is becoming extremely important for these systems. One example of SoS collaboration is vehicle platooning, a promising concept that will help us deal with traffic congestion in the near future. Before deploying such scenarios on real roads, vehicles must be guaranteed to act safely, hence their behaviour must be verified. In this paper, we describe a vehicle platooning protocol, focusing especially on dynamic leader negotiation and message propagation. We have represented the vehicles' behaviours with timed automata so that we are able to formally verify their correctness through the use of model checking.
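
As a rough illustration of the dynamic leader negotiation this paper verifies, the sketch below reduces each vehicle to a small state machine and elects a leader deterministically; the paper's actual models are timed automata with clocks for message timeouts, and the names used here (Vehicle, negotiate) are hypothetical:

    # Toy abstraction of on-the-fly leader negotiation (not the Uppaal model).
    class Vehicle:
        def __init__(self, vid):
            self.vid = vid
            self.state = "FREE_AGENT"  # FREE_AGENT -> NEGOTIATING -> LEADER/FOLLOWER

    def negotiate(vehicles):
        """All candidates enter negotiation; the lowest id becomes leader
        and the rest become followers (a deterministic tie-break)."""
        for v in vehicles:
            v.state = "NEGOTIATING"
        leader = min(vehicles, key=lambda v: v.vid)
        for v in vehicles:
            v.state = "LEADER" if v is leader else "FOLLOWER"
        return leader

    platoon = [Vehicle(3), Vehicle(1), Vehicle(2)]
    assert negotiate(platoon).vid == 1

Model checking explores all message interleavings and timing behaviours of the corresponding automata, which is what makes properties such as "eventually exactly one leader" verifiable.
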
8.
  • Mallozzi, Piergiuseppe, 1990, et al. (author)
  • Incremental Refinement of Goal Models with Contracts
  • 2021
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Cham: Springer International Publishing. - 1611-3349, 0302-9743. - 9783030892463 ; 12818 LNCS, pp. 35-50
  • Conference paper (peer-reviewed). Abstract:
    • Goal models and contracts offer complementary approaches to requirement analysis. Goal modeling has been used effectively to capture designers' intents and their hierarchical structure. Contracts emphasize modularity and formal representations of the interactions between system components. In this paper, we present CoGoMo (Contract-based Goal Modeling), a framework for systematic requirement analysis, which leverages a new formal model, termed the contract-based goal tree, to represent goal models in terms of hierarchies of contracts. Based on this model, we propose algorithms that use contract operations and relations to check goal consistency and completeness, and to support incremental and hierarchical refinement of goals from a library of goals. The model and algorithms are implemented in a tool that enables incremental formalization and refinement of goals from a web interface. We show the effectiveness of our approach on an illustrative example motivated by vehicle platooning.
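
The contract relations behind consistency and refinement checks of this kind can be sketched with finite sets standing in for behaviour predicates; a real implementation would discharge such checks with an LTL/SMT solver, and the class and function names here are illustrative, not CoGoMo's API:

    # Executable toy model of assume/guarantee contracts over finite
    # behaviour sets.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Contract:
        assumptions: frozenset  # environment behaviours the goal relies on
        guarantees: frozenset   # behaviours the goal promises in return

    def consistent(c):
        """A goal is consistent if its guarantee admits some behaviour."""
        return bool(c.guarantees)

    def refines(c1, c2):
        """c1 refines c2 when it assumes less and guarantees more."""
        return c2.assumptions <= c1.assumptions and c1.guarantees <= c2.guarantees
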
9.
  • Mallozzi, Piergiuseppe, 1990, et al. (author)
  • Keeping intelligence under control
  • 2018
  • In: Proceedings - International Conference on Software Engineering. - New York, NY, USA: ACM. - 0270-5257. - 9781450357401 ; pp. 37-40
  • Conference paper (peer-reviewed). Abstract:
    • Modern software systems, such as smart systems, are based on a continuous interaction with the dynamic and partially unknown environment in which they are deployed. Classical development techniques, based on a complete description of how the system must behave in different environmental conditions, are no longer effective. On the contrary, modern techniques should be able to produce systems that autonomously learn how to behave in different environmental conditions. Machine learning techniques allow the creation of systems that learn how to execute a set of actions to achieve a desired goal. When a change occurs, machine learning techniques allow the system to autonomously learn new policies and strategies for action execution. This flexibility comes at a cost: the developer no longer has full control over the system's behaviour. Thus, there is no way to guarantee that the system will not violate important properties, such as safety-critical properties. To overcome this issue, we believe that machine learning techniques should be combined with suitable reasoning mechanisms aimed at assuring that the decisions taken by the machine learning algorithm do not violate safety-critical requirements. This paper proposes an approach that combines machine learning with run-time monitoring to detect violations of system invariants in the action execution policies.
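
A minimal sketch of the kind of invariant-violation detection this position paper argues for, here checking a logged execution trace against named invariants; the API is an assumption for illustration:

    # Illustrative trace checker: report every (step, invariant) violation.
    def check_trace(trace, invariants):
        """trace: list of states; invariants: dict name -> predicate."""
        violations = []
        for t, state in enumerate(trace):
            for name, holds in invariants.items():
                if not holds(state):
                    violations.append((t, name))  # record step and invariant
        return violations

    # Example invariant: the system never exceeds a speed limit.
    violations = check_trace(
        [{"speed": 10}, {"speed": 42}],
        {"speed_limit": lambda s: s["speed"] <= 30},
    )
    assert violations == [(1, "speed_limit")]
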
10.
  • Mallozzi, Piergiuseppe, 1990, et al. (author)
  • MoVEMo - A structured approach for engineering reward functions
  • 2018
  • In: Proceedings - 2nd IEEE International Conference on Robotic Computing, IRC 2018. - New York: IEEE. - 9781538646519 ; 2018, pp. 250-257
  • Conference paper (peer-reviewed). Abstract:
    • Reinforcement learning (RL) is a machine learning technique that has been increasingly used in robotic systems. In reinforcement learning, instead of manually pre-programming what action to take at each step, we convey the goal of a software agent in terms of reward functions. The agent tries different actions in order to maximize a numerical value, i.e. the reward. A misspecified reward function can cause problems such as reward hacking, where the agent finds ways to maximize the reward without achieving the intended goal. As RL agents become more general and autonomous, the design of reward functions that elicit the desired behaviour in the agent becomes more important and more cumbersome. In this paper, we present a technique to formally express reward functions in a structured way; this encourages proper reward function design and also enables its formal verification. We start by defining the reward function using state machines. In this way, we can statically check that the reward function satisfies certain properties, e.g., high-level requirements of the function to learn. We then automatically generate a runtime monitor, which runs in parallel with the learning agent, that provides the rewards according to the definition of the state machine and based on the behaviour of the agent. We use the Uppaal model checker to design the reward model and verify the TCTL properties that model high-level requirements of the reward function, and Larva to monitor and enforce the reward model on the RL agent at runtime.
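
The core idea of defining the reward function as a state machine can be sketched in a few lines. This plain-Python version stands in for the paper's Uppaal/Larva toolchain; the names (RewardMachine, transitions) are illustrative:

    # Illustrative reward state machine: rewards depend on the history of
    # observed events, not just on the current step.
    class RewardMachine:
        def __init__(self, transitions, initial):
            # transitions: {(state, observation): (next_state, reward)}
            self.transitions = transitions
            self.state = initial

        def step(self, observation):
            """Advance on an observed event and emit the associated reward."""
            self.state, reward = self.transitions.get(
                (self.state, observation), (self.state, 0.0))
            return reward

    # Example: reaching the goal is rewarded only after picking up the key.
    rm = RewardMachine({
        ("start", "got_key"): ("has_key", 0.0),
        ("has_key", "at_goal"): ("done", 1.0),
    }, initial="start")
    assert rm.step("at_goal") == 0.0 and rm.step("got_key") == 0.0
    assert rm.step("at_goal") == 1.0

Because the machine is finite, properties such as "the top reward is reachable only after got_key" can be checked statically, which is what the paper does with TCTL in Uppaal.
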