Field name | Indicators | Metadata |
---|---|---|
000 | 03522naa a2200385 4500 | |
001 | oai:DiVA.org:kth-342643 | |
003 | SwePub | |
008 | 240125s2023 | |||||||||||000 ||eng| | |
024 | 7 | a https://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-342643 2 URI |
024 | 7 | a https://doi.org/10.1109/IROS55552.2023.10342319 2 DOI |
040 | a (SwePub)kth | |
041 | a eng b eng | |
042 | 9 SwePub | |
072 | 7 | a ref 2 swepub-contenttype |
072 | 7 | a kon 2 swepub-publicationtype |
100 | 1 | a Kartasev, Mart u KTH, Robotik, perception och lärande, RPL 4 aut 0 (Swepub:kth)u10b731a |
245 | 1 0 | a Improving the Performance of Backward Chained Behavior Trees that use Reinforcement Learning |
264 | 1 | b Institute of Electrical and Electronics Engineers (IEEE) c 2023 |
338 | a print 2 rdacarrier | |
500 | a Part of ISBN 978-1-6654-9190-7 a QC 20240130 | |
520 | a In this paper we show how to improve the performance of backward chained behavior trees (BTs) that include policies trained with reinforcement learning (RL). BTs represent a hierarchical and modular way of combining control policies into higher-level control policies. Backward chaining is a design principle for constructing BTs that combines reactivity with goal-directed actions in a structured way. The backward chained structure has also enabled convergence proofs for BTs, identifying a set of local conditions that must be satisfied for all trajectories to converge to a set of desired goal states. The key idea of this paper is to improve the performance of backward chained BTs by using the conditions identified in a theoretical convergence proof to configure the RL problems for the individual controllers. Specifically, previous analysis identified so-called active constraint conditions (ACCs) that should not be violated, in order to avoid having to return to work on previously achieved subgoals. We propose a way to set up the RL problems such that they not only achieve each immediate subgoal, but also avoid violating the identified ACCs. The resulting performance improvement depends on how often ACC violations occurred before the change, and how much effort, in terms of execution time, was needed to re-achieve the violated conditions. The proposed approach is illustrated in a dynamic simulation environment. | |
650 | 7 | a TEKNIK OCH TEKNOLOGIER x Elektroteknik och elektronik x Robotteknik och automation 0 (SwePub)20201 2 hsv//swe |
650 | 7 | a ENGINEERING AND TECHNOLOGY x Electrical Engineering, Electronic Engineering, Information Engineering x Robotics 0 (SwePub)20201 2 hsv//eng |
650 | 7 | a NATURVETENSKAP x Data- och informationsvetenskap x Datavetenskap 0 (SwePub)10201 2 hsv//swe |
650 | 7 | a NATURAL SCIENCES x Computer and Information Sciences x Computer Sciences 0 (SwePub)10201 2 hsv//eng |
653 | a Artificial Intelligence | |
653 | a Autonomous systems | |
653 | a Behavior trees | |
653 | a Reinforcement learning | |
700 | 1 | a Salér, Justin u KTH, Robotik, perception och lärande, RPL 4 aut 0 (Swepub:kth)u1puhfk3 |
700 | 1 | a Ögren, Petter, d 1974- u KTH, Optimeringslära och systemteori, Robotik, perception och lärande, RPL 4 aut 0 (Swepub:kth)u1izkr9z |
710 | 2 | a KTH b Robotik, perception och lärande, RPL 4 org |
773 | 0 | t 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023 d : Institute of Electrical and Electronics Engineers (IEEE) g s. 1572-1579 q <1572-1579 |
856 | 4 8 | u https://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-342643 |
856 | 4 8 | u https://doi.org/10.1109/IROS55552.2023.10342319 |
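The approach summarized in the abstract (field 520), shaping each RL-trained controller's reward so that it both achieves its immediate subgoal and avoids violating active constraint conditions (ACCs), can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `shaped_reward` function, the predicate interface, and the reward weights are all assumptions made here for the sketch.

```python
# Minimal sketch of the reward shaping described in the abstract: each
# RL-trained controller is rewarded for satisfying its immediate subgoal
# and penalized for violating ACCs, i.e. conditions established by
# earlier subtrees that it should preserve. (Illustrative only.)
from typing import Callable, List

Condition = Callable[[dict], bool]  # predicate over the environment state

def shaped_reward(state: dict,
                  subgoal: Condition,
                  accs: List[Condition],
                  goal_bonus: float = 1.0,
                  acc_penalty: float = 1.0) -> float:
    """Reward = subgoal bonus minus a penalty per violated ACC."""
    reward = goal_bonus if subgoal(state) else 0.0
    reward -= acc_penalty * sum(1 for acc in accs if not acc(state))
    return reward

# Toy 1-D example: the controller must reach x >= 5 (its subgoal) while
# keeping x <= 10, an ACC assumed to be established by an earlier subtree.
state = {"x": 6.0}
r = shaped_reward(state,
                  subgoal=lambda s: s["x"] >= 5,
                  accs=[lambda s: s["x"] <= 10])
print(r)  # 1.0: subgoal met, no ACC violated
```

Without the ACC term the controller could reach its subgoal by leaving the workspace, forcing earlier subtrees to re-achieve their conditions; per the abstract, the gain from adding the term depends on how often such violations occurred and how costly they were to repair.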