SwePub
Search the SwePub database

  Advanced search

Results list for the search "WFRF:(Hoel Carl Johan 1986)"

Search: WFRF:(Hoel Carl Johan 1986)

  • Results 1-10 of 10
1.
  • Schröder, Jan, 1986, et al. (author)
  • Design and Evaluation of a Customizable Multi-Domain Reference Architecture on top of Product Lines of Self-Driving Heavy Vehicles - An Industrial Case Study
  • 2015
  • In: Proceedings - International Conference on Software Engineering. - IEEE. - ISSN 0270-5257. - ISBN 9781479919345 ; pp. 189-198
  • Conference paper (peer-reviewed), abstract:
    • Self-driving vehicles for commercial use cases like logistics or opencast mines increase their owners' economic competitiveness. Volvo maintains, evolves, and distributes a vehicle control product line for different brands like Volvo Trucks, Renault, and Mack in more than 190 markets worldwide. From the different application domains of their customers originates the need for a multi-domain reference architecture concerned with transport mission planning, execution, and tracking on top of the vehicle control product line. This industrial case study is the first of its kind reporting on the systematic process to design such a reference architecture involving all relevant external and internal stakeholders, development documents, low-level artifacts, and literature. Quantitative and qualitative metrics were applied to evaluate non-functional requirements on the reference architecture level before a concrete variant was evaluated using a Volvo FMX truck in an exemplary construction site setting.
2.
  • Hoel, Carl-Johan, 1986, et al. (author)
  • Combining Planning and Deep Reinforcement Learning in Tactical Decision Making for Autonomous Driving
  • 2020
  • In: IEEE Transactions on Intelligent Vehicles. - ISSN 2379-8858. ; 5:2, pp. 294-305
  • Journal article (peer-reviewed), abstract:
    • Tactical decision making for autonomous driving is challenging due to the diversity of environments, the uncertainty in the sensor information, and the complex interaction with other road users. This article introduces a general framework for tactical decision making, which combines the concepts of planning and learning, in the form of Monte Carlo tree search and deep reinforcement learning. The method is based on the AlphaGo Zero algorithm, which is extended to a domain with a continuous state space where self-play cannot be used. The framework is applied to two different highway driving cases in a simulated environment and it is shown to perform better than a commonly used baseline method. The strength of combining planning and learning is also illustrated by a comparison to using the Monte Carlo tree search or the neural network policy separately.
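The combination of planning and learning described in this abstract follows the AlphaGo Zero pattern: a neural network supplies a policy prior and value estimates that guide Monte Carlo tree search. A minimal sketch of the PUCT-style selection rule at the heart of that search (the function name, dictionary layout, and exploration constant are illustrative assumptions, not taken from the paper):

```python
import math

def puct_select(actions, q_value, visit_count, prior, c_puct=1.5):
    """Pick the action maximizing Q(s, a) plus an exploration bonus
    that is large for actions with a high policy prior and few visits."""
    total_visits = sum(visit_count[a] for a in actions)

    def score(a):
        bonus = c_puct * prior[a] * math.sqrt(total_visits + 1) / (1 + visit_count[a])
        return q_value[a] + bonus

    return max(actions, key=score)
```

In a full implementation, the selected action is expanded, the leaf is evaluated by the value network instead of a random rollout, and the statistics are backed up along the search path.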
3.
  • Hoel, Carl-Johan E, 1986, et al. (author)
  • An Evolutionary Approach to General-Purpose Automated Speed and Lane Change Behavior
  • 2017
  • In: Proceedings of the 16th IEEE International Conference on Machine Learning and Applications (ICMLA). ; 2017-December
  • Conference paper (peer-reviewed), abstract:
    • This paper introduces a method for automatically training a general-purpose driver model, applied to the case of a truck-trailer combination. A genetic algorithm is used to optimize a structure of rules and actions, and their parameters, to achieve the desired driving behavior. The training is carried out in a simulated environment, using a two-stage process. The method is then applied to a highway driving case, where it is shown that it generates a model that matches or surpasses the performance of a commonly used reference model. Furthermore, the generality of the model is demonstrated by applying it to an overtaking situation on a rural road with oncoming traffic.
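The genetic-algorithm optimization described here can be sketched at its simplest as evolving a population of parameter vectors. Everything below (the population layout, the elitism-plus-mutation scheme, and the fitness function in the usage example) is an illustrative assumption rather than the paper's actual rule-and-action structure:

```python
import random

def evolve(fitness, population, generations=50, mut_sigma=0.2):
    """Bare-bones genetic algorithm: rank by fitness, keep the fitter
    half unchanged (elitism), and refill the population with mutated
    copies of the survivors."""
    pop = list(population)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: len(pop) // 2]
        children = [
            [gene + random.gauss(0.0, mut_sigma) for gene in random.choice(survivors)]
            for _ in range(len(pop) - len(survivors))
        ]
        pop = survivors + children
    return max(pop, key=fitness)
```

For example, maximizing f(x) = -(x - 2)^2 drives a single gene toward 2. Because the elite survive unchanged, the best fitness in the population never decreases between generations.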
4.
  • Hoel, Carl-Johan E, 1986, et al. (author)
  • Automated Speed and Lane Change Decision Making using Deep Reinforcement Learning
  • 2018
  • In: IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC. ; 2018-November, pp. 2148-2155
  • Conference paper (peer-reviewed), abstract:
    • This paper introduces a method, based on deep reinforcement learning, for automatically generating a general-purpose decision-making function. A Deep Q-Network agent was trained in a simulated environment to handle speed and lane change decisions for a truck-trailer combination. In a highway driving case, it is shown that the method produced an agent that matched or surpassed the performance of a commonly used reference model. To demonstrate the generality of the method, the exact same algorithm was also tested by training it for an overtaking case on a road with oncoming traffic. Furthermore, a novel way of applying a convolutional neural network to high-level input that represents interchangeable objects is also introduced. https://arxiv.org/abs/1803.10056
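At the core of the Deep Q-Network training summarized above is the one-step temporal-difference target that the network is regressed toward. A minimal sketch (the discount factor and array layout are illustrative assumptions):

```python
import numpy as np

def dqn_target(reward, next_q_values, done, gamma=0.95):
    """One-step TD target for DQN: r + gamma * max_a' Q(s', a'),
    with bootstrapping switched off when the episode has ended."""
    if done:
        return float(reward)
    return float(reward + gamma * np.max(next_q_values))
```

In practice, the maximum over next-state Q-values is computed with a periodically updated target network, which stabilizes training.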
5.
  • Hoel, Carl-Johan E, 1986 (author)
  • Decision-Making in Autonomous Driving using Reinforcement Learning
  • 2021
  • Doctoral thesis (other academic/artistic), abstract:
    • The main topic of this thesis is tactical decision-making for autonomous driving. An autonomous vehicle must be able to handle a diverse set of environments and traffic situations, which makes it hard to manually specify a suitable behavior for every possible scenario. Therefore, learning-based strategies are considered in this thesis, which introduces different approaches based on reinforcement learning (RL). A general decision-making agent, derived from the Deep Q-Network (DQN) algorithm, is proposed. With few modifications, this method can be applied to different driving environments, which is demonstrated for various simulated highway and intersection scenarios. A more sample-efficient agent can be obtained by incorporating more domain knowledge, which is explored by combining planning and learning in the form of Monte Carlo tree search and RL. In different highway scenarios, the combined method outperforms using either a planning or a learning-based strategy separately, while requiring an order of magnitude fewer training samples than the DQN method. A drawback of many learning-based approaches is that they create black-box solutions, which do not indicate the confidence of the agent's decisions. Therefore, the Ensemble Quantile Networks (EQN) method is introduced, which combines distributional RL with an ensemble approach, to provide an estimate of both the aleatoric and the epistemic uncertainty of each decision. The results show that the EQN method can balance risk and time efficiency in different occluded intersection scenarios, while also identifying situations that the agent has not been trained for. Thereby, the agent can avoid making unfounded, potentially dangerous, decisions outside of the training distribution. Finally, this thesis introduces a neural network architecture that is invariant to permutations of the order in which surrounding vehicles are listed. This architecture improves the sample efficiency of the agent by the factorial of the number of surrounding vehicles.
6.
  • Hoel, Carl-Johan E, 1986, et al. (author)
  • Ensemble Quantile Networks: Uncertainty-Aware Reinforcement Learning With Applications in Autonomous Driving
  • 2023
  • In: IEEE Transactions on Intelligent Transportation Systems. - ISSN 1524-9050, EISSN 1558-0016. ; 24:6, pp. 6030-6041
  • Journal article (peer-reviewed), abstract:
    • Reinforcement learning (RL) can be used to create a decision-making agent for autonomous driving. However, previous approaches provide black-box solutions, which do not offer information on how confident the agent is about its decisions. An estimate of both the aleatoric and epistemic uncertainty of the agent's decisions is fundamental for real-world applications of autonomous driving. Therefore, this paper introduces the Ensemble Quantile Networks (EQN) method, which combines distributional RL with an ensemble approach, to obtain a complete uncertainty estimate. The distribution over returns is estimated by learning its quantile function implicitly, which gives the aleatoric uncertainty, whereas an ensemble of agents is trained on bootstrapped data to provide a Bayesian estimation of the epistemic uncertainty. A criterion for classifying which decisions have an unacceptable uncertainty is also introduced. The results show that the EQN method can balance risk and time efficiency in different occluded intersection scenarios, by considering the estimated aleatoric uncertainty. Furthermore, it is shown that the trained agent can use the epistemic uncertainty information to identify situations that the agent has not been trained for and thereby avoid making unfounded, potentially dangerous, decisions outside of the training distribution.
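The EQN uncertainty decomposition can be sketched numerically: each ensemble member outputs a set of return quantiles, the spread within a member approximates the aleatoric uncertainty, and the disagreement between the members' means approximates the epistemic uncertainty. The array layout and the variance-based measures below are illustrative assumptions, not the paper's exact estimators:

```python
import numpy as np

def eqn_uncertainty(quantiles):
    """quantiles: array of shape (n_ensemble, n_quantiles) holding the
    return quantiles each ensemble member predicts for one state-action
    pair. Returns (aleatoric, epistemic):
      aleatoric - mean spread of each member's return distribution
      epistemic - variance of the members' mean return estimates
    """
    q = np.asarray(quantiles, dtype=float)
    aleatoric = float(np.mean(np.var(q, axis=1)))
    epistemic = float(np.var(np.mean(q, axis=1)))
    return aleatoric, epistemic
```

Members that agree on a wide return distribution give high aleatoric but zero epistemic uncertainty; members that each predict a sharp but different return give the opposite.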
7.
  • Hoel, Carl-Johan E, 1986, et al. (author)
  • Low speed maneuvering assistance for long vehicle combinations
  • 2013
  • In: IEEE Intelligent Vehicles Symposium, Proceedings. ; pp. 598-604
  • Conference paper (peer-reviewed), abstract:
    • This paper considers a low speed maneuvering problem for long articulated vehicle combinations. High precision maneuvering is achieved by designing a model-based state feedback optimal control method, commanding the steering of the first unit and a moveable coupling point between the first unit and the trailer. Simulation results are presented for a tight 90 degree turn, involving both forward and backward motions.
8.
  • Hoel, Carl-Johan E, 1986, et al. (author)
  • Reinforcement Learning with Uncertainty Estimation for Tactical Decision-Making in Intersections
  • 2020
  • In: IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC.
  • Conference paper (peer-reviewed), abstract:
    • This paper investigates how a Bayesian reinforcement learning method can be used to create a tactical decision-making agent for autonomous driving in an intersection scenario, where the agent can estimate the confidence of its decisions. An ensemble of neural networks, with additional randomized prior functions (RPF), is trained by using a bootstrapped experience replay memory. The coefficient of variation in the estimated Q-values of the ensemble members is used to approximate the uncertainty, and a criterion that determines if the agent is sufficiently confident to make a particular decision is introduced. The performance of the ensemble RPF method is evaluated in an intersection scenario and compared to a standard Deep Q-Network method, which does not estimate the uncertainty. It is shown that the trained ensemble RPF agent can detect cases with high uncertainty, both in situations that are far from the training distribution, and in situations that seldom occur within the training distribution. This work demonstrates one possible application of such a confidence estimate, by using this information to choose safe actions in unknown situations, which removes all collisions from within the training distribution, and most collisions outside of the distribution.
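The confidence criterion outlined in this abstract can be expressed as a threshold on the coefficient of variation across the ensemble members' Q-value estimates for a candidate action. The threshold value and the guard against division by zero below are illustrative assumptions:

```python
import numpy as np

def decision_is_uncertain(member_q_values, threshold=0.2):
    """Flag a decision when the coefficient of variation
    (std / |mean|) of the ensemble members' Q-values for the chosen
    action exceeds a tuned threshold."""
    q = np.asarray(member_q_values, dtype=float)
    cv = float(np.std(q)) / max(abs(float(np.mean(q))), 1e-9)
    return cv > threshold
```

When a decision is flagged, the agent can fall back to a precoded safe policy instead of executing the uncertain action, which is how the abstract's collision reductions are obtained.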
9.
  • Hoel, Carl-Johan E, 1986 (author)
  • Tactical decision-making for autonomous driving: A reinforcement learning approach
  • 2019
  • Licentiate thesis (other academic/artistic), abstract:
    • The tactical decision-making task of an autonomous vehicle is challenging, due to the diversity of the environments the vehicle operates in, the uncertainty in the sensor information, and the complex interaction with other road users. This thesis introduces and compares three general approaches, based on reinforcement learning, to creating a tactical decision-making agent. The first method uses a genetic algorithm to automatically generate a rule-based decision-making agent, whereas the second method is based on a Deep Q-Network agent. The third method combines the concepts of planning and learning, in the form of Monte Carlo tree search and deep reinforcement learning. The three approaches are applied to several highway driving cases in a simulated environment and outperform a commonly used baseline model by taking decisions that allow the vehicle to navigate 5% to 10% faster through dense traffic. However, the main advantage of the methods is their generality, which is indicated by applying them to conceptually different driving cases. Furthermore, this thesis introduces a novel way of applying a convolutional neural network architecture to a high-level state description of interchangeable objects, which speeds up the learning process and eliminates all collisions in the test cases.
10.
  • Hoel, Carl-Johan E, 1986, et al. (author)
  • Tactical Decision-Making in Autonomous Driving by Reinforcement Learning with Uncertainty Estimation
  • 2020
  • In: IEEE Intelligent Vehicles Symposium, Proceedings. ; pp. 1563-1569
  • Conference paper (peer-reviewed), abstract:
    • Reinforcement learning (RL) can be used to create a tactical decision-making agent for autonomous driving. However, previous approaches only output decisions and do not provide information about the agent's confidence in the recommended actions. This paper investigates how a Bayesian RL technique, based on an ensemble of neural networks with additional randomized prior functions (RPF), can be used to estimate the uncertainty of decisions in autonomous driving. A method for classifying whether or not an action should be considered safe is also introduced. The performance of the ensemble RPF method is evaluated by training an agent on a highway driving scenario. It is shown that the trained agent can estimate the uncertainty of its decisions and indicate an unacceptable level when the agent faces a situation that is far from the training distribution. Furthermore, within the training distribution, the ensemble RPF agent outperforms a standard Deep Q-Network agent. In this study, the estimated uncertainty is used to choose safe actions in unknown situations. However, the uncertainty information could also be used to identify situations that should be added to the training process.