SwePub
Improved deep reinforcement learning for car-following decision-making

Yang, Xiaoxue (author)
Tongji University, China
Zou, Yajie (author)
Tongji University, China
Zhang, Hao (author)
Tongji University, China
Qu, Xiaobo (author)
Tsinghua University, China
Chen, Lei (author)
RISE, Mobilitet och system, Sweden
Elsevier B.V. 2023
English.
In: Physica A. Elsevier B.V. ISSN 0378-4371, E-ISSN 1873-2119. Vol. 624.
  • Journal article (peer-reviewed)
Abstract
Improving the accuracy of car-following (CF) models has attracted much attention in recent years. Although a few studies use deep reinforcement learning (DRL) to describe CF behavior, proper design of the reward function remains an intractable problem. This study improves the deep deterministic policy gradient (DDPG) car-following model with stacked denoising autoencoders (SDAE) and proposes a data-driven reward representation function that quantifies the implicit interaction between the ego vehicle and the preceding vehicle during car-following. The experimental results demonstrate that the DDPG-SDAE model imitates driving behavior well: (1) it validates the effectiveness of the reward representation method, with low trajectory deviation; (2) it generalizes across two different trajectory datasets (HighD and SPMD); (3) it adapts to three traffic scenarios clustered by a k-medoids method based on dynamic time warping (DTW) distance. Compared with recurrent neural networks (RNN) and the intelligent driver model (IDM), the DDPG-SDAE model achieves lower deviation in speed and relative distance. This study demonstrates the advantage of a novel reward extraction method that fuses SDAE into the DDPG algorithm and provides inspiration for developing driving decision-making models. © 2023 Elsevier B.V.
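The record reproduces only the abstract, so none of the paper's equations are available here. For context, the intelligent driver model (IDM) used as a baseline in the comparison above is a standard, well-documented car-following rule. The sketch below is a minimal Python implementation of IDM acceleration; the function name and the parameter defaults (desired speed, headway, and so on) are illustrative assumptions, not values taken from the paper.

from math import sqrt

def idm_acceleration(v, dv, s,
                     v0=33.3,    # desired free-flow speed [m/s] (assumed)
                     T=1.6,      # desired time headway [s] (assumed)
                     a_max=1.5,  # maximum acceleration [m/s^2] (assumed)
                     b=2.0,      # comfortable deceleration [m/s^2] (assumed)
                     s0=2.0,     # minimum standstill gap [m] (assumed)
                     delta=4.0): # acceleration exponent (standard choice)
    """Intelligent driver model (IDM) acceleration for the ego vehicle.

    v  -- ego speed [m/s]
    dv -- approach rate: ego speed minus leader speed [m/s]
    s  -- bumper-to-bumper gap to the leader [m]
    """
    # Desired dynamic gap: standstill gap + headway term + braking-interaction term.
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * sqrt(a_max * b)))
    # Free-road acceleration reduced by the gap-interaction term.
    return a_max * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)

# Example: ego at 25 m/s, closing on the leader at 2 m/s with a 30 m gap.
print(idm_acceleration(v=25.0, dv=2.0, s=30.0))

In the DDPG-SDAE approach described in the abstract, this hand-crafted rule is replaced by a learned policy, with the reward signal extracted from trajectory data by a stacked denoising autoencoder rather than specified manually.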

Subject headings

TEKNIK OCH TEKNOLOGIER -- Samhällsbyggnadsteknik -- Transportteknik och logistik (hsv//swe)
ENGINEERING AND TECHNOLOGY -- Civil Engineering -- Transport Systems and Logistics (hsv//eng)

Keywords

Car-following model
Deep reinforcement learning
Driving behavior imitation
Stacked denoising autoencoders
Behavioral research
Decision making
Learning systems
Recurrent neural networks
Auto encoders
Car-following modeling
De-noising
Deterministics
Driving behaviour
Policy gradient
Reinforcement learnings
Stacked denoising autoencoder
Reinforcement learning

Publication and Content Type

ref (content type: peer-reviewed)
art (publication type: journal article)


