SwePub
Record id: swepub:oai:DiVA.org:kth-283894

Logistics-involved QoS-aware service composition in cloud manufacturing with deep reinforcement learning

Liang, Huagang (author)
Wen, Xiaoqian (author)
Liu, Yongkui (author)
Zhang, Haifeng (author)
Zhang, Lin (author)
Wang, Lihui (author)
KTH, Hållbara produktionssystem
PERGAMON-ELSEVIER SCIENCE LTD, 2021
English.
In: Robotics and Computer-Integrated Manufacturing. - PERGAMON-ELSEVIER SCIENCE LTD. - ISSN 0736-5845, 1879-2537 ; 67
  • Journal article (peer-reviewed)
Abstract
Cloud manufacturing is a new manufacturing model that aims to provide on-demand manufacturing services to consumers over the Internet. Service composition is an essential issue as well as an important technique in cloud manufacturing (CMfg) that supports the construction of larger-granularity, value-added services by combining a number of smaller-granularity services to satisfy consumers' complex requirements. Meta-heuristic algorithms such as the genetic algorithm, particle swarm optimization, and the ant colony algorithm are frequently employed to address service composition issues in cloud manufacturing. These algorithms, however, require complex design flows and painstaking parameter tuning, and lack adaptability to dynamic environments. Deep reinforcement learning (DRL) provides an alternative approach for solving cloud manufacturing service composition (CMfg-SC) issues. DRL, as a model-free artificial intelligence method, enables a system to learn optimal service composition solutions through training, and can therefore circumvent the aforementioned problems of meta-heuristic algorithms. This paper is dedicated to exploring possible applications of DRL in CMfg-SC. A logistics-involved QoS-aware DRL-based CMfg-SC approach is proposed. A dueling Deep Q-Network (DQN) with prioritized replay, named PD-DQN, is designed as the DRL algorithm. The effectiveness, robustness, adaptability, and scalability of PD-DQN are investigated and compared with those of the basic DQN and Q-learning. Experimental results indicate that PD-DQN is able to effectively address the CMfg-SC problem.
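The abstract names the two main ingredients of PD-DQN: a dueling Deep Q-Network architecture and prioritized experience replay. The following is a minimal illustrative sketch of those two components in Python/PyTorch, not the authors' implementation; the class names (DuelingQNet, PrioritizedReplay), the state and action dimensions, and the way a state might encode subtask and service attributes are all assumptions made for the example.

# Illustrative sketch only (assumed structure, not the paper's code):
# a dueling Q-network head plus a simple proportional prioritized replay buffer,
# the two components combined in the abstract's "PD-DQN".
import numpy as np
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling architecture: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)

class PrioritizedReplay:
    """Proportional prioritization: sample transitions with prob ~ |TD error|^alpha."""
    def __init__(self, capacity: int = 10000, alpha: float = 0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []

    def push(self, transition, td_error: float = 1.0):
        if len(self.buffer) >= self.capacity:   # drop the oldest transition
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size: int):
        probs = np.array(self.priorities) / sum(self.priorities)
        idx = np.random.choice(len(self.buffer), size=batch_size, p=probs)
        return [self.buffer[i] for i in idx], idx

# Hypothetical usage: a state vector could encode the current subtask together with
# candidate services' QoS and logistics attributes; each action indexes one candidate.
net = DuelingQNet(state_dim=16, n_actions=8)
q_values = net(torch.zeros(1, 16))          # Q-values for a dummy state
chosen_service = int(q_values.argmax(dim=1))

In a full training loop, transitions sampled from the replay buffer would feed the usual DQN temporal-difference update, with priorities refreshed from the new TD errors.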

Keywords

Cloud manufacturing
service composition
deep reinforcement learning
deep Q-network

Publication and content type

ref (subject category)
art (subject category)
