SwePub

  • Lei, Wanlu, KTH, Teknisk informationsvetenskap; Interconnection Design in Baseband and Interconnect Department, Ericsson AB, Stockholm, Sweden (author)

Adaptive Stochastic ADMM for Decentralized Reinforcement Learning in Edge IoT

  • Article/chapter, English, 2022

Publisher, publication year, extent ...

  • Institute of Electrical and Electronics Engineers (IEEE), 2022
  • print (rdacarrier)

Numbers

  • LIBRIS-ID: oai:DiVA.org:kth-325693
  • URI: https://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-325693
  • DOI: https://doi.org/10.1109/JIOT.2022.3187067

Supplementary language notes

  • Language: English
  • Summary in: English


Classification

  • Subject category: ref (swepub-contenttype)
  • Subject category: art (swepub-publicationtype)

Notes

  • QC 20230412
  • Edge computing provides a promising paradigm to support the implementation of Internet of Things (IoT) by offloading tasks to nearby edge nodes. Meanwhile, the increasing network size makes it impractical for centralized data processing due to limited bandwidth, and consequently a decentralized learning scheme is preferable. Reinforcement learning (RL) has been widely investigated and shown to be a promising solution for decision-making and optimal control processes. For RL in a decentralized setup, edge nodes (agents) connected through a communication network aim to work collaboratively to find a policy to optimize the global reward as the sum of local rewards. However, communication costs, scalability, and adaptation in complex environments with heterogeneous agents may significantly limit the performance of decentralized RL. Alternating direction method of multipliers (ADMM) has a structure that allows for decentralized implementation and has shown faster convergence than gradient descent-based methods. Therefore, we propose an adaptive stochastic incremental ADMM (asI-ADMM) algorithm and apply the asI-ADMM to decentralized RL with edge-computing-empowered IoT networks. We provide convergence properties for the proposed algorithms by designing a Lyapunov function and prove that the asI-ADMM has O(1/k) + O(1/M) convergence rate, where k and M are the number of iterations and batch samples, respectively. Then, we test our algorithm with two supervised learning problems. For performance evaluation, we simulate two applications in decentralized RL settings with homogeneous and heterogeneous agents. The experimental results show that our proposed algorithms outperform the state of the art in terms of communication costs and scalability and can well adapt to complex IoT environments. 


Added entries (persons, corporate bodies, meetings, titles ...)

  • Ye, Yu, KTH, Teknisk informationsvetenskap (Swepub:kth) u1y1pv31 (author)
  • Xiao, Ming, 1975-, KTH, Teknisk informationsvetenskap (Swepub:kth) u1iq6n9a (author)
  • Skoglund, Mikael, 1969-, KTH, Teknisk informationsvetenskap (Swepub:kth) u1dbnyps (author)
  • Han, Z. (author)
  • KTH, Teknisk informationsvetenskap (creator_code: org_t)

Related titles

  • In: IEEE Internet of Things Journal, Institute of Electrical and Electronics Engineers (IEEE), 9:22, pp. 22958-22971, ISSN 2327-4662



About the subject: ENGINEERING AND TECHNOLOGY
By the university: Royal Institute of Technology (KTH)

