DynaComm: Accelerating Distributed CNN Training between Edges and Clouds through Dynamic Communication Scheduling
-
- Cai, Shangming (author)
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
-
- Wang, Dongsheng (author)
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China; Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China; Cyberspace Security Research Center, Peng Cheng Laboratory, Shenzhen 518066, China
-
- Wang, Haixia (author)
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
-
-
- Lyu, Yongqiang (author)
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
-
- Xu, Guangquan (author)
- Big Data School, Qingdao Huanghai University, Qingdao 266427, China; Tianjin Key Laboratory of Advanced Networking (TANK), College of Intelligence and Computing, Tianjin University, Tianjin 300350, China
-
- Zheng, Xi (author)
- Department of Computing, Macquarie University, Sydney, NSW 2109, Australia
-
- Vasilakos, Athanasios V. (author)
- Luleå University of Technology, Computer Science; College of Mathematics and Computer Science, Fuzhou University, Fuzhou 350116, China; School of Electrical and Data Engineering, University of Technology Sydney, Australia
-
-
- IEEE, 2022
- 2022
- English.
-
In: IEEE Journal on Selected Areas in Communications. - : IEEE. - 0733-8716 .- 1558-0008. ; 40:2, s. 611-625
- Related link:
-
http://arxiv.org/pdf...
-
-
https://urn.kb.se/re...
-
https://doi.org/10.1...
-
Abstract
- To reduce uploading bandwidth and address privacy concerns, deep learning at the network edge has become an emerging topic. Typically, edge devices collaboratively train a shared model using real-time generated data through the Parameter Server framework. Although all the edge devices can share the computing workloads, the distributed training processes over edge networks are still time-consuming due to the parameter and gradient transmission procedures between parameter servers and edge devices. Focusing on accelerating distributed Convolutional Neural Network (CNN) training at the network edge, we present DynaComm, a novel scheduler that dynamically decomposes each transmission procedure into several segments to achieve optimal layer-wise overlapping of communications and computations during run-time. Through experiments, we verify that DynaComm achieves optimal layer-wise scheduling for all cases compared to competing strategies, while model accuracy remains unaffected.
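The core idea the abstract describes, overlapping a layer's gradient transmission with the computation of subsequent layers rather than serializing them, can be illustrated with a small timing model. This is a minimal sketch under assumed per-layer timings; the function names and the ideal-pipeline model are illustrative only and do not reflect DynaComm's actual scheduling algorithm or API.

```python
# Illustrative timing model for layer-wise communication/computation overlap.
# Serial: each layer's gradient transfer blocks the next layer's computation.
# Overlapped: a layer's transfer runs concurrently with later layers' compute,
# subject to the communication channel handling one transfer at a time.

def serial_time(compute_ms, comm_ms):
    """Total time when every transfer blocks subsequent computation."""
    return sum(compute_ms) + sum(comm_ms)

def overlapped_time(compute_ms, comm_ms):
    """Total time with ideal overlap: layer i's transfer proceeds while
    layers i+1..n compute, on a single shared communication channel."""
    channel_free = 0.0  # time at which the channel finishes its last transfer
    clock = 0.0         # running computation clock
    for c, m in zip(compute_ms, comm_ms):
        clock += c                                 # compute layer i
        channel_free = max(channel_free, clock) + m  # then enqueue its transfer
    return max(clock, channel_free)

# Assumed per-layer backward-compute and gradient-transfer costs (ms):
layers_compute = [4.0, 6.0, 5.0]
layers_comm = [3.0, 2.0, 4.0]

print(serial_time(layers_compute, layers_comm))      # 24.0
print(overlapped_time(layers_compute, layers_comm))  # 19.0
```

The gap between the two totals (24 ms vs. 19 ms here) is the time hidden by overlap; segmenting large transfers, as the paper proposes, gives the scheduler finer granularity to keep both the channel and the processor busy.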
Subject terms
- ENGINEERING AND TECHNOLOGY -- Electrical Engineering, Electronic Engineering, Information Engineering -- Computer Systems (hsv//eng)
Keywords
- Edge computing
- deep learning training
- dynamic scheduling
- convolutional neural network
- Pervasive Mobile Computing
- Distributed computer systems
Publication and content type
- ref (subject category)
- art (subject category)