SwePub
  • Gao, Robert X., Department of Mechanical and Aerospace Engineering, Case Western Reserve University, Cleveland, OH, USA (author)

Human motion recognition and prediction for robot control

  • Article/chapter, English, 2021

Publisher, publication year, extent ...

  • 2021-06-11
  • Cham: Springer Nature, 2021
  • Carrier type: print (rdacarrier)

Numbers

  • LIBRIS-ID: oai:DiVA.org:kth-332455
  • URI: https://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-332455
  • DOI: https://doi.org/10.1007/978-3-030-69178-3_11

Supplementary language notes

  • Language: English
  • Summary in: English


Classification

  • Subject category: vet (swepub-contenttype: scholarly/academic)
  • Subject category: kap (swepub-publicationtype: book chapter)

Notes

  • Part of ISBN: 9783030691783, 9783030691776
  • QC 20230724
  • The ever-increasing demand for higher productivity, lower cost and improved safety continues to drive the advancement of manufacturing technologies. As one of the key elements, human-robot collaboration (HRC) envisions a workspace where humans and robots can dynamically collaborate for improved operational efficiency while maintaining safety. As the effectiveness of HRC depends on a robot's ability to sense, understand and forecast the state of the collaborating human worker, human action recognition and motion trajectory prediction have become crucial to realising HRC. In this chapter, deep-learning-based methods for accomplishing this goal, based on in-situ sensing data from the workspace, are presented. Specifically, to account for the variability and heterogeneity of human workers during assembly, a context-aware deep convolutional neural network (DCNN) has been developed to identify the task-associated context for inferring human actions. To improve the accuracy and reliability of human motion trajectory prediction, a functional unit-incorporated recurrent neural network (RNN) has been developed to parse a worker's motion patterns and forecast the worker's future motion trajectories. Collectively, these techniques allow the robot to answer the question "which tool or part should be delivered to which location next?" and enable online robot action planning and execution for the collaborative assembly operation. The methods developed are experimentally evaluated, with the collaborative assembly of an automotive engine as a case study.
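The abstract outlines, but does not specify, the two deep-learning models. As a purely illustrative sketch of the trajectory-prediction idea (a generic sequence model, not the chapter's functional unit-incorporated RNN), the Python/PyTorch snippet below encodes a window of observed 3-D hand positions with a GRU and regresses a fixed horizon of future positions. All class names, dimensions, and data here are assumptions for demonstration only.

# Illustrative sketch: a generic RNN that maps a window of observed 3-D
# positions to a fixed horizon of future positions. Every name and
# dimension below is an assumption, not the chapter's implementation.
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    def __init__(self, coord_dim: int = 3, hidden_dim: int = 64, horizon: int = 10):
        super().__init__()
        self.horizon = horizon                 # future steps to forecast
        self.coord_dim = coord_dim             # x, y, z per step
        self.encoder = nn.GRU(coord_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, horizon * coord_dim)

    def forward(self, past: torch.Tensor) -> torch.Tensor:
        # past: (batch, observed_steps, coord_dim)
        _, h_last = self.encoder(past)         # h_last: (1, batch, hidden_dim)
        flat = self.head(h_last.squeeze(0))    # (batch, horizon * coord_dim)
        return flat.view(-1, self.horizon, self.coord_dim)

# Toy usage with random stand-in data (real input would be in-situ sensing
# of the worker's motion): 8 sequences of 30 observed steps -> 10 predicted.
model = TrajectoryPredictor()
past = torch.randn(8, 30, 3)
future = model(past)                           # shape: (8, 10, 3)
print(future.shape)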


Added entries (persons, corporate bodies, meetings, titles ...)

  • Wang, Lihui, KTH, Produktionsutveckling, (Swepub:kth)u1blju84 (author)
  • Wang, Peng, Department of Electrical and Computer Engineering, University of Kentucky, Lexington, KY, USA (author)
  • Zhang, Jianjing, Department of Mechanical and Aerospace Engineering, Case Western Reserve University, Cleveland, OH, USA (author)
  • Liu, Hongyi, KTH, Produktionsutveckling, (Swepub:kth)u1r2lwyr (author)
  • Department of Mechanical and Aerospace Engineering, Case Western Reserve University, Cleveland, OH, USA; Produktionsutveckling (creator_code: org_t)

Related titles

  • In: Advanced Human-Robot Collaboration in Manufacturing. Cham: Springer Nature, pp. 261-282

