SwePub

Skeleton-RGB integrated highly similar human action prediction in human–robot collaborative assembly

Zhang, Yaqian (author)
Institute of Smart Manufacturing Systems, Chang'an University, Xi'an, China
Ding, Kai (author)
Institute of Smart Manufacturing Systems, Chang'an University, Xi'an, China
Hui, Jizhuang (author)
Institute of Smart Manufacturing Systems, Chang'an University, Xi'an, China
Liu, Sichao (author)
KTH, Production Development (Produktionsutveckling)
Guo, Wanjin (author)
Institute of Smart Manufacturing Systems, Chang'an University, Xi'an, China
Wang, Lihui (author)
KTH, Production Development (Produktionsutveckling)
Elsevier BV, 2024
English.
In: Robotics and Computer-Integrated Manufacturing. Elsevier BV. ISSN 0736-5845, E-ISSN 1879-2537. Vol. 86
  • Journal article (peer-reviewed)
Abstract

Human–robot collaborative assembly (HRCA) combines the flexibility and adaptability of humans with the efficiency and reliability of robots during collaborative assembly operations, which facilitates complex product assembly in the mass personalisation paradigm. The cognitive ability of robots to recognise and predict human actions and respond accordingly is essential but currently still limited, especially when facing highly similar human actions. To improve the cognitive ability of robots in HRCA, firstly, a two-stage skeleton-RGB integrated model focusing on human-parts interaction is proposed to recognise highly similar human actions. Specifically, it consists of a feature guidance module and a feature fusion module, which balance the accuracy and efficiency of human action recognition. Secondly, an online prediction approach, comprising a pre-trained skeleton-RGB integrated model and a preprocessing module, is developed to predict human actions ahead of schedule. Thirdly, considering the positioning accuracy of the parts to be assembled and the continuous update of human actions, a dynamic response scheme for the robot is designed. Finally, the feasibility and effectiveness of the proposed model and approach are verified by a case study of a worm-gear decelerator assembly. The experimental results demonstrate that the proposed model achieves precise human action recognition with a high accuracy of 93.75% at a lower computational cost: only 15 frames from a skeleton stream and 5 frames (fewer than 16 frames in general) from an RGB video stream are adopted. Moreover, the proposed prediction method takes only 1.026 s to achieve online human action prediction. The dynamic response scheme of the robot is also proven to be feasible. It is expected that the efficiency of human–robot interaction in HRCA can be improved from a closed-loop view of perception, prediction, and response.
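
For illustration only: the abstract describes a two-stream architecture in which a skeleton stream and an RGB stream are combined through a feature guidance module and a feature fusion module. The sketch below shows, in PyTorch, one minimal way such a skeleton-guided RGB fusion classifier could be organised. Every name (FeatureGuidance, SkeletonRGBFusion), all layer sizes, and the gating-based guidance are assumptions made for this sketch; none of it is taken from the paper's actual implementation.

import torch
import torch.nn as nn


class FeatureGuidance(nn.Module):
    """Skeleton features gate (guide) RGB features -- an assumed design."""

    def __init__(self, skel_dim: int, rgb_dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(skel_dim, rgb_dim), nn.Sigmoid())

    def forward(self, skel_feat, rgb_feat):
        # Element-wise gating of RGB features by a skeleton-derived attention vector.
        return rgb_feat * self.gate(skel_feat)


class SkeletonRGBFusion(nn.Module):
    """Two-stream sketch: 15 skeleton frames + 5 RGB feature frames -> action logits."""

    def __init__(self, skel_in=25 * 3, rgb_in=512, hidden=256, num_actions=10):
        super().__init__()
        # Skeleton stream: per-frame joint coordinates encoded by a GRU.
        self.skel_encoder = nn.GRU(skel_in, hidden, batch_first=True)
        # RGB stream: assumes per-frame features from a pretrained CNN backbone.
        self.rgb_encoder = nn.GRU(rgb_in, hidden, batch_first=True)
        self.guidance = FeatureGuidance(hidden, hidden)
        # Feature fusion: concatenate skeleton features with guided RGB features.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, num_actions)
        )

    def forward(self, skel_seq, rgb_seq):
        # skel_seq: (batch, 15, skel_in); rgb_seq: (batch, 5, rgb_in)
        _, skel_h = self.skel_encoder(skel_seq)
        _, rgb_h = self.rgb_encoder(rgb_seq)
        skel_feat, rgb_feat = skel_h[-1], rgb_h[-1]
        fused = torch.cat([skel_feat, self.guidance(skel_feat, rgb_feat)], dim=-1)
        return self.classifier(fused)  # action logits


if __name__ == "__main__":
    model = SkeletonRGBFusion()
    skel = torch.randn(2, 15, 75)   # 15 skeleton frames, 25 joints x 3 coordinates
    rgb = torch.randn(2, 5, 512)    # 5 RGB frames of precomputed backbone features
    print(model(skel, rgb).shape)   # torch.Size([2, 10])

A gating-style guidance is just one plausible reading of "feature guidance"; the paper may implement it quite differently, and the online prediction and robot response components are not sketched here.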

Subject headings

NATURVETENSKAP -- Data- och informationsvetenskap -- Datorseende och robotik (hsv//swe)
NATURAL SCIENCES -- Computer and Information Sciences -- Computer Vision and Robotics (hsv//eng)

Keyword

Highly similar human actions
Human–robot collaborative assembly
Interaction efficiency
Online prediction
Robot dynamic response
Skeleton-RGB integration

Publication and Content Type

ref (peer-reviewed)
art (journal article)

