SwePub
Record: swepub:oai:DiVA.org:hh-48299
 

One-shot many-to-many facial reenactment using Bi-Layer Graph Convolutional Networks

Saeed, Uzair (author)
Beijing Institute of Technology, Beijing, China
Armghan, Ammar (author)
College of Engineering, Jouf University, Sakaka, Saudi Arabia
Quanyu, Wang (author)
Beijing Institute of Technology, Beijing, China
Alenezi, Fayadh (author)
College of Engineering, Jouf University, Sakaka, Saudi Arabia
Yue, Sun (author)
Beijing Institute of Technology, Beijing, China
Tiwari, Prayag, 1991- (author)
Halmstad University, School of Information Technology
Oxford : Elsevier, 2022
English.
In: Neural Networks. - Oxford : Elsevier. - 0893-6080 .- 1879-2782. ; 156, pp. 193-204
  • Journal article (peer-reviewed)
Abstract
  • Facial reenactment aims to animate a source face image into a new pose and expression using a driving face image. Existing approaches are either designed around one or more specific identities, or struggle to preserve identity in one- or few-shot settings. Previous research has modelled facial reenactment using multiple pictures of the same subject. In contrast, this paper presents a novel one-shot many-to-many facial reenactment model that uses only a single facial image. The proposed model produces a face that represents the target expression while preserving the source identity. The technique can simulate motion from a single image by decomposing the object into two layers. Combining this bi-layer representation with a Convolutional Neural Network (CNN), our model, named Bi-Layer Graph Convolutional Layers (BGCLN), is used to create an optical-flow representation from the latent vector, which yields the precise structure and shape of the optical flow. Comprehensive experiments suggest that our technique produces high-quality results and outperforms recent techniques in both qualitative and quantitative comparisons. Our proposed system performs facial reenactment at 15 fps, which is approximately real time. Our code is publicly available at https://github.com/usaeed786/BGCLN
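The abstract's core idea (two stacked graph-convolution layers whose output is read as a per-node motion vector) can be sketched as below. This is an illustrative reconstruction, not the authors' released code (see the GitHub link above): the landmark count, feature sizes, random adjacency, and the linear flow head are all assumptions.

```python
import numpy as np

# Hypothetical sketch of a "bi-layer" graph convolution stack: each layer
# propagates node features over a facial-landmark adjacency matrix, and a
# final linear head emits a 2-channel (dx, dy) flow vector per landmark.

def normalize_adjacency(adj):
    """Symmetric GCN normalization: D^-1/2 (A + I) D^-1/2."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(features, adj_norm, weight):
    """One graph convolution: aggregate neighbors, transform, ReLU."""
    return np.maximum(adj_norm @ features @ weight, 0.0)

def bi_layer_flow(features, adj, w1, w2, w_flow):
    """Two stacked GCN layers followed by a linear head to per-node flow."""
    adj_norm = normalize_adjacency(adj)
    h = gcn_layer(features, adj_norm, w1)
    h = gcn_layer(h, adj_norm, w2)
    return h @ w_flow  # shape (num_nodes, 2): a (dx, dy) vector per node

rng = np.random.default_rng(0)
num_nodes, feat_dim, hidden = 68, 16, 32       # e.g. 68 facial landmarks
adj = (rng.random((num_nodes, num_nodes)) < 0.1).astype(float)
adj = np.maximum(adj, adj.T)                   # undirected landmark graph
features = rng.standard_normal((num_nodes, feat_dim))
flow = bi_layer_flow(features, adj,
                     rng.standard_normal((feat_dim, hidden)),
                     rng.standard_normal((hidden, hidden)),
                     rng.standard_normal((hidden, 2)))
print(flow.shape)  # (68, 2)
```

In the paper the resulting per-node vectors feed a dense optical-flow representation that warps the source face; here the flow head simply demonstrates the tensor shapes involved.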

Subject headings

TEKNIK OCH TEKNOLOGIER  -- Elektroteknik och elektronik -- Datorsystem (hsv//swe)
ENGINEERING AND TECHNOLOGY  -- Electrical Engineering, Electronic Engineering, Information Engineering -- Computer Systems (hsv//eng)

Keyword

Facial reenactment
CNN
BGCLN
Information driven care
Informationsdriven vård

Publication and Content Type

ref (peer-reviewed)
art (journal article)

