SwePub

ICGNet : An intensity-controllable generation network based on covering learning for face attribute synthesis

Ning, Xin (author)
AnnLab, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China; Center of Materials Science and Optoelectronics Engineering, School of Integrated Circuits, University of Chinese Academy of Sciences, Beijing, China
He, Feng (author)
University of Science and Technology of China, Hefei, China; Department of Computer Science, Yangtze University, Jingzhou, China
Dong, Xiaoli (author)
AnnLab, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China
Li, Weijun (author)
AnnLab, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China; Center of Materials Science and Optoelectronics Engineering, School of Integrated Circuits, University of Chinese Academy of Sciences, Beijing, China
Alenezi, Fayadh (author)
Department of Electrical Engineering, Faculty of Engineering, Jouf University, Sakakah, Saudi Arabia
Tiwari, Prayag, 1991- (author)
Halmstad University, School of Information Technology
New York : Elsevier, 2024
2024
English.
In: Information Sciences. - New York : Elsevier. - ISSN 0020-0255, E-ISSN 1872-6291. ; Vol. 660
  • Journal article (peer-reviewed)
Abstract

  • Face-attribute synthesis is a typical application of neural network technology. However, most current methods suffer from the problem of uncontrollable attribute intensity. In this study, we proposed a novel intensity-controllable generation network (ICGNet) based on covering learning for face attribute synthesis. Specifically, it includes an encoder module, based on the principle of homology continuity between homologous samples, that maps different facial images onto the face feature space and constructs sufficient and effective representation vectors by extracting the input information from different condition spaces. It then models the relationships between attribute instances and representation vectors in that space to ensure accurate synthesis of the target attribute and complete preservation of the irrelevant regions. Finally, it produces progressive changes in the facial attributes by applying different intensity constraints to the representation vectors. Compared to other methods, ICGNet achieves intensity-controllable face editing by extracting sufficient and effective representation features, exploring and transferring attribute relationships, and maintaining identity information. The source code is available at https://github.com/kllaodong/-ICGNet.
  • We designed a new encoder module to map face images from different condition spaces into the face feature space, obtaining sufficient and effective face feature representations.
  • Based on this feature extraction, we proposed a novel Intensity-Controllable Generation Network (ICGNet), which can realize face attribute synthesis with continuous intensity control while maintaining identity and semantic information.
  • The quantitative and qualitative results showed that the performance of ICGNet is superior to that of current advanced models.

© 2024 Elsevier Inc.
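
The abstract above describes the approach only at a high level. As a rough, hypothetical illustration of the general idea of intensity-controllable attribute editing in a latent face-feature space, the PyTorch sketch below encodes an image, shifts its representation vector along a learned attribute direction scaled by a user-chosen intensity, and decodes the result. All module names, dimensions, and the simple scaled-direction scheme are assumptions made for this illustration and do not reproduce the actual ICGNet architecture; see the authors' code at https://github.com/kllaodong/-ICGNet for the real implementation.

# Hypothetical sketch of intensity-controllable attribute editing in latent
# space. Module names, sizes, and the scaled-direction scheme are illustrative
# assumptions, not the ICGNet implementation referenced in this record.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a face image to a compact representation vector."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face image from an (edited) representation vector."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        h = self.fc(z).view(-1, 128, 8, 8)
        return self.net(h)

class AttributeEditor(nn.Module):
    """Shifts the representation along a learned per-attribute direction,
    scaled by a user-chosen intensity (e.g. in [-1, 1])."""
    def __init__(self, latent_dim: int = 128, n_attributes: int = 5):
        super().__init__()
        self.directions = nn.Parameter(torch.randn(n_attributes, latent_dim) * 0.01)

    def forward(self, z: torch.Tensor, attr_idx: int, intensity: float) -> torch.Tensor:
        return z + intensity * self.directions[attr_idx]

if __name__ == "__main__":
    enc, dec, edit = Encoder(), Decoder(), AttributeEditor()
    faces = torch.randn(2, 3, 64, 64)            # dummy 64x64 RGB batch
    z = enc(faces)
    for intensity in (0.0, 0.5, 1.0):            # progressively stronger edit
        edited = dec(edit(z, attr_idx=0, intensity=intensity))
        print(intensity, edited.shape)           # torch.Size([2, 3, 64, 64])

Sweeping the intensity value from 0 to 1 in this toy example mimics the progressive, intensity-controlled edits described in the abstract, while the rest of the representation vector (and thus identity-related information) is left untouched.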

Subject headings

NATURVETENSKAP  -- Data- och informationsvetenskap -- Datavetenskap (hsv//swe)
NATURAL SCIENCES  -- Computer and Information Sciences -- Computer Sciences (hsv//eng)

Keyword

Face attribute synthesis
Controllable intensity
Covering learning
Generative adversarial network
Image processing

Publication and Content Type

ref (peer-reviewed)
art (journal article)
