Color Face Image Generation with Improved Generative Adversarial Networks

Yeong Hwa Chang*, Pei Hua Chung, Yu Hsiang Chai, Hung Wei Lin

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper focuses on the development of an improved Generative Adversarial Network (GAN) designed to generate color portraits from sketches. The system is built around a GPU (Graphics Processing Unit) computing host that serves as the primary unit for model training: tasks requiring high-performance computation are delegated to the GPU host, while the user host only performs simple image processing and uses the model trained by the GPU host to generate images. This arrangement lowers the hardware requirements on the user side. The paper presents a comparative analysis of several types of generative networks, which serves as a reference point for the design of the proposed GAN. The application part of the paper focuses on the practical implementation of the developed network for generating multi-skin-tone portraits. By constructing a face dataset that explicitly incorporates information about ethnicity and skin color, the approach overcomes a limitation of traditional generative networks, which typically produce only a single skin tone.
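The abstract describes a split between a GPU training host and a lightweight user host that only runs inference with the already-trained generator. The sketch below illustrates that user-host step under assumed conditions: a PyTorch workflow, a hypothetical small encoder-decoder generator, and placeholder file names (`generator.pt`, `sketch.png`). It is not the authors' actual architecture, only a minimal illustration of how a user host could load exported weights and produce a color portrait from a sketch on the CPU.

```python
# Minimal sketch of the user-host inference step (hypothetical model and file names).
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms

class SketchToColorGenerator(nn.Module):
    """Hypothetical encoder-decoder generator mapping a 1-channel sketch to an RGB image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# "generator.pt" stands in for weights exported by the GPU training host.
generator = SketchToColorGenerator()
generator.load_state_dict(torch.load("generator.pt", map_location="cpu"))
generator.eval()

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5], std=[0.5]),  # map sketch pixels to [-1, 1]
])

sketch = preprocess(Image.open("sketch.png")).unsqueeze(0)  # add batch dimension
with torch.no_grad():
    portrait = generator(sketch)  # output in [-1, 1] from the final Tanh

# Convert back to an 8-bit RGB image and save on the user host.
portrait = ((portrait.squeeze(0) + 1) / 2).clamp(0, 1)
transforms.ToPILImage()(portrait).save("portrait.png")
```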

Original language: English
Article number: 1205
Journal: Electronics (Switzerland)
Volume: 13
Issue number: 7
DOIs
Publication status: Published - April 2024

Bibliographical note

Publisher Copyright:
© 2024 by the authors.
