Chung-Ang University Researchers Develop a New GAN Model That Stabilizes Training and Performance

Press Releases

Oct 17, 2024

Heralding advancements in AI, the GAN model utilizes kernel functions and histogram transformations to address stability and efficiency issues

SEOUL, South Korea, Oct. 17, 2024 /PRNewswire/ — In recent years, artificial intelligence (AI) and deep learning models have advanced rapidly, becoming easily accessible. This has enabled people, even those without specialized expertise, to perform various tasks with AI. Among these models, generative adversarial networks (GANs) stand out for their outstanding performance in generating new data instances with the same characteristics as the training data, making them particularly effective for generating images, music, and text.

GANs consist of two neural networks: a generator that creates new data distributions starting from random noise, and a discriminator that checks whether a generated data distribution is “real” (matching the training data) or “fake.” As training progresses, the generator gets better at producing realistic distributions, while the discriminator gets better at detecting fakes. Despite steady improvements, GANs still face challenges such as vanishing gradients, unstable learning, and mode collapse (where the generator produces only limited variety).
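The two-player setup described above can be sketched in a few lines of numpy. This is a toy illustration under simplifying assumptions (a scalar linear generator, a logistic discriminator, and 1-D Gaussian "real" data), not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# "Real" data: samples from N(3, 1).  The generator must learn to mimic this.
def real_batch(n):
    return rng.normal(3.0, 1.0, n)

# Generator g(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c),
# each just a pair of scalars so the adversarial loop is easy to follow.
a, b = 0.1, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    z = rng.normal(size=batch)
    x_real, x_fake = real_batch(batch), a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradient of the binary cross-entropy, written out by hand).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss).
    d_fake = sigmoid(w * (a * z + b) + c)
    dy = -(1 - d_fake) * w          # d(-log D)/d(fake sample)
    a -= lr * np.mean(dy * z)
    b -= lr * np.mean(dy)

samples = a * rng.normal(size=1000) + b
print(round(float(np.mean(samples)), 2))   # should drift toward the real mean of 3
```

Even in this toy setting the pathologies the article mentions can appear: the generator may collapse toward a single point near the real mean (mode collapse), and gradients shrink once the discriminator saturates.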

Against this backdrop, a team of researchers led by Assistant Professor Minhyeok Lee from the School of Electrical and Electronics Engineering at Chung-Ang University, Republic of Korea, developed a novel strategy. “Imagine teaching an artist to paint landscapes. Consistent guidance may lead them to produce similar scenes, a phenomenon called mode collapse in machine learning. To prevent this, our PMF-GAN model refines the discriminator’s capabilities, penalizing the generator for producing overly similar outputs, thereby promoting diversity,” explains Dr. Lee. Their findings were made available online on July 18, 2024, and published in Volume 164 of the journal Applied Soft Computing in October 2024.

The PMF-GAN framework introduces two key enhancements. First, it employs kernel optimization to refine the discriminator’s ability, offering a significant advantage in addressing mode collapse and vanishing gradients. Kernels are mathematical functions that transform data into a higher-dimensional space, making it easier to detect patterns even in complex data. The discriminator’s output is processed through kernel functions, producing a kernel density estimation (KDE). Second, PMF-GAN applies a mathematical technique called histogram transformation to the KDE output, enabling a more intuitive analysis of the results. During training, the model minimizes the difference between the kernel-histogram-transformed fake and real distributions, a measure called the PMF distance.
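The pipeline this paragraph describes can be sketched as follows: smooth two sets of discriminator scores with a Gaussian KDE, histogram-transform each KDE into a discrete probability mass function, and take a distance between the two PMFs. This is a simplified numpy illustration, not the authors' implementation; the score distributions, bandwidth, bin count, and Euclidean distance are all placeholder choices, which is consistent with the framework leaving the kernel and distance pluggable:

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth=0.1):
    """Gaussian kernel density estimate of `samples`, evaluated on `grid`."""
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    weights = np.exp(-0.5 * diffs**2)
    return weights.sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

def pmf_from_kde(density, n_bins=20):
    """Histogram transformation: bin the KDE curve into a normalized PMF."""
    mass = np.array([b.sum() for b in np.array_split(density, n_bins)])
    return mass / mass.sum()          # bins now sum to 1, i.e. a valid PMF

def pmf_distance(p, q):
    """Euclidean distance between two PMFs; other metrics (L1, etc.) plug in here."""
    return float(np.sqrt(((p - q) ** 2).sum()))

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 1.0, 200)     # discriminator outputs live in [0, 1]
real_scores = rng.beta(5, 2, 500)     # made-up stand-ins for D(real) outputs
fake_scores = rng.beta(2, 5, 500)     # made-up stand-ins for D(fake) outputs

p = pmf_from_kde(gaussian_kde(real_scores, grid))
q = pmf_from_kde(gaussian_kde(fake_scores, grid))
print(pmf_distance(p, p))             # identical distributions -> 0.0
print(pmf_distance(p, q) > 0.0)       # mismatched distributions -> positive
```

Minimizing such a distance during training would pull the fake-score PMF toward the real-score PMF, which is the intuition behind the PMF distance objective.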

Specifically, this approach allows for the use of various mathematical distance functions and kernel functions, letting PMF-GAN be adapted to different data types and learning objectives. Additionally, PMF-GAN can be integrated into existing improved GAN architectures for even better performance. In experiments, PMF-GAN outperformed several baseline models in visual quality and evaluation metrics across multiple datasets. On the Animal FacesHQ dataset, it achieved a 56.9% improvement in the inception score and a 61.5% improvement in the Fréchet inception distance (FID) compared with the conventional WGAN-GP model.
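For context on the FID metric cited above: it measures how far the feature statistics (mean and covariance) of generated images are from those of real images, so lower is better. A minimal sketch of the closed-form Fréchet distance, run here on synthetic stand-in feature vectors rather than actual Inception-network embeddings:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_a, feats_b):
    """Fréchet distance between Gaussian fits to two feature sets (rows = samples)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):      # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    return float(((mu_a - mu_b) ** 2).sum()
                 + np.trace(cov_a + cov_b - 2 * covmean))

rng = np.random.default_rng(2)
real = rng.normal(0.0, 1.0, (1000, 8))    # stand-in "real" features
close = rng.normal(0.1, 1.0, (1000, 8))   # generator nearly matching real stats
far = rng.normal(2.0, 1.0, (1000, 8))     # generator far from real stats
print(fid(real, close) < fid(real, far))  # closer statistics -> lower FID
```

A 61.5% improvement in FID thus means the generated images' feature statistics moved substantially closer to those of the real data.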

“The flexibility and performance improvements presented by PMF-GAN open new possibilities for generating synthetic data in various technological and digital fields. In healthcare, it improves image generation. It also enables more realistic and varied computer-generated visuals for films, video games, and virtual reality experiences,” remarks Dr. Lee. “As AI-generated content becomes more prevalent in our daily lives, our method enhances the quality and diversity of that content, and will ensure that AI continues to be a valuable tool for human creativity and problem-solving,” he concludes.

Reference

Title of original paper: Stabilized GAN models training with kernel-histogram transformation and probability mass function distance

Journal: Applied Soft Computing

DOI: https://doi.org/10.1016/j.asoc.2024.112003

About Chung-Ang University
Website: https://neweng.cau.ac.kr/index.do 

Contact:
Sungki Shin
02-820-6614
384930@email4pr.com 

View original content to download multimedia: https://www.prnewswire.com/news-releases/chung-ang-university-researchers-develop-a-new-gan-model-that-stabilizes-training-and-performance-302278719.html

SOURCE Chung-Ang University
