Topics
Adversarial system · Computer science · Generative grammar · Generative adversarial network · Artificial intelligence · Mathematical optimization · Machine learning · Deep learning · Mathematics
Authors
Bahram Farhadinia, Mohammad Reza Ahangari, Aghileh Heydari, Amitava Datta
Identifier
DOI: 10.1016/j.eswa.2024.123413
Abstract
Interest in Generative Adversarial Networks (GANs) continues to grow, with diverse GAN variants emerging for applications across many domains. However, substantial challenges persist in advancing GANs. Effective training of deep learning models, including GANs, relies heavily on well-defined loss functions. In particular, establishing a logical and reciprocal connection between the training image and the generator is crucial. In this context, we introduce a novel GAN loss function that employs the Sugeno complement concept to logically link the training image and the generator. The proposed loss function is a composition of logical elements, and we show through formal analysis that it outperforms an existing loss function from the literature. This superiority is further substantiated by comprehensive experiments, which demonstrate that the loss function facilitates smooth convergence during training and effectively mitigates mode collapse in GANs.
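The abstract does not give the paper's actual formula, but the Sugeno complement it builds on is standard in fuzzy set theory: N_λ(a) = (1 − a)/(1 + λa) for λ > −1, which reduces to the ordinary complement 1 − a at λ = 0 and is involutive (N_λ(N_λ(a)) = a). The sketch below is a minimal, hypothetical illustration of how such a complement could replace the 1 − D(G(z)) term in a cross-entropy-style discriminator loss; it is NOT the loss function proposed in the paper, and the function names and the λ value are my own assumptions.

```python
import math


def sugeno_complement(a: float, lam: float) -> float:
    """Sugeno (lambda-) complement: N_lam(a) = (1 - a) / (1 + lam * a).

    Defined for lam > -1; lam = 0 recovers the standard complement 1 - a.
    """
    if lam <= -1:
        raise ValueError("Sugeno complement requires lam > -1")
    return (1.0 - a) / (1.0 + lam * a)


def sugeno_bce_discriminator_loss(d_real: float, d_fake: float,
                                  lam: float = 0.5) -> float:
    """Hypothetical discriminator loss for illustration only.

    Replaces the usual -log(1 - D(G(z))) fake term with
    -log(N_lam(D(G(z)))), i.e. the Sugeno complement of the
    discriminator's score on a generated sample. This is a sketch of
    the general idea, not the paper's proposed loss.
    """
    eps = 1e-12  # guard against log(0)
    real_term = -math.log(max(d_real, eps))
    fake_term = -math.log(max(sugeno_complement(d_fake, lam), eps))
    return real_term + fake_term
```

A useful sanity check on the design: because N_λ is involutive and strictly decreasing on [0, 1], it behaves like a tunable negation, so the fake term still goes to zero as D(G(z)) → 0 and blows up as D(G(z)) → 1, with λ shaping the penalty in between.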