Computer science
Artificial intelligence
Generator (circuit theory)
Image editing
Image (mathematics)
Semantics (computer science)
Generative grammar
Pattern recognition (psychology)
Computer vision
Power (physics)
Physics
Quantum mechanics
Programming language
Authors
Ruiguo Yu,Ping Sun,Xuewei Li,Ruixuan Zhang,Zhiqiang Liu,Jie Gao
Identifier
DOI:10.1007/978-981-99-4749-2_2
Abstract
Deep learning-based image generation methods can alleviate the class imbalance encountered when training ultrasound image classification models. However, one problem existing image generation methods face on ultrasound images is that the generated images have unreasonable semantics, because no corresponding constraints are placed on the generator. Recently, image attribute editing methods have matured; they aim to manipulate an image toward a desired, semantically plausible attribute while preserving other details. This paper therefore proposes to accomplish the generation task via attribute editing, which constrains the generated images to have a plausible anatomical structure. Nevertheless, because the distributions of lesion and healthy tissue in ultrasound images differ only slightly, current attribute editing models prematurely judge that the original attribute has been fully manipulated into the target attribute, so the target image usually retains some original-attribute features. To address this, a Prior-Guided Generative Adversarial Net (PGedGAN), built on image attribute editing technology to guide complete attribute manipulation, is proposed in this paper. The prior has two parts: 1) a Location Prior, which constrains where attribute editing occurs by separating foreground from background, and 2) a Content Prior, which enforces complete manipulation of the original attribute by simultaneously minimizing the smoothness of the target attribute region and the distance between the semantic features of the target sample and the original image. Experiments demonstrate the effectiveness of our method on downstream classification tasks and on attribute editing itself, covering image quality, editing rationality, and manipulation completeness.
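The Content Prior described above combines two penalties: a smoothness term over the edited (target attribute) region and a feature-space distance between the target sample and the original image. The abstract does not give the exact formulation, so the following is only a minimal sketch of such a combined loss; the function names, the choice of total variation as the smoothness measure, the mean-squared feature distance, and the weights `w_tv` and `w_feat` are all assumptions for illustration.

```python
import numpy as np

def total_variation(x):
    # Mean absolute difference between neighboring pixels of a 2-D image;
    # a common proxy for (non-)smoothness.
    dh = np.abs(np.diff(x, axis=0)).mean()
    dw = np.abs(np.diff(x, axis=1)).mean()
    return dh + dw

def content_prior_loss(target_img, mask, feat_target, feat_orig,
                       w_tv=1.0, w_feat=1.0):
    """Hypothetical content-prior loss: smoothness of the edited region
    plus semantic-feature distance to the original image."""
    region = target_img * mask                        # restrict the smoothness term
                                                      # to the target attribute region
    l_tv = total_variation(region)                    # penalize residual texture there
    l_feat = np.mean((feat_target - feat_orig) ** 2)  # keep semantics near the original
    return w_tv * l_tv + w_feat * l_feat
```

In a real GAN training loop this term would be added to the adversarial loss, with the mask supplied by the Location Prior's foreground/background division.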