Computer science
Training (meteorology)
Image (mathematics)
Artificial intelligence
Scale (ratio)
Computer vision
Visualization
Training set
Computer graphics (images)
Pattern recognition (psychology)
Cartography
Physics
Meteorology
Geography
Authors
Changjian Chen, Fei Lv, Yiming Guan, Pengcheng Wang, Sheng-Jie Yu, Yifan Zhang, Zhuo Tang
Identifier
DOI:10.1109/tvcg.2025.3567053
Abstract
The performance of computer vision models in certain real-world applications (e.g., rare wildlife observation) is limited by the small number of available images. Expanding datasets using pre-trained generative models is an effective way to address this limitation. However, since the automatic generation process is uncontrollable, the generated images are usually limited in diversity, and some of them are undesired. In this paper, we propose a human-guided image generation method for more controllable dataset expansion. We develop a multi-modal projection method with theoretical guarantees to facilitate the exploration of both the original and generated images. Based on the exploration, users refine the prompts and re-generate images for better performance. Since directly refining the prompts is challenging for novice users, we develop a sample-level prompt refinement method to make it easier. With this method, users only need to provide sample-level feedback (e.g., which samples are undesired) to obtain better prompts. The effectiveness of our method is demonstrated through the quantitative evaluation of the multi-modal projection method, improved model performance in the case study for both classification and object detection tasks, and positive feedback from the experts.
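The abstract's generate–explore–refine loop can be illustrated with a minimal sketch. Everything below is hypothetical scaffolding for exposition, not the authors' implementation: `generate_samples` stands in for a pre-trained generative model, and `refine_prompt` mimics the idea of sample-level feedback, where users flag undesired samples instead of editing the prompt text directly.

```python
def generate_samples(prompt, n=4):
    """Stub for a pre-trained generative model: returns tagged samples.

    In the real pipeline each sample would be a generated image; here a
    string with a variation tag is enough to show the control flow.
    """
    return [f"{prompt} [variation {i}]" for i in range(n)]

def refine_prompt(prompt, samples, undesired):
    """Sample-level prompt refinement (toy heuristic).

    `undesired` is a per-sample boolean list of user feedback. If most
    samples are flagged, append a constraint to steer re-generation.
    """
    if sum(undesired) > len(samples) // 2:
        return prompt + ", photorealistic, in natural habitat"
    return prompt

# One iteration of the loop: generate, collect feedback, refine, re-generate.
prompt = "a rare wildlife species"
samples = generate_samples(prompt)
feedback = [True, True, True, False]  # user marks 3 of 4 samples as undesired
prompt = refine_prompt(prompt, samples, feedback)
samples = generate_samples(prompt)    # next round uses the refined prompt
```

The point of the sketch is the division of labor: the user only supplies per-sample judgments, and the refinement step (a heuristic here, a learned method in the paper) translates them into a better prompt.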