Field (mathematics)
Computer science
Power (physics)
Key (lock)
Data science
Artificial intelligence
Computer security
Mathematics
Quantum mechanics
Pure mathematics
Physics
Authors
Andreea Pocol, Lesley Istead, Sherman Siu, S. Mokhtari, Sara Kodeiri
Identifier
DOI:10.1007/978-3-031-50072-5_34
Abstract
Did you see that crazy photo of Chris Hemsworth wearing a gorgeous, blue ballgown? What about the leaked photo of Bernie Sanders dancing with Sarah Palin? If these don't sound familiar, it's because these events never happened, but with text-to-image generators and deepfake AI technologies, it is effortless for anyone to produce such images. Over the last decade, there has been an explosive rise in research papers, as well as tool development and usage, dedicated to deepfakes, text-to-image generation, and image synthesis. These tools provide users with great creative power, but with that power comes "great responsibility": it is just as easy to produce nefarious and misleading content as it is to produce comedic or artistic content. Therefore, given the recent advances in the field, it is important to assess the impact they may have. In this paper, we conduct meta-research on deepfakes to visualize the evolution of these tools and paper publications. We also identify key authors, research institutions, and papers based on bibliometric data. Finally, we conduct a survey that tests the ability of participants to distinguish photos of real people from fake, AI-generated images of people. Based on our meta-research, survey, and background study, we conclude that humans are falling behind in the race to keep up with AI, and we must be conscious of the societal impact.