Adversarial system
Generative grammar
Computer science
Face (sociological concept)
Class (philosophy)
Artificial intelligence
Process (computing)
Quality (philosophy)
Machine learning
Focus (optics)
Facial recognition system
Algorithm
Pattern recognition (psychology)
Sociology
Social science
Philosophy
Physics
Epistemology
Optics
Operating system
Authors
Matthew Gusdorff, Alvin Grissom, Jeová Farias Sales Rocha Neto, Yupeng Lin, Ryan Trotter, Ryan Lei
Abstract
Advances in computer science, specifically in the development and use of generative machine learning, have provided powerful new tools for psychologists to create synthetic human faces as stimuli. These tools produce high-quality, photorealistic face images with many advantages, including reducing typical ethical and privacy concerns and generating face images from minoritized communities that are underrepresented in existing face databases. However, machine learning-based face generation and manipulation software can introduce bias into the research process in a number of ways, threatening the validity of studies. The present article summarizes how one class of recently popular algorithms for generating faces, generative adversarial networks (GANs), works, how GANs are controlled, and where biases (with a particular focus on racial biases) emerge throughout these processes. We discuss recommendations for mitigating these biases, as well as how these concepts manifest in similar modern text-to-image algorithms.
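For readers unfamiliar with the adversarial setup the abstract refers to, the following is a minimal illustrative sketch of a GAN training step, not taken from the article itself. The network sizes, optimizer settings, and the random stand-in data are placeholder assumptions chosen only to show how a generator and a discriminator are trained against each other; a real face-generation study would train much larger networks on a face dataset, which is precisely where dataset bias can enter.

```python
# Minimal GAN training-step sketch (PyTorch); all dimensions and hyperparameters are
# illustrative assumptions, not the article's configuration.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784  # assumed toy sizes (e.g., flattened 28x28 images)

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # real-vs-fake logit
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise)

    # Discriminator update: label real images 1 and generated images 0.
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator label its outputs as real.
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Usage with random stand-in data; a real study would pass batches from a face dataset.
train_step(torch.randn(32, image_dim))
```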