Computer science
Face (sociological concept)
Simplicity
Artificial intelligence
Exposure
Image editing
Video editing
Distrust
Quality (philosophy)
Computer vision
Human–computer interaction
Image (mathematics)
Sociology
Law
Astronomy
Philosophy
Physics
Epistemology
Social science
Political science
Authors
Falko Matern, Christian Rieß, Marc Stamminger
Identifier
DOI: 10.1109/wacvw.2019.00020
Abstract
High-quality face editing in videos is a growing concern and spreads distrust in video content. However, upon closer examination, many face editing algorithms exhibit artifacts that resemble classical computer vision issues stemming from face tracking and editing. As a consequence, we ask: how difficult is it to expose artificial faces produced by current generators? To this end, we review current facial editing methods and several characteristic artifacts from their processing pipelines. We also show that relatively simple visual artifacts can already be quite effective in exposing such manipulations, including Deepfakes and Face2Face. Since the methods are based on visual features, they are easily explainable even to non-technical experts. The methods are easy to implement and can be adapted rapidly to new manipulation types with little data available. Despite their simplicity, the methods achieve AUC values of up to 0.866.
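The abstract reports detection performance as an AUC (area under the ROC curve), which is the probability that a randomly chosen manipulated sample receives a higher artifact score than a randomly chosen real one. A minimal sketch of that computation, using the Mann–Whitney U formulation and purely hypothetical feature scores (not data from the paper):

```python
def auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (manipulated, real) pairs in which the manipulated sample
    scores higher; ties count as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical artifact scores, e.g. from a simple visual feature
# such as an eye-colour inconsistency measure (illustrative values):
fake_scores = [0.9, 0.8, 0.7, 0.6]
real_scores = [0.5, 0.4, 0.65, 0.1]
print(auc(fake_scores, real_scores))  # → 0.9375
```

A score of 0.5 would mean the feature is no better than chance; values approaching 1.0 indicate that the artifact reliably separates manipulated from genuine faces, which is the sense in which the paper's 0.866 should be read.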