Keywords
Computer Science
Artificial Intelligence
Computer Vision
Generator (GAN)
Image
Image Translation
Image Editing
Generative Model
Authors
Beijia Chen,Hongbo Fu,Kun Zhou,Youyi Zheng
Identifier
DOI:10.1109/tvcg.2022.3166159
Abstract
In this article, we present OrthoAligner, a novel method to predict the visual outcome of orthodontic treatment in a portrait image. Unlike the state-of-the-art method, which relies on a 3D teeth model obtained from dental scanning, our method generates realistic alignment effects in images without requiring additional 3D information as input, thus making our system readily accessible to average users. The key to our approach is to employ the 3D geometric information encoded in an unsupervised generative model, i.e., StyleGAN in this article. Instead of directly conducting translation in the image space, we embed the teeth region extracted from a given portrait into the latent space of the StyleGAN generator and propose a novel latent editing method to discover a geometrically meaningful editing path that yields the alignment process in the image space. To blend the edited mouth region with the original portrait image, we further introduce a BlendingNet to remove boundary artifacts and correct color inconsistency. We also extend our method to short video clips by propagating the alignment effects across neighboring frames. We evaluate our method in various orthodontic cases, compare it to the state-of-the-art and competitive baselines, and validate the effectiveness of each component.
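The pipeline the abstract outlines (GAN inversion of the teeth region, latent editing along an alignment path, and a blending step) can be illustrated with a minimal sketch. Every component named below — the inversion encoder E, the pretrained StyleGAN generator G, the editing direction delta, and the blending network B — is a hypothetical placeholder for modules the abstract only describes at a high level; this is not the authors' implementation.

```python
# Minimal conceptual sketch, assuming PyTorch and four pretrained components:
#   E     - a GAN-inversion encoder mapping a mouth crop to StyleGAN W+ codes (assumed)
#   G     - a StyleGAN generator trained on mouth-region images (assumed)
#   delta - a latent direction assumed to encode "misaligned -> aligned teeth"
#   B     - a blending network standing in for the paper's BlendingNet (assumed)
import torch
import torch.nn.functional as F

@torch.no_grad()
def align_teeth(portrait, mouth_crop, mouth_mask, E, G, delta, B, strength=1.0):
    """Return the portrait with the teeth region edited toward alignment.

    portrait:   (1, 3, H, W) image tensor in [-1, 1]
    mouth_crop: (1, 3, h, w) teeth region extracted from the portrait,
                resized to the generator's input resolution
    mouth_mask: (1, 1, H, W) soft mask of the teeth region in the portrait
    strength:   how far to move along the latent editing path; values in
                (0, 1] give a gradual alignment, 1.0 the final result
    """
    # 1. Embed the teeth region into the StyleGAN latent space (GAN inversion).
    w_plus = E(mouth_crop)                      # e.g. shape (1, num_layers, 512)

    # 2. Latent editing: walk along the direction assumed to align the teeth.
    aligned_crop = G(w_plus + strength * delta)

    # 3. Paste the edited crop back; this naive composite still shows boundary
    #    artifacts and color inconsistency. (Spatial registration of the crop
    #    to the portrait frame is glossed over here.)
    aligned_full = F.interpolate(aligned_crop, size=portrait.shape[-2:],
                                 mode="bilinear", align_corners=False)
    naive = mouth_mask * aligned_full + (1.0 - mouth_mask) * portrait

    # 4. The blending network cleans up the seam and corrects colors.
    return B(naive, portrait, mouth_mask)
```

For a short video clip, the same call could be applied per frame with the per-frame latent codes smoothed across neighboring frames to keep the edit temporally stable, which loosely mirrors the propagation step mentioned in the abstract.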