DILF: Differentiable rendering-based multi-view Image–Language Fusion for zero-shot 3D shape understanding

Topics: Rendering (computer graphics) · Computer science · Differentiable functions · Artificial intelligence · Computer vision · Natural language processing · Mathematics · Mathematical analysis
Authors
Xin Ning, Zaiyang Yu, Lusi Li, Weijun Li, Prayag Tiwari
Source
Journal: Information Fusion [Elsevier BV]
Volume 102, Article 102033 · Cited by: 36
Identifier
DOI: 10.1016/j.inffus.2023.102033
Abstract

Zero-shot 3D shape understanding aims to recognize "unseen" 3D categories that are not present in the training data. Recently, Contrastive Language-Image Pre-training (CLIP) has shown promising open-world performance on zero-shot 3D shape understanding tasks through information fusion between the language and 3D modalities. It first renders 3D objects into multiple 2D image views and then learns the semantic relationships between textual descriptions and images, enabling the model to generalize to new, unseen categories. However, existing studies in zero-shot 3D shape understanding rely on predefined rendering parameters, resulting in repetitive, redundant, and low-quality views. This limitation hinders the model's ability to fully comprehend 3D shapes and adversely impacts text-image fusion in a shared latent space. To this end, we propose a novel approach called Differentiable rendering-based multi-view Image-Language Fusion (DILF) for zero-shot 3D shape understanding. Specifically, DILF leverages large language models (LLMs) to generate textual prompts enriched with 3D semantics and designs a differentiable renderer with learnable rendering parameters to produce representative multi-view images. These rendering parameters are iteratively updated using a text-image fusion loss, which guides the parameter regression and allows the model to determine the optimal viewpoint positions for each 3D object. A group-view mechanism is then introduced to model interdependencies across views, enabling efficient information fusion for a more comprehensive 3D shape understanding. Experimental results demonstrate that DILF outperforms state-of-the-art methods on zero-shot 3D classification while maintaining competitive performance on standard 3D classification. The code is available at https://github.com/yuzaiyang123/DILP.
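The key mechanism described above (learnable rendering parameters updated by a text-image fusion loss) can be illustrated with a toy, self-contained sketch. All names here are hypothetical: the "renderer" is a trivial function mapping a viewpoint angle to a 2-D feature, and the "text embedding" is a fixed unit vector; the actual DILF pipeline uses a full differentiable renderer and CLIP encoders. The point is only that, because rendering is differentiable, the viewpoint itself can be regressed by gradient descent on the fusion loss.

```python
import math

# Toy stand-in for a CLIP text embedding (unit vector).
TEXT_EMB = (0.0, 1.0)

def render(azimuth):
    """Toy differentiable renderer: viewpoint angle -> 2-D image feature."""
    return (math.cos(azimuth), math.sin(azimuth))

def fusion_loss(azimuth):
    """Text-image fusion loss: negative cosine similarity.

    Both features are unit vectors, so the dot product suffices.
    """
    ix, iy = render(azimuth)
    tx, ty = TEXT_EMB
    return -(ix * tx + iy * ty)

def grad(azimuth):
    """Analytic gradient of the loss w.r.t. the rendering parameter.

    d/da [-(cos(a)*tx + sin(a)*ty)] = sin(a)*tx - cos(a)*ty
    """
    tx, ty = TEXT_EMB
    return math.sin(azimuth) * tx - math.cos(azimuth) * ty

# Iteratively regress the viewpoint, as DILF does with its rendering
# parameters under the text-image fusion loss.
azimuth, lr = 0.3, 0.5
for _ in range(100):
    azimuth -= lr * grad(azimuth)

# The learned viewpoint aligns the rendered feature with the text
# embedding: azimuth converges to pi/2, where render(azimuth) == TEXT_EMB.
print(round(azimuth, 3))  # -> 1.571
```

In the paper's setting the same loop runs over camera poses of a mesh renderer with gradients supplied by automatic differentiation, but the optimization structure is the one shown here.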
