Authors
Yahui Liu, Bin Tian, Yisheng Lv, Lingxi Li, Fei-Yue Wang
Source
Journal: Cornell University - arXiv
Date: 2023-03-08
Citations: 1
Identifier
DOI: 10.48550/arxiv.2303.04599
Abstract
Recently, there have been several attempts to apply Transformers to 3D point cloud classification. To reduce computation, most existing methods focus on local spatial attention, but they ignore point content and fail to establish relationships between distant yet relevant points. To overcome this limitation of local spatial attention, we propose a point content-based Transformer architecture, called PointConT for short. It exploits the locality of points in the feature space (content-based): sampled points with similar features are clustered into the same class, and self-attention is computed within each class, enabling an effective trade-off between capturing long-range dependencies and computational complexity. We further introduce an Inception feature aggregator for point cloud classification, which uses parallel structures to aggregate high-frequency and low-frequency information in separate branches. Extensive experiments show that our PointConT model achieves remarkable performance on point cloud shape classification. In particular, our method reaches 90.3% Top-1 accuracy on the hardest setting of ScanObjectNN. Source code of this paper is available at https://github.com/yahuiliu99/PointConT.
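The core idea above — group points by feature similarity rather than spatial proximity, then attend only within each group — can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it uses plain k-means for the content-based grouping and single-head scaled dot-product attention per cluster; all names (`content_based_attention`, `num_clusters`) are hypothetical.

```python
import numpy as np

def content_based_attention(feats, num_clusters=4, iters=10, seed=0):
    """Sketch of content-based local attention (illustrative only):
    cluster point features by similarity, then run self-attention
    independently inside each cluster."""
    rng = np.random.default_rng(seed)
    n, d = feats.shape

    # Simple k-means in feature space: this is the "content-based"
    # grouping step (the paper's actual clustering may differ).
    centers = feats[rng.choice(n, num_clusters, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(feats[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for k in range(num_clusters):
            if (labels == k).any():
                centers[k] = feats[labels == k].mean(axis=0)

    # Scaled dot-product self-attention restricted to each cluster,
    # so cost scales with cluster size instead of n**2.
    out = np.empty_like(feats)
    for k in range(num_clusters):
        idx = np.where(labels == k)[0]
        if idx.size == 0:
            continue
        x = feats[idx]                            # (m, d) cluster members
        scores = x @ x.T / np.sqrt(d)             # (m, m) attention scores
        scores -= scores.max(axis=1, keepdims=True)
        attn = np.exp(scores)
        attn /= attn.sum(axis=1, keepdims=True)   # row-wise softmax
        out[idx] = attn @ x                       # attend within cluster only
    return out, labels

feats = np.random.default_rng(1).normal(size=(64, 8))
out, labels = content_based_attention(feats)
print(out.shape, labels.shape)
```

Because attention is computed per cluster, distant points can still attend to each other as long as their features are similar — which is exactly the trade-off the abstract describes between long-range dependency modeling and computational cost.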