Knowledge Commons of Institute of Automation, CAS
Relation-Shape Convolutional Neural Network for Point Cloud Analysis
Liu, Yongcheng1,2; Fan, Bin1; Xiang, Shiming1; Pan, Chunhong1
2019
Conference Name | IEEE Conference on Computer Vision and Pattern Recognition |
Conference Date | 2019-6-16 |
Conference Venue | Long Beach, CA, USA |
Abstract | Point cloud analysis is very challenging, as the shape implied in irregular points is difficult to capture. In this paper, we propose RS-CNN, namely, Relation-Shape Convolutional Neural Network, which extends regular grid CNN to irregular configurations for point cloud analysis. The key to RS-CNN is learning from relation, i.e., the geometric topology constraint among points. Specifically, the convolutional weight for a local point set is forced to learn a high-level relation expression from predefined geometric priors between a point sampled from this point set and the others. In this way, an inductive local representation with explicit reasoning about the spatial layout of points can be obtained, which leads to strong shape awareness and robustness. With this convolution as a basic operator, a hierarchical architecture, RS-CNN, can be developed to achieve contextual shape-aware learning for point cloud analysis. Extensive experiments on challenging benchmarks across three tasks verify that RS-CNN achieves state-of-the-art performance. |
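The convolution described in the abstract can be sketched in NumPy for a single sampled point: a low-level geometric relation vector between the point and each neighbor is lifted by a shared MLP into per-channel weights, which then modulate the neighbor features before a symmetric aggregation. The specific relation vector (Euclidean distance plus raw and relative coordinates) and the two-layer MLP sizes are illustrative assumptions; this is a minimal sketch, not the full method (e.g., the channel-raising mapping is omitted).

```python
import numpy as np

def relation_shape_conv(center, neighbors, neighbor_feats, w1, w2):
    """One relation-shape convolution step for a single sampled point.

    center:         (3,)       xyz of the sampled point x_i
    neighbors:      (k, 3)     xyz of its local neighbors x_j
    neighbor_feats: (k, c_in)  features f_j of the neighbors
    w1, w2:         weights of a shared 2-layer MLP (hypothetical sizes)
                    mapping the relation vector to per-channel weights
    """
    k = len(neighbors)
    # Low-level geometric relation h_ij: distance, offset, and the two
    # point coordinates (one choice of predefined geometric prior).
    diff = neighbors - center                            # (k, 3)
    dist = np.linalg.norm(diff, axis=1, keepdims=True)   # (k, 1)
    h = np.concatenate(
        [dist, diff, np.tile(center, (k, 1)), neighbors], axis=1)  # (k, 10)

    # Shared MLP lifts h_ij to a high-level relation weight w_ij.
    hidden = np.maximum(h @ w1, 0.0)                     # ReLU, (k, hid)
    w = hidden @ w2                                      # (k, c_in)

    # Channel-wise weighting, then a symmetric aggregation
    # (max-pooling) for permutation invariance over neighbors.
    return np.max(w * neighbor_feats, axis=0)            # (c_in,)

rng = np.random.default_rng(0)
k, c_in, hid = 8, 16, 32
center = rng.normal(size=3)
neighbors = center + 0.1 * rng.normal(size=(k, 3))
feats = rng.normal(size=(k, c_in))
w1 = rng.normal(size=(10, hid))
w2 = rng.normal(size=(hid, c_in))
out = relation_shape_conv(center, neighbors, feats, w1, w2)
print(out.shape)  # (16,)
```

Because the aggregation is a max over neighbors, the output is invariant to the ordering of the input points, which is what makes the operator suitable for irregular point sets.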
Indexed By | EI |
Language | English |
Sub-direction Classification | 3D Vision |
Document Type | Conference Paper |
Identifier | http://ir.ia.ac.cn/handle/173211/38549 |
Collection | State Key Laboratory of Multimodal Artificial Intelligence Systems, Advanced Spatio-temporal Data Analysis and Learning |
Corresponding Author | Fan, Bin |
Affiliation | 1. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences; 2. School of Artificial Intelligence, University of Chinese Academy of Sciences |
First Author Affiliation | National Laboratory of Pattern Recognition |
Corresponding Author Affiliation | National Laboratory of Pattern Recognition |
Recommended Citation (GB/T 7714) | Liu, Yongcheng, Fan, Bin, Xiang, Shiming, et al. Relation-Shape Convolutional Neural Network for Point Cloud Analysis[C], 2019. |
Files in This Item |
File Name/Size | Document Type | Version Type | Access Type | License
2019CVPR-IEEE-Relati(1559KB) | Conference Paper | | Open Access | CC BY-NC-SA
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.