Learning Depth-aware Heatmaps for 3D Human Pose Estimation in the Wild
Chen, Zerui 1,3
2019-08 | |
Conference Name | British Machine Vision Conference
Conference Dates | 2019.9.9-2019.9.12
Conference Location | Cardiff, UK
Abstract | In this paper, we explore determining 3D human pose directly from monocular image data. While current state-of-the-art approaches employ a volumetric representation to predict a per-voxel likelihood for each human joint, the network output is memory-intensive, making it hard to deploy on mobile devices. To reduce the output dimension, we decompose the volumetric representation into 2D depth-aware heatmaps and per-joint depth estimation. We propose to learn depth-aware 2D heatmaps via associative embeddings to reconstruct the connection between each 2D joint location and its corresponding depth. Our approach achieves a good trade-off between complexity and performance. We conduct extensive experiments on the popular benchmark Human3.6M and advance the state-of-the-art accuracy for 3D human pose estimation in the wild.
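The decomposition the abstract describes can be sketched as follows. The joint count, resolutions, and the argmax-based decoding below are illustrative assumptions, not the paper's exact architecture (the paper learns the 2D-location/depth association via associative embeddings rather than simple indexing):

```python
import numpy as np

K, D, H, W = 17, 64, 64, 64  # joints, depth bins, heatmap size (assumed dims)

# Volumetric output: a per-voxel likelihood for every joint.
volumetric = K * D * H * W            # 17 * 64^3 = 4,456,448 values

# Decomposed output: a 2D heatmap per joint plus one depth value per joint.
decomposed = K * H * W + K            # 17 * 64^2 + 17 = 69,649 values
print(f"output shrinks ~{volumetric / decomposed:.0f}x")  # ~64x

# Toy decoding: read each joint's depth at its 2D heatmap peak.  Random
# arrays stand in for network predictions purely to show the data flow.
rng = np.random.default_rng(0)
heatmaps = rng.random((K, H, W))      # stand-in for predicted 2D heatmaps
depth_maps = rng.random((K, H, W))    # stand-in for predicted joint depths

pose_3d = np.zeros((K, 3))
for k in range(K):
    y, x = np.unravel_index(np.argmax(heatmaps[k]), (H, W))
    pose_3d[k] = (x, y, depth_maps[k, y, x])
```

Under these assumed dimensions, the decomposed output is roughly the depth-bin count (D) times smaller than the volumetric one, which is the memory saving the abstract motivates.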
Indexed By | EI
Research Direction (Sub-direction) | Object Detection, Tracking and Recognition
Document Type | Conference Paper
Identifier | http://ir.ia.ac.cn/handle/173211/44427
Collection | Pattern Recognition Laboratory
Corresponding Author | Chen, Zerui
Affiliations | 1.Center for Research on Intelligent Perception and Computing (CRIPAC), National Laboratory of Pattern Recognition (NLPR) 2.University of Chinese Academy of Sciences (UCAS) 3.Chinese Academy of Sciences Artificial Intelligence Research (CAS-AIR) 4.School of Astronautics, Beihang University 5.Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Institute of Automation, Chinese Academy of Sciences (CASIA)
First Author Affiliation | National Laboratory of Pattern Recognition
Corresponding Author Affiliation | National Laboratory of Pattern Recognition
Recommended Citation (GB/T 7714) | Chen, Zerui, Guo, Yiru, Huang, Yan, et al. Learning Depth-aware Heatmaps for 3D Human Pose Estimation in the Wild[C]. British Machine Vision Conference, 2019.
Files in This Item |
File Name/Size | Document Type | Version | Access | License
Learning Depth-aware (1410KB) | Conference Paper | | Open Access | CC BY-NC-SA
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.