Discriminative 3D Morphable Model Fitting
Zhu XY(朱翔昱); Yan JJ(闫俊杰); Yi D(易东); Lei Z(雷震); Li ZQ(李子青)
2015
Conference: IEEE International Conference on Automatic Face and Gesture Recognition (FG)
Conference dates: 4-8 May, 2015
Conference location: Ljubljana, Slovenia
Abstract
This paper presents a novel discriminative method for estimating 3D shape from a single image with a 3D Morphable Model (3DMM). Most traditional 3DMM fitting methods depend on the analysis-by-synthesis framework, which searches for the best parameters by minimizing the difference between the input image and the model appearance. These methods are highly sensitive to initialization and must rely on stochastic optimization to handle the local-minimum problem, which is usually time-consuming. To solve this problem, we take a different direction and estimate shape parameters by learning a regressor instead of minimizing the appearance difference. Compared with the traditional analysis-by-synthesis framework, the new discriminative approach makes it possible to utilize large databases to train a robust fitting model and to reconstruct shape from image features directly, accurately, and efficiently. We compare our method with two popular 3DMM fitting algorithms on the FRGC database. Experimental results show that our approach significantly outperforms the state of the art in efficiency, robustness, and accuracy.
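The core idea in the abstract, learning a direct mapping from image features to 3DMM shape parameters rather than iteratively minimizing an appearance difference, can be sketched in a few lines. Note this is a minimal illustrative sketch only: the ridge-regression form, the feature and parameter dimensions, and the synthetic training data are all assumptions for demonstration, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

n_train, feat_dim, n_params = 500, 128, 30  # illustrative sizes, not from the paper

# Synthetic stand-in for training data: image features X paired with
# ground-truth 3DMM shape parameters A (here generated from a hidden
# linear map plus noise, purely for demonstration).
W_true = rng.normal(size=(feat_dim, n_params))
X = rng.normal(size=(n_train, feat_dim))                       # image features
A = X @ W_true + 0.01 * rng.normal(size=(n_train, n_params))   # shape parameters

# Train a ridge regressor: W = (X^T X + lam*I)^-1 X^T A
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(feat_dim), X.T @ A)

# At test time the shape parameters come from a single matrix multiply,
# with no iterative appearance optimization and no sensitivity to an
# initialization -- the efficiency/robustness argument in the abstract.
x_new = rng.normal(size=(1, feat_dim))
alpha_hat = x_new @ W
print(alpha_hat.shape)  # (1, 30)
```

With enough training pairs, the learned regressor recovers the underlying mapping well; the trade-off versus analysis-by-synthesis is that fitting quality now depends on the coverage of the training database rather than on per-image optimization.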
Indexed by: EI
Document type: Conference paper
Identifier: http://ir.ia.ac.cn/handle/173211/14783
Research group: National Laboratory of Pattern Recognition, Biometric Recognition and Security Technology Research
Affiliation: Center for Biometrics and Security Research, Institute of Automation, Chinese Academy of Sciences
Recommended citation (GB/T 7714):
Zhu XY, Yan JJ, Yi D, et al. Discriminative 3D Morphable Model Fitting[C], 2015.
File in this item: 07163096.pdf (7311 KB), conference paper, open access, license: CC BY-NC-SA