CASIA OpenIR > Pattern Recognition Laboratory
Learning Dense Correspondence for NeRF-Based Face Reenactment
Songlin Yang1,2; Wei Wang2; Yushi Lan3; Xiangyu Fan4; Bo Peng2; Lei Yang4; Jing Dong2
2024
Conference: The 38th AAAI Conference on Artificial Intelligence (AAAI 2024)
Conference Dates: February 20–27, 2024
Conference Location: Vancouver, Canada
Abstract

Face reenactment is challenging due to the need to establish dense correspondence between various face representations for motion transfer. Recent studies have utilized Neural Radiance Fields (NeRF) as the fundamental representation, which further enhanced the performance of multi-view face reenactment in photo-realism and 3D consistency. However, establishing dense correspondence between different face NeRFs is non-trivial, because implicit representations lack the ground-truth correspondence annotations that mesh-based 3D parametric models (e.g., 3DMM) provide through index-aligned vertices. Although aligning the 3DMM space with NeRF-based face representations can realize motion control, it is sub-optimal due to its limited face-only modeling and low identity fidelity. Therefore, we are inspired to ask: can we learn dense correspondence between different NeRF-based face representations without a 3D parametric model prior? To address this challenge, we propose a novel framework that adopts tri-planes as the fundamental NeRF representation and decomposes face tri-planes into three components: canonical tri-planes, identity deformations, and motion. For motion control, our key contribution is a Plane Dictionary (PlaneDict) module, which efficiently maps motion conditions to a linear weighted addition of learnable orthogonal plane bases. To the best of our knowledge, our framework is the first method to achieve one-shot multi-view face reenactment without a 3D parametric model prior. Extensive experiments demonstrate that we produce better results in fine-grained motion control and identity preservation than previous methods. Project page (video demo): https://songlin1998.github.io/planedict/.
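The core PlaneDict idea stated in the abstract — mapping a motion condition to a linear weighted addition of learnable orthogonal plane bases — can be sketched in a few lines. This is an illustrative reconstruction from the abstract only, not the authors' implementation: the plane resolution, number of bases, motion-condition dimensionality, and the linear weight mapping (`W_map`, `plane_dict`) are all hypothetical choices for demonstration, and the orthogonalization via QR is just one way to keep the bases orthogonal.

```python
import numpy as np

rng = np.random.default_rng(0)

K, H, W = 8, 16, 16   # number of plane bases and plane resolution (illustrative)
motion_dim = 6        # dimensionality of the motion condition (assumed)

# "Learnable" orthogonal plane bases: flatten each HxW plane to a vector and
# orthonormalize the set with a QR decomposition, so the bases stay orthogonal.
raw = rng.standard_normal((K, H * W))
q, _ = np.linalg.qr(raw.T)      # columns of q are orthonormal
bases = q.T.reshape(K, H, W)    # K orthogonal plane bases

# Hypothetical linear map from a motion condition to K blending weights.
W_map = rng.standard_normal((K, motion_dim)) * 0.1

def plane_dict(motion_cond):
    """Map a motion condition to a motion plane: a weighted sum of bases."""
    weights = W_map @ motion_cond               # (K,) blending weights
    return np.einsum('k,khw->hw', weights, bases)

motion = rng.standard_normal(motion_dim)
plane = plane_dict(motion)                      # (H, W) motion plane

# Sanity check: the Gram matrix of the flattened bases is the identity,
# confirming orthonormality of the dictionary.
gram = bases.reshape(K, -1) @ bases.reshape(K, -1).T
print(np.allclose(gram, np.eye(K), atol=1e-8), plane.shape)  # → True (16, 16)
```

In a trained model, `bases` and `W_map` would be learned parameters (with an orthogonality constraint on the bases), and the resulting plane would deform the identity-specific tri-planes rather than stand alone.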

Indexed By: EI
Sub-direction Classification (Seven Major Directions): Multimodal Intelligence
State Key Laboratory Planning Direction: Explainable Artificial Intelligence
Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/57514
Collection: Pattern Recognition Laboratory
Corresponding Author: Wei Wang
Affiliations:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences, China
2. CRIPAC & MAIS, Institute of Automation, Chinese Academy of Sciences, China
3. S-Lab, Nanyang Technological University, Singapore
4. SenseTime, China
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Songlin Yang, Wei Wang, Yushi Lan, et al. Learning Dense Correspondence for NeRF-Based Face Reenactment[C]. 2024.
Files in This Item:
AAAI_2024_multi_view (2179 KB) — Conference Paper, Open Access, CC BY-NC-SA
File Name: AAAI_2024_multi_view_face_reenactment__Camera_Ready_.pdf
Format: Adobe PDF
 

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.