Real-time human segmentation by BowtieNet and a SLAM-based human AR system
Zhao, Xiaomei 1,2; Tang, Fulin 1,2; Wu, Yihong 1,2
Journal: Virtual Reality & Intelligent Hardware
Year: 2019
Volume: 1, Issue: 5, Pages: 511-524
Abstract

Background: Generally, it is difficult to obtain accurate pose and depth for a non-rigid moving object from a single RGB camera to create augmented reality (AR). In this study, we build an AR system from a single RGB camera for a non-rigid moving human by accurately computing pose and depth; its two key tasks are segmentation and monocular Simultaneous Localization and Mapping (SLAM). Most existing monocular SLAM systems are designed for static scenes, whereas in this AR system the human body is always moving and non-rigid.

Methods: To make the SLAM system suitable for a moving human, we first segment the rigid parts of the human in each frame. Each segmented moving body part can be regarded as a static object, and the relative motion between that body part and the camera can be treated as camera motion, so typical SLAM systems designed for static scenes can then be applied. In the segmentation step, we first employ the proposed BowtieNet, which inserts the atrous spatial pyramid pooling (ASPP) of DeepLab between the encoder and decoder of SegNet, to segment the human in the original frame; we then use color information to extract the face from the segmented human area.

Results: Based on the human segmentation results and a monocular SLAM system, the system can change the video background and add a virtual object to the human.

Conclusions: Experiments on human image segmentation datasets show that BowtieNet achieves state-of-the-art human segmentation performance and is fast enough for real-time segmentation. Experiments on videos show that the proposed AR system can robustly add a virtual object to the human and accurately change the video background.
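The abstract specifies BowtieNet's architecture only at a high level: a SegNet-style encoder-decoder with DeepLab's ASPP module inserted at the bottleneck. The PyTorch sketch below is a minimal illustration of that idea under assumed channel widths, depth, and dilation rates; BowtieNetSketch, ASPP, and all hyperparameters here are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code) of the BowtieNet idea described in the
# abstract: a SegNet-style encoder-decoder with DeepLab's ASPP at the bottleneck.
# Channel widths, depth, and dilation rates are assumed for illustration.
import torch
import torch.nn as nn


def conv_bn_relu(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )


class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated convs, concatenated."""
    def __init__(self, c_in, c_out, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.project = conv_bn_relu(c_out * len(rates), c_out)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


class BowtieNetSketch(nn.Module):
    """SegNet-like encoder/decoder (pooling indices reused for unpooling)
    with ASPP in between, producing per-pixel human/background logits."""
    def __init__(self, num_classes=2, widths=(64, 128, 256)):
        super().__init__()
        self.enc = nn.ModuleList()
        c_prev = 3
        for c in widths:
            self.enc.append(conv_bn_relu(c_prev, c))
            c_prev = c
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.aspp = ASPP(widths[-1], widths[-1])
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        # Decoder mirrors the encoder: 256->128, 128->64, 64->64.
        self.dec = nn.ModuleList()
        rev = list(reversed(widths))
        for c_in, c_out in zip(rev, rev[1:] + [widths[0]]):
            self.dec.append(conv_bn_relu(c_in, c_out))
        self.classifier = nn.Conv2d(widths[0], num_classes, 1)

    def forward(self, x):
        indices, sizes = [], []
        for block in self.enc:
            x = block(x)
            sizes.append(x.size())
            x, idx = self.pool(x)
            indices.append(idx)
        x = self.aspp(x)
        for block in self.dec:
            x = self.unpool(x, indices.pop(), output_size=sizes.pop())
            x = block(x)
        return self.classifier(x)


if __name__ == "__main__":
    logits = BowtieNetSketch()(torch.randn(1, 3, 224, 224))
    print(logits.shape)  # torch.Size([1, 2, 224, 224])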

Keywords: Augmented reality; Moving object; Human segmentation; Reconstruction and tracking; Camera pose
DOI: 10.1016/j.vrih.2019.08.002
Indexed by: Other
Language: English
Sub-direction classification: 3D Vision
Document type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/38544
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems_Robot Vision
Corresponding author: Wu, Yihong
Affiliations:
1. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
2. University of Chinese Academy of Sciences, Beijing 100049, China
First author affiliation: National Laboratory of Pattern Recognition
Corresponding author affiliation: National Laboratory of Pattern Recognition
Recommended citation:
GB/T 7714: Zhao, Xiaomei, Tang, Fulin, Wu, Yihong. Real-time human segmentation by BowtieNet and a SLAM-based human AR system[J]. Virtual Reality & Intelligent Hardware, 2019, 1(5): 511-524.
APA: Zhao, Xiaomei, Tang, Fulin, & Wu, Yihong. (2019). Real-time human segmentation by BowtieNet and a SLAM-based human AR system. Virtual Reality & Intelligent Hardware, 1(5), 511-524.
MLA: Zhao, Xiaomei, et al. "Real-time human segmentation by BowtieNet and a SLAM-based human AR system". Virtual Reality & Intelligent Hardware 1.5 (2019): 511-524.
Files in this item:
VRIH期刊-2019-zhao-全文.pdf (16815 KB), journal article, author accepted manuscript, open access, license: CC BY-NC-SA