Localizing Discriminative Visual Landmarks for Place Recognition
Xin, Zhe (1,2); Cai, Yinghao (1); Lu, Tao (1); Xing, Xiaoxia (1,2); Cai, Shaojun (3); Zhang, Jixiang (1); Yang, Yiping (1); Wang, Yanqing (1)
2019
Conference: 2019 International Conference on Robotics and Automation (ICRA)
Conference dates: May 20-24, 2019
Venue: Palais des congres de Montreal, Montreal, Canada
Abstract

We address the problem of visual place recognition under perceptual changes. The fundamental problem of visual place recognition is generating robust image representations which are not only insensitive to environmental changes but also discriminative between different places. Taking advantage of the feature extraction ability of Convolutional Neural Networks (CNNs), we further investigate how to localize discriminative visual landmarks that positively contribute to the similarity measurement, such as buildings and vegetation. In particular, a Landmark Localization Network (LLN) is designed to indicate which regions of an image are used for discrimination. Detailed experiments are conducted on open-source datasets with varied appearance and viewpoint changes. The proposed approach achieves superior performance against state-of-the-art methods.
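The abstract describes weighting image regions by how much they help discriminate between places. The record does not detail the LLN architecture, so the following is only a minimal illustrative sketch, assuming a generic spatial-attention pooling step: per-location landmark scores (as an LLN might produce) reweight a CNN feature map before aggregation into a single image descriptor. All names and shapes here are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: spatial-attention pooling of a CNN feature map,
# of the kind a landmark-localization score map could drive. This is NOT the
# paper's actual method; function names and tensor shapes are assumptions.
import numpy as np

def attention_pooled_descriptor(features, attention_logits):
    """Aggregate an (H, W, C) feature map into one (C,) descriptor,
    weighting each spatial location by an unnormalized landmark score."""
    h, w, c = features.shape
    # Softmax over all H*W locations so the weights sum to 1;
    # subtracting the max keeps the exponentials numerically stable.
    logits = attention_logits.reshape(-1)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    # Weighted sum of local features: high-scoring (discriminative)
    # regions dominate the final descriptor.
    desc = (features.reshape(-1, c) * weights[:, None]).sum(axis=0)
    # L2-normalize so descriptors can be compared by dot product.
    return desc / (np.linalg.norm(desc) + 1e-12)

# Toy example: a 4x4 feature map with 8 channels and one strongly
# scored "landmark" cell at position (1, 2).
feats = np.random.default_rng(0).normal(size=(4, 4, 8))
logits = np.zeros((4, 4))
logits[1, 2] = 10.0  # strong landmark response at one location
d = attention_pooled_descriptor(feats, logits)
print(d.shape)  # (8,)
```

With a near-one-hot score map like this, the pooled descriptor is dominated by the features at the high-scoring cell; a uniform score map would instead reduce to plain average pooling.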

Indexed by: EI
Language: English
Subject classification: Object detection, tracking and recognition
Document type: Conference paper
Identifier: http://ir.ia.ac.cn/handle/173211/39239
Collection: Research Center for Integrated Information Systems / Visual Perception Fusion and Its Applications
Corresponding author: Zhang, Jixiang
Affiliations:
1. Institute of Automation, Chinese Academy of Sciences, Beijing, China
2. University of Chinese Academy of Sciences, Beijing, China
3. UISEE Technologies Beijing Co., Ltd.
First author's affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding author's affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended citation (GB/T 7714):
Xin, Zhe, Cai, Yinghao, Lu, Tao, et al. Localizing Discriminative Visual Landmarks for Place Recognition[C], 2019.
Files in this item:
08794383.pdf (274 KB), conference paper, open access, license CC BY-NC-SA

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.