Semantically Modeling of Object and Context for Categorization
Authors: Zhang, Chunjie (1,2); Cheng, Jian (2,3,4); Tian, Qi (5)
Journal: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
ISSN: 2162-237X
Publication Date: 2019-04-01
Volume: 30, Issue: 4, Pages: 1013-1024
Corresponding Author: Zhang, Chunjie (chunjie.zhang@ia.ac.cn)
Abstract: Object-centric categorization methods have proven more effective than hard partitions of images (e.g., spatial pyramid matching). However, how to determine the locations of objects remains an open problem. Besides, the modeling of context areas is often mixed up with the background. Moreover, semantic information is often ignored by methods that use only visual representations for classification. In this paper, we propose an object categorization method that semantically models the object and context information (SOC). We first select a number of candidate regions with high confidence scores and semantically represent these regions by measuring the correlation of each region with prelearned classifiers (e.g., local feature-based classifiers and deep convolutional-neural-network-based classifiers). These regions are clustered for object selection. The other selected areas are then viewed as context areas, and the areas beyond the object and context areas within one image are treated as the background. The visually and semantically represented objects and contexts are then used along with the background area for object representation and categorization. Experimental results on several public data sets demonstrate the effectiveness of the proposed method.
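The abstract outlines a multi-stage pipeline: select high-confidence candidate regions, score them semantically against prelearned classifiers, cluster them into object and context groups, and pool the remaining image area as background. The following is a minimal, hypothetical sketch of such a pipeline on synthetic data; the variable names, the dot-product semantic scoring, the two-cluster k-means step, and the mean-pooled representations are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of an SOC-style pipeline on synthetic data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Simulated inputs for one image:
#   visual_feats - visual descriptors of R candidate regions (R x D)
#   confidences  - confidence/objectness score of each candidate region
#   classifiers  - weights of C prelearned category classifiers (C x D)
R, D, C = 50, 128, 20
visual_feats = rng.normal(size=(R, D))
confidences = rng.uniform(size=R)
classifiers = rng.normal(size=(C, D))

# 1. Keep the top-K candidate regions with the highest confidence scores.
K = 10
top_idx = np.argsort(confidences)[-K:]
selected = visual_feats[top_idx]

# 2. Semantic representation: correlate each selected region with the
#    prelearned classifiers (here a simple dot product of responses).
semantic = selected @ classifiers.T  # (K x C) classifier responses

# 3. Cluster the selected regions; treat the cluster with the higher mean
#    confidence as the object, the remaining selected regions as context.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(semantic)
mean_conf = [confidences[top_idx][labels == k].mean() for k in range(2)]
obj_mask = labels == int(np.argmax(mean_conf))

object_repr = np.concatenate([selected[obj_mask].mean(axis=0),
                              semantic[obj_mask].mean(axis=0)])
context_repr = np.concatenate([selected[~obj_mask].mean(axis=0),
                               semantic[~obj_mask].mean(axis=0)])

# 4. Background: everything outside the selected regions, pooled visually.
bg_mask = np.ones(R, dtype=bool)
bg_mask[top_idx] = False
background_repr = visual_feats[bg_mask].mean(axis=0)

# 5. Final image representation: object + context + background, which would
#    then be fed to a standard classifier (e.g., a linear SVM) per category.
image_repr = np.concatenate([object_repr, context_repr, background_repr])
print(image_repr.shape)
```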
Keywords: Context modeling; object categorization; object modeling; semantic representation
DOI: 10.1109/TNNLS.2018.2856096
WOS Keywords: IMAGE CLASSIFICATION; LOW-RANK; REPRESENTATION
Indexed By: SCI
Language: English
Funding Project(s): Scientific Research Key Program of Beijing Municipal Commission of Education [KZ201610005012]; National Science Foundation of China [61429201]; ARO [W911NF-15-1-0290]; NEC Laboratories of America; Blippar
Funder(s): Scientific Research Key Program of Beijing Municipal Commission of Education; National Science Foundation of China; ARO; NEC Laboratories of America; Blippar
WOS Research Area: Computer Science; Engineering
WOS Categories: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods; Engineering, Electrical & Electronic
WOS Record Number: WOS:000461854100004
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Sub-direction Classification (Seven Major Research Directions): Machine Learning
Citation Statistics
Times Cited (WOS): 9
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/28060
Collection: Laboratory of Cognition and Decision Intelligence for Complex Systems, Efficient Intelligent Computing and Learning
Author Affiliations:
1.Chinese Acad Sci, Res Ctr Brain Inspired Intelligence, Inst Automat, Beijing 100190, Peoples R China
2.Univ Chinese Acad Sci, Beijing 100049, Peoples R China
3.Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
4.Chinese Acad Sci, Ctr Excellence Brain Sci & Intelligence Technol, Beijing 100190, Peoples R China
5.Univ Texas San Antonio, Dept Comp Sci, San Antonio, TX 78249 USA
Recommended Citation:
GB/T 7714
Zhang, Chunjie, Cheng, Jian, Tian, Qi. Semantically Modeling of Object and Context for Categorization[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2019, 30(4): 1013-1024.
APA Zhang, Chunjie, Cheng, Jian, & Tian, Qi. (2019). Semantically Modeling of Object and Context for Categorization. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 30(4), 1013-1024.
MLA Zhang, Chunjie, et al. "Semantically Modeling of Object and Context for Categorization". IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 30.4 (2019): 1013-1024.
Files in This Item:
There are no files associated with this item.