Multi-view Moving Objects Classification via Transfer Learning
Jianyun Liu; Yunhong Wang; Zhaoxiang Zhang; Yi Mo
2011-11-28
Conference Name: 1st Asian Conference on Pattern Recognition
Proceedings Title: ACPR 2011
Conference Date: 28 November 2011
Conference Venue: Beijing, China
Abstract: Moving object classification in traffic-scene videos has been a hot topic in recent years. Classifying moving traffic objects into pedestrians, motor vehicles, non-motor vehicles, etc. is of significant value to intelligent traffic systems. Traditional machine learning approaches assume that source-scene and target-scene objects share the same distribution, which does not hold in most cases. Under this circumstance, a large amount of manual labeling of target-scene data is needed, which is time- and labor-consuming. In this paper, we introduce TrAdaBoost, a transfer learning algorithm, to bridge the gap between the source and target scenes. During training, TrAdaBoost makes full use of the source-scene data that is most similar to the target-scene data, so that only a small number of labeled target-scene samples can improve performance significantly. The features used for classification are Histogram of Oriented Gradients (HOG) features of appearance-based instances. The experimental results show the outstanding performance of the transfer learning method compared with a traditional machine learning algorithm.
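The abstract applies TrAdaBoost (Dai et al., ICML 2007) on top of HOG features. Below is a minimal sketch of the TrAdaBoost instance-reweighting loop, assuming binary labels in {0, 1} and a decision stump as the weak learner; the names (tradaboost, X_src, X_tgt, etc.) are illustrative, not from the paper, and HOG feature extraction is assumed to happen upstream.

```python
# Minimal TrAdaBoost sketch (Dai et al., ICML 2007); not the paper's code.
# X_src/y_src: labeled source-scene data, X_tgt/y_tgt: the small labeled
# target-scene set. Labels are assumed binary in {0, 1}.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tradaboost(X_src, y_src, X_tgt, y_tgt, n_iters=20):
    n, m = len(X_src), len(X_tgt)
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt]).astype(float)
    w = np.ones(n + m)                       # one weight per instance
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n) / n_iters))
    learners, betas = [], []
    for _ in range(n_iters):
        p = w / w.sum()
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=p)
        err = np.abs(h.predict(X) - y)       # 0/1 loss per instance
        # error is measured on the target-scene portion only
        eps = np.sum(w[n:] * err[n:]) / np.sum(w[n:])
        eps = min(max(eps, 1e-10), 0.499)    # keep beta_t well defined
        beta_t = eps / (1.0 - eps)
        # asymmetric update: misclassified source instances lose weight,
        # misclassified target instances gain weight
        w[:n] *= beta_src ** err[:n]
        w[n:] *= beta_t ** (-err[n:])
        learners.append(h)
        betas.append(beta_t)
    return learners, betas

def tradaboost_predict(learners, betas, X):
    # the final hypothesis votes with only the second half of the rounds
    half = len(learners) // 2
    lhs, rhs = np.zeros(len(X)), 0.0
    for h, b in zip(learners[half:], betas[half:]):
        lhs += -np.log(b) * h.predict(X)
        rhs += -0.5 * np.log(b)
    return (lhs >= rhs).astype(int)
```

The key difference from standard AdaBoost is the asymmetric weight update: misclassified source-scene instances are down-weighted by a fixed factor while misclassified target-scene instances are up-weighted, so training gradually concentrates on the source samples that most resemble the target scene, which is how a small labeled target set can still improve performance.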
Keywords: Feature Extraction; Training; Image Edge Detection; Training Data; Accuracy; Videos; Support Vector Machines
Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/13275
Collection: Research Center for Brain-inspired Intelligence (类脑智能研究中心)
Corresponding Author: Zhaoxiang Zhang
Recommended Citation
GB/T 7714
Jianyun Liu, Yunhong Wang, Zhaoxiang Zhang, et al. Multi-view Moving Objects Classification via Transfer Learning[C], 2011.
Files in This Item:
There are no files associated with this item.