Cross-view Action Recognition via Transductive Transfer Learning
Jie Qin; Zhaoxiang Zhang; Yunhong Wang
Conference Name: International Conference on Image Processing
Source Publication: ICIP 2013
Conference Date: 15-18 September 2013
Conference Place: Melbourne, Australia
Abstract: Human action recognition is an active topic in computer vision, and many effective approaches have been proposed to recognize different types of actions. However, recognition performance deteriorates rapidly when the viewpoint changes. Traditional approaches address this problem with inductive transfer learning, which requires manually labeled target-view samples. In this paper, we present a novel approach to cross-view action recognition based on transductive transfer learning, in which instances are transferred across views. In our setting, neither labels for target-view examples nor correspondences between examples from paired views are required. Experimental results on the IXMAS multi-view data set demonstrate the effectiveness of our approach, which is comparable to the state of the art.
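The transductive idea in the abstract, using unlabeled target-view examples at training time rather than requiring manual labels, can be illustrated with a self-labeling loop. The sketch below is a hypothetical stand-in, not the authors' method: a nearest-centroid classifier plays the role of the transductive SVM, and all data, class names, and the `margin` confidence threshold are invented for illustration.

```python
# Hypothetical sketch of transductive transfer for cross-view recognition.
# A nearest-centroid classifier stands in for the transductive SVM:
# source-view samples carry labels, target-view samples do not, and
# confident pseudo-labels are folded back into the training set
# over a few rounds of self-training. All data here are synthetic.

def centroid(points):
    """Component-wise mean of a list of feature vectors."""
    dim = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dim)]

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def transductive_fit(source_x, source_y, target_x, rounds=3, margin=0.5):
    """Return pseudo-labels {target index: label} for confident target samples."""
    labeled = list(zip(source_x, source_y))
    pseudo = {}
    for _ in range(rounds):
        classes = sorted(set(y for _, y in labeled))
        cents = {c: centroid([x for x, y in labeled if y == c]) for c in classes}
        for i, x in enumerate(target_x):
            if i in pseudo:
                continue
            d = sorted((dist(x, cents[c]), c) for c in classes)
            # Accept a pseudo-label only when the nearest centroid wins
            # by a clear margin (a crude confidence test).
            if d[1][0] - d[0][0] > margin:
                pseudo[i] = d[0][1]
                labeled.append((x, d[0][1]))
    return pseudo

# Toy example: two actions seen from a source view, then re-observed
# (slightly shifted) from an unlabeled target view.
source_x = [[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]]
source_y = ["wave", "wave", "kick", "kick"]
target_x = [[0.5, 0.5], [5.5, 5.5]]
pseudo = transductive_fit(source_x, source_y, target_x)
# pseudo -> {0: 'wave', 1: 'kick'}
```

The key point the sketch shares with the paper's setting is that no target-view labels and no cross-view correspondences are supplied; the target-view data influence training only through their own predicted labels.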
Keywords: Transductive SVM; Action Recognition; Transfer Learning
Document Type: Conference Paper
Corresponding Author: Zhaoxiang Zhang
Recommended Citation
GB/T 7714
Jie Qin, Zhaoxiang Zhang, Yunhong Wang. Cross-view Action Recognition via Transductive Transfer Learning[C], 2013.

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.