CASIA OpenIR
FSD-10: A fine-grained classification dataset for figure skating
Authors: Liu, Shenglan (1,2); Liu, Xiang (1); Huang, Gao (3); Qiao, Hong (4); Hu, Lianyu (1); Jiang, Dong (1); Zhang, Aibin (1); Liu, Yang (1); Guo, Ge (1)
Source Publication: NEUROCOMPUTING
ISSN: 0925-2312
Publication Date: 2020-11-06
Volume: 413, Pages: 360-367
Corresponding Author: Liu, Shenglan (liusl@mail.dlut.edu.cn)
Abstract: Action recognition is an important and challenging problem in video analysis. Although the past decade has witnessed progress in action recognition with the development of deep learning, such progress has been slow in competitive sports content analysis. To promote research on action recognition from competitive sports video clips, we introduce a Figure Skating Dataset (FSD-10) for fine-grained sports content analysis. To this end, we collect 1484 clips from the worldwide figure skating championships in 2017-2018, which consist of 10 different actions in men's/ladies' programs. Each clip is recorded at 30 frames per second with a resolution of 1080 x 720 and is annotated by experts. To build a baseline for action recognition in figure skating, we evaluate state-of-the-art action recognition methods on FSD-10. Motivated by the idea that domain knowledge is of great concern in the sports field, we propose a key-frame based temporal segment network (KTSN) for classification and achieve remarkable performance. Experimental results demonstrate that FSD-10 is an ideal dataset for benchmarking action recognition algorithms, as it requires accurately extracting action motions rather than action poses. We hope FSD-10, which is designed to have a large collection of fine-grained actions, can serve as a new challenge to develop more robust and advanced action recognition models. (C) 2020 Elsevier B.V. All rights reserved.
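For orientation, below is a minimal Python sketch of the sparse, segment-based frame sampling that a temporal segment network relies on, extended with a hypothetical key-frame bias to echo the KTSN idea summarized in the abstract. The function name, parameters, and key-frame weighting scheme are illustrative assumptions; the paper's actual KTSN implementation is not part of this record.

# Minimal sketch of TSN-style sparse temporal sampling with an optional bias
# toward annotated key frames. All names and the weighting scheme are
# illustrative assumptions, not the authors' released code.
import random
from typing import List, Optional

def sample_segment_frames(num_frames: int,
                          num_segments: int = 8,
                          key_frames: Optional[List[int]] = None) -> List[int]:
    """Split a clip into equal temporal segments and pick one frame per segment.

    If key-frame indices are given, a segment containing a key frame returns
    that key frame instead of a random frame, so the sampled snippets keep the
    frames marked as most informative for the action.
    """
    seg_len = num_frames / num_segments
    sampled = []
    for s in range(num_segments):
        start, end = int(s * seg_len), int((s + 1) * seg_len)
        end = max(end, start + 1)  # guard against empty segments in short clips
        in_segment = [k for k in (key_frames or []) if start <= k < end]
        sampled.append(in_segment[0] if in_segment else random.randrange(start, end))
    return sampled

# Example: a 10-second clip at 30 fps (as in FSD-10's frame rate), with two
# hypothetical key-frame annotations.
print(sample_segment_frames(num_frames=300, num_segments=8, key_frames=[45, 210]))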
Keywords: Action recognition; Figure Skating Dataset; Fine-grained sports content analysis; Key-frame based temporal segment network
DOI: 10.1016/j.neucom.2020.06.108
WOS Keyword: ACTION RECOGNITION
Indexed By: SCI
Language: English
Funding Project: National Key Research and Development Program of China [2017YFB1300200]; National Key Research and Development Program of China [2017YFB1300203]; Fundamental Research Funds for the Central Universities [DUT20RC(5)010]
Funding Organization: National Key Research and Development Program of China; Fundamental Research Funds for the Central Universities
WOS Research Area: Computer Science
WOS Subject: Computer Science, Artificial Intelligence
WOS ID: WOS:000579803700030
Publisher: ELSEVIER
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/42173
Collection: Institute of Automation, Chinese Academy of Sciences
Affiliation:
1. Dalian Univ Technol, Sch Innovat & Entrepreneurship, Dalian, Liaoning, Peoples R China
2. Dalian Univ Technol, Fac Elect Informat & Elect Engn, Dalian, Liaoning, Peoples R China
3. Tsinghua Univ, Beijing, Peoples R China
4. Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing, Peoples R China
Recommended Citation
GB/T 7714: Liu, Shenglan, Liu, Xiang, Huang, Gao, et al. FSD-10: A fine-grained classification dataset for figure skating[J]. NEUROCOMPUTING, 2020, 413: 360-367.
APA: Liu, Shenglan, Liu, Xiang, Huang, Gao, Qiao, Hong, Hu, Lianyu, ... & Guo, Ge. (2020). FSD-10: A fine-grained classification dataset for figure skating. NEUROCOMPUTING, 413, 360-367.
MLA: Liu, Shenglan, et al. "FSD-10: A fine-grained classification dataset for figure skating". NEUROCOMPUTING 413 (2020): 360-367.