Knowledge Commons of Institute of Automation, CAS
End-to-end Video-level Representation Learning for Action Recognition |
Zhu, Jiagang1,2; Zhu, Zheng1,2; Zou, Wei1 |
2018-11 |
Conference Name | 2018 24th International Conference on Pattern Recognition (ICPR) |
Conference Date | August 20-24, 2018 |
Conference Venue | Beijing, China |
Abstract | From frame/clip-level feature learning to video-level representation building, deep learning methods for action recognition have developed rapidly in recent years. However, current methods suffer from confusion caused by partial-observation training, lack end-to-end learning, or are restricted to single-temporal-scale modeling. In this paper, we build upon two-stream ConvNets and propose Deep networks with Temporal Pyramid Pooling (DTPP), an end-to-end video-level representation learning approach, to address these problems. Specifically, RGB images and optical flow stacks are first sparsely sampled across the whole video. Then a temporal pyramid pooling layer aggregates the frame-level features, which carry both spatial and temporal cues. The trained model thus yields a compact video-level representation spanning multiple temporal scales, which is both global and sequence-aware. Experimental results show that DTPP achieves state-of-the-art performance on two challenging video action datasets, UCF101 and HMDB51, with either ImageNet pre-training or Kinetics pre-training. |
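The abstract describes aggregating sparsely sampled frame-level features into a multi-scale video-level representation via temporal pyramid pooling. The record contains no code; the following is a minimal NumPy sketch of that aggregation step only. The pyramid levels, the use of max-pooling, and the feature dimension are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def temporal_pyramid_pool(frame_feats, levels=(1, 2, 4)):
    """Aggregate a (T, D) array of frame-level features into one vector.

    For each pyramid level k, the T frames are split into k contiguous
    segments, each segment is max-pooled over time, and all pooled
    vectors are concatenated. Output dimension = sum(levels) * D, so the
    result is both global (level 1) and sequence-aware (finer levels).
    """
    T, _ = frame_feats.shape
    pooled = []
    for k in levels:
        # Roughly equal-length contiguous segment boundaries.
        bounds = np.linspace(0, T, k + 1).astype(int)
        for i in range(k):
            segment = frame_feats[bounds[i]:bounds[i + 1]]
            pooled.append(segment.max(axis=0))
    return np.concatenate(pooled)

# Example: 25 sparsely sampled frames with 2048-dim features each.
feats = np.random.rand(25, 2048)
video_repr = temporal_pyramid_pool(feats)
print(video_repr.shape)  # (1 + 2 + 4) segments * 2048 dims = (14336,)
```

In the paper this representation is learned end-to-end on top of two-stream ConvNet features; the sketch above only shows the pooling geometry.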
Indexed By | EI |
Language | English |
Document Type | Conference Paper |
Identifier | http://ir.ia.ac.cn/handle/173211/39107 |
Collection | CAS Engineering Laboratory for Industrial Vision Intelligent Equipment: Precision Perception and Control |
Corresponding Author | Zou, Wei |
Affiliations | 1. Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China; 2. Univ Chinese Acad Sci, Beijing 100190, Peoples R China |
First Author's Affiliation | Institute of Automation, Chinese Academy of Sciences |
Corresponding Author's Affiliation | Institute of Automation, Chinese Academy of Sciences |
Recommended Citation (GB/T 7714) | Zhu, Jiagang, Zhu, Zheng, Zou, Wei. End-to-end Video-level Representation Learning for Action Recognition[C], 2018. |
Files in This Item |
File Name/Size | Document Type | Version Type | Access Type | License |
End-to-end Video-lev(720KB) | Conference Paper | | Open Access | CC BY-NC-SA |
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.