CASIA OpenIR > National Laboratory of Pattern Recognition > Pattern Analysis and Learning
Video super-resolution based on spatial-temporal recurrent residual networks
Yang, Wenhan1; Feng, Jiashi2; Xie, Guosen3; Liu, Jiaying1; Guo, Zongming1; Yan, Shuicheng4
Abstract: In this paper, we propose a new video Super-Resolution (SR) method that jointly models intra-frame redundancy and inter-frame motion context in a unified deep network. Unlike conventional methods, the proposed Spatial-Temporal Recurrent Residual Network (STR-ResNet) investigates both spatial and temporal residues, represented respectively by the difference between a high-resolution (HR) frame and its corresponding low-resolution (LR) frame, and by the difference between adjacent HR frames. This spatial-temporal residual learning model connects the intra-frame and inter-frame redundancies within video sequences in a recurrent convolutional network, and predicts HR temporal residues in the penultimate layer as guidance for estimating the spatial residue for video SR. Extensive experiments demonstrate that STR-ResNet efficiently reconstructs videos with diverse content and complex motion, outperforming existing video SR approaches and setting new state-of-the-art performance on benchmark datasets.
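The two residues the abstract defines can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the nearest-neighbour upsampling operator, the 2x scale factor, and the toy frames below are assumptions made purely for illustration.

```python
import numpy as np

def spatial_residue(hr_frame, lr_frame):
    """Spatial residue: the difference between an HR frame and its
    corresponding LR frame (upsampled to HR size for subtraction)."""
    # Nearest-neighbour upsampling is an assumption here; the paper
    # may use a different interpolation to align LR with HR.
    scale = hr_frame.shape[0] // lr_frame.shape[0]
    lr_up = np.repeat(np.repeat(lr_frame, scale, axis=0), scale, axis=1)
    return hr_frame - lr_up

def temporal_residues(hr_frames):
    """Temporal residues: differences between adjacent HR frames."""
    return [b - a for a, b in zip(hr_frames, hr_frames[1:])]

# Toy sequence: three 4x4 HR frames, each shifted by a constant +1
hr = [np.arange(16, dtype=float).reshape(4, 4) + t for t in range(3)]
lr = hr[0][::2, ::2]  # naive 2x decimation stands in for the LR input

s_res = spatial_residue(hr[0], lr)   # 4x4 spatial residue map
t_res = temporal_residues(hr)        # two 4x4 temporal residue maps
```

In STR-ResNet these quantities are predicted by the network rather than computed from ground truth; the sketch only shows what the targets of the residual learning are.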
Keywords: Spatial Residue; Temporal Residue; Video Super-resolution; Inter-frame Motion Context; Intra-frame Redundancy
WOS Headings: Science & Technology; Technology
Indexed By: SCI
Funding Organization: National Natural Science Foundation of China (61772043); State Scholarship Fund from the China Scholarship Council
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic
WOS ID: WOS:000429185700007
Document Type: Journal article
Affiliations:
1. Peking Univ, Inst Comp Sci & Technol, Beijing 100871, Peoples R China
2. Natl Univ Singapore, Dept Elect & Comp Engn, Singapore 117583, Singapore
3. Chinese Acad Sci, Inst Automat, NLPR, Beijing 100190, Peoples R China
4. Qihoo 360 Technol Co Ltd, Artificial Intelligence Inst, Beijing 100015, Peoples R China
Recommended Citation
GB/T 7714
Yang, Wenhan, Feng, Jiashi, Xie, Guosen, et al. Video super-resolution based on spatial-temporal recurrent residual networks[J]. COMPUTER VISION AND IMAGE UNDERSTANDING, 2018, 168: 79-92.
APA: Yang, Wenhan, Feng, Jiashi, Xie, Guosen, Liu, Jiaying, Guo, Zongming, & Yan, Shuicheng. (2018). Video super-resolution based on spatial-temporal recurrent residual networks. COMPUTER VISION AND IMAGE UNDERSTANDING, 168, 79-92.
MLA: Yang, Wenhan, et al. "Video super-resolution based on spatial-temporal recurrent residual networks". COMPUTER VISION AND IMAGE UNDERSTANDING 168 (2018): 79-92.

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.