Encoder-decoder recurrent network model for interactive character animation generation
Wang, Yumeng1,2; Che, Wujun1; Xu, Bo1
Source Publication: VISUAL COMPUTER
Date Issued: 2017-06-01
Volume: 33    Issue: 6-8    Pages: 971-980
Subtype: Article
Abstract: In this paper, we propose a generative recurrent model for human-character interaction. Our model is an encoder-recurrent-decoder network: the recurrent core consists of multiple long short-term memory (LSTM) layers, preceded by an encoder network and followed by a decoder network. With the proposed model, the virtual character's animation is generated on the fly while it interacts with the human player: the character's upcoming motion is generated automatically from the motion history of both the character itself and its human opponent. We evaluated our model on both public motion capture databases and our own recorded motion data. Experimental results demonstrate that the LSTM layers help the character learn a long history of human dynamics to animate itself, and that the encoder-decoder networks significantly improve the stability of the generated animation. The method can automatically animate a virtual character responding to a human player.
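The architecture described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction only, written in PyTorch; the framework, pose dimensionality (63), hidden size (512), and number of LSTM layers (3) are assumptions for illustration and are not specified in this record.

# Hypothetical sketch of the encoder-recurrent-decoder network described in
# the abstract. Framework (PyTorch), layer sizes, and pose dimensionality are
# assumptions, not taken from the paper.
import torch
import torch.nn as nn


class EncoderRecurrentDecoder(nn.Module):
    def __init__(self, pose_dim=63, hidden_dim=512, num_lstm_layers=3):
        super().__init__()
        # Encoder: maps the concatenated poses of the character and the
        # human opponent into a latent feature space.
        self.encoder = nn.Sequential(
            nn.Linear(2 * pose_dim, hidden_dim),
            nn.ReLU(),
        )
        # Recurrent core: stacked LSTM layers that accumulate the motion
        # history of both participants over time.
        self.lstm = nn.LSTM(hidden_dim, hidden_dim,
                            num_layers=num_lstm_layers, batch_first=True)
        # Decoder: maps the recurrent state back to the character's next pose.
        self.decoder = nn.Linear(hidden_dim, pose_dim)

    def forward(self, character_hist, human_hist, state=None):
        # character_hist, human_hist: (batch, time, pose_dim)
        x = torch.cat([character_hist, human_hist], dim=-1)
        h = self.encoder(x)
        out, state = self.lstm(h, state)
        next_pose = self.decoder(out)   # predicted character poses
        return next_pose, state         # carrying the state enables on-the-fly generation


# At run time the model can be stepped frame by frame, feeding back its own
# prediction together with the newly captured human pose, so the character's
# animation is generated on the fly while it interacts with the player.
if __name__ == "__main__":
    model = EncoderRecurrentDecoder()
    char = torch.zeros(1, 1, 63)
    human = torch.randn(1, 1, 63)
    pose, state = model(char, human)
    print(pose.shape)  # torch.Size([1, 1, 63])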
Keywords: Human-character Interaction; Long Short-term Memory; Encoder-decoder; Character Animation; Recurrent Neural Network; Motion Capture Data
WOS Headings: Science & Technology; Technology
DOI: 10.1007/s00371-017-1378-5
WOS Keywords: SHORT-TERM-MEMORY; HUMAN MOTION
Indexed By: SCI; ISTP
Language: English
Funding Organization: National Natural Science Foundation of China (61471359); National Key Technology R&D Program of China (2015BAH53F01)
WOS Research Area: Computer Science
WOS Subject: Computer Science, Software Engineering
WOS ID: WOS:000402964800027
Citation Statistics
Cited Times (WOS): 1
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/14513
Collection: Digital Content Technology and Services Research Center / Auditory Models and Cognitive Computing
Affiliation1.Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
2.Univ Chinese Acad Sci, Beijing, Peoples R China
Recommended Citation:
GB/T 7714: Wang, Yumeng, Che, Wujun, Xu, Bo. Encoder-decoder recurrent network model for interactive character animation generation[J]. VISUAL COMPUTER, 2017, 33(6-8): 971-980.
APA: Wang, Yumeng, Che, Wujun, & Xu, Bo. (2017). Encoder-decoder recurrent network model for interactive character animation generation. VISUAL COMPUTER, 33(6-8), 971-980.
MLA: Wang, Yumeng, et al. "Encoder-decoder recurrent network model for interactive character animation generation". VISUAL COMPUTER 33.6-8 (2017): 971-980.
Files in This Item:
File Name/Size: yumeng_cgi17_unprove.pdf (1808 KB, Adobe PDF)
DocType: Journal Article    Version: Author's Accepted Manuscript    Access: Open Access    License: CC BY-NC-SA
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.