Research on Motion Synthesis Methods for Intelligent Interactive Virtual Characters
王雨萌 (Wang Yumeng)
Degree type: Doctor of Engineering
Supervisor: 徐波 (Xu Bo)
2017-05
Degree-granting institution: Graduate School of the Chinese Academy of Sciences (中国科学院研究生院)
Degree-granting place: Beijing
Keywords: character animation; motion synthesis; interactive animation; recurrent neural networks; deep reinforcement learning
Abstract

In recent years, with the rapid development of virtual reality technology and hardware, more and more users have access to the highly immersive interactive experiences that virtual reality offers, and such experiences depend on large amounts of virtual character animation. Traditional character animation pipelines require extensive manual work, but during real-time interaction, the diversity and unpredictability of user behavior make hand-authored animation infeasible, so adaptive animation must instead be generated interactively. Moreover, the realism and human-likeness of a virtual character's behavior also improve the user's interactive experience.

This thesis studies motion synthesis methods for intelligent interactive virtual characters, covering both motion synthesis for characters interacting with the environment and motion synthesis for characters interacting with human players. The main contributions are as follows:

1. A motion retargeting method based on data reuse.

Motion retargeting maps motion from one virtual character to another. Based on the differing skeletal hierarchies of the characters, the method adjusts motion capture data and maps it to different characters, so that motion can be shared among multiple virtual characters. When a character interacts with a virtual environment, the motion capture data in the database often cannot adapt to diverse environments, especially when the character must interact closely with objects in the scene. We therefore calibrate the character's motion, constraining its posture with foot-skating constraints, collision constraints, and other criteria, so that the character's motion adapts to the virtual environment and yields realistic, valid animation. This also makes it possible for a human player to control virtual characters with different skeletal structures through motion capture equipment.

2. A motion synthesis method based on a low-dimensional motion controller.

Because the character model is a high-dimensional kinematic chain, motion synthesis usually requires a high-dimensional controller. This method adopts and extends the parametric motion graph model, synthesizing new motions from a limited set of motion clips. Inter-frame similarity between different motion clips is evaluated to select transition points, and depending on the measured similarity, transition animations are generated by either motion blending or motion interpolation. For the interpolation case, the thesis further proposes a hierarchical, variable-duration animation interpolation method. A human player can then steer the character's motion with only a low-dimensional controller, and the method responds to user input and synthesizes character animation in real time.

3. An online generation method for interactive animation based on recurrent neural networks.

Automatic generation of interactive animation means that, during physical interaction between two virtual characters, one character automatically generates motions that respond to the other's actions. This thesis introduces recurrent neural networks to interactive animation generation and proposes a generative recurrent model: a recurrent neural network with an encoder and a decoder that generates interactive animation end to end. With this network, a virtual character can generate interactive animation online during the interaction: the encoder layers extract features from the motion data and the decoder layers synthesize the final motion, so the generated animation needs no post-processing constraint correction. The method also extends from interaction between virtual characters to interaction between a virtual character and a human player.

4. A terrain-adaptive motion planning method based on deep reinforcement learning.

Motion planning refers to selecting a movement path and synthesizing the corresponding motions as a virtual character moves through a virtual environment. We apply deep reinforcement learning to terrain-aware motion planning: the learned policy lets the character plan its motion autonomously according to the terrain. The thesis improves the application of reinforcement learning to motion synthesis with a multi-actor-critic model, which clearly outperforms the traditional actor-critic model in traveled distance. The model also addresses the limitation that traditional Q-learning offers only discrete policy choices, improving the continuity and diversity of the generated animation and adapting better to varied terrain.
English Abstract
In recent years, with the rapid development of virtual reality technology and hardware, more and more users can access the highly immersive interactive experience that virtual reality brings. Traditional character animation generation requires extensive manual production; however, during real-time interaction, manually synthesized animation is infeasible because of the unpredictability and diversity of user behavior, so animation must be generated during the interaction itself. On the other hand, the authenticity and human-likeness of the virtual character also improve the user's interactive experience.
 
This paper studies motion synthesis methods for intelligent interactive virtual characters, including motion synthesis for virtual characters interacting with the environment and motion synthesis for interaction with human players. The main innovations of this paper are as follows:
 
1. This paper proposes a motion retargeting method for motion data reuse.
 
Motion retargeting maps movement from one character to another and improves the reusability of motion capture data. Based on the characters' different hierarchical skeleton structures, the adjusted motion capture data is mapped to different characters, achieving motion data sharing across multiple virtual characters. When characters interact with the virtual environment, motion capture data in the dataset often fails to adapt to diverse environments, especially in rich-contact situations. We therefore calibrate the virtual character's motion and constrain its posture through balance control, foot-skating constraints, collision detection, and other criteria. With the proposed method, the virtual character adapts to the virtual environment with human-like, effective animation. In addition, the method makes it possible for human players to control virtual characters with different skeletal structures via motion capture equipment.
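The two ingredients described above (mapping motion between skeletons and cleaning up foot-skating) can be sketched as follows. This is an illustrative toy, not the thesis implementation: joint names, the leg-length scaling rule, and the contact-pinning heuristic are all assumptions made for the example.

```python
import numpy as np

def retarget(frames, joint_map, src_leg_len, tgt_leg_len):
    """Toy retargeting sketch (assumed scheme, not the thesis method):
    copy rotations between matching joints and rescale root translation
    by the leg-length ratio so stride proportions are preserved.
    frames: list of dicts {joint_name: rotation, 'root_pos': xyz}.
    joint_map: source joint name -> target joint name."""
    scale = tgt_leg_len / src_leg_len
    out = []
    for f in frames:
        g = {joint_map[j]: rot for j, rot in f.items() if j in joint_map}
        g["root_pos"] = np.asarray(f["root_pos"], dtype=float) * scale
        out.append(g)
    return out

def fix_foot_skate(positions, contact_mask):
    """Naive foot-skating cleanup: while the foot is flagged as in
    contact, pin its horizontal (first two) coordinates to the first
    contact frame so it cannot slide."""
    positions = np.array(positions, dtype=float)
    anchor = None
    for i, in_contact in enumerate(contact_mask):
        if in_contact:
            if anchor is None:
                anchor = positions[i, :2].copy()
            positions[i, :2] = anchor
        else:
            anchor = None
    return positions
```

A real system would work with joint rotations in quaternion form and solve inverse kinematics after pinning the foot; the sketch only shows where each constraint acts.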
 
2. This paper proposes a motion synthesis method based on a low-dimensional motion controller.
 
Synthesizing virtual character animation that adapts to the virtual environment usually requires a high-dimensional controller, because the character model is a high-dimensional kinematic chain. This method adopts and extends the parametric motion graph model, synthesizing new motions from a limited set of motion clips. Transition points are selected by computing inter-frame similarity between different motion clips, and an automatic variable-timing method based on hierarchical interpolation is proposed for generating transition animation. Human players need only a low-dimensional controller to direct the character's movement, and the method responds to user actions and composes character animation in real time.
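The core motion-graph step described above (pick the closest frame pair as the transition point, then blend across it) can be sketched in a few lines. This is a minimal illustration under simplifying assumptions: poses are plain feature vectors, distance is Euclidean, and the blend is a linear cross-fade rather than the thesis's hierarchical variable-timing interpolation.

```python
import numpy as np

def best_transition(clip_a, clip_b):
    """Each clip is an (n_frames, n_dof) array of poses. Returns (i, j):
    the frame of clip_a and frame of clip_b with minimal pose distance,
    i.e. the cheapest place to cut from one clip to the other."""
    a = np.asarray(clip_a)[:, None, :]          # (na, 1, d)
    b = np.asarray(clip_b)[None, :, :]          # (1, nb, d)
    dist = np.linalg.norm(a - b, axis=-1)       # (na, nb) pairwise distances
    i, j = np.unravel_index(np.argmin(dist), dist.shape)
    return int(i), int(j)

def blend_transition(clip_a, clip_b, i, j, length=4):
    """Linearly cross-fade from clip_a starting at frame i into clip_b
    starting at frame j, over up to `length` frames."""
    w = np.linspace(0.0, 1.0, length)[:, None]  # blend weights 0 -> 1
    seg_a = np.asarray(clip_a)[i:i + length]
    seg_b = np.asarray(clip_b)[j:j + length]
    n = min(len(seg_a), len(seg_b))
    return (1 - w[:n]) * seg_a[:n] + w[:n] * seg_b[:n]
```

In practice pose distance would also weight joint velocities and root alignment, and rotations would be interpolated on the quaternion sphere rather than linearly.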
 
3. This paper proposes an online generation method for interactive animation based on recurrent neural networks.
 
Automatic generation of interactive animation means that one virtual character automatically generates the corresponding animation and interacts with another character according to that character's motion. This paper introduces the recurrent neural network (RNN) to interactive animation generation and proposes a generative recurrent model: an RNN with an encoder and a decoder that generates interactive animation end to end. With this network, the virtual character can generate animation online during the interaction: the encoder layers extract motion data features and the decoder layers synthesize the final motion, so the generated animation requires no post-processing. In addition, the method extends from interaction between virtual characters to interaction between a virtual character and a human player.
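The encoder-decoder structure described above can be sketched in plain NumPy. This toy model is only in the spirit of the thesis's network (untrained random weights, vanilla tanh recurrences instead of the actual architecture): the encoder consumes the partner character's pose sequence into a hidden state, and the decoder unrolls autoregressively from that state to emit the responding character's poses.

```python
import numpy as np

rng = np.random.default_rng(0)

class EncoderDecoderRNN:
    """Toy encoder-decoder RNN sketch (illustrative, untrained)."""

    def __init__(self, pose_dim, hidden_dim):
        s = 0.1  # small random weights keep the recurrence stable
        self.W_enc = rng.normal(0, s, (hidden_dim, pose_dim))
        self.U_enc = rng.normal(0, s, (hidden_dim, hidden_dim))
        self.W_dec = rng.normal(0, s, (hidden_dim, pose_dim))
        self.U_dec = rng.normal(0, s, (hidden_dim, hidden_dim))
        self.V_out = rng.normal(0, s, (pose_dim, hidden_dim))
        self.hidden_dim = hidden_dim

    def encode(self, poses):
        """Summarize the partner's motion into a hidden state."""
        h = np.zeros(self.hidden_dim)
        for x in poses:
            h = np.tanh(self.W_enc @ x + self.U_enc @ h)
        return h

    def decode(self, h, first_pose, n_frames):
        """Unroll autoregressively: each predicted pose feeds the next step."""
        out, x = [], first_pose
        for _ in range(n_frames):
            h = np.tanh(self.W_dec @ x + self.U_dec @ h)
            x = self.V_out @ h
            out.append(x)
        return np.array(out)

    def respond(self, partner_poses, first_pose, n_frames):
        return self.decode(self.encode(partner_poses), first_pose, n_frames)
```

Because the decoder only needs the running hidden state and the latest partner frame, the same structure supports the online, frame-by-frame generation the abstract describes.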
 
4. This paper proposes a terrain-adaptive motion planning method based on deep reinforcement learning.
 
Motion planning refers to selecting a motion path and synthesizing the corresponding animation for a virtual character's locomotion. We introduce deep reinforcement learning into motion planning with terrain factors: the method helps the virtual character learn to plan its motion autonomously according to the terrain. This paper improves the application of reinforcement learning to motion synthesis and proposes a multi-actor-critic (MAC) model, which achieves a clear performance gain in locomotion distance over the traditional actor-critic method. Besides, the MAC model addresses the limitation that traditional Q-learning can only provide discrete policy choices, improving the diversity of the generated animation, and it adapts better to various terrains.
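To make the discrete-action baseline concrete, the following toy shows plain tabular Q-learning on a 1-D terrain with gaps: from each cell the character picks a discrete step length, falls into gaps, and is rewarded for forward progress. This illustrates the discrete-policy setting the paragraph contrasts against; it is not the thesis's multi-actor-critic model, and the terrain, rewards, and hyperparameters are invented for the example.

```python
import random

TERRAIN = [1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1]   # 1 = ground, 0 = gap
ACTIONS = [1, 2, 3]                            # discrete step lengths

def train(episodes=3000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning: episodes end on falling, or on reaching the end."""
    rnd = random.Random(seed)
    q = {(s, a): 0.0 for s in range(len(TERRAIN)) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s < len(TERRAIN) - 1:
            a = rnd.choice(ACTIONS) if rnd.random() < eps else \
                max(ACTIONS, key=lambda x: q[(s, x)])  # epsilon-greedy
            nxt = s + a
            if nxt >= len(TERRAIN) - 1:        # reached the end: big reward
                q[(s, a)] += alpha * (10.0 - q[(s, a)])
                break
            if TERRAIN[nxt] == 0:              # stepped into a gap: penalty
                q[(s, a)] += alpha * (-10.0 - q[(s, a)])
                break
            target = a + gamma * max(q[(nxt, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = nxt
    return q

def greedy_path(q):
    """Follow the learned greedy policy from the start cell."""
    s, path = 0, [0]
    while s < len(TERRAIN) - 1:
        s += max(ACTIONS, key=lambda x: q[(s, x)])
        path.append(s)
        if len(path) > len(TERRAIN):           # safety guard against loops
            break
    return path
```

The learned policy can only ever choose among three fixed step lengths, which is exactly the coarseness the MAC model is said to overcome by producing more continuous and varied motion choices.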
Language: Chinese
Document type: Doctoral dissertation
Identifier: http://ir.ia.ac.cn/handle/173211/14644
Collection: Graduates, Doctoral Dissertations
Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended citation (GB/T 7714):
王雨萌. 智能交互型虚拟角色运动合成方法研究[D]. 北京. 中国科学院研究生院,2017.
Files in this item:
毕业论文-最终提交版.pdf (22448 KB) | Dissertation | Access: not yet open | License: CC BY-NC-SA | Full text on request
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.