Realistic facial expression synthesis is an important technique in the field of human-computer interaction and an active research topic in both the computer vision and computer graphics communities. Its potential applications include low bit-rate video transmission, computer-aided instruction, film production, virtual reality, and so on. In this dissertation, we study facial expression synthesis from three aspects: image-based, face-modeling-based, and video-driven. The main contributions of this thesis are as follows:

1. A facial expression synthesis method based on the Active Appearance Model (AAM) is proposed. The AAM represents a face with a small set of parameters. By analyzing a large sample of facial expressions, we characterize how shape and texture change over the course of an expression. A neural network is trained to learn the mapping functions for facial shape, and the relationship between expression intensity and facial texture is used to build a mapping function for facial texture. Facial expressions of different intensities can then be synthesized from these mapping functions.

2. A gradient-domain facial expression synthesis method is proposed. Because the expression ratio image method is sensitive to illumination, we perform expression mapping in the gradient domain: wrinkles on the source expressive face are transferred to the target face by mixing the gradients of both images, which makes the method robust to illumination changes. To generate dynamic expressions, an eye-processing technique based on texture synthesis is presented, and a three-dimensional face model is used to change the head pose.

3. A face modeling and animation technique is proposed. Given a single face image, feature points on the face are labeled automatically; an individual face model is then generated by minimizing the distance between feature points on a 3D morphable model and those on the face image.
To obtain a model that includes all facial organs, additional meshes such as the eyes, teeth, and background are added. To animate the generated face model, sparse motion data from a source model is first transferred to the target model by Radial Basis Function (RBF) interpolation; sphere parameterization is then used to compute the barycentric coordinates for interpolation. In addition, the eye model is processed independently to enhance the realism of the animation.

4. A performance-driven facial expression synthesis method is proposed. An improved AAM algor...
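The gradient mixing described in the second contribution can be illustrated with a minimal sketch. The function below (name and kernel choices are my own, not taken from the thesis) builds a per-pixel guidance field by keeping whichever image's gradient is stronger, so that wrinkles from the source expressive face survive while the target's shading is largely preserved; in a full pipeline this field would then drive a Poisson solve to reconstruct the output image.

```python
import numpy as np

def mixed_gradient_field(source, target):
    """Per-pixel guidance field for gradient-domain expression mapping.

    At each pixel, keep the gradient of whichever image (source or
    target) has the larger gradient magnitude ("mixed gradients").
    Inputs are 2-D grayscale arrays of equal shape; returns the
    vertical and horizontal components of the guidance field.
    """
    # np.gradient returns derivatives along axis 0 (rows) then axis 1 (cols)
    gy_s, gx_s = np.gradient(source.astype(float))
    gy_t, gx_t = np.gradient(target.astype(float))
    # compare gradient magnitudes pixel-wise
    mag_s = np.hypot(gy_s, gx_s)
    mag_t = np.hypot(gy_t, gx_t)
    use_source = mag_s > mag_t
    gy = np.where(use_source, gy_s, gy_t)
    gx = np.where(use_source, gx_s, gx_t)
    return gy, gx
```

Because the selection is by magnitude, a flat (low-gradient) target region adopts the source's wrinkle gradients, while strong target structure is kept; the subsequent Poisson reconstruction is what makes the result insensitive to absolute illumination levels.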
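The RBF motion transfer in the third contribution can likewise be sketched. Assuming a Gaussian kernel and a small ridge term for numerical stability (both my assumptions; the thesis does not specify the basis), sparse displacements known at facial feature points are interpolated to every vertex of the target mesh:

```python
import numpy as np

def rbf_fit(centers, displacements, sigma=1.0):
    """Fit Gaussian-RBF weights so the interpolant reproduces the
    given sparse displacements at the feature points (centers)."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    phi = np.exp(-(d / sigma) ** 2)                 # kernel matrix
    # small ridge term keeps the solve well-conditioned
    w = np.linalg.solve(phi + 1e-8 * np.eye(len(centers)), displacements)
    return w

def rbf_eval(centers, w, query, sigma=1.0):
    """Evaluate the fitted interpolant at arbitrary mesh vertices."""
    d = np.linalg.norm(query[:, None, :] - centers[None, :, :], axis=-1)
    phi = np.exp(-(d / sigma) ** 2)
    return phi @ w                                  # (m, 3) displacements
```

In use, `centers` would be the source model's feature points, `displacements` the motion captured at those points, and `query` the target model's vertices; denser correspondence (the sphere-parameterization/barycentric step) then refines the transferred motion.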