Realistic 3D face modeling and expression synthesis are challenging topics in both computer vision and computer graphics, and have been studied extensively for their industrial applications. Among the many forms of face synthesis, single-image-based synthesis is both the most interesting and the most challenging. Building on a review of the background of face synthesis, this paper focuses on face modeling and facial expression synthesis from a single image, and proposes a new approach for 3D facial expression synthesis. A single-image-based face modeling procedure is implemented: given the input image, an Active Shape Model (ASM) captures the facial features; the fitted landmarks, together with a reference model, drive shape modeling by scattered data interpolation with radial basis functions (RBF); and texture mapping finalizes the face model. The main contribution of this paper is a novel approach to data-driven facial expression synthesis, built on the face model and a finite set of 3D facial expressions. The approach is constructed within a motion model framework, in which a facial expression is synthesized from its motion features and a neutral face. A local transform procedure is proposed to fit the motion model to the shape of the input face. Linear expression motion models are then constructed, enabling the system to synthesize realistic facial expressions in real time. In addition, a novel clustering-based approach to facial region segmentation is proposed. Unlike manual methods, this approach runs automatically, using the motion features of the face across expressions. With the segmentation, the system can generate many expressions that are not present in the expression set.
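The shape-modeling step, which warps a reference model so that its landmarks match the ASM result via RBF scattered data interpolation, might look like the following minimal sketch. The linear radial basis kernel, the function names, and the small regularization term are illustrative assumptions rather than details from the paper:

```python
import numpy as np

def rbf_warp(vertices, src_landmarks, dst_landmarks):
    """Warp mesh vertices so that src_landmarks move to dst_landmarks,
    using scattered data interpolation with a linear RBF kernel phi(r) = r.
    (Illustrative sketch; names and kernel choice are assumptions.)"""
    # Kernel matrix over landmark pairs
    phi = np.linalg.norm(src_landmarks[:, None] - src_landmarks[None, :], axis=-1)
    # Solve for per-landmark weight vectors; tiny ridge term for stability
    w = np.linalg.solve(phi + 1e-8 * np.eye(len(src_landmarks)),
                        dst_landmarks - src_landmarks)
    # Evaluate the interpolated displacement at every mesh vertex
    d = np.linalg.norm(vertices[:, None] - src_landmarks[None, :], axis=-1)
    return vertices + d @ w
```

By construction the interpolant reproduces the landmark displacements, so each landmark of the warped reference lands on its detected position while the remaining vertices deform smoothly.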
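The linear motion model amounts to adding weighted per-vertex displacement fields (the motion features) to a neutral face. A minimal sketch, in which the array shapes and names are assumptions rather than the paper's exact formulation:

```python
import numpy as np

def synthesize_expression(neutral, motions, weights):
    """Synthesize an expression as the neutral face plus a weighted sum of
    motion features (per-vertex displacement fields), one per basis expression.
    neutral: (V, 3) vertices; motions: (K, V, 3); weights: (K,).
    (Sketch under assumed shapes, not the paper's exact formulation.)"""
    # Sum over the K expression bases, weighted by the blending coefficients
    return neutral + np.tensordot(weights, motions, axes=1)
```

Because synthesis is a single weighted sum, it runs in real time, and blending weights across several basis expressions is what lets the system produce expressions beyond the finite set.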
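The clustering-based segmentation groups vertices by how they move across the expression set. The sketch below uses a plain k-means with a deterministic initialization; the specific clustering algorithm and its parameters are assumptions, since the abstract does not name them:

```python
import numpy as np

def segment_by_motion(motions, k=4, iters=30):
    """Segment mesh vertices into facial regions by clustering their motion
    across expressions. motions: (K_expr, V, 3) displacement fields.
    The k-means procedure and the evenly spaced initialization are
    illustrative assumptions; the paper's exact clustering may differ."""
    V = motions.shape[1]
    # One feature vector per vertex: its displacements in all expressions
    feats = motions.transpose(1, 0, 2).reshape(V, -1)
    # Deterministic, spread-out initial centers
    centers = feats[np.linspace(0, V - 1, k).astype(int)].copy()
    labels = np.zeros(V, dtype=int)
    for _ in range(iters):
        # Assign each vertex to its nearest center, then recompute centers
        dists = ((feats[:, None] - centers[None]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = feats[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels
```

Vertices that move together across expressions end up in the same region, which is what allows the segmentation to run automatically instead of requiring manual region markup.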