Thesis of Mehdi-Antoine Mahfoudi


Subject:
Interactive Procedural Generation of Expressive Animations

Summary:

Procedural animation synthesis methods (i.e. methods that do not rely on pre-existing animations) often lack naturalness, but they have the advantage of being inexpensive to compute, and their parameterization lets animators express their know-how. Conversely, synthesis methods based on motion capture are able to reproduce the subtlety of human movement, but they require a heavy human and material investment for the capture phase, as well as for the editing and adaptation of the animations. Indeed, the gap between the captured animation and the animation desired at each moment of a virtual application often calls for tedious transformations that are very difficult to automate. In either case, an analysis of the state of the art shows that the animations produced often lack "life", because these systems are dedicated to generating generic motion and make no provision for integrating style.

The objective of this thesis is to propose an approach at the frontier between data-driven and procedural automatic generation. To add expressiveness, we will seek to derive the necessary information from the observation of human motion (e.g. from motion capture databases or from the motion-analysis literature) in order to propose procedural animation synthesis approaches in which each parameter is meaningful to an animator but can also be adjusted automatically by the algorithm. The proposed approach will take as input a character's expressive intentions (e.g. sadness, anger, joy) and style (e.g. hurried, tired), and must be able to convey them by appropriately driving any animation system.
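To make the idea of animator-meaningful yet algorithmically adjustable parameters concrete, here is a minimal illustrative sketch in Python. The parameter names, intent labels, and offset values are assumptions chosen for the example only; in the thesis such mappings would instead be derived from motion-capture observation or the motion-analysis literature.

```python
from dataclasses import dataclass


# Hypothetical procedural walk parameters; each field is meant to be
# directly readable and editable by an animator (names are illustrative).
@dataclass
class WalkParameters:
    stride_length: float = 1.0   # relative stride length (1.0 = neutral)
    step_frequency: float = 1.0  # relative step frequency
    torso_lean: float = 0.0      # forward torso lean, in degrees
    arm_swing: float = 1.0       # relative arm-swing amplitude
    head_drop: float = 0.0       # downward head tilt, in degrees


# Illustrative offsets per expressive intent and per style intent.
EXPRESSION_OFFSETS = {
    "sadness": WalkParameters(-0.20, -0.15, 5.0, -0.4, 15.0),
    "joy":     WalkParameters(0.15, 0.20, -2.0, 0.3, -5.0),
}
STYLE_OFFSETS = {
    "hurried": WalkParameters(0.10, 0.35, 4.0, 0.2, 0.0),
    "tired":   WalkParameters(-0.25, -0.30, 6.0, -0.3, 8.0),
}


def blend(base: WalkParameters, offset: WalkParameters, weight: float) -> WalkParameters:
    """Add a weighted offset to every procedural parameter."""
    return WalkParameters(*(b + weight * o
                            for b, o in zip(vars(base).values(), vars(offset).values())))


def drive_walk(expression: str, style: str,
               expr_weight: float = 1.0, style_weight: float = 1.0) -> WalkParameters:
    """Map high-level expression and style intents to the parameters
    that would drive a procedural walk generator."""
    params = WalkParameters()
    params = blend(params, EXPRESSION_OFFSETS[expression], expr_weight)
    params = blend(params, STYLE_OFFSETS[style], style_weight)
    return params


if __name__ == "__main__":
    # A sad, tired walk: shorter, slower strides with a drooping posture.
    print(drive_walk("sadness", "tired"))
```

The point of the sketch is the interface, not the numbers: high-level intents enter, procedural parameters come out, and either an animator or an optimization procedure can adjust the weights and offsets.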


Advisor: Saida Bouakaz
Co-advisor: Alexandre Meyer