Jianyuan Min

Motion Graphs++:
A Compact Generative Model for
Semantic Motion Analysis and Synthesis

ACM Transactions on Graphics. To be presented at SIGGRAPH Asia 2012

[download paper] [download video-1] [download video-2]
This paper introduces a new generative statistical model for human motion analysis and synthesis at both the semantic and kinematic levels. Our key idea is to decouple the complex variations of human movement into finite structural variations and continuous style variations, and to encode them with a concatenation of morphable functional models. This allows us to model not only a rich repertoire of behaviors but also an infinite number of style variations within the same action. Our models are appealing for motion analysis and synthesis because they are highly structured, contact aware, and semantically embedded. We have constructed a compact generative motion model from a large, heterogeneous motion database (about two hours of mocap data spanning more than 15 different actions). We demonstrate the power and effectiveness of our models in a wide variety of applications, ranging from automatic motion segmentation, recognition, and annotation, and online/offline motion synthesis at both the kinematic and behavior levels, to semantic motion editing. We also show the superiority of our model through comparisons against alternative methods.
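To make the "morphable functional model" idea concrete, here is a minimal sketch (not the authors' implementation; all names and data are illustrative). A motion trajectory is represented as a mean function plus a weighted combination of basis functions: the continuous weights capture style variation within one action, while structural variation would be handled by switching between per-action models.

```python
# Hypothetical sketch of a morphable functional model:
# motion(t) = mean(t) + sum_i weights[i] * bases[i](t).
# Trajectories are stored as per-frame samples for simplicity.

def synthesize(mean, bases, weights):
    """Blend style basis functions onto a mean trajectory, frame by frame."""
    out = list(mean)
    for w, basis in zip(weights, bases):
        for t in range(len(out)):
            out[t] += w * basis[t]
    return out

# Toy 1-D example: a mean cycle plus one style mode
# (e.g. an "amplitude" basis); weight 2.0 exaggerates the style.
mean = [0.0, 1.0, 0.0, -1.0]
bases = [[0.0, 0.5, 0.0, -0.5]]
print(synthesize(mean, bases, [2.0]))  # → [0.0, 2.0, 0.0, -2.0]
```

In the paper's setting, each action has its own set of basis functions (e.g. learned from registered mocap examples), and varying the weights continuously yields the infinite style variations mentioned above.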