<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<Document xmlns:gate="http://www.gate.ac.uk" name="A20_M13_Physically-Valid_Statistical_Models_for_Human_Motion_Generation_CITATION_PURPOSE_M_v1.xml">


  
    http://dx.doi.org/10.1145/1966394.1966398
  
  
    
      
        <Title>Physically-Valid Statistical Models for Human Motion Generation</Title>
      
      Xiaolin Wei and Jianyuan Min, Texas A&amp;M University
      
        
        Figure 1: Combining statistical motion priors and physical constraints for human motion generation: (a) walking with a heavy shoe; (b) resistance running; (c) stylized walking; (d) running→walking→jumping.
      
      <Abstract>This paper shows how statistical motion priors can be combined seamlessly with physical constraints for human motion modeling and generation. The key idea of the approach is to learn a nonlinear probabilistic force field function from prerecorded motion data with Gaussian processes and combine it with physical constraints in a probabilistic framework. In addition, we show how to effectively utilize the new model to generate a wide range of natural looking motions that achieve the goals specified by the users. Unlike previous statistical motion models, our model can generate physically realistic animations that react to external forces or changes in physical quantities of human bodies and interaction environments. We have evaluated the performance of our system by comparing against ground truth motion data and alternative methods.</Abstract>
	  CR Categories: I.3.6 [Computer Graphics]: Methodology and Techniques—interaction techniques; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—animation
	  Keywords: Human motion analysis and generation, data-driven animation, physics-based animation, animation from constraints, statistical motion modeling, optimization
	  
        
          Jinxiang Chai
        
      
    
    
      
        <H1>1 Introduction</H1>
      
      A central goal in human motion modeling and generation is to construct a generative motion model to predict how humans move. The problem has attracted the attention of a large number of researchers because of both its theoretical and applied consequences. A generative motion model, for instance, can be used to generate realistic movement for animated human characters or constrain the solution space for modeling 3D human motion in monocular video streams. Decades of research in computer animation have explored two distinctive approaches for human motion modeling: statistical motion
      modeling and physics-based motion modeling. Despite these efforts, accurate modeling of human motion remains a challenging task.

Statistical motion models are often represented as a set of mathematical equations or functions that describe human motion using a finite number of parameters and their associated probability distributions. Statistical models are desirable for human motion representation because they can model any human movement as long as relevant motion data are available. A fundamental limitation, however, is that they do not consider the dynamics that cause the motion. Therefore, they fail to predict human motion that reacts to external forces or to changes in the physical quantities of human bodies and of the interaction environments. Moreover, when motion data are generalized to achieve new goals, the results are often physically implausible and thereby display noticeable visual artifacts such as unbalanced motions, foot sliding, and motion jerkiness.

Physics-based motion models could overcome the aforementioned limitations by applying physics to modeling human movements. However, physical laws alone are often insufficient to generate natural human movement because a motion can be physically correct without appearing natural. One way to address the problem is to define a global performance criterion based on either the smoothness of the movement or the minimization of needed controls or control rates (e.g., minimal muscle usage). These heuristics show promise for highly dynamic motions, but it remains challenging to model low-energy motion or highly stylized human actions. In addition, it is unclear whether a single global performance objective such as minimal torque is appropriate for modeling heterogeneous human actions such as running→walking→jumping.

In this paper, we show how statistical modeling techniques can be combined with physics-based modeling techniques to address the limitations of both.
Physical motion models and statistical motion models are complementary to each other because they capture different aspects of human movement. On the one hand, physical models can utilize statistical priors to constrain the motion to lie in the space of natural appearance and, more significantly, to learn an appropriate performance criterion for modeling natural-looking human actions. On the other hand, statistical motion models can rely on physical constraints to generate physically correct human motion that reacts to external forces, satisfies friction limit constraints, and respects the physical quantities of human bodies or interaction environments. By accounting for physical constraints and statistical priors simultaneously, we not only instill physical realism into statistical motion models but also extend physics-based modeling to a wide variety of human actions such as stylized walking.

The key idea of our motion modeling process is to learn nonlinear probabilistic force field functions from prerecorded motion data with Gaussian Process (GP) models and combine them with physical constraints in a probabilistic framework. In our formulation, a force field function u = g(q, q̇) maps kinematic states (joint poses q and joint velocities q̇) to generalized forces u. We demonstrate the power and effectiveness of our motion model in constraint-based motion generation. We show that we can create a natural-looking animation that reacts to changes in physical parameters such as the masses or inertias of human bodies and the friction properties of environments (Figure 1(a)), or to external forces such as resistance forces (Figure 1(b)). In addition, we show that a single physically valid statistical model is sufficient to create physically realistic animation for a wide range of style variations within a particular human action, such as “sneaky” walking (Figure 1(c)), and for transitions between heterogeneous human actions, such as running→walking→jumping (Figure 1(d)).
We evaluate the performance of our model by comparing with ground truth data as well as alternative techniques.
      
        <H1>2 Background</H1>
        We introduce a physically valid statistical motion model that combines physical laws and statistical motion priors and use it to create physically realistic animation that achieves the goals specified by the user. Therefore, we will focus our discussion on statistical motion modeling and physics-based motion modeling as well as their applications in constraint-based motion synthesis. Statistical models are desirable for human motion modeling and synthesis because they are often compact and can be used to generate human motions that are not in prerecorded motion data. 

Thus far, a wide variety of statistical motion models have been developed; their applications include inverse kinematics 
[Grochow et al. 2004; Chai and Hodgins 2005], human motion synthesis and editing 
[Li et al. 2002; Chai and Hodgins 2007; Lau et al. 2009; Min et al. 2009], human motion style interpolation and transfer 
[Brand and Hertzmann 2000; Ikemoto et al. 2009; Min et al. 2010], and so forth. Nonetheless, the motions generated by statistical motion models are often physically invalid because existing statistical motion models do not consider the forces that cause the motion. Another limitation is that they do not react to perturbations (e.g., external forces) or changes in physical quantities such as masses and inertias of human bodies. Physics-based motion models could overcome the limitations of statistical motion models by applying physics to modeling human movement. However, physics-based motion modeling is a mathematically ill-posed problem because there are many ways to adjust a motion so that physical laws are satisfied, and yet only a subset of motions are natural-looking. One way to address this limitation is by adopting the “minimal principle” strategy, which was first introduced to the graphics community by 
Witkin and Kass [1988]. They postulated that an individual would determine a movement in such a way as to reduce the total muscular effort to a minimum, subject to certain constraints. Therefore, a major challenge in physics-based motion modeling is how to define an appropriate performance criterion for the “minimal principle.” Decades of research in computer animation (e.g., 
[Witkin and Kass 1988; Cohen 1992; Liu et al. 1994; Fang and Pollard 2003]
) introduced numerous performance criteria for human motion modeling, e.g., minimal energy, minimal torque, minimal jerk, minimal joint momentum, minimal joint acceleration, or minimal torque change. These heuristics show promise for highly dynamic motions, but it remains very difficult to model low-energy motions and highly stylized human movements. 


A number of researchers have recently explored the potential of using prerecorded motion data to improve physics-based optimization methods, including editing motion data with the help of simplified physical models 
[Popović and Witkin 1999], initializing optimization with reference motion data 
[Sulejmanpasic and Popović 2005], learning parameters of motion styles from prerecorded motion data 
[Liu et al. 2005], and reducing the search space for physics-based optimization 
[Safonova et al. 2004; Ye and Liu 2008]. Similar to these methods, our system utilizes both motion data and physics for human motion analysis and generation, but there are two important distinctions. First, we rely on statistical motion models rather than a predefined global performance objective (e.g., minimal muscle usage) to reduce the ambiguity of physics-based modeling. This enables us to extend physics-based modeling to stylistic human motions such as “sneaky walking”. Another attraction of our model is that it learns the mapping from the kinematic states to generalized forces using Gaussian process models. Unlike reference trajectories or linear subspace models adopted in previous work, GP models are capable of modeling both stylistic variations within a particular human action and heterogeneous human behaviors. Our research draws inspiration from the large body of literature on developing control strategies for physics-based simulation. In particular, our nonlinear probabilistic force field functions are conceptually similar to control strategies used for physics-based simulation because both representations aim to map kinematic states to driving forces. 
Thus far, researchers in physics-based simulation have explored two approaches for control design, including manually designed control strategies (e.g. 
[Hodgins et al. 1995]) and tracking a reference trajectory while maintaining balance 
[Zordan and Hodgins 2002; Sok et al. 2007; Yin et al. 2007; da Silva et al. 2008; Muico et al. 2009]. However, our approach is different in that we automatically learn nonlinear probabilistic mapping functions from large sets of motion data. In addition, our goal is different because we aim to generate a desired animation that matches user constraints. Physics-based simulation approaches are not appropriate for our task because forward simulation techniques often do not provide accurate control over simulated motions. Our approach uses a Gaussian process to model a nonlinear probabilistic function that maps from kinematic states to generalized forces. GP and its variants (e.g., GPLVM) have recently been applied to modeling kinematic motion for many problems in computer animation, including nonlinear dimensionality reduction for human poses [Grochow et al. 2004], motion interpolation [Mukai and Kuriyama 2005], motion editing [Ikemoto et al. 2009], and motion synthesis [Ye and Liu 2010]. In particular, Ikemoto and her colleagues [2009]
 learned the kinematic mapping from pose information of the source motion to pose and acceleration information of the target motion and applied them to transferring a new source motion into a target motion. Ye and Liu [2010]
 used GPLVM to construct a second-order dynamic model of human kinematic data and used it to synthesize kinematic walking motion after a perturbation. Our approach is different in that we focus on modeling the relationship between kinematic data and generalized forces rather than the kinematic motion data itself.
      
      
        <H1>3 Overview</H1>
        We construct a physically valid statistical model that leverages both physical constraints and statistical motion priors and utilize it to generate physically realistic human motion that achieves the goals specified by the user.

Physics-based dynamics modeling. Our motion model considers both Newtonian dynamics and contact mechanics for a full-body human figure. Therefore, we describe the Newtonian dynamics equations for full-body movement and Coulomb’s friction model for computing the forces caused by the friction between the character and the interaction environment (Section 4).

Force field function modeling. We automatically extract force field priors from prerecorded motion data (Section 5). Our force field priors are represented by a nonlinear probabilistic function u = g(q, q̇) that maps the kinematic states (q, q̇) to the generalized forces u. To achieve this goal, we precompute the generalized forces u from prerecorded kinematic motion data and apply Gaussian processes to model the force field priors embedded in the training data.

Motion modeling and synthesis. We show how to combine force field priors with physics-based dynamics models seamlessly in a probabilistic framework and how to use the new motion model to generate physically realistic animation that matches user-defined constraints (Section 6). We formulate the constraint-based motion synthesis problem in a Maximum A Posteriori (MAP) framework and introduce an efficient gradient-based optimization algorithm to find an optimal solution.
        
          
          Figure 2: Motion data preprocessing for joint pose data (q), joint velocity data (q̇), and generalized force data (u). (top) Before the preprocessing; (bottom) after the preprocessing.
        
      
      
        <H1>4 Physics-based Dynamics Models</H1>
        Our dynamics models approximate human motion with a set of rigid body segments. We describe a full-body character pose with a set of independent joint coordinates q ∈ R^48, including the absolute root position and orientation and the relative joint angles of 18 joints. These joints are the head, thorax, upper neck, lower neck, upper back, lower back, and the left and right humerus, radius, wrist, femur, tibia, and metatarsal. Newtonian dynamics. The Newtonian dynamics equations for full-body movement can be described using the following equation 
[Jazar 2007]:
        
          1
          M(q)q̈ + C(q, q̇) + h(q) = τ + f_c + f_e ≡ u
        
        where q, q̇, and q̈ represent the joint angle poses, joint velocities, and joint accelerations, respectively. The quantities M(q), C(q, q̇), and h(q) are the joint space inertia matrix, the centrifugal/Coriolis forces, and the gravitational forces, respectively. The vectors τ, f_c, and f_e represent joint torques, contact forces, and external forces, respectively. The vector u represents the generalized forces, which can be either calculated from kinematic data or obtained as the resultant of joint torques, contact forces, and external forces. Human muscles generate torques about each joint, leaving the global position and orientation of the body as unactuated joint coordinates. As a result, the movement of the global position and orientation is completely determined by the contact forces f_c and external forces f_e. Contact mechanics. During ground contact, the feet can only push, not pull, on the ground. To keep the body balanced, the contact forces should not require an unreasonable amount of friction, and the center of pressure must fall within the support polygon of the feet. We use Coulomb’s friction model to compute the forces caused by the friction between the character and the environment. A friction cone is defined as the range of possible forces satisfying Coulomb’s friction model for an object at rest. We ensure that the contact forces stay within a basis that approximates the cone with nonnegative basis coefficients. We model the contact between two surfaces with multiple contact points m = 1, ..., M. As a result, we can represent the contact forces f_c as a function of the joint angle poses and nonnegative basis coefficients 
[Pollard and Reitsma 2001; Liu et al. 2005]:
        
          2
          f_c(q, λ) = Σ_{m=1}^{M} J_m(q)^T B_m e^{λ_m}
        
        where the matrix B_m is a 3 × 4 matrix consisting of 4 basis vectors that approximately span the friction cone for the m-th contact force. The 4 × 1 vector e^{λ_m} represents the nonnegative basis weights for the m-th contact force. The contact force Jacobian J_m(q) maps the instantaneous generalized joint velocities to the instantaneous world-space Cartesian velocities at the m-th contact point under the joint pose q. Note that we remove the nonnegativity constraints on the coefficients by representing the basis weights with exponential functions. Enforcing the Newtonian dynamics equations and the friction limit constraints allows us to generate physically correct motion that satisfies the friction limits. However, physical constraints alone are insufficient to model natural-looking human movement because a motion can be physically correct without appearing natural. In the next section, we discuss how to learn force field functions from prerecorded motion data to constrain the human motion to lie in the space of natural appearance.
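To make Equation (2) concrete, the friction-cone basis and the exponentially parameterized contact forces can be sketched as follows. This is a minimal illustrative implementation, not the authors' code; the helper names and the four-edge pyramid construction of B_m are our own assumptions.

```python
import numpy as np

def friction_cone_basis(normal, mu):
    """Build a 3x4 basis B_m whose nonnegative span approximates the
    Coulomb friction cone with coefficient mu (hypothetical helper)."""
    n = normal / np.linalg.norm(normal)
    # two tangent directions orthogonal to the contact normal
    t1 = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(t1) < 1e-6:
        t1 = np.cross(n, [0.0, 1.0, 0.0])
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    # four edge vectors of the pyramid approximating the cone
    return np.stack([n + mu * t1, n - mu * t1,
                     n + mu * t2, n - mu * t2], axis=1)  # 3 x 4

def contact_forces(jacobians, bases, lam):
    """f_c = sum_m J_m(q)^T B_m e^{lam_m}  (Equation 2). Exponentiating
    the weights lam_m removes the nonnegativity constraint."""
    f = np.zeros(jacobians[0].shape[1])
    for J, B, l in zip(jacobians, bases, lam):
        f += J.T @ (B @ np.exp(l))
    return f
```

With zero weights, all four exponentiated coefficients equal one, so the tangential edges cancel and the force points along the contact normal, as expected for a symmetric pyramid.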
      
      
        <H1>5 Force Field Function Modeling</H1>
        Our system automatically extracts force field priors embedded in prerecorded motion data.
        
          Figure 3: Modeling human motion with force fields: (a) training data: red dots and red lines represent kinematic states [q, q̇] and generalized forces u in the two-dimensional eigenspace, respectively; (b) motion generalization: black dots and black lines represent a motion instance generated by the learned force field model; (c) the generated 3D animation.
        
        Our idea of force field modeling is motivated by recent findings in neuroscience [D’Avella et al. 2006; Bizzi et al. 2008], which reveal that the complex spatiotemporal characteristics of the muscle patterns for particular actions can be modeled by a weighted combination of a small number of force fields. We generalize this concept by learning a nonlinear probabilistic force field u = g(q, q̇), which maps kinematic states (q, q̇) to generalized forces u. Given an initial kinematic state (q_1, q̇_1) of a human figure, a force field can predict how humans move by sequentially advancing a Newtonian dynamics model over time.
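This sequential prediction can be sketched as a simple rollout loop. The sketch below assumes a learned force field g and callables M, C, h for the dynamics terms of Equation (1); all names are illustrative, not from the paper, and a semi-implicit Euler step stands in for whatever integrator the authors actually use.

```python
import numpy as np

def rollout(g, M, C, h, q0, qd0, dt, steps):
    """Advance a Newtonian dynamics model with a learned force field
    u = g(q, qd). M, C, h are user-supplied dynamics callables."""
    q, qd = q0.copy(), qd0.copy()
    traj = [(q.copy(), qd.copy())]
    for _ in range(steps):
        u = g(q, qd)                                      # predicted generalized forces
        qdd = np.linalg.solve(M(q), u - C(q, qd) - h(q))  # solve Eq. (1) for accelerations
        qd = qd + dt * qdd                                # semi-implicit Euler step
        q = q + dt * qd
        traj.append((q.copy(), qd.copy()))
    return traj
```

On a trivial one-dof system with zero forces and unit inertia, the rollout reduces to constant-velocity motion, which is a quick sanity check of the integration step.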
        
          <H2>5.1 Motion Data Preprocessing</H2>
          Constructing force field priors from motion capture data, however, is difficult because current motion capture technologies cannot directly measure generalized forces. Our solution is to compute generalized forces from prerecorded kinematic poses using the following Newtonian dynamics equation:
          
            3
            u = M(q)q̈ + C(q, q̇) + h(q)
          
          where the vector q represents prerecorded joint poses. The joint velocities q̇ are computed as a backward difference between the current and previous frames. The joint accelerations q̈ are computed as a central difference over the previous, current, and next frames. We have observed that the generalized forces computed from kinematic motion data are often very noisy because they are related to second derivatives of the kinematic poses (see Figure 2). We thus preprocess the generalized force data as well as the joint poses and velocities using physics-based trajectory optimization techniques. Our approach follows the spacetime formulation in the computer graphics literature [Witkin and Kass 1988; Cohen 1992]. Briefly, we minimize the deviation from the prerecorded kinematic motion data as well as the sum of the squared torques. This optimization is subject to foot-ground contact constraints, friction limit constraints, and the discretization of the physics constraints determined by a finite difference scheme. Figure 2 shows the joint poses, joint velocities, and generalized forces before and after the preprocessing step. After motion data preprocessing, we have training data sets consisting of kinematic motion data [q_n, q̇_n], n = 1, ..., N, and their corresponding generalized force data u_n, n = 1, ..., N. Our next task is to learn force field priors from the training data sets.
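The finite-difference scheme and the inverse dynamics of Equation (3) might be sketched as follows. This is an illustrative numpy version; the dynamics callables M, C, h are placeholders for a real rigid-body model, and the boundary frames are simply left at zero.

```python
import numpy as np

def finite_difference_states(Q, dt):
    """Backward-difference velocities and central-difference accelerations
    from recorded poses Q (T x d), matching the paper's scheme.
    Boundary frames (no neighbor available) are left at zero."""
    qd = np.zeros_like(Q)
    qdd = np.zeros_like(Q)
    qd[1:] = (Q[1:] - Q[:-1]) / dt
    qdd[1:-1] = (Q[2:] - 2.0 * Q[1:-1] + Q[:-2]) / dt**2
    return qd, qdd

def generalized_forces(Q, qd, qdd, M, C, h):
    """u_t = M(q_t) qdd_t + C(q_t, qd_t) + h(q_t)  (Equation 3)."""
    return np.stack([M(q) @ a + C(q, v) + h(q)
                     for q, v, a in zip(Q, qd, qdd)])
```

For a quadratic trajectory q(t) = t² sampled at unit time steps, the central difference recovers the constant acceleration 2 at interior frames, which is the noise-free case; on real capture data the result is noisy, motivating the trajectory-optimization cleanup described above.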
        
        
          <H2>5.2 GP Modeling of Force Fields</H2>
          A force field is a nonlinear probabilistic function u = g(q, q̇) that maps the kinematic state (q, q̇) to the generalized forces u. We propose to use a Gaussian process model to construct a force field from the training data sets. We choose a GP model because it can efficiently model the nonlinear nature of the force fields and its learning process involves very few manually tuned parameters. More specifically, our GP model learns a nonlinear probabilistic function that predicts the generalized forces based on the joint pose and joint velocity (for details, see the Appendix):
          
            4
            pr(u | q, q̇) = N(μ(q, q̇), Σ(q, q̇))
          
          where both the means and the covariance matrices are functions of the kinematic states [q, q̇]. In our implementation, we represent the root translations in the ground plane and the rotations about the up axis at the current frame with respect to the root coordinate system at the previous frame, in order to eliminate the effect of absolute positions in the ground plane and rotations about the up axis. In practice, human motion is highly coordinated; the number of dimensions of the joint poses, joint velocities, or generalized forces is often much lower than the number of dimensions of the character’s poses. We therefore apply Principal Component Analysis to reduce the dimensionality of both the kinematic data [q_n, q̇_n] and the generalized force data u_n, and employ Gaussian processes to model the force fields in the reduced subspaces. We automatically determine the dimensions of the subspaces by keeping 95% of the original energy. Subspace learning not only reduces the memory required for GP modeling but also significantly speeds up the learning and evaluation of the GP models. Figure 3(a) visualizes the force fields computed from a prerecorded walking database, which includes a wide variety of walking variations such as step sizes, turning angles, walking speeds, and walking slopes. To simplify the visualization, we only show the top two eigenvectors for the kinematic states (q, q̇) as well as the generalized forces u. Given an initial state (q_1, q̇_1), the learned force field priors pr(u | q, q̇) can produce a physically realistic motion sequence by sequentially advancing a Newtonian dynamics model over time (Figure 3(b) and Figure 3(c)).
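As an illustration of this pipeline, the 95%-energy PCA step and a GP predictor in the spirit of Equation (4) could look like the sketch below. The squared-exponential kernel with fixed hyperparameters is our simplifying assumption; the paper's actual GP formulation is given in its Appendix.

```python
import numpy as np

def pca_95(X):
    """Project X (N x d) onto the subspace keeping 95% of the variance."""
    mu = X.mean(0)
    Xc = X - mu
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, 0.95)) + 1
    return Xc @ Vt[:k].T, Vt[:k], mu

def gp_predict(S_train, U_train, S_query, ell=1.0, noise=1e-4):
    """GP posterior mean/variance of generalized forces given kinematic
    states, with a squared-exponential kernel (minimal sketch of Eq. 4)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return np.exp(-0.5 * d2 / ell**2)
    K = k(S_train, S_train) + noise * np.eye(len(S_train))
    Ks = k(S_query, S_train)
    mean = Ks @ np.linalg.solve(K, U_train)
    # predictive variance relative to the unit prior variance k(x, x) = 1
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var
```

Querying at a training input returns (up to the small noise term) the training target with near-zero variance, matching the intuition that the force field is confident near observed kinematic states and uncertain far from them.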
        
      
      
        <H1>6 Human Motion Modeling and Synthesis</H1>
        We now discuss how to combine force field priors with physics-based dynamics models in a probabilistic framework and how to apply the proposed framework to generating physically realistic human motion that achieves the goals specified by the user.
        
          <H2>6.1 Combining Physics with Statistical Priors</H2>
          We introduce a probabilistic motion model to describe how humans move. Let pr(x) represent a probabilistic model of human motion x = {(q_t, q̇_t, u_t) | t = 1, ..., T}, where q_t, q̇_t, and u_t are the joint poses, joint velocities, and generalized forces at frame t, respectively. We can decompose the probabilistic motion model pr(x) into the following three terms:

          (5)   pr(x) = pr(q_1, q̇_1) · Π_t pr(u_t | q_t, q̇_t) · pr(q_{t+1}, q̇_{t+1} | q_t, q̇_t, u_t)

          where the three factors are denoted pr_init, pr_forcefield, and pr_physics, respectively. The first term pr_init represents the probability density function of the initial kinematic pose and velocity. In our experiments, we model the initial kinematic priors pr_init with Gaussian mixture models. The second term pr_forcefield represents the force field priors described in Equation (4). The third term pr_physics measures how well the generated motion satisfies the physical constraints. To evaluate the third term pr_physics, we first use a backward difference to compute the joint velocities and a central difference to compute the joint accelerations. Based on the dynamics equation defined in Equation (1), the joint poses, joint velocities, and generalized forces at the current step completely determine the joint accelerations at the current step. Therefore, the joint pose and velocity at the next frame are also fully determined by the finite difference approximation.
          Mathematically, we have
          
            6
            pr_physics = pr(q_{t+1}, q̇_{t+1} | q_t, q̇_t, u_t) ∝ pr(q̈_t | q_t, q̇_t, u_t)
          
          In practice, as noted by other researchers [Sok et al. 2007; Muico et al. 2009], the dynamics models adopted in physics-based modeling are often inconsistent with observed data because of simplified dynamics/contact models, the discretization of physics constraints, and the approximate modeling of physical quantities of human bodies such as masses and inertias. Accordingly, the dynamics equations are often not satisfied precisely. In our formulation, we assume the Newtonian dynamics equations are disturbed by Gaussian noise with standard deviation σ_physics:

          (7)   pr_physics ∝ pr(q̈_t | q_t, q̇_t, u_t) ∝ exp( −‖M(q_t)q̈_t + C(q_t, q̇_t) + h(q_t) − τ_t − f_c(q_t, λ_t) − f_e‖² / (2σ_physics²) )

          where the standard deviation σ_physics reflects our confidence in the physics-based dynamics model. If the standard deviation is small, the Gaussian probability distribution has a narrow peak, indicating high confidence in the physical constraints; similarly, a large standard deviation indicates low confidence. Such a motion model allows us to generate an infinite number of physically realistic motion instances. In particular, we can sample the initial prior distribution pr_init to obtain an initial state for the joint poses and velocities and then sequentially predict the joint torques using the force field priors pr_forcefield to advance the Newtonian dynamics model pr_physics over time. More importantly, we can employ the motion model pr(x) to generate a physically realistic animation x that best matches the user’s input c.
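The negative log of pr_physics in Equation (7) reduces to a squared residual of the dynamics equation scaled by 2σ². A minimal sketch (illustrative names; the dynamics callables are placeholders):

```python
import numpy as np

def physics_energy(M, C, h, q, qd, qdd, tau, f_c, f_e, sigma):
    """Negative log of pr_physics (Equation 7): squared residual of the
    Newtonian dynamics equation under Gaussian noise of std sigma."""
    residual = M(q) @ qdd + C(q, qd) + h(q) - tau - f_c - f_e
    return float(residual @ residual) / (2.0 * sigma**2)
```

When the dynamics equation holds exactly, the residual and hence the energy are zero; any violation is penalized quadratically, with a small σ (high confidence in physics) amplifying the penalty.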
        
        
          <H2>6.2 Constraint-based Motion Synthesis</H2>
          We formulate the constraint-based motion synthesis problem in a maximum a posteriori (MAP) framework by estimating the most likely motion x from the user’s input c:
          (8)   arg max_x pr(x|c) = arg max_x pr(c|x) pr(x) / pr(c) ∝ arg max_x pr(c|x) pr(x)
          In our implementation, we minimize the negative logarithm of the posterior probability density function pr(x|c), yielding the following energy minimization problem:
          
            9
            arg min_x  −ln pr(c|x) − ln pr(x)
          
          
            (the two terms are denoted E_c and E_prior)
          
          where the first term E_c is the likelihood function measuring how well the generated motion x matches the input constraints c. Similar to [Chai and Hodgins 2007], the system allows the user to specify various forms of kinematic constraints throughout the motion or at isolated points in the motion. Typically, the user can define a sparse set of key frames as well as contact constraints to generate a desired animation. The user can also specify a small number of key trajectories to control the fine details of a particular human action such as stylized walking. The second term E_prior is the prior distribution function defined by our physically valid statistical model in Equation (5). The motion synthesis problem can now be solved by nonlinear optimization methods. Given a sparse set of constraints c, the optimization computes the joint poses, joint torques, and contact forces by minimizing the following objective function:
          
            10
            arg min_{q_t, τ_t, λ_t}  ω_1 E_c + ω_2 E_init + ω_3 E_forcefield + ω_4 E_physics
          
          where E_init, E_forcefield, and E_physics are the negative logs of pr_init, pr_forcefield, and pr_physics, respectively. In our experiments, we set the weights for E_c, E_init, E_forcefield, and E_physics to 1000, 1, 1, and 100, respectively¹. We choose a very large weight for the constraint term because we want to ensure that the generated motion matches the user constraints accurately. The weight for the physical term is much larger than that for the statistical prior terms because physical correctness has a higher priority than statistical consistency in our system. Thus far, we have not discussed how to incorporate the learned force field priors into the motion optimization framework. Note that in the force field modeling step, we performed dimensionality reduction on both the kinematic data and the generalized joint torques and learned the force field priors in reduced subspaces. One possible way to incorporate the force field priors is to perform the optimization in the reduced subspaces. We have implemented this idea and found that performing the optimization in the subspaces can hurt the generalization ability of our model and often cannot match user-specified constraints accurately. To avoid this issue, we choose to perform the optimization in the original configuration space while imposing “soft” subspace constraints on both the kinematic states and the generalized forces. Let B_u and B_s denote the subspace matrices for the generalized forces u and the kinematic states s = [q^T, q̇^T]^T, respectively. We reformulate
          ¹ Note that the weight for the physics term (ω_4) corresponds to 1/(2σ_physics²) in Equation (7).
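The “soft” subspace constraint described here can be illustrated as a quadratic penalty on the component of a vector lying outside a learned subspace. This is a sketch of the idea only, using an orthonormal basis matrix; the paper's exact reformulation is not reproduced in this excerpt.

```python
import numpy as np

def soft_subspace_penalty(x, B):
    """Penalize the deviation of x from the column span of an orthonormal
    basis B: a "soft" subspace constraint on kinematic states or
    generalized forces (illustrative names, not the paper's notation)."""
    r = x - B @ (B.T @ x)   # residual component outside the subspace
    return float(r @ r)
```

A vector already in the subspace incurs zero penalty, while components orthogonal to it are penalized quadratically, so the optimizer can still leave the subspace when user constraints demand it, unlike a hard subspace parameterization.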
            Motion examples        | Total frames | Durations | Total key frames | Total key trajectories | Initialization times | Synthesis times
            Normal walking         | 270          | 9s        | 2                | 0                      | 9 sec                | 17 min
            Big-step walking       | 272          | 9s        | 2                | 0                      | 7 sec                | 16 min
            Walking and turning    | 392          | 13s       | 2                | 0                      | 10 sec               | 20 min
            Running                | 130          | 4.3s      | 2                | 0                      | 5 sec                | 10 min
            Jumping                | 168          | 5.6s      | 3                | 0                      | 3 sec                | 7 min
            Heavy foot             | 235          | 7.8s      | 2                | 0                      | 10 sec               | 22 min
            Resistance running     | 148          | 4.9s      | 2                | 0                      | 5 sec                | 13 min
            Slippery surfaces      | 193          | 6.4s      | 2                | 0                      | 8 sec                | 20 min
            Moon walking           | 193          | 6.4s      | 2                | 0                      | 8 sec                | 21 min
            Sneaky walking         | 674          | 22.5s     | 2                | 3                      | 20 sec               | 23 min
            Proud walking          | 302          | 10.1s     | 2                | 2                      | 12 sec               | 16 min
            Long walking sequence  | 1357         | 45.2s     | 8                | 0                      | 27 sec               | 51 min
            Run→walk→jump          | 510          | 17s       | 6                | 2                      | 13 sec               | 21 min
            Table 1: Details of all the animations generated by our synthesis algorithm.
          
            Databases        | Size (frames) | Durations | Prep.   | GP learning
            walking          | 5227          | 2.9 min   | 65 min  | 40 min
            stylized walking | 7840          | 4.4 min   | 138 min | 46 min
            locomotion       | 4571          | 2.5 min   | 55 min  | 47 min
            Table 2: Details of three training data sets and the computational times spent on data preprocessing (Section 5.1) and GP learning (Section 5.2).
          
          the force field priors as follows:
          E_forcefield = −ln pr(B_u^T u | B_s^T s) + α_1 ‖u − B_u B_u^T u‖² + α_2 ‖s − B_s B_s^T s‖²    (11)
          where the first term represents the force field priors in the reduced subspaces. The second and third terms impose the “soft” subspace constraints for the generalized forces and the kinematic states, penalizing deviations from their subspace representations. In our experiment, we set both weights α_1 and α_2 to 10. The combined motion models are desirable for human motion generation because they measure both the statistical consistency and the physical correctness of the motion. With the physical term, our model can react to changes in physical parameters. For example, when a character is pushed by an external force, e.g., the elastic forces in resistance running, the external force in the physics term E_physics (see Equation 7) forces the system to modify the kinematic motion, the joint torques, and the contact forces in order to satisfy Newtonian dynamics and contact mechanics. However, without force field priors the modified motion could be unnatural: there are many ways to adjust a motion so that physical laws are satisfied, yet only a subset of those motions look natural. With force field priors, our system pushes the modified motions towards regions of high probability density so that they remain consistent with the priors.
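As an illustration of the soft subspace penalty terms in Equation (11), the following sketch (the function name, dimensions, and the orthonormal-basis assumption are ours, not the paper's implementation) evaluates α‖x − B Bᵀx‖² for a PCA basis matrix B:

```python
import numpy as np

# Illustrative sketch of the "soft" subspace penalties in Equation (11),
# assuming B is an orthonormal basis matrix (columns = principal directions).
# Names and the toy weight alpha are ours, not taken from the paper's code.

def subspace_penalty(x, B, alpha):
    """Return alpha * ||x - B B^T x||^2, the weighted squared distance of x
    from its orthogonal projection onto the column space of B."""
    x_proj = B @ (B.T @ x)
    return alpha * float(np.sum((x - x_proj) ** 2))

rng = np.random.default_rng(0)
B, _ = np.linalg.qr(rng.standard_normal((6, 2)))  # orthonormal basis of a 2-D subspace in R^6
x_in = B @ rng.standard_normal(2)                 # lies inside the subspace
x_out = rng.standard_normal(6)                    # generic vector, generally outside
```

A vector inside the subspace incurs a (numerically) zero penalty, while components outside the subspace are penalized quadratically, which is exactly why these terms only softly tie the optimization variables to the reduced representation.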
        
      
      
        <H1>7 Implementation Details</H1>
        Here we briefly discuss the implementation details of our system. Data preprocessing. We used three different motion databases in our experiments: walking (5227 frames), stylized walking (7840 frames), and locomotion (4571 frames). We preprocessed the prerecorded motion data using spacetime optimization (Section 5.1). The computational time for each data set is reported in Table 2. GP learning. To speed up the learning and evaluation of the GP models, we applied PCA to reduce the dimensionality of the training data and learned each GP model in a reduced subspace. We automatically determined the dimension of the subspace by preserving 95% of the original energy. The dimensions of the kinematic states ([q_t, q̇_t]) in the three databases were 19, 22, and 19, respectively; the dimensions of the generalized forces (u) were 8, 10, and 7, respectively. We adopted sparse approximation strategies for Gaussian process modeling [Quinonero-Candela and Rasmussen 2005]. The GP learning times for the three training databases were 40, 46, and 47 minutes, respectively (Table 2). Motion optimization. We follow a standard approach of representing q_t and τ_t with cubic B-splines. We solved the optimization problem using sequential quadratic programming (SQP) [Bazaraa et al. 1993], where each iteration solves a quadratic programming subproblem. We implemented the system in C++/Matlab and ran the optimization with the Matlab optimization toolbox. Each optimization typically took ten to thirty minutes to converge without code optimization (for details, see Table 1). All experiments were run on a 2.5 GHz dual-core computer with 3 GB of RAM. Initialization. The performance of our optimization algorithm depends heavily on its initialization. To obtain a good initial guess for the joint poses q_t, t = 1, ..., T, we dropped the physical term E_physics from the objective function and used the remaining terms to optimize the joint poses across the entire motion sequence. We evaluated the force field term with respect to the joint poses because the current generalized forces can be computed from the current joint poses, velocities, and accelerations, as shown in Equation (3). With the initialized joint poses q_t^0, t = 1, ..., T, we then dropped the constraint term and the initial prior term, and optimized the joint torques τ and the contact forces λ using the force field prior term and the physics term. In this step, we evaluated the force field priors in terms of joint torques and contact forces: E_forcefield(τ, λ) = −ln pr(τ + f_c(q^0, λ) + f_e | q^0, q̇^0). Each initialization step typically took less than thirty seconds to converge (for details, see Table 1).
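As a concrete illustration of the 95%-energy rule used above to pick subspace dimensions, the following sketch (the function name and toy data are illustrative, not from the paper's code) selects the smallest number of principal components that preserves the requested fraction of the data's variance:

```python
import numpy as np

# Illustrative PCA dimension selection: keep the smallest number of
# principal components whose cumulative eigenvalues preserve a given
# fraction (here 95%) of the total variance ("energy") of the data.

def subspace_dim(data, energy=0.95):
    """data: N x d matrix with one training sample per row."""
    centered = data - data.mean(axis=0)
    # Squared singular values are proportional to the PCA eigenvalues.
    s = np.linalg.svd(centered, compute_uv=False)
    ratios = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(ratios, energy) + 1)

# Toy data: two dominant directions embedded in R^5 plus small noise.
rng = np.random.default_rng(1)
data = np.zeros((500, 5))
data[:, 0] = 3.0 * rng.standard_normal(500)
data[:, 1] = 2.0 * rng.standard_normal(500)
data += 0.01 * rng.standard_normal((500, 5))
```

On this toy data the rule recovers a two-dimensional subspace, mirroring how the system arrived at, e.g., 19- and 8-dimensional subspaces for the kinematic states and generalized forces.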
      
      
        <H1>8 Experiments</H1>
        This section demonstrates the benefits of combining physical constraints and statistical motion priors for human motion generation. In addition, we evaluate the performance of our algorithm by comparing against ground truth data and results obtained by alternative methods. The details of our experiments are summarized in Table 1. For each example, we report the total number of animation frames, the types and number of animation constraints, and the computational times spent on the initialization and motion synthesis steps.
        
          
          Figure 4: Generating physically realistic motion that reacts to changes in physical quantities of human bodies: walking with a heavy left foot.
        
        
          <H2>8.1 The Benefits of Physical Constraints</H2>
          The incorporation of physics into probabilistic motion models significantly improves the generalizability of statistical motion models. This experiment shows that the system can generate physically realistic motion that reacts to changes in the physical quantities of human bodies and interaction environments, a capability that has not been demonstrated by previous statistical motion models. Heavy foot. Our system can react to changes in physical quantities such as the masses and inertias of human bodies. For example, we changed the mass of the character by simulating a character wearing a 2.5-kilogram shoe. The accompanying video shows that the simulated character maintained balance by adapting its gait and leaning the body to the right side in order to offset the additional weight of the left shoe. Figure 4 shows sample frames of walking with a heavy foot. Resistance running. In this example, the user specified the start and end poses as well as foot contacts to create an animation of resistance running (Figure 1(b)). The resistance forces were determined by Hooke’s law of elasticity, ranging from zero to 450 N. We observed that the character moved the upper body forward in order to offset the effect of the resistance force. Walking on slippery surfaces. We can generate an animation that reacts to changes in the friction properties of the environment. In the accompanying video, we show a simulated character walking on a slippery surface obtained by reducing the friction coefficient to 0.05. Moon walking. We can edit an animation by changing the gravity of the interaction environment. For example, we generated “moon” walking by setting gravity to 1.62 m/s².
        
        
          <H2>8.2 The Benefits of Statistical Motion Priors</H2>
          This experiment shows that, with the help of statistical motion priors, we can extend physics-based modeling techniques to stylized walking, detailed walking variations, and heterogeneous human actions. Such actions are often difficult or even impossible to generate with previous physics-based modeling techniques. Stylized walking. Our approach can generate physically realistic animation for highly stylized human actions. The training data set for stylized walking included normal walking and ten distinct walking styles. The system constructed a single motion model from the training data and used it to generate various forms of stylized walking such as “sneaky” walking and “proud” walking (Figure 1(c)). In addition to keyframes and foot contact constraints, the user specified a sparse number of key trajectories in order to control the fine details of the stylized walking. Walking variations. We tested the effectiveness of our algorithm for modeling a wide range of walking variations. We learned a single generative model from a “walking” database and used it to generate a long walking sequence. The synthesized motion displayed a wide variety of walking variations such as walking along a straight line, walking with a sharp turn, walking with a big step, walking on a slope, climbing over an obstacle, and transitions between different walking examples (Figure 5). Because of memory restrictions, we synthesized the whole sequence by sequentially computing each example from sparse constraints and stitching the results into a long motion sequence. For each example, the user specified the start and end poses of the generated motion as well as foot contact constraints throughout the whole motion sequence. Heterogeneous actions. We tested the effectiveness of the physically valid statistical model on heterogeneous human actions.
We learned a single generative model from a locomotion database and used it to create a long animation sequence consisting of walking, running, jumping, and stopping, as well as their transitions (Figure 1(d)).
        
        
          <H2>8.3 Evaluation and Comparisons</H2>
          We assessed the quality of the generated motions by comparing them with ground truth data. We also evaluated the importance of force field priors and physical constraints for human motion generation. Comparison against ground truth data. We evaluated the performance of our algorithm via cross-validation. More specifically, we held out a testing sequence from the training data, used it to extract the start and end poses and the foot contact constraints, and applied the synthesis algorithm to generate motion that matches these “simulated” constraints. The accompanying video shows a side-by-side comparison between the ground truth motion and the synthesized motion. We observed that the generated motions achieve quality similar to the ground truth motion data. The importance of force field priors. This comparison shows the importance of force field priors for human motion generation. We compared our system with standard physics-based optimization techniques
[Witkin and Kass 1988] by dropping both the force field prior term E_forcefield and the initialization term E_init from the objective function defined in Equation (10). For a fair comparison, we added the sum of squared joint torques to the objective function because optimizing the motion with the remaining terms (E_c and E_physics) is ambiguous: there are an infinite number of physically correct motions that satisfy the user constraints. We also included joint torque limits in the optimization. Without the force field priors, the “sneaky” walking appears ballistic because the
          
            
          
          
            Figure 5: Generating a wide variety of physically realistic walking motions: (a) normal walking; (b) walking with a big step; (c) climbing over an obstacle; (d) walking on a slope. All the motions are generated by a single statistical walking model constructed from a prerecorded walking database
          
          “minimal torque principle” is not suitable for stylized low-energy motion. With the force field priors, our system can successfully generate physically realistic stylized walking motion.
          
            
          
          
            Figure 6: The importance of the physics term: (a) with the physics term; (b) without the physics term. Note that with the physics term the simulated character reacts to external elastic forces by leaning the body forward to compensate for the resistance forces. The “yellow” characters are the starting and ending keyframes used for motion generation; foot contact constraints are shown in “green”.
          
          The importance of the physics term. This experiment demonstrates the importance of physical constraints to our motion model. We dropped the physics term from the objective function and used the remaining terms to optimize the joint poses across the entire motion sequence. The accompanying video shows a side-by-side comparison for the “resistance running” animation. With the physics term, the character reacted appropriately to the external elastic forces by leaning the body forward to compensate for the resistance forces (Figure 6(a)). As expected, without the physics term the character did not respond to the external forces (Figure 6(b)). Comparison against subspace optimization. We computed the eigen-poses using the same set of training data and performed physics-based optimization in a reduced eigen-space, similar to
Safonova and her colleagues [2004]. The testing example was running→walking→jumping. Unlike Safonova and her colleagues [2004]
, we did not manually select training data to construct a reduced subspace for human poses. Instead, we used the entire locomotion database (4571 frames), which includes normal walking, running, and jumping. We automatically determined the dimension of the subspace (11 dimensions) by preserving 95% of the energy of the training data. To implement the subspace optimization algorithm, we formulated the problem in the spacetime framework and optimized the motion in the reduced subspace. Briefly, we minimized the sum of squared torques as well as smoothness terms on the root and joint angle trajectories over time. We also added a regularization term to penalize deviations of the eigen coefficients from zero. This optimization was also subject
          to foot-ground contact constraints, friction limit constraints, and the discretization of physics constraints determined by a finite difference scheme. Unlike Safonova and her colleagues [2004]
, we did not incorporate inverse kinematics as part of the optimization in our implementation. We evaluated the performance of the subspace optimization technique using the same set of animation constraints, including the start and end poses as well as the trajectories of the head and two feet. The accompanying video shows that subspace optimization produces uncoordinated human movements. For example, the walking character did not swing the right arm properly, and the walking gait appeared very stiff. This indicates that a global subspace model for kinematic poses is not sufficient to model heterogeneous human actions. We also observed that the motions generated by subspace methods often cannot accurately match the trajectory and contact constraints specified by the user; this might be due to compression errors caused by the reduced subspace representation. In contrast, the GP-based statistical motion priors can accurately model the spatial-temporal patterns in heterogeneous human actions and allow for generating physically realistic animation that matches user-defined constraints.
        
      
      
        <H1>9 Discussion and Future Work</H1>
        We have introduced a statistical motion model for human motion analysis and generation. Our model combines the strengths of physics-based motion modeling and statistical motion modeling. We have demonstrated the effectiveness of the new model by generating a wide variety of physically realistic motions that achieve user-specified goals. The incorporation of physical constraints into statistical motion models ensures that generated motions are physically plausible, thereby removing noticeable visual artifacts (e.g., unbalanced motions and motion jerkiness) from the output animation. Moreover, it enables us to create motions that react to changes in physical parameters. In our experiments, we have shown that the system can generate new motions such as “resistance running”, “moon walking”, “walking on slippery surfaces”, and “walking with a heavy foot”, a capability that has not been demonstrated by previous statistical motion synthesis methods. Meanwhile, the use of force field priors for human motion modeling not only ensures that the generated motions look natural but also extends physics-based modeling techniques to stylized and heterogeneous human actions. For example, we have constructed a single generative model for a wide variety of physically realistic walking variations such as normal walking, walking with a sharp turn, walking on a slope, walking with a big step, and climbing over an obstacle. We have also shown that the system can generate physically realistic motion for stylized walking such as sneaky walking and for heterogeneous human actions such as running→walking→jumping. Such actions are often difficult or even impossible to synthesize with previous physics-based motion models. We model the force field priors with Gaussian processes because GPs can efficiently capture the nonlinear properties of the force fields and their learning process involves very few manually tuned parameters.
However, a Gaussian process needs to retain all of the training data to make predictions, so its memory and computational demands grow quadratically and cubically, respectively, with the number of training examples. The sparse approximation strategy works well for the current size of our training data sets (fewer than 8,000 frames) but might not scale to very large data sets.
One possibility is to learn a probabilistic regression function for force fields using parametric statistical analysis techniques such as the mixture of experts model 
[Jacobs et al. 1991] or its variants 
[Jordan 1994]. Another limitation of our system is that it cannot generate a motion that is very different from the motion examples because our approach is data-driven. In addition, the system is still unable to handle arbitrary external forces because the force field priors prevent the generated motion from moving too far away from the prerecorded motion data. We chose to model the force field priors based on generalized forces rather than joint torques because generalized forces can be conveniently computed from existing kinematic motion capture databases (e.g., the CMU online mocap database²). However, the learned force field priors can only predict the resultant of the joint torques and contact forces. If both joint torque data and contact force data were available, we could construct more accurate force field priors that explicitly predict joint torques or contact forces. In the future, we plan to measure ground-reaction forces with force plates and use them, along with the captured kinematic motion data, to compute joint torques via inverse dynamics. We formulate the constraint-based motion synthesis problem in a spacetime optimization framework. However, the optimization problem is high-dimensional and highly nonlinear, and it may therefore be subject to local minima. We found that the initialization process is critical to the success of our optimization: it not only speeds up the optimization process but also alleviates the local-minimum problem. For a long animation sequence (e.g., Figure 5), we decompose the entire optimization into a number of spacetime windows over which subproblems can be formulated and solved with efficient nonlinear optimization techniques. In the future, we plan to explore alternative techniques to address the local-minimum problem.
One possibility is to employ Markov chain Monte Carlo (MCMC), which reaches solutions by efficiently drawing samples from the posterior distribution using a Markov chain based on the Metropolis-Hastings algorithm. Similar to other constraint-based animation systems, ours requires the user to specify a sparse set of constraints, e.g., key frames and contact constraints, to generate a desired animation. However, specifying such constraints, particularly trajectory constraints and contact constraints, is not trivial for a novice user. In our experiments, we created the 3D key frames with our homegrown data-driven inverse kinematics system
[Wei and Chai 2010a]. Trajectory and contact constraints were either directly modified from reference motion data or rotoscoped from video streams similar to the technique described by 
Wei and Chai [2010b]. In the future, we are interested in extending our system to treat the positions and timings of contact events as optimization variables, thereby avoiding the contact constraints currently required for constraint-based motion synthesis.
        2 http://mocap.cs.cmu.edu/
      
      
        <H1>APPENDIX A Gaussian Processes</H1>
        Gaussian processes (GPs) are a powerful, non-parametric tool for regression in high-dimensional spaces. A GP can be thought of as a “Gaussian distribution over functions”. Here, we briefly review the basic concepts of Gaussian processes. Let D = {(y_n, z_n) | n = 1, ..., N} be the training set. For our application, we have y = [q, q̇] and z = u. The goal of Gaussian process regression is to learn a regression function f(·) that predicts the output z* for a test input y*. We assume both training and test data points are drawn from the following noisy process:
        
          z_n = f(y_n) + ε,    (12)
        
        where y_n is an input vector in R^d and z_n is a scalar output in R. The noise term is drawn from N(0, σ_n²). For convenience, the inputs are stacked into a d×N matrix Y = [y_1, y_2, ..., y_N] and the outputs into an N-dimensional vector z = [z_1, z_2, ..., z_N]^T. The joint distribution over the noisy outputs z given the inputs Y is a zero-mean Gaussian of the form
        
          pr(z | Y) = N(0, K(Y, Y) + σ_n² I),    (13)
        
        where K(Y, Y) is the kernel matrix with elements K_ij = k(y_i, y_j). The kernel function, k(y, y′), measures the “closeness” between inputs. The term σ_n² I introduces the Gaussian noise and plays the same role as the noise term in Equation (12). Given a set of test inputs Y*, we would like to find the predictive outputs z*. The noisy training outputs z and the test outputs z* are jointly Gaussian:
        pr(z*, z | Y*, Y) = N( 0, [ K(Y*, Y*)    K(Y*, Y)
                                    K(Y, Y*)     K(Y, Y) + σ_n² I ] )    (14)
        Since z is known, this Gaussian can be conditioned on z to obtain the predictive distribution for z*:
        
          pr(z* | z, Y*, Y) = N(μ, Σ),    (15)
        
        where
        μ = K(Y*, Y) [K(Y, Y) + σ_n² I]⁻¹ z,
        Σ = K(Y*, Y*) − K(Y*, Y) [K(Y, Y) + σ_n² I]⁻¹ K(Y, Y*).    (16)
        A Gaussian process is fully described by its mean and covariance functions. These equations show that the predictive mean for the test outputs is a linear combination of the training outputs z, where the weight of each training output is determined by the correlation between the test inputs Y* and the training inputs Y. Meanwhile, the uncertainty of every prediction (i.e., the covariance) is also estimated. In this paper, we choose the squared exponential function as our kernel:
        
          k(y, y′) = σ_f² exp(−½ (y − y′)^T W (y − y′)),    (17)
        
        where σ_f² is the signal variance. The diagonal matrix W contains the length-scale parameters for the input dimensions: the value of W_ii is inversely proportional to the squared correlation length of the i-th input dimension, so dimensions with larger W_ii have greater influence on the kernel. The kernel parameters θ = [W, σ_f, σ_n] can be learned automatically by maximizing the log likelihood of the training outputs given the inputs: θ* = arg max_θ log pr(z | Y, θ).
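The predictive equations (13)-(17) can be sketched in a few lines of NumPy. This is an illustrative implementation, not the paper's code; the hyperparameters are fixed by hand here rather than learned by maximizing the log likelihood:

```python
import numpy as np

# Illustrative GP regression with the squared-exponential kernel of
# Equation (17) and the posterior mean/covariance of Equation (16).

def se_kernel(Y1, Y2, W, sigma_f):
    """k(y, y') = sigma_f^2 * exp(-0.5 * (y - y')^T W (y - y'))."""
    diff = Y1[:, None, :] - Y2[None, :, :]           # n1 x n2 x d pairwise differences
    sq = np.einsum('ijd,d,ijd->ij', diff, W, diff)   # W-weighted squared distances
    return sigma_f ** 2 * np.exp(-0.5 * sq)

def gp_predict(Y, z, Ystar, W, sigma_f, sigma_n):
    """Posterior mean and covariance at test inputs Ystar (Equation 16)."""
    K = se_kernel(Y, Y, W, sigma_f) + sigma_n ** 2 * np.eye(len(Y))
    Ks = se_kernel(Ystar, Y, W, sigma_f)
    mu = Ks @ np.linalg.solve(K, z)
    Sigma = se_kernel(Ystar, Ystar, W, sigma_f) - Ks @ np.linalg.solve(K, Ks.T)
    return mu, Sigma

# 1-D toy problem: with low noise, the posterior mean interpolates the data.
Y = np.linspace(0.0, 1.0, 8)[:, None]
z = np.sin(2.0 * np.pi * Y[:, 0])
mu, Sigma = gp_predict(Y, z, Y, W=np.array([100.0]), sigma_f=1.0, sigma_n=1e-3)
```

At the training inputs the posterior mean reproduces z almost exactly, while the posterior covariance shrinks towards the noise level, illustrating how the mean is a correlation-weighted combination of the training outputs.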
      
      
        <H1>References</H1>
        
          Bazaraa, M. S., Sherali, H. D., and Shetty, C. M. 1993. Nonlinear Programming: Theory and Algorithms, 2nd ed. John Wiley and Sons.
          Bizzi, E., Cheung, V. C. K., d'Avella, A., Saltiel, P., and Tresch, M. 2008. Combining Modules for Movement. Brain Research Reviews. 57:125-133.
          Brand, M., and Hertzmann, A. 2000. Style Machines. In Proceedings of ACM SIGGRAPH 2000, 183-192.
          Chai, J., and Hodgins, J. 2005. Performance Animation from Low-dimensional Control Signals. ACM Transactions on Graphics. 24(3):686-696.
          Chai, J., and Hodgins, J. 2007. Constraint-based Motion Optimization Using a Statistical Dynamic Model. ACM Transactions on Graphics. 26(3): Article No. 8.
          Cohen, M. F. 1992. Interactive Spacetime Control for Animation. In Proceedings of ACM SIGGRAPH 1992, 293-302.
          da Silva, M., Abe, Y., and Popović, J. 2008. Interactive Simulation of Stylized Human Locomotion. ACM Transactions on Graphics. 27(3): Article No. 82.
          d'Avella, A., Portone, A., Fernandez, L., and Lacquaniti, F. 2006. Control of Fast-reaching Movements by Muscle Synergy Combinations. The Journal of Neuroscience. 26(30):7791-7810.
          Fang, A., and Pollard, N. S. 2003. Efficient Synthesis of Physically Valid Human Motion. ACM Transactions on Graphics. 22(3):417-426.
          Grochow, K., Martin, S. L., Hertzmann, A., and Popović, Z. 2004. Style-based Inverse Kinematics. ACM Transactions on Graphics. 23(3):522-531.
          Hodgins, J. K., Wooten, W. L., Brogan, D. C., and O'Brien, J. F. 1995. Animating Human Athletics. In Proceedings of ACM SIGGRAPH 1995, 71-78.
          Ikemoto, L., Arikan, O., and Forsyth, D. 2009. Generalizing Motion Edits with Gaussian Processes. ACM Transactions on Graphics. 28(1):1-12.
          Jacobs, R., Jordan, M. I., Nowlan, S. J., and Hinton, G. E. 1991. Adaptive Mixtures of Local Experts. Neural Computation. 3:79-87.
          Jazar, R. N. 2007. Theory of Applied Robotics: Kinematics, Dynamics, and Control. Springer.
          Jordan, M. I. 1994. Hierarchical Mixtures of Experts and the EM Algorithm. Neural Computation. 6:181-214.
          Lau, M., Bar-Joseph, Z., and Kuffner, J. 2009. Modeling Spatial and Temporal Variation in Motion Data. ACM Transactions on Graphics. 28(5): Article No. 171.
          Li, Y., Wang, T., and Shum, H.-Y. 2002. Motion Texture: A Two-level Statistical Model for Character Synthesis. ACM Transactions on Graphics. 21(3):465-472.
          Liu, Z., Gortler, S. J., and Cohen, M. F. 1994. Hierarchical Spacetime Control. In Proceedings of ACM SIGGRAPH 1994, 35-42.
          Liu, K., Hertzmann, A., and Popović, Z. 2005. Learning Physics-Based Motion Style with Nonlinear Inverse Optimization. ACM Transactions on Graphics. 24(3):1071-1081.
          Min, J., Chen, Y.-L., and Chai, J. 2009. Interactive Generation of Human Animation with Deformable Motion Models. ACM Transactions on Graphics. 29(1): Article No. 9.
          Min, J., Liu, H., and Chai, J. 2010. Synthesis and Editing of Personalized Stylistic Human Motion. In ACM Symposium on Interactive 3D Graphics and Games.
          Muico, U., Lee, Y., Popović, J., and Popović, Z. 2009. Contact-aware Nonlinear Control of Dynamic Characters. ACM Transactions on Graphics. 28(3): Article No. 81.
          Mukai, T., and Kuriyama, S. 2005. Geostatistical Motion Interpolation. ACM Transactions on Graphics. 24(3):1062-1070.
          Pollard, N., and Reitsma, P. 2001. Animation of Humanlike Characters: Dynamic Motion Filtering with a Physically Plausible Contact Model. In Yale Workshop on Adaptive and Learning Systems.
          Popović, Z., and Witkin, A. P. 1999. Physically Based Motion Transformation. In Proceedings of ACM SIGGRAPH 1999, 11-20.
          Quinonero-Candela, J., and Rasmussen, C. E. 2005. A Unifying View of Sparse Approximate Gaussian Process Regression. Journal of Machine Learning Research. 6:1939-1959.
          Safonova, A., Hodgins, J., and Pollard, N. 2004. Synthesizing Physically Realistic Human Motion in Low-Dimensional, Behavior-Specific Spaces. ACM Transactions on Graphics. 23(3):514-521.
          Sok, K. W., Kim, M., and Lee, J. 2007. Simulating Biped Behaviors from Human Motion Data. ACM Transactions on Graphics. 26(3): Article No. 107.
          Sulejmanpasic, A., and Popović, J. 2005. Adaptation of Performed Ballistic Motion. ACM Transactions on Graphics. 24(1):165-179.
          Wei, X. K., and Chai, J. 2010. Intuitive Interactive Human Character Posing with Millions of Example Poses. IEEE Computer Graphics and Applications.
          Wei, X. K., and Chai, J. 2010. VideoMocap: Modeling Physically Realistic Human Motion from Monocular Video Sequences. ACM Transactions on Graphics. 29(4): Article No. 42.
          Witkin, A., and Kass, M. 1988. Spacetime Constraints. In Proceedings of ACM SIGGRAPH 1988, 159-168.
          Ye, Y., and Liu, K. 2008. Animating Responsive Characters with Dynamic Constraints in Near-unactuated Coordinates. ACM Transactions on Graphics. 27(5): Article No. 112.
          Ye, Y., and Liu, K. 2010. Synthesis of Responsive Motion Using a Dynamic Model. Computer Graphics Forum (Proceedings of Eurographics).
          Yin, K., Loken, K., and van de Panne, M. 2007. SIMBICON: Simple Biped Locomotion Control. ACM Transactions on Graphics. 26(3): Article No. 105.
          Zordan, V. B., and Hodgins, J. K. 2002. Motion Capture-driven Simulations that Hit and React. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 89-96.
        
      
    
  

</Document>
