Robot Motion Diffusion Model: Motion Generation for Robotic Characters

Recent advances in generative motion models have achieved remarkable results, enabling the synthesis of lifelike human motions from textual descriptions. These kinematic approaches, while visually appealing, often produce motions that fail to adhere to physical constraints, resulting in artifacts that impede real-world deployment. To address this issue, we introduce a novel method that integrates kinematic generative models with physics-based character control. Our approach begins by training a reward surrogate to predict the performance of the downstream non-differentiable control task, offering an efficient and differentiable loss function. This reward model is then employed to fine-tune a baseline generative model, ensuring that the generated motions are not only diverse but also physically plausible in real-world scenarios. The result is the Robot Motion Diffusion Model (RobotMDM), a text-conditioned kinematic diffusion model that interfaces with a reinforcement learning-based tracking controller. We demonstrate the effectiveness of this method on a challenging humanoid robot, confirming its practical utility and robustness in dynamic environments.
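To make the fine-tuning loop concrete, here is a minimal PyTorch-style sketch of the reward-surrogate idea. The names (`RewardSurrogate`, `finetune_step`) and the assumption of a differentiable sampler are illustrative, not RobotMDM's actual interfaces.

```python
# Illustrative sketch only; class names, shapes, and the differentiable
# sampler are assumptions, not the paper's actual API.
import torch
import torch.nn as nn

class RewardSurrogate(nn.Module):
    """Differentiable stand-in for the non-differentiable downstream
    tracking task: predicts the controller's reward for a motion clip."""
    def __init__(self, frames, dof, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frames * dof, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, motion):              # motion: (batch, frames, dof)
        return self.net(motion.flatten(1))  # (batch, 1) predicted reward

def finetune_step(diffusion_model, surrogate, text_emb, optimizer):
    """One fine-tuning step: sample motions and ascend the surrogate's
    predicted reward; the optimizer holds only the generative model's
    parameters, so the surrogate stays fixed."""
    motions = diffusion_model.sample(text_emb)  # assumed differentiable sampler
    loss = -surrogate(motions).mean()           # maximize predicted reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, the surrogate would first be trained on pairs of motions and tracking rewards collected from the RL controller, then frozen for fine-tuning.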

VMP: Versatile Motion Priors for Robustly Tracking Motion on Physical Characters

Recent progress in physics-based character control has made it possible to learn policies from unstructured motion data. However, it remains challenging to train a single control policy that works with diverse and unseen motions and can be deployed to real-world physical robots. In this paper, we propose a two-stage technique that enables the control of a character with a full-body kinematic motion reference, with a focus on imitation accuracy. In the first stage, we extract a latent-space encoding by training a variational autoencoder on short windows of motion from unstructured data. In the second stage, we use the embedding from the time-varying latent code to train a conditional policy, providing a mapping from kinematic input to dynamics-aware output. By keeping the two stages separate, we benefit from self-supervised methods for better latent codes and from explicit imitation rewards to avoid mode collapse. We demonstrate the efficiency and robustness of our method in simulation, with unseen user-specified motions, and on a bipedal robot, where we bring dynamic motions to the real world.
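As a rough illustration of the two-stage structure, here is a minimal PyTorch sketch; all names, dimensions, and architectural details below are assumptions for exposition, not the authors' implementation.

```python
# Illustrative two-stage sketch: a VAE over short motion windows (stage 1)
# and a policy conditioned on the latent code (stage 2). Shapes assumed.
import torch
import torch.nn as nn

class MotionVAE(nn.Module):
    """Stage 1: self-supervised latent space over short motion windows."""
    def __init__(self, window, dof, latent=32):
        super().__init__()
        d = window * dof
        self.enc = nn.Sequential(nn.Linear(d, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * latent))
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                 nn.Linear(256, d))

    def encode(self, x):                     # x: (batch, window, dof)
        mu, logvar = self.enc(x.flatten(1)).chunk(2, dim=-1)
        return mu, logvar

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar       # train with recon + KL losses

class ConditionalPolicy(nn.Module):
    """Stage 2: maps robot state plus the time-varying latent code to
    actions; trained separately with explicit imitation rewards (RL)."""
    def __init__(self, state_dim, action_dim, latent=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + latent, 256), nn.ReLU(),
                                 nn.Linear(256, action_dim))

    def forward(self, state, z):
        return self.net(torch.cat([state, z], dim=-1))
```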

Optimal Design of Robotic Character Kinematics

In this paper, we propose a technique that simultaneously solves for optimal design and control parameters for a robotic character whose design is parameterized with configurable joints. At the core of our method is an efficient solution strategy that uses dynamic programming to solve for optimal state, control, and design parameters, together with a strategy to remove the redundant constraints that commonly exist in general robot assemblies with kinematic loops.
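For intuition, here is a toy discretized version of an inner dynamic-programming recursion with an outer search over candidate designs; the paper's actual method handles continuous state, control, and design parameters and the removal of redundant loop-closure constraints, which this sketch omits.

```python
# Toy sketch: backward value recursion over a discretized problem, nested
# inside a search over design parameters. Problem setup is illustrative.
import numpy as np

def dp_controls(step_cost, transition, n_states, n_controls, horizon):
    """Backward recursion: V_t(s) = min_u [ c(s, u) + V_{t+1}(f(s, u)) ]."""
    V = np.zeros(n_states)
    policy = np.zeros((horizon, n_states), dtype=int)
    for t in reversed(range(horizon)):
        Q = np.empty((n_states, n_controls))
        for s in range(n_states):
            for u in range(n_controls):
                Q[s, u] = step_cost(s, u) + V[transition(s, u)]
        policy[t] = Q.argmin(axis=1)
        V = Q.min(axis=1)
    return V, policy

def co_optimize(designs, make_problem, s0=0):
    """Outer search over candidate designs; inner DP solves for controls.
    make_problem(d) returns the arguments for dp_controls under design d."""
    costs = {d: dp_controls(*make_problem(d))[0][s0] for d in designs}
    return min(costs, key=costs.get)
```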

Transformer-based Neural Augmentation of Robot Simulation Representations

We propose to augment common simulation representations with a transformer-inspired architecture by training a network to predict the true state of robot building blocks given their simulation state. Because we augment individual building blocks rather than the full simulation state, our approach is modular, which improves generalizability and robustness.
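A minimal sketch of the per-block idea, assuming each building block's simulation state is treated as one token and the network predicts a residual correction toward the true (measured) state; all dimensions and names below are illustrative assumptions.

```python
# Illustrative sketch: tokens are building blocks, so the same network can
# process assemblies with different block counts.
import torch
import torch.nn as nn

class BlockAugmenter(nn.Module):
    def __init__(self, state_dim, d_model=128, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Linear(state_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.head = nn.Linear(d_model, state_dim)

    def forward(self, sim_states):           # (batch, n_blocks, state_dim)
        tokens = self.encoder(self.embed(sim_states))
        return sim_states + self.head(tokens)  # per-block residual correction
```

Training would then regress the corrected states against measured true states, which is consistent with the modularity argument above but is our assumption about the setup, not a detail from the abstract.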

Designing Robotically-Constructed Metal Frame Structures

We present a computational technique that aids in the design of structurally sound metal frames, tailored for robotic fabrication using an existing process that integrates automated bar bending, welding, and cutting. By aligning frames with structurally favorable orientations and decomposing models into fabricable units, we make the fabrication process scale-invariant, while frames align globally in an aesthetically pleasing and structurally informed manner.
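Purely as an illustration of what decomposing a model into fabricable units might look like, here is a simple greedy graph partition with a bound on unit size; the paper's actual decomposition also accounts for structural orientation and the bending, welding, and cutting process, which this sketch ignores.

```python
# Hypothetical sketch: partition a frame's bar-connectivity graph into
# units of at most max_bars bars via greedy BFS growth.
from collections import deque

def decompose_frame(adjacency, max_bars):
    """adjacency: dict mapping each bar to its connected bars."""
    unassigned = set(adjacency)
    units = []
    while unassigned:
        seed = next(iter(unassigned))
        unit, frontier = [], deque([seed])
        while frontier and len(unit) < max_bars:
            bar = frontier.popleft()
            if bar in unassigned:            # skip bars already placed
                unassigned.remove(bar)
                unit.append(bar)
                frontier.extend(n for n in adjacency[bar] if n in unassigned)
        units.append(unit)
    return units
```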
