Robot Motion Diffusion Model: Motion Generation for Robotic Characters
Recent advancements in generative motion models have achieved remarkable results, enabling the synthesis of lifelike human motions from textual descriptions. These kinematic approaches, while visually appealing, often produce motions that fail to adhere to physical constraints, resulting in artifacts that impede real-world deployment. To address this issue, we introduce a novel method that integrates kinematic generative models with physics-based character control. Our approach begins by training a reward surrogate to predict the performance of the downstream non-differentiable control task, offering an efficient and differentiable loss function. This reward model is then employed to fine-tune a baseline generative model, ensuring that the generated motions are not only diverse but also physically plausible for real-world scenarios. The result is the Robot Motion Diffusion Model (RobotMDM), a text-conditioned kinematic diffusion model that interfaces with a reinforcement learning-based tracking controller. We demonstrate the effectiveness of this method on a challenging humanoid robot, confirming its practical utility and robustness in dynamic environments.
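The core idea of the fine-tuning step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the surrogate here is a fixed linear model standing in for a trained reward network, and the update is applied directly to a sample rather than to the diffusion model's weights; the names `reward`, `finetune_step`, and the weight `lam` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reward surrogate: a tiny linear model that maps a flattened
# motion clip to a scalar "trackability" score. In the actual method this
# would be a network trained to predict downstream controller performance.
W = rng.standard_normal(32)

def reward(motion):
    return W @ motion                # differentiable in the motion

def reward_grad(motion):
    return W                         # d(reward)/d(motion) for the linear surrogate

def finetune_step(motion, lam=0.1):
    # Because the surrogate is differentiable, its gradient provides a
    # training signal; here we nudge the sample toward higher predicted
    # reward (in the real method this gradient flows into the generator).
    return motion + lam * reward_grad(motion)

motion = rng.standard_normal(32)
improved = finetune_step(motion)
```

The point of the surrogate is precisely that the downstream control task is non-differentiable, so its learned approximation supplies the gradients that fine-tuning needs.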
Soft Pneumatic Actuator Design using Differentiable Simulation
Interactive Design of Stylized Walking Gaits for Robotic Characters
Name Pronunciation Extraction and Reuse in Human-Agent Conversation
Optimal Design of Robotic Character Kinematics
In this paper, we propose a technique that simultaneously solves for optimal design and control parameters for a robotic character whose design is parameterized with configurable joints. At the technical core of our technique is an efficient solution strategy that uses dynamic programming to solve for optimal state, control, and design parameters, together with a strategy to remove redundant constraints that commonly exist in general robot assemblies with kinematic loops.
Transformer-based Neural Augmentation of Robot Simulation Representations
We propose to augment common simulation representations with a transformer-inspired architecture, by training a network to predict the true state of robot building blocks given their simulation state. Because we augment individual building blocks, rather than the full simulation state, our approach is modular, which improves generalizability and robustness.
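The per-block augmentation idea can be sketched as a residual correction predicted by self-attention over all building blocks. Everything below is an illustrative assumption (dimensions, the single attention layer, the function name `augment_blocks`), not the paper's architecture; it only shows why operating on blocks keeps the module independent of any one robot's overall state size.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def augment_blocks(sim_states, Wq, Wk, Wv):
    """Predict a correction for each building block's simulated state via
    self-attention over all blocks, and add it as a residual (sketch only)."""
    Q, K, V = sim_states @ Wq, sim_states @ Wk, sim_states @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[1]))   # block-to-block attention
    return sim_states + attn @ V                    # residual per-block correction

rng = np.random.default_rng(1)
n_blocks, d = 4, 8                                  # works for any n_blocks
sim = rng.standard_normal((n_blocks, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
corrected = augment_blocks(sim, Wq, Wk, Wv)
```

Because attention operates over a set of blocks, the same learned weights apply to assemblies with different numbers of building blocks, which is what makes the approach modular.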
DOC: Differentiable Optimal Control for Retargeting Motions onto Legged Robots
Legged robots are designed to perform highly dynamic motions. However, it remains challenging for users to retarget expressive motions onto these complex systems. In this paper, we present a Differentiable Optimal Control (DOC) framework that facilitates the transfer of rich motions from either animals or animations onto these robots.
Improving a Robot’s Turn-Taking Behavior in Dynamic Multiparty Interactions
We present ongoing work to develop a robust and natural turn-taking behavior for a social agent to engage a dynamically changing group in a conversation.
A Versatile Inverse Kinematics Formulation for Retargeting Motions onto Robots with Kinematic Loops
Robots with kinematic loops are known to have superior mechanical performance. However, these loops make their modeling and control challenging, which prevents more widespread use. In this paper, we describe a versatile Inverse Kinematics (IK) formulation for the retargeting of expressive motions onto mechanical systems with loops.
ADD: Analytically Differentiable Dynamics for Multi-Body Systems with Frictional Contact
We present a differentiable dynamics solver that is able to handle frictional contact for rigid and deformable objects within a unified framework. Through a principled mollification of normal and tangential contact forces, our method circumvents the main difficulties inherent to the non-smooth nature of frictional contact.
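The "principled mollification" can be illustrated with the normal force alone: a hard penalty force is non-smooth at the contact boundary, while a smoothed version is differentiable everywhere and converges to the hard model as the smoothing parameter shrinks. The softplus form, stiffness `k`, and parameter `eps` below are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def normal_force_hard(gap, k=1e3):
    # Non-smooth penalty: force acts only on penetration (gap < 0),
    # so the derivative jumps at gap = 0.
    return k * np.maximum(0.0, -gap)

def normal_force_mollified(gap, k=1e3, eps=1e-2):
    # Softplus mollification: smooth for all gap values, so gradients are
    # well-defined at the contact boundary; recovers the hard model as eps -> 0.
    return k * eps * np.log1p(np.exp(-gap / eps))

gaps = np.linspace(-0.05, 0.05, 11)
hard = normal_force_hard(gaps)
soft = normal_force_mollified(gaps)
```

Deep in penetration the two models agree closely, while near and outside contact the mollified force stays small but nonzero, which is what lets gradients propagate through contact events in a differentiable solver.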