Scalable Real-Time Motions with Modular Latent Generative Model and Smart Primitives

ACM Transactions on Graphics · SIGGRAPH 2026

NVIDIA · ETH Zürich · Simon Fraser University · The University of Texas at Austin

Overview

MotionBricks overview — unified control across animation and robotics

MotionBricks achieves 15,000 FPS and 2 ms latency, covering over 350,000 motion skills with a single neural backbone.

We introduce MotionBricks: a large-scale, real-time generative framework with a two-fold solution. First, we propose a large-scale modular latent generative backbone tailored for robust real-time motion generation, effectively modeling a dataset of over 350,000 motion clips with a single model. Second, we introduce smart primitives that provide a unified, robust, and intuitive interface for authoring both navigation and object interaction. Notably, MotionBricks applies to new downstream tasks in a zero-shot manner — no fine-tuning or task-specific tagging required — so applications can be assembled in a plug-and-play manner like stacking bricks, without expert animation knowledge.

Uncut Animation Demo

Here we showcase a 2:40-minute uncut UE5 demo spanning the full spectrum of navigation and object-scene interaction required for industry-level game design. Every motion is generated by neural networks: no foot-locking, no blending, no collision detection, and no hand-authored transitions.
To our knowledge, MotionBricks is the first neural model to reach this level of motion quality, controllability, complexity, and completeness.

Download high-resolution version (1.2 GB, 4K)

Smart Locomotion

Smart locomotion offers a unified, plug-and-play interface for navigation — seamlessly and robustly composing natural motions from arbitrary velocity, heading, and style commands, with no retraining or per-task tuning.
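To make the command interface concrete, here is a minimal sketch of what driving locomotion from velocity, heading, and style commands could look like. All names (`LocomotionCommand`, `blend_commands`, the field layout) are illustrative assumptions, not the actual MotionBricks API.

```python
from dataclasses import dataclass

@dataclass
class LocomotionCommand:
    """Hypothetical per-frame navigation command; fields are illustrative."""
    velocity: float  # target speed in m/s
    heading: float   # desired facing direction in radians
    style: str       # e.g. "zombie", "injured_leg", "skipping"

def blend_commands(a: LocomotionCommand, b: LocomotionCommand,
                   t: float) -> LocomotionCommand:
    """Linearly interpolate two commands, as a toy stand-in for the
    continuous runtime transitions dialed live from user input.
    Style switches discretely at the midpoint in this sketch."""
    return LocomotionCommand(
        velocity=(1 - t) * a.velocity + t * b.velocity,
        heading=(1 - t) * a.heading + t * b.heading,
        style=a.style if t < 0.5 else b.style,
    )

walk = LocomotionCommand(velocity=1.0, heading=0.0, style="walk")
run = LocomotionCommand(velocity=4.0, heading=0.0, style="run")
mid = blend_commands(walk, run, 0.5)
print(mid.velocity)  # 2.5
```

In an interactive session these commands would be produced every frame from gamepad or keyboard input and fed to the generative backbone, which is what makes transitions such as Idle → Walk → Jog → Run continuous rather than hand-authored.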

Single Styles

Distinct stylized locomotion — injured, zombie, skipping, strafing — generated zero-shot from a single smart-primitive prompt.

  • Zombie Style
  • Injured-Leg Style
  • Injured-Torso Style
  • Skipping Style
  • Strafing
  • Crouch Strafing

Mixture of Styles

Continuous runtime transitions across speed, direction, and gait — dialed live from user commands.

  • Freestyle
  • Idle · Walk · Jog · Run

Smart Objects

Smart objects specify scene and object interaction as a flexible set of proxy keyframes; the backbone fills in the approach, contact, and follow-through — with natural variation across runs.

  • Pick-up Sword
  • Falling
  • Jump over Bench
  • Sitting
  • Interactive Authoring
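The proxy-keyframe idea above can be sketched in a few lines. This is a hypothetical illustration of authoring an interaction as a sparse set of constraints; `ProxyKeyframe` and the field names are assumptions for exposition, not the released MotionBricks interface.

```python
from dataclasses import dataclass

@dataclass
class ProxyKeyframe:
    """Hypothetical sparse constraint: one body part pinned to a target
    position at one moment of the interaction."""
    time: float                           # seconds from interaction start
    joint: str                            # body part constrained at this key
    position: tuple[float, float, float]  # target position in world space

def sort_keyframes(keys: list[ProxyKeyframe]) -> list[ProxyKeyframe]:
    """The backbone would receive keys in time order and synthesize the
    approach, contact, and follow-through between them."""
    return sorted(keys, key=lambda k: k.time)

# "Pick-up sword": only the contact moments are authored; everything in
# between is generated, with natural variation across runs.
keys = [
    ProxyKeyframe(1.2, "right_hand", (0.3, 0.9, 0.0)),  # grasp the hilt
    ProxyKeyframe(0.0, "pelvis", (0.0, 1.0, -1.5)),     # start pose
    ProxyKeyframe(2.0, "pelvis", (0.0, 1.0, 0.5)),      # carry it away
]
ordered = sort_keyframes(keys)
print([k.joint for k in ordered])  # ['pelvis', 'right_hand', 'pelvis']
```

The appeal of this representation is that the author specifies only what must be true (the hand reaches the hilt at a given time and place) and the model fills in everything else, which is why the same keyframe set yields varied but plausible motions on each run.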

Smart Primitive Setup

Authoring smart-object behaviors end-to-end inside Unreal Engine 5 — plug-and-play like stacking bricks, with no animation graph wiring and no expert animation knowledge required.

In-betweening Comparisons

Side-by-side against six state-of-the-art in-betweening baselines.

Code & Data Release

MotionBricks is now a core component of NVIDIA's GR00T Whole-Body Control effort, powering the motion-generation layer of the stack.

As an initial preview release, the code is available at GR00T-WholeBodyControl/motionbricks. It ships two components: (1) an interactive G1 demo that lets anyone play with a lightweight MotionBricks-controlled G1 out of the box, and (2) a self-contained synthetic training pipeline with additional instructions for incorporating the BONES-SEED dataset. Together, these let the community reproduce our core training loop and start training their own MotionBricks-style policies immediately.

A full release, including a model fully embedded in GR00T Whole-Body Control's robotics formulation and the complete training pipeline, is targeted for roughly one month from now. Reproducibility experiments are already underway; please check back for updates.

Acknowledgments

We thank Cyrus Hogg, John Malaska, Will Telford, Jon Shepard, Simon Ouellet, Dmitry Korobchenko, Anna Minx, Edy Lim, Eugene Jeong, Sam Wu, Ehsan Hassani, Charles Zhou, Freya Li, Ling Li, Qiao Wang, and Lina Song for their support and guidance throughout the project.

We thank Alberto Guerra, Boon Cotter, Byeong Gyu Park, Craig Christian, Gabriele Leone, Yenal Kal, Pierre Fleau, Miguel Guerrero, and Marc-Andre Carbonneau from the art team for their help in creating the Unreal Engine 5 demo and its assets.

We also thank Hung Yu Ling, Yue Zhao, Danila Krivenkov, Evgenii Tumanov, Morteza Ramezanali, Winston Chen, Yu-Shiang Wong, Jiefeng Li, Yeongho Seol, Jun Saito, Michael Buttner, Haotian Zhang, Yifeng Jiang, Chen Tessler, Sanja Fidler, Umar Iqbal, Jan Kautz, Yan Chang, and Jim Fan for their helpful discussions and feedback.

BibTeX

Coming soon...