Volumetric Mapping

Fuse RGB-D frames into a TSDF world model and compute an ESDF for collision-aware planning.

cuRobo builds a persistent volumetric world model from depth observations and known geometry, then generates a dense Euclidean Signed Distance Field (ESDF) that enables fast, differentiable collision queries for motion generation. Depth frames are fused into a block-sparse TSDF via lock-free voxel-centric integration kernels, while analytic primitives (cuboids, meshes) are stamped directly into a separate geometry channel. On demand, an ESDF is computed at task-appropriate resolution using the Parallel Banding Algorithm (PBA+), providing \(O(1)\) trilinear distance queries for robot collision spheres.
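
For reference, depth fusion of this kind typically follows the classic weighted-average TSDF update per voxel; the exact weighting cuRobo's kernels apply is an implementation detail, but the standard form is:

\[
d_i = \mathrm{clamp}\!\left(\frac{z_{\text{obs}}(u_i) - z_i}{\tau},\, -1,\, 1\right), \qquad
D \leftarrow \frac{W\,D + w_i\,d_i}{W + w_i}, \qquad
W \leftarrow \min(W + w_i,\, W_{\max})
\]

where \(z_{\text{obs}}(u_i)\) is the depth measured at the pixel the voxel projects to, \(z_i\) the voxel's depth in the camera frame, \(\tau\) the truncation distance, and \(D, W\) the stored TSDF value and fusion weight.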

This tutorial walks through building a 3D map from an RGB-D video sequence and using it for collision-aware robot motion planning. You will learn how cuRobo’s Mapper API fuses depth frames into a compact world model and generates a signed distance field that robot planners can query efficiently.

By the end of this tutorial you will have:

  • Fused a sequence of depth images into a block-sparse TSDF

  • Stamped a known obstacle (cuboid) into the map as analytic geometry

  • Computed a dense ESDF over the workspace

  • Rendered a depth image and surface normals from any camera pose

  • Extracted a colored triangle mesh of the reconstruction
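
Before diving in, it helps to see the core fusion step in miniature. The sketch below is a from-scratch, pure-Python rendition of projective TSDF fusion on a small dense grid; the names, pinhole intrinsics, and dictionary-based storage are ours for illustration and bear no relation to cuRobo's block-sparse CUDA implementation.

```python
import math

# Illustrative projective-TSDF fusion over a tiny voxel grid.
# NOT cuRobo's implementation; all names and parameters are assumptions.

VOXEL = 0.05      # voxel size [m]
TRUNC = 0.15      # truncation distance tau [m]
FX = FY = 100.0   # assumed pinhole intrinsics
CX, CY = 32.0, 24.0
W_IMG, H_IMG = 64, 48

def integrate_frame(tsdf, weight, depth, dims, origin):
    """Fuse one depth image (camera at the world origin, identity pose)."""
    nx, ny, nz = dims
    for ix in range(nx):
        for iy in range(ny):
            for iz in range(nz):
                # Voxel center in camera coordinates.
                x = origin[0] + (ix + 0.5) * VOXEL
                y = origin[1] + (iy + 0.5) * VOXEL
                z = origin[2] + (iz + 0.5) * VOXEL
                if z <= 0:
                    continue
                # Project the voxel center into the image plane.
                u = math.floor(FX * x / z + CX)
                v = math.floor(FY * y / z + CY)
                if not (0 <= u < W_IMG and 0 <= v < H_IMG):
                    continue
                d_obs = depth[v][u]
                if d_obs <= 0:
                    continue  # invalid depth reading
                sdf = d_obs - z          # projective signed distance
                if sdf < -TRUNC:
                    continue             # voxel hidden far behind the surface
                d = max(-1.0, min(1.0, sdf / TRUNC))
                key = (ix, iy, iz)
                w_old = weight.get(key, 0.0)
                # Weighted running average of the truncated distance.
                tsdf[key] = (tsdf.get(key, 0.0) * w_old + d) / (w_old + 1.0)
                weight[key] = w_old + 1.0

# Synthetic frame: a flat wall 1 m in front of the camera.
depth_img = [[1.0] * W_IMG for _ in range(H_IMG)]
tsdf, wgt = {}, {}
integrate_frame(tsdf, wgt, depth_img, dims=(8, 8, 30), origin=(-0.2, -0.2, 0.0))
```

After fusing the frame, voxels just in front of the wall hold positive (free-space) values and voxels just behind it hold negative values; fusing more frames averages observations, which is what makes TSDF maps robust to depth noise.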

Step 1: Download the dataset

This tutorial uses the Sun3D indoor RGB-D dataset, which provides color images, depth maps, and ground-truth camera poses.

Quick start (downloads a single scene, ~1400 MB):

wget http://3dvision.princeton.edu/projects/2016/3DMatch/downloads/rgbd-datasets/sun3d-mit_76_studyroom-76-1studyroom2.zip
mkdir -p datasets/sun3d
unzip sun3d-mit_76_studyroom-76-1studyroom2.zip -d datasets/sun3d

The extracted directory should look like:

datasets/sun3d/sun3d-mit_76_studyroom-76-1studyroom2/
    camera-intrinsics.txt
    <sequence_name>/
        000001.color.png
        000001.depth.png
        000001.pose.txt
        ...
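
If you want to load frames yourself, the per-frame text files in this layout are plain whitespace-separated matrices: `camera-intrinsics.txt` holds a 3x3 matrix and each `*.pose.txt` a 4x4 camera-to-world transform, one row per line; the depth PNGs are commonly 16-bit with depth in millimeters (verify against your copy of the dataset). A minimal parser sketch, with a synthetic file for demonstration:

```python
import os
import tempfile
from pathlib import Path

def load_matrix(path, rows, cols):
    """Parse a whitespace-separated matrix text file into nested lists."""
    vals = [[float(tok) for tok in line.split()]
            for line in Path(path).read_text().splitlines() if line.strip()]
    if len(vals) != rows or any(len(r) != cols for r in vals):
        raise ValueError(f"expected a {rows}x{cols} matrix in {path}")
    return vals

# Demo on a synthetic identity pose file (stand-in for 000001.pose.txt).
with tempfile.TemporaryDirectory() as tmp:
    pose_file = os.path.join(tmp, "000001.pose.txt")
    Path(pose_file).write_text("1 0 0 0\n0 1 0 0\n0 0 1 0\n0 0 0 1\n")
    pose = load_matrix(pose_file, 4, 4)
```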

Step 2: Run the tutorial

python -m curobo.examples.getting_started.volumetric_mapping --root ./datasets/sun3d/sun3d-mit_76_studyroom-76-1studyroom2

To explore the reconstruction interactively, add --visualize. This starts a Viser server you can open in your browser at http://localhost:8080. Drag the gizmo to inspect ESDF slices through the scene.

python -m curobo.examples.getting_started.volumetric_mapping --root ./datasets/sun3d/sun3d-mit_76_studyroom-76-1studyroom2 --visualize

Step 3: Check the output

When the tutorial finishes successfully you will see:

Loading Sun3D dataset from ./datasets/sun3d...
Found 200 frames
Mapper initialized: 42.0 MB

Integrating 200 frames...
Rendering from first camera pose...
Saved renders to: ~/.cache/curobo/examples/volumetric_mapping

Computing ESDF...
Extracting mesh...
Saved mesh: output_mesh.ply (150,000 vertices)

The following files are written to ~/.cache/curobo/examples/volumetric_mapping/ (override with curobo._src.runtime.cache_dir):

  • rendered_depth.png: depth colormap rendered from the first camera pose

  • rendered_normals.png: surface normal colormap

  • rendered_shaded.png: Phong-shaded surface view

  • output_mesh.ply: colored triangle mesh of the full reconstruction
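
A quick way to sanity-check the exported mesh without opening a 3D viewer is to read its PLY header and confirm the vertex count. The helper below is our own sketch; it parses only the ASCII header, which both ASCII and binary PLY files begin with.

```python
import os
import tempfile

def ply_vertex_count(path):
    """Return the vertex count declared in a PLY file's header."""
    counts = {}
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="replace").strip()
            if line.startswith("element"):
                _, name, n = line.split()
                counts[name] = int(n)
            elif line == "end_header":
                break
    return counts.get("vertex", 0)

# Demo on a minimal ASCII PLY containing a single triangle.
demo_ply = os.path.join(tempfile.mkdtemp(), "tri.ply")
with open(demo_ply, "w") as f:
    f.write("ply\nformat ascii 1.0\n"
            "element vertex 3\n"
            "property float x\nproperty float y\nproperty float z\n"
            "element face 1\n"
            "property list uchar int vertex_indices\n"
            "end_header\n"
            "0 0 0\n1 0 0\n0 1 0\n3 0 1 2\n")
```

Pointed at `output_mesh.ply` instead of the demo file, this should report a count matching the log line above.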

Once you have run the tutorial, open curobo.examples.getting_started.volumetric_mapping in your editor. The inline comments walk through the key design decisions: why depth is filtered before integration, how the static geometry channel differs from depth-fused surfaces, and why the ESDF is generated at a coarser resolution than the TSDF.
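
One reason a coarser ESDF remains useful for planning is that distance queries interpolate trilinearly between voxel centers, so the field varies smoothly between samples. The function below is a plain-Python illustration of such a query on a dense grid; the names and grid layout are ours, and cuRobo's GPU implementation differs.

```python
import math

def esdf_trilinear(esdf, voxel, origin, p):
    """Trilinearly interpolate a dense ESDF (nested lists [x][y][z], meters)
    at world point p. Illustrative sketch only."""
    # Continuous voxel coordinates of p, relative to voxel *centers*.
    fx = (p[0] - origin[0]) / voxel - 0.5
    fy = (p[1] - origin[1]) / voxel - 0.5
    fz = (p[2] - origin[2]) / voxel - 0.5
    ix, iy, iz = math.floor(fx), math.floor(fy), math.floor(fz)
    tx, ty, tz = fx - ix, fy - iy, fz - iz
    d = 0.0
    # Blend the 8 surrounding voxel centers with trilinear weights.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((tx if dx else 1 - tx) *
                     (ty if dy else 1 - ty) *
                     (tz if dz else 1 - tz))
                d += w * esdf[ix + dx][iy + dy][iz + dz]
    return d

# 2x2x2 grid whose distance grows linearly along x (0 m to 1 m).
grid = [[[0.0, 0.0], [0.0, 0.0]],
        [[1.0, 1.0], [1.0, 1.0]]]
```

A robot collision sphere of radius r centered at p is then flagged as colliding when the interpolated distance falls below r plus any safety margin, which is the shape of query the planner issues for every sphere along a trajectory.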