• We propose a camera-only 3D environment mapping framework for self-driving scenes based on 3D Gaussian Splatting.
• We propose a self-supervised 2D ephemerality segmentation method that can be used as an autolabeling toolkit for dynamic scenes.
• We build the Mapverse benchmark to evaluate multitraverse 2D segmentation, 3D reconstruction, and neural rendering.
Abstract
Humans naturally retain memories of permanent elements, while ephemeral moments often slip through the cracks of memory.
This selective retention is crucial for robotic perception, localization, and mapping. To endow robots with this capability,
we introduce 3D Gaussian Mapping (3DGM), a self-supervised, camera-only offline mapping framework grounded in 3D Gaussian Splatting.
3DGM converts multitraverse RGB videos from the same region into a Gaussian-based environmental map while concurrently performing 2D ephemeral object segmentation.
Our key observation is that the environment remains consistent across traversals, while objects frequently change. This allows us to exploit self-supervision from repeated
traversals to achieve environment-object decomposition. More specifically, 3DGM formulates multitraverse environmental mapping as a robust differentiable
rendering problem, treating pixels of the environment and objects as inliers and outliers, respectively. Using robust feature distillation, feature residual mining,
and robust optimization, 3DGM jointly performs 3D mapping and 2D segmentation without human intervention. We build the Mapverse benchmark, sourced from the Ithaca365 and nuPlan datasets,
to evaluate our method in unsupervised 2D segmentation, 3D reconstruction, and neural rendering. Extensive results verify the effectiveness and potential of 3DGM for self-driving and robotics.
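The following is a minimal sketch of the robust optimization step described above, assuming a PyTorch-style pipeline in which a 3D Gaussian Splatting renderer produces an RGB view of the environment; it is not the exact released implementation. Pixels with large residuals are downweighted as outliers, and the resulting weight map doubles as a soft 2D ephemerality segmentation.

import torch

def robust_photometric_loss(rendered, observed, kappa=2.0, eps=1e-6):
    # rendered, observed: (3, H, W) RGB tensors in [0, 1]; `rendered` comes from
    # the Gaussian map, `observed` is the captured frame.
    residual = (rendered - observed).abs().mean(dim=0)    # per-pixel residual, (H, W)
    scale = residual.median().clamp_min(eps)              # robust scale estimate
    # Geman-McClure-style weights: close to 1 for small residuals (static environment),
    # approaching 0 for large residuals (likely ephemeral objects).
    weights = 1.0 / (1.0 + (residual / (kappa * scale)) ** 2)
    loss = (weights.detach() * residual).mean()           # outlier pixels barely affect the map
    ephemerality = 1.0 - weights                          # high where transient objects likely are
    return loss, ephemerality

As described above, 3DGM pairs such robust optimization with robust feature distillation and feature residual mining rather than photometric residuals alone.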
Key Takeaways
Consensus and dissensus across multiple traversals can be exploited as a self-supervision signal to decompose the environment from transient objects.
Repeated traversals of the same location provide more camera observations, improving 3D reconstruction without LiDAR.
Robust features produced by vision foundation models are crucial for identifying 2D consensus across traversals (see the sketch below).
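Below is a minimal sketch of this consensus check, assuming dense features (e.g., DINOv2-style) are extracted from the current frame and also rendered from the environment-only Gaussian map via feature distillation; the function name and threshold are illustrative assumptions, not the released code.

import torch.nn.functional as F

def mine_ephemeral_mask(image_feat, map_feat, threshold=0.5):
    # image_feat: (C, H, W) dense features of the current frame.
    # map_feat:   (C, H, W) features rendered from the environment-only map.
    sim = F.cosine_similarity(image_feat, map_feat, dim=0)  # agreement with multitraverse consensus, (H, W)
    residual = 1.0 - sim                                     # dissensus: likely ephemeral content
    return (residual > threshold).float()                    # binary 2D ephemerality mask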
Method
Mapverse Dataset
We build the Mapverse benchmark sourced from the Ithaca365 (Carlos A. Diaz-Ruiz et al., CVPR 2022) and nuPlan (Napat Karnchanachari et al., ICRA 2024) datasets, featuring 40 locations,
each with at least 10 traversals, totaling 467 driving video clips and 35,304 images. Ithaca365 emphasizes
its multitraverse nature in the original paper, whereas nuPlan does not explicitly highlight this property.
These two datasets capture diverse scenes to verify our method across various driving scenarios.
Both datasets use the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License (CC BY-NC-SA 4.0).
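For illustration, here is a hypothetical way to index such multitraverse data so that training always sees all traversals of the same location together; the class and field names are assumptions for this sketch, not the released Mapverse format.

from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Clip:
    location_id: int        # one of the 40 mapped locations
    traverse_id: int        # index of the traversal (each location has at least 10)
    image_paths: list = field(default_factory=list)   # frames of this driving video clip

def group_by_location(clips):
    # Bucket clips per location so multitraverse self-supervision can compare
    # repeated observations of the same place.
    buckets = defaultdict(list)
    for clip in clips:
        buckets[clip.location_id].append(clip)
    return buckets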
Mapverse-Ithaca365
Mapverse-nuPlan
Emergent Segmentation
Location 600 of Mapverse-Ithaca365
Location 2450 of Mapverse-Ithaca365
Location 24 of Mapverse-nuPlan
Location 28 of Mapverse-nuPlan
Location 30 of Mapverse-nuPlan
Left: Original RGB; Right: 2D Ephemerality Segmentation
Depth Map
Location 600 of Mapverse-Ithaca365
Location 2450 of Mapverse-Ithaca365
Location 24 of Mapverse-nuPlan
Location 28 of Mapverse-nuPlan
Location 30 of Mapverse-nuPlan
Left: Original RGB; Right: Depth Map (Environment-Only)
Neural Environment Rendering
Location 600 of Mapverse-Ithaca365
Location 2450 of Mapverse-Ithaca365
Location 24 of Mapverse-nuPlan
Location 28 of Mapverse-nuPlan
Location 30 of Mapverse-nuPlan
Left: Original RGB; Right: Rendered RGB (Environment-Only)
Citation
@article{li3dgm2024,
title={Memorize What Matters: Emergent Scene Decomposition from Multitraverse},
author={Li, Yiming and Wang, Zehong and Wang, Yue and Yu, Zhiding and Gojcic, Zan
and Pavone, Marco and Feng, Chen and Alvarez, Jose M},
journal={arXiv preprint},
year={2024}
}
Paper
Acknowledgment
We express our deep gratitude to Jiawei Yang
and Sanja Fidler for their valuable feedback throughout the project.
We also thank Yurong You and
Carlos A. Diaz-Ruiz for their support with the Ithaca365 dataset,
and Shijie Zhou for his help with high-dimensional feature rendering in 3DGS.
Yiming Li gratefully acknowledges support from the NVIDIA Graduate Fellowship Program.
This website design is borrowed from XCube.