# Training a Model
This page explains how to train a mindmap model using your generated or downloaded dataset. The training process learns spatial memory representations from demonstration data.
## Prerequisites
- Make sure you have set up mindmap and are inside the interactive Docker container.
- Obtain a dataset, either by generating one yourself or by downloading one (see Download Datasets).
## Training Process
Train a mindmap model for your chosen task. The command is identical for every task; only the `--task` value changes:

```shell
torchrun_local run_training.py \
    --task cube_stacking \
    --data_type rgbd_and_mesh \
    --feature_type radio_v25_b \
    --demos_train 0-6 \
    --demos_valset 7-9 \
    --dataset <LOCAL_DATASET_PATH> \
    --base_log_dir <OUTPUT_DIR>
```

Substitute `mug_in_drawer`, `drill_in_box`, or `stick_in_bin` for the `--task` value to train on the other tasks.
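If you want to queue training runs for all four tasks from one script, the command above can be assembled programmatically. A minimal sketch; the dataset and output paths are placeholders you must replace, and the script only prints the commands rather than launching them:

```python
import shlex

# Hypothetical paths -- substitute your own dataset and output locations.
DATASET = "/data/my_dataset"
OUTPUT_DIR = "/results/mindmap_runs"

TASKS = ["cube_stacking", "mug_in_drawer", "drill_in_box", "stick_in_bin"]

def build_command(task: str) -> list[str]:
    """Assemble the argv for one training run (same flags as above)."""
    return [
        "torchrun_local", "run_training.py",
        "--task", task,
        "--data_type", "rgbd_and_mesh",
        "--feature_type", "radio_v25_b",
        "--demos_train", "0-6",
        "--demos_valset", "7-9",
        "--dataset", DATASET,
        "--base_log_dir", OUTPUT_DIR,
    ]

for task in TASKS:
    # Print the command instead of executing it; pass the list to
    # subprocess.run(...) inside the container to actually launch training.
    print(shlex.join(build_command(task)))
```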
## Training Configuration

- **Training demonstrations:** demos 0-6
- **Validation demonstrations:** demos 7-9
- **Data type:** `rgbd_and_mesh`, RGB-D and mesh features for comprehensive spatial understanding
- **Feature type:** `radio_v25_b`, RADIO v2.5-B features for robust visual representation
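The `--demos_train 0-6` and `--demos_valset 7-9` values select demonstrations by index, with inclusive bounds (together they cover all 10 provided demos). A small sketch of how such a range string can be expanded; the helper name is ours, not part of mindmap:

```python
def expand_demo_range(spec: str) -> list[int]:
    """Expand a demo range spec like "0-6" or "3" into a list of indices.

    Assumes inclusive bounds, matching the split used on this page:
    demos 0-6 for training, 7-9 for validation.
    """
    if "-" in spec:
        start, end = spec.split("-")
        return list(range(int(start), int(end) + 1))
    return [int(spec)]

train = expand_demo_range("0-6")   # [0, 1, 2, 3, 4, 5, 6]
val = expand_demo_range("7-9")     # [7, 8, 9]
assert set(train).isdisjoint(val)  # train/validation demos must not overlap
```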
Replace the following placeholders:

- `<LOCAL_DATASET_PATH>`: path to your dataset directory
- `<OUTPUT_DIR>`: directory where checkpoints and logs will be saved
**Note:** The pre-trained checkpoints available in Download Checkpoints were trained on 100+ demonstrations. If you want to train on more than the 10 demonstrations provided in Download Datasets, you will need to generate additional datasets first.

**Note:** For more information on parameter choices and available options, see Parameters.
## Checkpoint Structure

Training checkpoints are automatically saved in the following structure:

```
📂 <OUTPUT_DIR>/checkpoints/<DATE_TIME_OF_TRAINING_START>/
├── best.pth
├── last.pth
└── training_args.json
```
### Checkpoint Files

- `best.pth`: the checkpoint with the lowest validation loss during training (recommended for evaluation)
- `last.pth`: the checkpoint from the final training epoch
- `training_args.json`: the complete model configuration and training parameters used
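After training, you can inspect a run directory and recover its configuration, for example before launching an evaluation. A sketch, assuming `training_args.json` is a flat JSON object mapping flag names to values; it demonstrates on a mock directory, so point `run_dir` at a real `<OUTPUT_DIR>/checkpoints/<DATE_TIME_OF_TRAINING_START>/` in practice:

```python
import json
import tempfile
from pathlib import Path

def load_run_config(run_dir: Path) -> dict:
    """Read the configuration a checkpoint was trained with."""
    with open(run_dir / "training_args.json") as f:
        return json.load(f)

# Demonstrate on a mock run directory; in a real run, best.pth next to
# training_args.json is the recommended checkpoint for evaluation.
with tempfile.TemporaryDirectory() as tmp:
    run_dir = Path(tmp)
    (run_dir / "training_args.json").write_text(
        json.dumps({"task": "cube_stacking", "feature_type": "radio_v25_b"})
    )
    cfg = load_run_config(run_dir)
    print(cfg["task"], cfg["feature_type"])
```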
## Model Configuration

When running a checkpoint in open loop evaluation or closed loop evaluation, the model automatically loads its configuration from `training_args.json`. This ensures consistency between training and evaluation.

To override the saved configuration, use the `--ignore_model_args_json` flag when running evaluation scripts.
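The resulting precedence can be pictured as a simple merge. A sketch of that logic under our own naming, not mindmap's internals: the saved training configuration wins by default, and the flag flips precedence to the command line:

```python
def resolve_config(saved_args: dict, cli_args: dict,
                   ignore_model_args_json: bool) -> dict:
    """Pick the effective evaluation config.

    By default the configuration saved at training time wins, which keeps
    training and evaluation consistent; --ignore_model_args_json flips
    precedence so command-line values take effect instead.
    """
    if ignore_model_args_json:
        return {**saved_args, **cli_args}  # CLI overrides saved values
    return {**cli_args, **saved_args}      # saved training config wins

saved = {"feature_type": "radio_v25_b", "data_type": "rgbd_and_mesh"}
cli = {"feature_type": "some_other_features"}

assert resolve_config(saved, cli, False)["feature_type"] == "radio_v25_b"
assert resolve_config(saved, cli, True)["feature_type"] == "some_other_features"
```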