# Dataset
Dataset loaders for common 3D benchmarks.
Defined in `warpconvnet/dataset/`.
## ModelNet40Dataset
```python
from warpconvnet.dataset.modelnet import ModelNet40Dataset

ModelNet40Dataset(
    root_dir: str = "./data/modelnet40",
    split: str = "train",  # "train" or "test"
)
```
PyTorch `Dataset` for ModelNet40 point cloud classification (40 object categories, 2048 points per shape).
Downloads and extracts the HDF5 dataset automatically on first use.
Each sample is a dict:
| Key | Shape | Dtype | Description |
|---|---|---|---|
| `"coords"` | (2048, 3) | float32 | Point coordinates |
| `"labels"` | scalar | int64 | Class label (0-39) |
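The dataset itself is not needed to see how these samples collate: a sketch with a synthetic stand-in (`FakeModelNet40` is a hypothetical name, not part of the library) that follows the documented sample schema shows what a `DataLoader` batch looks like.

```python
import torch
from torch.utils.data import DataLoader, Dataset


# Hypothetical stand-in mimicking the documented sample schema:
# {"coords": (2048, 3) float32, "labels": scalar int64}.
class FakeModelNet40(Dataset):
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return {
            "coords": torch.randn(2048, 3, dtype=torch.float32),
            "labels": torch.tensor(idx % 40, dtype=torch.int64),
        }


loader = DataLoader(FakeModelNet40(), batch_size=4)
batch = next(iter(loader))
print(batch["coords"].shape)  # torch.Size([4, 2048, 3])
print(batch["labels"].shape)  # torch.Size([4])
```

Because every sample has the same fixed number of points, the default collate function can stack them into dense batched tensors with no custom collation.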
## ScanNetDataset
```python
from warpconvnet.dataset.scannet import ScanNetDataset

ScanNetDataset(
    root: str = "./data/scannet",
    split: str = "train",             # "train" or "val"
    voxel_size: float | None = None,  # optional voxel downsampling
    out_type: str = "voxel",          # "point" or "voxel"
    min_coord: tuple | None = None,   # optional coordinate offset
)
```
PyTorch `Dataset` for ScanNet semantic segmentation using the pre-processed data from the OpenScene project.
Downloads the dataset automatically on first use (~1.3 GB).
Each sample is a dict:
| Key | Shape | Dtype | Description |
|---|---|---|---|
| `"coords"` | (N, 3) | float32 | Point coordinates |
| `"colors"` | (N, 3) | float32 | RGB colors |
| `"labels"` | (N,) | int64 | Semantic labels (20 classes, 255 = ignore) |
If `voxel_size` is set, points are voxel-downsampled before being returned.
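The downsampling step can be pictured as follows. This is an illustrative sketch, not WarpConvNet's internal implementation: coordinates are quantized to a grid of side `voxel_size`, and one point is kept per occupied voxel.

```python
import numpy as np


def voxel_downsample(coords: np.ndarray, voxel_size: float) -> np.ndarray:
    """Keep the first point in each occupied voxel (illustrative only)."""
    # Integer voxel index for every point.
    keys = np.floor(coords / voxel_size).astype(np.int64)
    # np.unique over rows gives one representative index per voxel.
    _, first_idx = np.unique(keys, axis=0, return_index=True)
    return coords[np.sort(first_idx)]


coords = np.array(
    [[0.01, 0.0, 0.0],
     [0.02, 0.0, 0.0],   # falls in the same 0.05 m voxel as the first point
     [0.20, 0.0, 0.0]],
    dtype=np.float32,
)
print(voxel_downsample(coords, 0.05).shape)  # (2, 3)
```

With `out_type="voxel"`, a reduction of this kind is what shrinks `N` before the sample is returned; with `out_type="point"`, the full point set is kept.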