Runtime Configuration

cuRobo runtime configuration flags.

These module-level flags control runtime behavior. Set them in Python before creating any cuRobo objects:

import curobo._src.runtime as runtime

runtime.cuda_graphs = False   # disable CUDA graph capture
runtime.torch_compile = True  # enable torch.compile

torch_compile: bool = False

Enable torch.compile for eligible cuRobo kernels.

torch_compile_slow: bool = False

Enable torch.compile for operations that are slow to compile.

torch_jit: bool = False

Enable torch.jit.script for eligible cuRobo functions.

cuda_graphs: bool = True

Enable CUDA graph capture for solvers. Significantly improves throughput at the cost of a one-time warm-up overhead.

cuda_graph_reset: bool = False

Enable CUDA graph reset when tensor memory references change. Requires CUDA 12.0+.

cuda_streams: bool = True

Enable separate CUDA streams for parallel kernel execution.

cuda_event_timers: bool = True

Enable CUDA event-based timers for profiling.

cuda_core_backend: bool = True

Enable the CUDA core runtime backend.

kernel_backend: str = 'auto'

Kernel backend to use. One of "auto", "cuda_core", or "pybind". "auto" tries cuda_core first and falls back to pybind.
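The "auto" fallback order can be pictured as a small resolver. The function below (`resolve_backend` and its availability argument are hypothetical, not cuRobo API) sketches the selection logic described above:

```python
# Hypothetical sketch of the "auto" backend choice: try cuda_core
# first, fall back to pybind when it is unavailable.
def resolve_backend(requested: str, cuda_core_available: bool) -> str:
    """Return a concrete kernel backend for a requested setting."""
    if requested == "auto":
        return "cuda_core" if cuda_core_available else "pybind"
    if requested in ("cuda_core", "pybind"):
        return requested
    raise ValueError(f"unknown kernel backend: {requested!r}")

print(resolve_backend("auto", cuda_core_available=False))  # pybind
```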

cache_dir: str = '~/.cache/curobo'

Directory for cuRobo outputs (example results, saved configs, etc.). Defaults to ~/.cache/curobo.
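A `~`-prefixed default like this is usually expanded to the user's home directory before use. The helper below is a hypothetical sketch of that resolution, not cuRobo's actual implementation:

```python
import os

# Hypothetical helper: expand '~' and normalize to an absolute path.
# cuRobo's actual cache-path resolution may differ.
def resolve_cache_dir(cache_dir: str = "~/.cache/curobo") -> str:
    return os.path.abspath(os.path.expanduser(cache_dir))
```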

debug: bool = False

Enable general debug mode (additional checks and logging).

debug_cuda_graphs: bool = False

Enable CUDA graph debug mode.

debug_cuda_compile: bool = False

Enable debug compilation flags for CUDA kernels.

debug_nan: bool = False

When enabled, disables CUDA graphs and checks for NaN values during optimization.
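Conceptually, the NaN check acts as a guard over intermediate results. This standalone sketch (the `check_for_nan` helper is hypothetical and uses plain Python rather than cuRobo or torch) shows the idea:

```python
import math

# Hypothetical guard mirroring the kind of check debug_nan enables:
# scan a sequence of floats and fail loudly on the first NaN.
def check_for_nan(values, name="result"):
    for i, v in enumerate(values):
        if math.isnan(v):
            raise ValueError(f"NaN detected in {name} at index {i}")

check_for_nan([0.0, 1.5, -2.0])  # passes silently
```

Failing fast like this is why the flag also disables CUDA graphs: graph replay would otherwise skip the Python-level checks.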

debug_timers: bool = False

Enable debug timing output and print results to stdout.

debug_trajopt: bool = False

Print trajectory optimization results for debugging.

profiler: bool = False

Enable profiler hooks (e.g., for Nsight Systems).