curobo.types module¶
Common data types module.
This module provides common data types used throughout CuRobo for representing robot states, poses, camera observations, and tensor device configurations.
Example
```python
import torch

from curobo.types import JointState, Pose, CameraObservation, DeviceCfg

# Create joint state
joint_state = JointState.from_position([0.0, -0.5, 0.0, -2.0, 0.0, 1.5, 0.0])

# Create pose
pose = Pose(
    position=[0.5, 0.0, 0.5],
    quaternion=[1, 0, 0, 0],  # w, x, y, z
)

# Create camera observation
camera_obs = CameraObservation(
    depth_image=depth_tensor,
    intrinsics=camera_intrinsics,
    pose=camera_pose,
)

# Specify tensor device and dtype
device_cfg = DeviceCfg(device="cuda:0", dtype=torch.float32)
```
- class JointState(
- position,
- velocity=None,
- acceleration=None,
- joint_names=None,
- jerk=None,
- device_cfg=DeviceCfg(device=device(type='cuda',
- index=0),
- dtype=torch.float32,
- collision_geometry_dtype=torch.float32,
- collision_gradient_dtype=torch.float32,
- collision_distance_dtype=torch.float32),
- dt=None,
- aux_data=<factory>,
- knot=None,
- knot_dt=None,
- control_space=None,
Bases:
curobo._src.state.state_base.State
Joint-space robot state (position and optional derivatives).
Convention: use joint_state or js for JointState objects; use q for raw position tensors.
- Parameters:
position (List[float] | torch.Tensor)
velocity (List[float] | torch.Tensor | None)
acceleration (List[float] | torch.Tensor | None)
jerk (List[float] | torch.Tensor | None)
device_cfg (curobo._src.types.device_cfg.DeviceCfg)
dt (torch.Tensor | None)
aux_data (dict)
knot (torch.Tensor | None)
knot_dt (torch.Tensor | None)
control_space (curobo._src.types.control_space.ControlSpace | None)
- position: List[float] | torch.Tensor¶
- velocity: List[float] | torch.Tensor | None = None¶
- acceleration: List[float] | torch.Tensor | None = None¶
- jerk: List[float] | torch.Tensor | None = None¶
- device_cfg: curobo._src.types.device_cfg.DeviceCfg = DeviceCfg(device=device(type='cuda', index=0), dtype=torch.float32, collision_geometry_dtype=torch.float32, collision_gradient_dtype=torch.float32, collision_distance_dtype=torch.float32)¶
- dt: torch.Tensor | None = None¶
- knot: torch.Tensor | None = None¶
- knot_dt: torch.Tensor | None = None¶
- data_ptr()¶
- property device: torch.device¶
- property dtype: torch.dtype¶
- property shape: torch.Size¶
- static from_numpy(
- joint_names,
- position,
- velocity=None,
- acceleration=None,
- jerk=None,
- device_cfg=DeviceCfg(device=device(type='cuda', index=0), dtype=torch.float32, collision_geometry_dtype=torch.float32, collision_gradient_dtype=torch.float32, collision_distance_dtype=torch.float32),
- Parameters:
position (numpy.ndarray)
velocity (numpy.ndarray | None)
acceleration (numpy.ndarray | None)
jerk (numpy.ndarray | None)
device_cfg (curobo._src.types.device_cfg.DeviceCfg)
- static from_position(
- position,
- joint_names=None,
- Parameters:
position (torch.Tensor)
- static from_state_tensor(
- state_tensor,
- joint_names=None,
- dof=7,
- static from_list(
- position,
- velocity,
- acceleration,
- device_cfg,
- Parameters:
device_cfg (curobo._src.types.device_cfg.DeviceCfg)
- static zeros(
- size,
- device_cfg,
- joint_names=None,
- Parameters:
device_cfg (curobo._src.types.device_cfg.DeviceCfg)
- to(device_cfg)¶
- Parameters:
device_cfg (curobo._src.types.device_cfg.DeviceCfg)
- clone()¶
- detach()¶
- copy_reference(
- in_joint_state,
Copy reference to in_joint_state
- Parameters:
in_joint_state (curobo._src.state.state_joint.JointState)
- copy_(
- in_joint_state,
- allow_clone=True,
- Parameters:
in_joint_state (curobo._src.state.state_joint.JointState)
allow_clone (bool)
- copy_data(in_joint_state)¶
Copy data from in_joint_state to self
- Parameters:
in_joint_state (curobo._src.state.state_joint.JointState)
- view(*shape)¶
- blend(coeff, new_state)¶
- Parameters:
coeff (curobo._src.state.filter_coeff.FilterCoeff)
new_state (curobo._src.state.state_joint.JointState)
- get_state_tensor()¶
- stack(new_state)¶
- Parameters:
new_state (curobo._src.state.state_joint.JointState)
- cat(other_js, dim)¶
- Parameters:
other_js (curobo._src.state.state_joint.JointState)
dim (int)
- apply_kernel(kernel_mat)¶
- scale(dt)¶
- Parameters:
dt (float | torch.Tensor)
- scale_by_dt(dt, new_dt)¶
- Parameters:
dt (torch.Tensor)
new_dt (torch.Tensor)
- scale_time(new_dt)¶
- Parameters:
new_dt (torch.Tensor)
- calculate_fd_from_position(
- dt=None,
- Parameters:
dt (torch.Tensor | None)
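As a hedged illustration of the finite differencing this method performs, velocities can be recovered from a position trajectory sampled at a fixed timestep; the values and the plain-Python loop below are illustrative, not CuRobo's implementation:

```python
# Illustrative finite-difference sketch: recover velocities from a
# position trajectory sampled at a fixed timestep dt. The trajectory
# values are hypothetical.
dt = 0.1
positions = [0.0, 0.1, 0.3, 0.6]  # one joint over four timesteps

velocities = [(positions[i + 1] - positions[i]) / dt
              for i in range(len(positions) - 1)]
# velocities is approximately [1.0, 2.0, 3.0]
```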
- reorder(joint_names)¶
Reorder joint state to match the given joint names order.
- reindex(joint_names)¶
Reindex joint state in-place to match the given joint names order.
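A minimal sketch of the column permutation behind reorder() and reindex(), using hypothetical joint names and values (not from CuRobo's content):

```python
# Sketch of the permutation behind reorder()/reindex(): move each
# position entry so it follows a new joint-name order. Joint names and
# values here are hypothetical.
src_names = ["joint_1", "joint_2", "joint_3"]
position = [0.1, 0.2, 0.3]  # one value per joint, in src_names order

new_order = ["joint_3", "joint_1", "joint_2"]
perm = [src_names.index(n) for n in new_order]
reordered = [position[i] for i in perm]
# reordered == [0.3, 0.1, 0.2]
```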
- get_augmented_joint_state(
- joint_names,
- lock_joints=None,
- Return type:
- Parameters:
lock_joints (curobo._src.state.state_joint.JointState | None)
- append_joints(joint_state)¶
- Parameters:
joint_state (curobo._src.state.state_joint.JointState)
- gather_by_seed_index(idx)¶
- Parameters:
idx (torch.Tensor)
- copy_only_index(
- in_joint_state,
- idx,
- Parameters:
in_joint_state (curobo._src.state.state_joint.JointState)
idx (int | torch.Tensor)
- copy_at_index(
- in_joint_state,
- idx,
- Parameters:
in_joint_state (curobo._src.state.state_joint.JointState)
idx (int | torch.Tensor)
- copy_at_batch_seed_indices(
- in_joint_state,
- batch_idx,
- seed_idx,
- Parameters:
in_joint_state (curobo._src.state.state_joint.JointState)
batch_idx (torch.Tensor)
seed_idx (torch.Tensor)
- __init__(
- position,
- velocity=None,
- acceleration=None,
- joint_names=None,
- jerk=None,
- device_cfg=DeviceCfg(device=device(type='cuda',
- index=0),
- dtype=torch.float32,
- collision_geometry_dtype=torch.float32,
- collision_gradient_dtype=torch.float32,
- collision_distance_dtype=torch.float32),
- dt=None,
- aux_data=<factory>,
- knot=None,
- knot_dt=None,
- control_space=None,
- Parameters:
position (List[float] | torch.Tensor)
velocity (List[float] | torch.Tensor | None)
acceleration (List[float] | torch.Tensor | None)
jerk (List[float] | torch.Tensor | None)
device_cfg (curobo._src.types.device_cfg.DeviceCfg)
dt (torch.Tensor | None)
aux_data (dict)
knot (torch.Tensor | None)
knot_dt (torch.Tensor | None)
control_space (curobo._src.types.control_space.ControlSpace | None)
- Return type:
None
- class RobotState(
- joint_state,
- joint_torque=None,
- cuda_robot_model_state=None,
Bases:
curobo._src.state.state_base.State
- Parameters:
joint_state (curobo._src.state.state_joint.JointState)
joint_torque (torch.Tensor | None)
cuda_robot_model_state (curobo._src.robot.kinematics.kinematics_state.KinematicsState | None)
- joint_state: curobo._src.state.state_joint.JointState¶
- joint_torque: torch.Tensor | None = None¶
- cuda_robot_model_state: curobo._src.robot.kinematics.kinematics_state.KinematicsState | None = None¶
- data_ptr()¶
- detach()¶
- property robot_spheres: torch.Tensor | None¶
- property link_poses: curobo._src.types.tool_pose.ToolPose | None¶
- property tool_poses: curobo._src.types.tool_pose.ToolPose | None¶
- copy_at_batch_seed_indices(
- other,
- batch_idx,
- seed_idx,
Copy robot state at specific batch and seed indices
- Parameters:
batch_idx (torch.Tensor)
seed_idx (torch.Tensor)
- copy_only_index(other, index)¶
Copy robot state at specific indices
- Parameters:
index (int | torch.Tensor)
- clone()¶
- Return type:
- copy_(other)¶
- Parameters:
- __init__(
- joint_state,
- joint_torque=None,
- cuda_robot_model_state=None,
- Parameters:
joint_state (curobo._src.state.state_joint.JointState)
joint_torque (torch.Tensor | None)
cuda_robot_model_state (curobo._src.robot.kinematics.kinematics_state.KinematicsState | None)
- Return type:
None
- class Pose(
- position=None,
- quaternion=None,
- rotation=None,
- batch_size=1,
- name='ee_link',
- normalize_rotation=False,
Bases:
Sequence
Pose representation used in CuRobo. You can initialize a pose by calling pose = Pose(position, quaternion).
- Parameters:
position (torch.Tensor | None)
quaternion (torch.Tensor | None)
rotation (torch.Tensor | None)
batch_size (int)
name (str)
normalize_rotation (bool)
- position: torch.Tensor | None = None¶
Position is represented as x, y, z, in meters
- quaternion: torch.Tensor | None = None¶
Quaternion is represented as w, x, y, z.
- rotation: torch.Tensor | None = None¶
Rotation represents orientation as a 3x3 rotation matrix.
- normalize_rotation: bool = False¶
The quaternion input will be normalized when this flag is enabled. This is recommended when a pose comes from an external source, as some programs do not send normalized quaternions.
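A sketch of the normalization this flag enables, with a hypothetical un-normalized input (not CuRobo's internal code):

```python
import math

# Sketch of the normalization that normalize_rotation enables: rescale
# the (w, x, y, z) quaternion to unit length before it is used.
q = (2.0, 0.0, 0.0, 0.0)  # un-normalized identity rotation
norm = math.sqrt(sum(c * c for c in q))
q_unit = tuple(c / norm for c in q)
# q_unit == (1.0, 0.0, 0.0, 0.0)
```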
- detach()¶
- property batch¶
- property device¶
- property ndim¶
- static from_matrix(matrix)¶
- Parameters:
matrix (numpy.ndarray | torch.Tensor)
- classmethod from_euler_xyz(
- euler_xyz,
- position=None,
Create a Pose from XYZ Euler angles and optional position.
This uses the extrinsic XYZ convention: rotations are applied around fixed world axes in order X, Y, Z.
- classmethod from_euler_xyz_intrinsic(
- euler_xyz,
- position=None,
Create a Pose from XYZ Euler angles using intrinsic convention.
Intrinsic XYZ: each rotation is applied around the current body-frame axis (X → Y → Z). This matches URDF virtual base joint chains where successive revolute joints rotate in the parent frame.
See from_euler_xyz for the extrinsic convention.
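The difference between the two conventions can be sketched with plain rotation matrices (this is standard Euler-angle math, not CuRobo's implementation; angles are illustrative):

```python
import math

# Extrinsic XYZ composes rotations about fixed world axes; intrinsic
# XYZ composes about the moving body axes. The composition order of the
# same elementary matrices is reversed between the two.

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

ax, ay, az = 0.1, 0.2, 0.3
# Extrinsic XYZ: world-frame X applied first, so Rx ends up rightmost.
R_extrinsic = matmul(rot_z(az), matmul(rot_y(ay), rot_x(ax)))
# Intrinsic XYZ: body-frame X applied first, so Rx ends up leftmost.
R_intrinsic = matmul(rot_x(ax), matmul(rot_y(ay), rot_z(az)))
# The two matrices differ for generic angles; intrinsic XYZ with the
# same angles equals extrinsic ZYX.
```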
- get_rotation_matrix()¶
- get_rotation()¶
- stack(other_pose)¶
- Parameters:
other_pose (curobo._src.types.pose.Pose)
- unsqueeze(dim=-1)¶
- squeeze(dim=-1)¶
- apply_kernel(kernel_mat)¶
- classmethod from_numpy(
- position,
- quaternion,
- device_cfg=DeviceCfg(device=device(type='cuda', index=0), dtype=torch.float32, collision_geometry_dtype=torch.float32, collision_gradient_dtype=torch.float32, collision_distance_dtype=torch.float32),
- Parameters:
position (numpy.ndarray)
quaternion (numpy.ndarray)
device_cfg (curobo._src.types.device_cfg.DeviceCfg)
- classmethod from_list(
- pose,
- device_cfg=DeviceCfg(device=device(type='cuda', index=0), dtype=torch.float32, collision_geometry_dtype=torch.float32, collision_gradient_dtype=torch.float32, collision_distance_dtype=torch.float32),
- q_xyzw=False,
- Parameters:
device_cfg (curobo._src.types.device_cfg.DeviceCfg)
- classmethod from_batch_list(
- pose,
- device_cfg=DeviceCfg(device=device(type='cuda', index=0), dtype=torch.float32, collision_geometry_dtype=torch.float32, collision_gradient_dtype=torch.float32, collision_distance_dtype=torch.float32),
- q_xyzw=False,
- Parameters:
device_cfg (curobo._src.types.device_cfg.DeviceCfg)
- to_list(q_xyzw=False)¶
- tolist(q_xyzw=False)¶
- clone()¶
- to(device_cfg=None, device=None)¶
- Parameters:
device_cfg (curobo._src.types.device_cfg.DeviceCfg | None)
device (torch.device | None)
- get_matrix(out_matrix=None)¶
- Parameters:
out_matrix (torch.Tensor | None)
- get_affine_matrix(out_matrix=None)¶
- Parameters:
out_matrix (torch.Tensor | None)
- get_numpy_affine_matrix()¶
- get_numpy_matrix()¶
- get_pose_vector()¶
- copy_(pose)¶
Copies pose data from another memory buffer. This will create a new instance if the buffers are not the same shape.
- Parameters:
pose (Pose)
- static cat(pose_list)¶
- Parameters:
pose_list (List[curobo._src.types.pose.Pose])
- distance(
- other_pose,
- use_phi3=False,
- Parameters:
other_pose (curobo._src.types.pose.Pose)
use_phi3 (bool)
- angular_distance(
- other_pose,
- use_phi3=False,
This function computes the angular distance using either phi_3 or axis-angle.
See Huynh, Du Q. “Metrics for 3D rotations: Comparison and analysis.” Journal of Mathematical Imaging and Vision 35 (2009): 155-164 for phi_3 metric.
- Parameters:
other_pose (curobo._src.types.pose.Pose) – other pose; quaternion shape […, 4]
use_phi3 (bool)
- Returns:
Angular distance in range [0, 1]. Shape: […]
- linear_distance(other_pose)¶
- Parameters:
other_pose (curobo._src.types.pose.Pose)
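A hedged sketch in the spirit of the cited phi_3 metric from Huynh (2009), d = arccos(|q1 · q2|), scaled here by 2/π so the result lies in [0, 1]; the exact scaling CuRobo uses is an assumption:

```python
import math

# Quaternion angular distance sketch, phi_3-style. The 2/pi scaling to
# [0, 1] is an assumption, not confirmed CuRobo behavior.

def quat_dot(q1, q2):
    return sum(a * b for a, b in zip(q1, q2))

def angular_distance(q1, q2):
    # abs() makes q and -q (the same rotation) zero distance apart.
    d = min(1.0, abs(quat_dot(q1, q2)))
    return math.acos(d) / (math.pi / 2.0)

identity = (1.0, 0.0, 0.0, 0.0)     # w, x, y, z
half_turn_z = (0.0, 0.0, 0.0, 1.0)  # 180 degrees about Z

d0 = angular_distance(identity, identity)     # 0.0
d1 = angular_distance(identity, half_turn_z)  # 1.0 (maximally far)
```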
- multiply(
- other_pose,
- out_position=None,
- out_quaternion=None,
- Parameters:
other_pose (curobo._src.types.pose.Pose)
out_position (torch.Tensor | None)
out_quaternion (torch.Tensor | None)
- transform_point(
- points,
- out_buffer=None,
- gp_out=None,
- gq_out=None,
- gpt_out=None,
- Parameters:
points (torch.Tensor)
out_buffer (torch.Tensor | None)
gp_out (torch.Tensor | None)
gq_out (torch.Tensor | None)
gpt_out (torch.Tensor | None)
- transform_points(
- points,
- out_buffer=None,
- gp_out=None,
- gq_out=None,
- gpt_out=None,
- Parameters:
points (torch.Tensor)
out_buffer (torch.Tensor | None)
gp_out (torch.Tensor | None)
gq_out (torch.Tensor | None)
gpt_out (torch.Tensor | None)
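The math these transform methods apply can be sketched for a single point as p' = R p + t, with the rotation taken directly from the (w, x, y, z) quaternion; this is the standard quaternion-rotation formula, not CuRobo's CUDA kernel, and the pose values are illustrative:

```python
import math

# Rotate-then-translate a point by a pose using
# p_rot = p + 2w (u x p) + 2 u x (u x p), where u = (x, y, z).

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def transform_point(position, quaternion, p):
    w, u = quaternion[0], quaternion[1:]
    t = tuple(2.0 * c for c in cross(u, p))
    ut = cross(u, t)
    rotated = tuple(p[i] + w * t[i] + ut[i] for i in range(3))
    return tuple(rotated[i] + position[i] for i in range(3))

# 90 degrees about Z maps (1, 0, 0) to (0, 1, 0); adding the
# translation (0.5, 0, 0.5) gives approximately (0.5, 1.0, 0.5).
s = math.sqrt(0.5)
p_out = transform_point((0.5, 0.0, 0.5), (s, 0.0, 0.0, s), (1.0, 0.0, 0.0))
```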
- batch_transform_points(
- points,
- out_buffer=None,
- gp_out=None,
- gq_out=None,
- gpt_out=None,
- Parameters:
points (torch.Tensor)
out_buffer (torch.Tensor | None)
gp_out (torch.Tensor | None)
gq_out (torch.Tensor | None)
gpt_out (torch.Tensor | None)
- batch_transform_points_inverse(
- points,
- out_buffer=None,
- gp_out=None,
- gq_out=None,
- gpt_out=None,
- Parameters:
points (torch.Tensor)
out_buffer (torch.Tensor | None)
gp_out (torch.Tensor | None)
gq_out (torch.Tensor | None)
gpt_out (torch.Tensor | None)
- property shape¶
- compute_offset_pose(offset)¶
- Return type:
- Parameters:
offset (curobo._src.types.pose.Pose)
- compute_local_pose(world_pose)¶
- Return type:
- Parameters:
world_pose (curobo._src.types.pose.Pose)
- __init__(
- position=None,
- quaternion=None,
- rotation=None,
- batch_size=1,
- name='ee_link',
- normalize_rotation=False,
- Parameters:
position (torch.Tensor | None)
quaternion (torch.Tensor | None)
rotation (torch.Tensor | None)
batch_size (int)
name (str)
normalize_rotation (bool)
- Return type:
None
- class ToolPose(
- tool_frames,
- position,
- quaternion,
Bases:
Sequence
4D FK output: [batch, horizon, num_links, 3/4]. No goalset dimension; this represents the current state of the robot, not a goal target.
- Parameters:
position (torch.Tensor)
quaternion (torch.Tensor)
- position: torch.Tensor¶
Position tensor [batch, horizon, num_links, 3].
- quaternion: torch.Tensor¶
Quaternion tensor [batch, horizon, num_links, 4] (wxyz).
- property shape¶
- property ndim¶
- property device¶
- get_link_pose(
- link_name,
- make_contiguous=False,
Extract a single link as a 2D Pose [B*H, 3/4].
- to_dict(make_contiguous=True)¶
Convert to dictionary mapping link names to 2D Poses.
- copy_(other)¶
- Parameters:
- reorder_links(
- ordered_tool_frames,
Reorder links. Returns a new ToolPose.
- as_goal(
- ordered_tool_frames=None,
Convert to GoalToolPose by adding a goalset dimension (num_goalset=1).
- __init__(
- tool_frames,
- position,
- quaternion,
- Parameters:
position (torch.Tensor)
quaternion (torch.Tensor)
- Return type:
None
- class GoalToolPose(
- tool_frames,
- position,
- quaternion,
Bases:
Sequence
5D goal specification: [batch, horizon, num_links, num_goalset, 3/4]. Used only on the goal/target side. horizon=1 for static goals, horizon>1 for per-timestep targets.
- Parameters:
position (torch.Tensor)
quaternion (torch.Tensor)
- position: torch.Tensor¶
Position tensor [batch, horizon, num_links, num_goalset, 3].
- quaternion: torch.Tensor¶
Quaternion tensor [batch, horizon, num_links, num_goalset, 4] (wxyz).
- property shape¶
- property ndim¶
- property device¶
- classmethod from_poses(
- pose_dict,
- ordered_tool_frames=None,
- num_goalset=1,
Build from per-link Pose objects.
Each Pose has position [batch, 3] or [batch * num_goalset, 3].
- Return type:
- Returns:
GoalToolPose [batch, 1, num_links, num_goalset, 3/4] (horizon=1).
- Parameters:
- get_link_pose(
- link_name,
- make_contiguous=False,
Extract a single link as a 2D Pose [B*H*G, 3/4].
- copy_(other)¶
- Parameters:
- clone()¶
- Return type:
- detach()¶
- Return type:
- reorder_links(
- ordered_tool_frames,
- Return type:
- Parameters:
- __init__(
- tool_frames,
- position,
- quaternion,
- Parameters:
position (torch.Tensor)
quaternion (torch.Tensor)
- Return type:
None
- class ToolPoseCriteria(
- terminal_pose_axes_weight_factor=None,
- non_terminal_pose_axes_weight_factor=None,
- terminal_pose_convergence_tolerance=None,
- non_terminal_pose_convergence_tolerance=None,
- project_distance_to_goal=False,
- device_cfg=DeviceCfg(device=device(type='cuda', index=0), dtype=torch.float32, collision_geometry_dtype=torch.float32, collision_gradient_dtype=torch.float32, collision_distance_dtype=torch.float32),
Bases:
object
Criteria for a link pose.
This class defines the nature of the cost between the current pose and the goal pose. It is used as part of the goalset cost term.
- Parameters:
terminal_pose_axes_weight_factor (torch.Tensor | List[float] | None)
non_terminal_pose_axes_weight_factor (torch.Tensor | List[float] | None)
terminal_pose_convergence_tolerance (torch.Tensor | List[float] | None)
non_terminal_pose_convergence_tolerance (torch.Tensor | List[float] | None)
project_distance_to_goal (torch.Tensor | bool)
device_cfg (curobo._src.types.device_cfg.DeviceCfg)
- terminal_pose_axes_weight_factor: torch.Tensor | List[float] | None = None¶
Factor vector that scales each axis (x,y,z,roll,pitch,yaw) of the terminal position and orientation. This is multiplied with the weight.
- non_terminal_pose_axes_weight_factor: torch.Tensor | List[float] | None = None¶
Factor vector that scales each axis (x,y,z,roll,pitch,yaw) of the non-terminal position and orientation. This is multiplied with the weight.
- terminal_pose_convergence_tolerance: torch.Tensor | List[float] | None = None¶
Convergence tolerance for the terminal position and orientation. This should be of shape (2,). Position unit is meter and orientation unit is radian.
- non_terminal_pose_convergence_tolerance: torch.Tensor | List[float] | None = None¶
Convergence tolerance for the non-terminal position and orientation. This should be of shape (2,). Position unit is meter and orientation unit is radian.
- project_distance_to_goal: torch.Tensor | bool = False¶
If true, the distance is computed after projecting the current pose to the goal frame.
- device_cfg: curobo._src.types.device_cfg.DeviceCfg = DeviceCfg(device=device(type='cuda', index=0), dtype=torch.float32, collision_geometry_dtype=torch.float32, collision_gradient_dtype=torch.float32, collision_distance_dtype=torch.float32)¶
- clone()¶
- copy_(other)¶
- Parameters:
other (curobo._src.cost.tool_pose_criteria.ToolPoseCriteria)
- static track_orientation(
- rpy=[0.001, 0.001, 0.001],
- non_terminal_scale=1.0,
- static track_position_and_orientation(
- xyz=[1.0, 1.0, 1.0],
- rpy=[1.0, 1.0, 1.0],
- non_terminal_scale=0.1,
- static linear_motion(
- axis='z',
- non_terminal_scale=1.0,
- project_distance_to_goal=True,
- static disabled()¶
Create criteria that disables pose tracking for this tool frame.
Use this when you want to include a tool frame in the solver but not apply any pose cost to it.
- Returns:
ToolPoseCriteria with all weight factors set to zero.
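How an all-zero axis weight factor disables the cost can be sketched with a weighted pose error over the six axes (x, y, z, roll, pitch, yaw); the error values and the simple weighted sum are illustrative, not CuRobo's cost-term code:

```python
# A 6-element weight factor scales each axis of the pose error.
# All-zero factors, as produced by disabled(), zero out the cost for
# that tool frame. Values are hypothetical.
pose_error = [0.05, 0.0, 0.02, 0.0, 0.1, 0.0]

tracking = [1.0] * 6          # track all six axes
disabled_factor = [0.0] * 6   # what disabled() is documented to produce

cost_tracking = sum(e * w for e, w in zip(pose_error, tracking))
cost_disabled = sum(e * w for e, w in zip(pose_error, disabled_factor))
# cost_disabled == 0.0
```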
- __init__(
- terminal_pose_axes_weight_factor=None,
- non_terminal_pose_axes_weight_factor=None,
- terminal_pose_convergence_tolerance=None,
- non_terminal_pose_convergence_tolerance=None,
- project_distance_to_goal=False,
- device_cfg=DeviceCfg(device=device(type='cuda', index=0), dtype=torch.float32, collision_geometry_dtype=torch.float32, collision_gradient_dtype=torch.float32, collision_distance_dtype=torch.float32),
- Parameters:
terminal_pose_axes_weight_factor (torch.Tensor | List[float] | None)
non_terminal_pose_axes_weight_factor (torch.Tensor | List[float] | None)
terminal_pose_convergence_tolerance (torch.Tensor | List[float] | None)
non_terminal_pose_convergence_tolerance (torch.Tensor | List[float] | None)
project_distance_to_goal (torch.Tensor | bool)
device_cfg (curobo._src.types.device_cfg.DeviceCfg)
- Return type:
None
- class CameraObservation(
- name='camera_image',
- rgb_image=None,
- depth_image=None,
- image_segmentation=None,
- projection_matrix=None,
- projection_rays=None,
- resolution=None,
- pose=None,
- intrinsics=None,
- timestamp=None,
- depth_to_meter=0.001,
- feature_grid=None,
Bases:
object
- Parameters:
name (str)
rgb_image (torch.Tensor | None)
depth_image (torch.Tensor | None)
image_segmentation (torch.Tensor | None)
projection_matrix (torch.Tensor | None)
projection_rays (torch.Tensor | None)
pose (curobo._src.types.pose.Pose | None)
intrinsics (torch.Tensor | None)
timestamp (torch.Tensor | None)
depth_to_meter (float)
feature_grid (torch.Tensor | None)
- rgb_image: torch.Tensor | None = None¶
RGB image format is B x H x W x channels.
- depth_image: torch.Tensor | None = None¶
- image_segmentation: torch.Tensor | None = None¶
- projection_matrix: torch.Tensor | None = None¶
- projection_rays: torch.Tensor | None = None¶
- pose: curobo._src.types.pose.Pose | None = None¶
- intrinsics: torch.Tensor | None = None¶
Batch of intrinsics matrices of shape (b, 3, 3). Contains the following fields: [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
- Type:
intrinsics_matrix
- timestamp: torch.Tensor | None = None¶
- feature_grid: torch.Tensor | None = None¶
Optional neural feature grid. Shape (num_cameras, feature_H, feature_W, feature_dim), float16, channels-last with stride 1 on the channel dim. Dense per-pixel features are represented by feature_H == image_H and feature_W == image_W. Consumed by the mapper's per-block feature integration kernel when feature_dim > 0.
Magnitude convention: per-element values must be O(1) (e.g. L2-normalized embeddings). The mapper stores per-block feature accumulators as fp16 with a post-frame weight cap; inputs well outside this range can overflow the per-thread footprint sum within a single frame before the cap engages.
- property shape¶
- copy_(new_data)¶
- Parameters:
new_data (curobo._src.types.camera.CameraObservation)
- clone()¶
- to(device)¶
- Parameters:
device (torch.device)
- extract_depth_from_structured_pointcloud(
- pointcloud,
- output_image=None,
Extract depth image from structured pointcloud.
This function assumes the pointcloud maintains the spatial grid structure [batch, height, width, 3] where Z-axis represents depth values.
Important: This only works for structured pointclouds in camera frame where Z-axis is aligned with depth. If the pointcloud has been transformed to world frame (e.g., via self.pose), this will NOT work correctly.
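The documented assumption can be sketched with nested lists standing in for the tensor: for a structured pointcloud in the camera frame laid out as [height, width, 3], the depth image is simply the Z channel (values are illustrative):

```python
# Structured pointcloud [height, width, 3] in the camera frame, where
# each entry is (x, y, z) and z is the depth. Values are hypothetical.
pointcloud = [
    [(0.0, 0.0, 1.0), (0.1, 0.0, 1.2)],
    [(0.0, 0.1, 0.9), (0.1, 0.1, 1.1)],
]
# Extracting depth keeps the grid structure and takes the Z channel.
depth = [[point[2] for point in row] for row in pointcloud]
# depth == [[1.0, 1.2], [0.9, 1.1]]
```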
- update_projection_rays()¶
- stack(
- new_observation,
- dim=0,
- Parameters:
new_observation (curobo._src.types.camera.CameraObservation)
dim (int)
- __init__(
- name='camera_image',
- rgb_image=None,
- depth_image=None,
- image_segmentation=None,
- projection_matrix=None,
- projection_rays=None,
- resolution=None,
- pose=None,
- intrinsics=None,
- timestamp=None,
- depth_to_meter=0.001,
- feature_grid=None,
- Parameters:
name (str)
rgb_image (torch.Tensor | None)
depth_image (torch.Tensor | None)
image_segmentation (torch.Tensor | None)
projection_matrix (torch.Tensor | None)
projection_rays (torch.Tensor | None)
pose (curobo._src.types.pose.Pose | None)
intrinsics (torch.Tensor | None)
timestamp (torch.Tensor | None)
depth_to_meter (float)
feature_grid (torch.Tensor | None)
- Return type:
None
- class ContentPath(
- robot_config_root_path=PosixPath('/home/runner/work/curobo/curobo/curobo/content/configs/robot'),
- robot_xrdf_root_path=PosixPath('/home/runner/work/curobo/curobo/curobo/content/configs/robot'),
- robot_urdf_root_path=PosixPath('/home/runner/work/curobo/curobo/curobo/content/assets'),
- robot_asset_root_path=PosixPath('/home/runner/work/curobo/curobo/curobo/content/assets'),
- scene_config_root_path=PosixPath('/home/runner/work/curobo/curobo/curobo/content/configs/scene'),
- world_asset_root_path=PosixPath('/home/runner/work/curobo/curobo/curobo/content/assets'),
- robot_config_absolute_path=None,
- robot_xrdf_absolute_path=None,
- robot_urdf_absolute_path=None,
- robot_asset_absolute_path=None,
- scene_config_absolute_path=None,
- robot_config_file=None,
- robot_xrdf_file=None,
- robot_urdf_file=None,
- robot_asset_subroot_path=None,
- scene_config_file=None,
Bases:
object
Dataclass to store root paths of configuration files and assets.
- Parameters:
robot_config_root_path (str)
robot_xrdf_root_path (str)
robot_urdf_root_path (str)
robot_asset_root_path (str)
scene_config_root_path (str)
world_asset_root_path (str)
robot_config_absolute_path (str | None)
robot_xrdf_absolute_path (str | None)
robot_urdf_absolute_path (str | None)
robot_asset_absolute_path (str | None)
scene_config_absolute_path (str | None)
robot_config_file (str | None)
robot_xrdf_file (str | None)
robot_urdf_file (str | None)
robot_asset_subroot_path (str | None)
scene_config_file (str | None)
- robot_config_root_path: str = PosixPath('/home/runner/work/curobo/curobo/curobo/content/configs/robot')¶
Root path for robot configuration file, either xrdf or yml.
- robot_xrdf_root_path: str = PosixPath('/home/runner/work/curobo/curobo/curobo/content/configs/robot')¶
Root path for robot XRDF.
- robot_urdf_root_path: str = PosixPath('/home/runner/work/curobo/curobo/curobo/content/assets')¶
Root path for robot URDF.
- robot_asset_root_path: str = PosixPath('/home/runner/work/curobo/curobo/curobo/content/assets')¶
Root path for robot meshes and textures.
- scene_config_root_path: str = PosixPath('/home/runner/work/curobo/curobo/curobo/content/configs/scene')¶
Root path for world description files (yml).
- world_asset_root_path: str = PosixPath('/home/runner/work/curobo/curobo/curobo/content/assets')¶
Root path for world assets (meshes, nvblox maps).
- robot_config_absolute_path: str | None = None¶
Absolute path to the robot configuration file. If this is provided, the :var:`robot_config_root_path` will be ignored.
- robot_xrdf_absolute_path: str | None = None¶
Absolute path to the robot XRDF file. If this is provided, the :var:`robot_xrdf_root_path` will be ignored.
- robot_urdf_absolute_path: str | None = None¶
Absolute path to the robot URDF file. If this is provided, the :var:`robot_urdf_root_path` will be ignored.
- robot_asset_absolute_path: str | None = None¶
Absolute path to the robot assets. If this is provided, the :var:`robot_asset_root_path` will be ignored.
- scene_config_absolute_path: str | None = None¶
Absolute path to the world description file. If this is provided, the :var:`scene_config_root_path` will be ignored.
- robot_config_file: str | None = None¶
Relative path to the robot configuration file. If this is provided, the robot_config_absolute_path is initialized with the concatenation of robot_config_root_path and robot_config_file.
- robot_xrdf_file: str | None = None¶
Relative path to the robot XRDF file. If this is provided, the robot_xrdf_absolute_path is initialized with the concatenation of robot_xrdf_root_path and robot_xrdf_file.
- robot_urdf_file: str | None = None¶
Relative path to the robot URDF file. If this is provided, the robot_urdf_absolute_path is initialized with the concatenation of robot_urdf_root_path and robot_urdf_file.
- scene_config_file: str | None = None¶
Relative path to the world description file. If this is provided, the scene_config_absolute_path is initialized with the concatenation of scene_config_root_path and scene_config_file.
- get_robot_configuration_path()¶
Get the robot configuration path.
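The root-path + relative-file concatenation documented for the `*_file` fields can be sketched with pathlib; the paths below are illustrative, not CuRobo's shipped content tree:

```python
from pathlib import Path

# Sketch of how a *_absolute_path field is initialized from a root
# path and a relative file name. Paths are hypothetical.
robot_config_root_path = Path("/opt/configs/robot")
robot_config_file = "my_robot.yml"

robot_config_absolute_path = robot_config_root_path / robot_config_file
# -> /opt/configs/robot/my_robot.yml
```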
- __init__(
- robot_config_root_path=PosixPath('/home/runner/work/curobo/curobo/curobo/content/configs/robot'),
- robot_xrdf_root_path=PosixPath('/home/runner/work/curobo/curobo/curobo/content/configs/robot'),
- robot_urdf_root_path=PosixPath('/home/runner/work/curobo/curobo/curobo/content/assets'),
- robot_asset_root_path=PosixPath('/home/runner/work/curobo/curobo/curobo/content/assets'),
- scene_config_root_path=PosixPath('/home/runner/work/curobo/curobo/curobo/content/configs/scene'),
- world_asset_root_path=PosixPath('/home/runner/work/curobo/curobo/curobo/content/assets'),
- robot_config_absolute_path=None,
- robot_xrdf_absolute_path=None,
- robot_urdf_absolute_path=None,
- robot_asset_absolute_path=None,
- scene_config_absolute_path=None,
- robot_config_file=None,
- robot_xrdf_file=None,
- robot_urdf_file=None,
- robot_asset_subroot_path=None,
- scene_config_file=None,
- Parameters:
robot_config_root_path (str)
robot_xrdf_root_path (str)
robot_urdf_root_path (str)
robot_asset_root_path (str)
scene_config_root_path (str)
world_asset_root_path (str)
robot_config_absolute_path (str | None)
robot_xrdf_absolute_path (str | None)
robot_urdf_absolute_path (str | None)
robot_asset_absolute_path (str | None)
scene_config_absolute_path (str | None)
robot_config_file (str | None)
robot_xrdf_file (str | None)
robot_urdf_file (str | None)
robot_asset_subroot_path (str | None)
scene_config_file (str | None)
- Return type:
None
- class DeviceCfg(
- device=device(type='cuda', index=0),
- dtype=torch.float32,
- collision_geometry_dtype=torch.float32,
- collision_gradient_dtype=torch.float32,
- collision_distance_dtype=torch.float32,
Bases:
object
Configuration for device and data types used in tensor operations.
- Parameters:
device (torch.device)
dtype (torch.dtype)
collision_geometry_dtype (torch.dtype)
collision_gradient_dtype (torch.dtype)
collision_distance_dtype (torch.dtype)
- device: torch.device = device(type='cuda', index=0)¶
- dtype: torch.dtype = torch.float32¶
- collision_geometry_dtype: torch.dtype = torch.float32¶
- collision_gradient_dtype: torch.dtype = torch.float32¶
- collision_distance_dtype: torch.dtype = torch.float32¶
- to_device(data_tensor)¶
- to_int8_device(data_tensor)¶
- cpu()¶
- as_torch_dict()¶
- is_same_torch_device(other)¶
Check if a torch.device refers to the same physical device as this config.
Handles the case where "cuda" and "cuda:0" are equivalent devices.
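The equivalence check can be sketched with plain strings standing in for torch.device objects (a minimal sketch, not CuRobo's implementation):

```python
# "cuda" with no index means the default CUDA device, index 0, so the
# two spellings should canonicalize to the same device.
def canonical(device_str):
    return "cuda:0" if device_str == "cuda" else device_str

same = canonical("cuda") == canonical("cuda:0")         # True
different = canonical("cuda:1") == canonical("cuda:0")  # False
```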
- __init__(
- device=device(type='cuda', index=0),
- dtype=torch.float32,
- collision_geometry_dtype=torch.float32,
- collision_gradient_dtype=torch.float32,
- collision_distance_dtype=torch.float32,
- Parameters:
device (torch.device)
dtype (torch.dtype)
collision_geometry_dtype (torch.dtype)
collision_gradient_dtype (torch.dtype)
collision_distance_dtype (torch.dtype)
- Return type:
None