protomotions.agents.common.config module#
- class protomotions.agents.common.config.NormObsBaseConfig(normalize_obs=False, norm_clamp_value=5.0)[source]#
Bases: object
Base configuration for modules that support optional observation normalization.
With LazyLinear, only num_out is needed; input sizes are inferred automatically. This config is purely about normalization settings and output dimensions. Individual TensorDictModules add their own obs_key/out_key fields as needed.
- Attributes:
normalize_obs: Whether to normalize observations using running statistics.
norm_clamp_value: Clamp normalized values to [-value, value] to prevent extreme outliers.
- __init__(normalize_obs=False, norm_clamp_value=5.0)#
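The interaction between normalization and norm_clamp_value can be sketched in plain Python (assumed semantics: normalize with running statistics, then clamp symmetrically; this is illustrative, not the library's implementation):

```python
# Sketch of what norm_clamp_value controls (assumed semantics, not library code):
# normalize with running mean/std, then clamp to [-clamp_value, clamp_value].
def normalize_and_clamp(x, mean, std, clamp_value=5.0):
    z = (x - mean) / std                            # normalize with running stats
    return max(-clamp_value, min(clamp_value, z))   # clamp extreme outliers

print(normalize_and_clamp(100.0, 0.0, 1.0))  # 5.0 (outlier clamped)
print(normalize_and_clamp(1.5, 0.0, 1.0))    # 1.5 (within range, unchanged)
```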
- class protomotions.agents.common.config.ModuleOperationConfig[source]#
Bases: object
Configuration for module operations.
- __init__()#
- class protomotions.agents.common.config.ModuleOperationForwardConfig[source]#
Bases: ModuleOperationConfig
Configuration for the forward module operation.
- __init__()#
- class protomotions.agents.common.config.ModuleOperationPermuteConfig(new_order)[source]#
Bases: ModuleOperationConfig
Configuration for the permute module operation.
- Attributes:
new_order: New dimension order, e.g. [0, 2, 1] swaps dims 1 and 2.
- __init__(new_order)#
- class protomotions.agents.common.config.ModuleOperationReshapeConfig(new_shape)[source]#
Bases: ModuleOperationConfig
Configuration for the reshape module operation.
- Attributes:
new_shape: New shape. Use ‘batch_size’ for dynamic batch dim.
- __init__(new_shape)#
- class protomotions.agents.common.config.ModuleOperationSqueezeConfig(squeeze_dim)[source]#
Bases: ModuleOperationConfig
Configuration for the squeeze module operation.
- Attributes:
squeeze_dim: Dimension to squeeze (remove if size is 1).
- __init__(squeeze_dim)#
- class protomotions.agents.common.config.ModuleOperationUnsqueezeConfig(unsqueeze_dim)[source]#
Bases: ModuleOperationConfig
Configuration for the unsqueeze module operation.
- Attributes:
unsqueeze_dim: Position where to insert new dimension of size 1.
- __init__(unsqueeze_dim)#
- class protomotions.agents.common.config.ModuleOperationExpandConfig(expand_shape)[source]#
Bases: ModuleOperationConfig
Configuration for the expand module operation.
- Attributes:
expand_shape: Target shape to expand to. Use -1 to keep original size.
- __init__(expand_shape)#
- class protomotions.agents.common.config.ModuleOperationSphereProjectionConfig[source]#
Bases: ModuleOperationConfig
Configuration for the sphere projection operation (L2 normalization to the unit sphere).
- __init__()#
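The sphere projection operation described above can be illustrated in plain Python (a sketch of L2 normalization, not the library's tensor implementation):

```python
import math

# L2 normalization onto the unit sphere, as described by
# ModuleOperationSphereProjectionConfig (plain-Python sketch, not library code).
def sphere_project(vec):
    norm = math.sqrt(sum(v * v for v in vec))  # Euclidean (L2) norm
    return [v / norm for v in vec]             # scale to unit length

print(sphere_project([3.0, 4.0]))  # [0.6, 0.8]
```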
- class protomotions.agents.common.config.ObsProcessorConfig(
- normalize_obs=False,
- norm_clamp_value=5.0,
- _target_='protomotions.agents.common.common.ObsProcessor',
- in_keys=<factory>,
- out_keys=<factory>,
- module_operations=<factory>,
- )[source]#
Bases: NormObsBaseConfig
General observation processor: applies operations and normalization.
Supports all module_operations. A ForwardConfig operation applies normalization but skips the forward model (there is no MLP). Useful for reshaping, normalizing, and other tensor manipulations.
- Attributes:
normalize_obs: Whether to normalize observations using running statistics.
norm_clamp_value: Clamp normalized values to [-value, value] to prevent extreme outliers.
in_keys: Input tensor keys to read from TensorDict.
out_keys: Output tensor keys to write to TensorDict.
module_operations: Sequence of operations to apply (reshape, permute, etc.).
- module_operations: List[ModuleOperationConfig]#
- __init__(
- normalize_obs=False,
- norm_clamp_value=5.0,
- _target_='protomotions.agents.common.common.ObsProcessor',
- in_keys=<factory>,
- out_keys=<factory>,
- module_operations=<factory>,
- )#
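Conceptually, an ObsProcessor reads its in_keys, applies the configured operations in order, and writes the result to its out_keys. A sketch of that pipeline, with a plain dict standing in for a TensorDict (assumed behavior, not the protomotions implementation):

```python
# Conceptual sketch of an ObsProcessor pipeline: read input key, apply the
# configured operations in sequence, write to the output key. A dict stands
# in for a TensorDict; assumed behavior, not the protomotions implementation.
def run_obs_processor(tensordict, in_key, out_key, operations):
    value = tensordict[in_key]
    for op in operations:
        value = op(value)
    tensordict[out_key] = value
    return tensordict

td = {"obs": [7.0, -9.0, 2.0]}
clamp = lambda vec: [max(-5.0, min(5.0, x)) for x in vec]  # norm_clamp_value=5.0
run_obs_processor(td, "obs", "processed_obs", [clamp])
print(td["processed_obs"])  # [5.0, -5.0, 2.0]
```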
- class protomotions.agents.common.config.MLPLayerConfig(units=512, activation='relu', use_layer_norm=False)[source]#
Bases: object
Configuration for a single MLP layer.
- Attributes:
units: Number of neurons in this layer.
activation: Activation function for this layer.
use_layer_norm: Whether to apply layer normalization after activation.
- __init__(
- units=512,
- activation='relu',
- use_layer_norm=False,
- )#
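A per-layer config like this is typically collected into a list to describe the whole network. A minimal dataclass stand-in mirroring the documented fields and defaults (illustrative only; not the protomotions class):

```python
from dataclasses import dataclass

# Minimal stand-in mirroring MLPLayerConfig's documented fields and defaults
# (illustrative only, not the protomotions class).
@dataclass
class MLPLayerConfigSketch:
    units: int = 512
    activation: str = "relu"
    use_layer_norm: bool = False

# A three-layer MLP described as a list of per-layer configs.
layers = [MLPLayerConfigSketch(units=1024),
          MLPLayerConfigSketch(),
          MLPLayerConfigSketch(units=256, use_layer_norm=True)]
print([layer.units for layer in layers])  # [1024, 512, 256]
```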
- class protomotions.agents.common.config.MLPWithConcatConfig(
- normalize_obs=False,
- norm_clamp_value=5.0,
- num_out=None,
- layers=<factory>,
- _target_='protomotions.agents.common.mlp.MLPWithConcat',
- in_keys=<factory>,
- out_keys=<factory>,
- output_activation=None,
- module_operations=<factory>,
- )[source]#
Bases: NormObsBaseConfig
Configuration for a Multi-Layer Perceptron with optional normalization.
Unified MLP configuration that supports optional input normalization. Normalization is disabled by default (normalize_obs=False); set normalize_obs=True to enable it.
- Attributes:
normalize_obs: Whether to normalize observations using running statistics.
norm_clamp_value: Clamp normalized values to [-value, value] to prevent extreme outliers.
num_out: Output dimension of the MLP. Required.
layers: List of layer configurations defining the MLP architecture.
in_keys: Input tensor keys to read and concatenate from TensorDict.
out_keys: Output tensor keys to write to TensorDict.
output_activation: Activation function for the output layer (None for linear output).
module_operations: Sequence of operations including forward pass and reshapes.
- layers: List[MLPLayerConfig]#
- module_operations: List[ModuleOperationConfig]#
- __init__(
- normalize_obs=False,
- norm_clamp_value=5.0,
- num_out=None,
- layers=<factory>,
- _target_='protomotions.agents.common.mlp.MLPWithConcat',
- in_keys=<factory>,
- out_keys=<factory>,
- output_activation=None,
- module_operations=<factory>,
- )#
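The "concat" in MLPWithConcat refers to joining the tensors read from in_keys along the feature dimension before the MLP runs. A plain-Python sketch of that step, with flat lists standing in for tensors and the MLP itself elided (assumed semantics, not the library implementation):

```python
# Sketch of the concat step in MLPWithConcat: inputs from in_keys are
# concatenated along the feature dimension before the MLP runs. Flat lists
# stand in for tensors; assumed semantics, not the library implementation.
def concat_in_keys(tensordict, in_keys):
    features = []
    for key in in_keys:
        features.extend(tensordict[key])
    return features

td = {"self_obs": [0.1, 0.2], "goal_obs": [0.9]}
print(concat_in_keys(td, ["self_obs", "goal_obs"]))  # [0.1, 0.2, 0.9]
```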
- class protomotions.agents.common.config.ModuleContainerConfig(
- models=<factory>,
- _target_='protomotions.agents.common.common.ModuleContainer',
- in_keys=<factory>,
- out_keys=<factory>,
- )[source]#
Bases: object
Configuration for a container of modules that are executed sequentially.
Modules are processed in order, with each module’s outputs available to subsequent modules. Input keys are passed through, and all specified output keys must be produced by internal modules.
- Attributes:
models: List of module configurations to execute sequentially.
in_keys: Input tensor keys required by this container.
out_keys: Output tensor keys produced by this container.
- __init__(
- models=<factory>,
- _target_='protomotions.agents.common.common.ModuleContainer',
- in_keys=<factory>,
- out_keys=<factory>,
- )#
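The sequential execution described above, where each module's outputs become available to subsequent modules, can be sketched with a dict standing in for a TensorDict (assumed behavior, not the protomotions implementation):

```python
# Sketch of a ModuleContainer: modules run in order, and each module's
# outputs are visible to the modules after it. A dict stands in for a
# TensorDict; assumed behavior, not the protomotions implementation.
def run_container(tensordict, modules):
    for module in modules:
        module(tensordict)
    return tensordict

def encoder(td):      # first module: produces "latent"
    td["latent"] = [2.0 * x for x in td["obs"]]

def head(td):         # second module: consumes the previous module's output
    td["action"] = sum(td["latent"])

td = run_container({"obs": [1.0, 2.0]}, [encoder, head])
print(td["action"])  # 6.0
```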
- class protomotions.agents.common.config.TransformerConfig(
- _target_='protomotions.agents.common.transformer.Transformer',
- in_keys=<factory>,
- out_keys=<factory>,
- input_and_mask_mapping=None,
- transformer_token_size=512,
- latent_dim=512,
- num_heads=4,
- ff_size=1024,
- num_layers=4,
- dropout=0.0,
- activation='relu',
- output_activation=None,
- )[source]#
Bases: object
Configuration for a Transformer encoder.
Multi-head self-attention transformer that processes tokenized inputs. Supports optional masking for variable-length sequences.
- Attributes:
in_keys: Input tensor keys (tokens and optional masks).
out_keys: Output tensor key for transformer output (exactly one).
input_and_mask_mapping: Maps input token keys to their mask keys for attention masking.
transformer_token_size: Expected input token dimension size.
latent_dim: Internal/output dimension of transformer.
num_heads: Number of attention heads. Must divide latent_dim evenly.
ff_size: Feed-forward network hidden dimension.
num_layers: Number of transformer encoder layers.
dropout: Dropout probability. Default 0 since RL has enough noise.
activation: Activation function for feed-forward layers.
output_activation: Optional activation for transformer output.
- __init__(
- _target_='protomotions.agents.common.transformer.Transformer',
- in_keys=<factory>,
- out_keys=<factory>,
- input_and_mask_mapping=None,
- transformer_token_size=512,
- latent_dim=512,
- num_heads=4,
- ff_size=1024,
- num_layers=4,
- dropout=0.0,
- activation='relu',
- output_activation=None,
- )#
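Since num_heads must divide latent_dim evenly, the per-head dimension is latent_dim / num_heads. A small validation helper for that constraint (hypothetical, not part of the library):

```python
# The docs note num_heads must divide latent_dim evenly; a small validation
# sketch for that constraint (hypothetical helper, not library code).
def per_head_dim(latent_dim=512, num_heads=4):
    if latent_dim % num_heads != 0:
        raise ValueError(
            f"latent_dim={latent_dim} is not divisible by num_heads={num_heads}")
    return latent_dim // num_heads  # dimension handled by each attention head

print(per_head_dim())  # 128 with the defaults (512 / 4)
```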