protomotions.agents.evaluators.mimic_evaluator module#

class protomotions.agents.evaluators.mimic_evaluator.MimicEvaluator(agent, fabric, config)[source]#

Bases: BaseEvaluator

Evaluator for the Mimic agent’s motion tracking performance.

__init__(agent, fabric, config)[source]#

Initialize the Mimic evaluator.

Parameters:
  • agent (Any) – The Mimic agent to evaluate

  • fabric (Any) – Lightning Fabric instance for distributed training

  • config (Any) – Evaluator configuration
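
A minimal construction sketch, assuming a trained Mimic agent, a Lightning Fabric instance, and an evaluator config are already in scope (none of them are created here):

    # Hedged sketch: `agent`, `fabric`, and `config` are assumed to exist,
    # e.g. built by the training entry point; only the constructor call
    # itself comes from this module.
    from protomotions.agents.evaluators.mimic_evaluator import MimicEvaluator

    evaluator = MimicEvaluator(agent=agent, fabric=fabric, config=config)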

property motion_lib: MotionLib#

Motion library (from agent).

property num_envs: int#

Number of environments (from agent).

property motion_manager: MimicMotionManager#

Motion manager (from env).

initialize_eval()[source]#

Initialize metrics dictionary with required keys.

Returns:

Dictionary of initialized MotionMetrics

Return type:

Dict

run_evaluation(metrics)[source]#

Run evaluation across multiple motions.

Parameters:

metrics (Dict) – Dictionary to collect evaluation metrics
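
A hedged sketch of the overall flow, pairing initialize_eval() above with run_evaluation() and process_eval_results() (documented below); variable names are illustrative:

    # Build the metrics container, populate it across all evaluation
    # motions, then post-process for logging and best-model selection.
    metrics = evaluator.initialize_eval()
    evaluator.run_evaluation(metrics)  # fills `metrics` in place
    log_dict, score = evaluator.process_eval_results(metrics)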

evaluate_episode(metrics, active_env_ids, active_motion_ids)[source]#

Evaluate a single episode for a batch of motions.

Resets the environment with the specified motions and steps through the episode until completion or max steps, accumulating metrics.

Parameters:
  • metrics (Dict) – Dictionary to collect evaluation metrics.

  • active_env_ids (Tensor) – Tensor of environment IDs to use for this batch.

  • active_motion_ids (Tensor) – Tensor of motion IDs to evaluate in these environments.
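
A hypothetical batch call, assuming the IDs are ordinary torch tensors and that active_env_ids[i] is paired with active_motion_ids[i] (an assumption, not confirmed by the source):

    import torch

    # Evaluate the first num_envs motions, one motion per environment.
    active_env_ids = torch.arange(evaluator.num_envs)
    active_motion_ids = torch.arange(evaluator.num_envs)
    evaluator.evaluate_episode(metrics, active_env_ids, active_motion_ids)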

add_extra_obs_to_agent(obs)[source]#

update_metrics_from_env_extras(metrics, extras, active_env_ids, active_motion_ids, actions=None)[source]#

Update metrics by computing tracking errors directly and looking up extras.

Computes tracking error metrics (gt_err, gr_err, etc.) directly from the simulator and motion library rather than relying on env.extras. For reward metrics and raw robot state, looks up from extras if available.

Parameters:
  • metrics (Dict) – Dictionary to update with metrics

  • extras (Dict) – Dictionary of extra information from environment step

  • active_env_ids (Tensor) – Environment IDs being evaluated

  • active_motion_ids (Tensor) – Motion IDs being evaluated

  • actions (Tensor, optional) – Actions taken this step (for action smoothness computation)
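
A hedged sketch of where this hook sits inside an episode loop; the env.step() signature and the policy call below are illustrative assumptions, not the exact protomotions API:

    for _ in range(max_steps):
        actions = policy(obs)  # assumed policy call producing an action Tensor
        obs, rewards, dones, extras = env.step(actions)  # assumed gym-style step
        # Fold this step's tracking errors and extras into the metrics dict.
        evaluator.update_metrics_from_env_extras(
            metrics, extras, active_env_ids, active_motion_ids, actions=actions
        )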

process_eval_results(metrics)[source]#

Process results and check for early termination.

Parameters:

metrics (Dict) – Dictionary of collected metrics

Returns:

  • Dict of processed metrics for logging

  • Optional score value for determining the best model

Return type:

Tuple
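
A minimal sketch of consuming the returned pair; best_score and the checkpointing step are assumptions:

    eval_log, eval_score = evaluator.process_eval_results(metrics)
    if eval_score is not None and eval_score > best_score:
        best_score = eval_score  # e.g. save a "best model" checkpoint here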

cleanup_after_evaluation()[source]#

Clean up after evaluation (reset environment state, etc.).

simple_test_policy(collect_metrics=False)[source]#

Evaluate the policy in evaluation mode.

Parameters:

collect_metrics (bool) – Whether to collect metrics from the evaluation. Prints the metrics to the console if True.
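
A quick smoke-test sketch:

    # With collect_metrics=True the evaluator gathers metrics and prints
    # them to the console.
    evaluator.simple_test_policy(collect_metrics=True)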