GBRL Module

The GBRL module contains the base class from which specific implementations of various Actor-Critic algorithms inherit. The GBRL class can also be used standalone for supervised or online learning tasks.


class gbrl.gbt.GBRL(tree_struct: Dict, output_dim: int, optimizer: Dict | List[Dict], gbrl_params: Dict = {}, verbose: int = 0, device: str = 'cpu')[source]

Bases: object
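
A minimal construction sketch. The tree_struct and optimizer dictionary keys shown here are illustrative assumptions, not a documented schema; consult the GBRL documentation for the exact fields:

    from gbrl.gbt import GBRL

    # Illustrative hyperparameter dictionaries -- the key names below
    # are assumptions, not the library's documented schema.
    tree_struct = {'max_depth': 4}
    optimizer = {'algo': 'SGD', 'lr': 0.1}

    model = GBRL(tree_struct=tree_struct,
                 output_dim=1,
                 optimizer=optimizer,
                 device='cpu')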

copy() GBRL[source]

Copies the class instance

Returns:

copy of the current instance. The actual type will be the type of the subclass that calls this method.

Return type:

GBRL
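
For example, duplicating a trained model before further fitting (a sketch, assuming model is a GBRL instance as constructed above):

    clone = model.copy()  # independent copy; type matches the calling subclass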

export_model(filename: str, modelname: str = None) None[source]

Exports the model as a C header file

Parameters:

  • filename (str) – Absolute path and name of the exported file.

  • modelname (str, optional) – Name of the exported model. Defaults to None.
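
A hedged usage sketch (the path and model name are illustrative):

    # Write the trained ensemble to a C header for embedded inference
    model.export_model('/tmp/gbrl_model.h', modelname='my_model')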

fit(X: ndarray | Tensor, targets: ndarray | Tensor, iterations: int, shuffle: bool = True, loss_type: str = 'MultiRMSE') float[source]

Fits the model for multiple boosting iterations (as in supervised learning)

Parameters:
  • X (Union[np.ndarray, th.Tensor]) – inputs

  • targets (Union[np.ndarray, th.Tensor]) – targets

  • iterations (int) – number of boosting iterations

  • shuffle (bool, optional) – Shuffle dataset. Defaults to True.

  • loss_type (str, optional) – Loss to use (only MultiRMSE is currently implemented). Defaults to ‘MultiRMSE’.

Returns:

final loss over all examples.

Return type:

float
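
A supervised-learning sketch on random regression data, assuming model was constructed with output_dim=1 as above:

    import numpy as np

    X = np.random.randn(256, 8).astype(np.float32)
    y = np.random.randn(256, 1).astype(np.float32)

    final_loss = model.fit(X, y, iterations=50, shuffle=True,
                           loss_type='MultiRMSE')
    print(final_loss)  # final loss over all examples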

get_device() str | Tuple[str, str][source]

Returns the GBRL device (or devices, when the model consists of multiple GBRL models)

Returns:

GBRL device per model

Return type:

Union[str, Tuple[str, str]]

get_iteration() int[source]

Returns the current number of boosting iterations

Returns:

number of boosting iterations

Return type:

int

get_num_trees() int[source]

Returns number of trees in the ensemble

Returns:

number of trees in the ensemble

Return type:

int
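
After fitting, the ensemble can be inspected (a sketch, continuing the example above):

    print(model.get_iteration())  # boosting iterations so far
    print(model.get_num_trees())  # trees in the ensemble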

get_params() Tuple[ndarray, ndarray][source]

Returns predicted model parameters and their respective gradients

Returns:

predicted model parameters and their respective gradients.

Return type:

Tuple[np.ndarray, np.ndarray]

get_schedule_learning_rates() Tuple[float, float][source]

Gets learning rate values for the optimizers according to the ensemble's schedule. A constant schedule leaves the values unchanged; a linear schedule sets the learning rate according to the number of trees in the ensemble.

Returns:

learning rate schedule per optimizer.

Return type:

Tuple[float, float]
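
A usage sketch; reading the tuple as one value per optimizer (e.g., actor and critic) is an assumption:

    lrs = model.get_schedule_learning_rates()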

get_total_iterations() int[source]

Returns the total number of boosting iterations

Returns:

total number of boosting iterations (sum of actor and critic iterations if they are not shared; otherwise equals get_iteration())

Return type:

int

classmethod load_model(load_name: str) GBRL[source]

Loads GBRL model from a file

Parameters:

load_name (str) – full path to the model file

Returns:

loaded GBRL instance.

Return type:

GBRL

plot_tree(tree_idx: int, filename: str) None[source]

Plots a tree using graphviz (only works if GBRL was compiled with graphviz support)

Parameters:
  • tree_idx (int) – tree index to plot

  • filename (str) – name of the .png file to save to
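
A hedged sketch (requires a graphviz-enabled build; the filename is illustrative):

    model.plot_tree(tree_idx=0, filename='tree0.png')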

print_tree(tree_idx: int) None[source]

Prints tree information

Parameters:

tree_idx (int) – tree index to print

reset_params()[source]

Resets parameter attributes

save_model(save_path: str) None[source]

Saves model to file

Parameters:

save_path (str) – Absolute path and name of the file to save to.
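
A save/load round-trip sketch. The path is illustrative, and whether a file extension must be supplied is left to the library's conventions:

    model.save_model('/tmp/gbrl_checkpoint')
    restored = GBRL.load_model('/tmp/gbrl_checkpoint')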

set_bias(bias: ndarray | Tensor)[source]

Sets GBRL bias

Parameters:

bias (Union[np.ndarray, th.Tensor]) – bias values to set

set_bias_from_targets(targets: ndarray | Tensor)[source]

Sets bias as mean of targets

Parameters:

targets (Union[np.ndarray, th.Tensor]) – Targets
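
A common initialization sketch before fitting, using the targets from the fit example above:

    model.set_bias_from_targets(y)  # bias = mean of targets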

set_device(device: str)[source]

Sets GBRL device (either cpu or cuda)

Parameters:

device (str) – choices are [‘cpu’, ‘cuda’]
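
A sketch, assuming a CUDA-enabled build:

    model.set_device('cuda')
    print(model.get_device())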

shap(features: ndarray | Tensor) ndarray | Tuple[ndarray, ndarray][source]

Calculates SHAP values for the entire ensemble

Implementation based on https://github.com/yupbank/linear_tree_shap. See Linear TreeShap, Yu et al., 2023, https://arxiv.org/pdf/2209.08192.

Parameters:

features (Union[np.ndarray, th.Tensor]) – input features

Returns:

SHAP values of shape [n_samples, number of input features, number of outputs]. The output is a tuple of per-model SHAP values only in the case of a separate (non-shared) actor-critic model.

Return type:

Union[np.ndarray, Tuple[np.ndarray, np.ndarray]]
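
An interpretability sketch, continuing the fit example above:

    shap_values = model.shap(X)
    # For a single shared model: array of shape
    # (n_samples, n_input_features, n_outputs)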

step(X: ndarray | Tensor, max_grad_norm: float = None, grad: ndarray | Tensor | None = None) None[source]

Performs a boosting step (fits a single tree on the gradients)

Parameters:
  • X (Union[np.ndarray, th.Tensor]) – inputs

  • max_grad_norm (float, optional) – norm threshold for gradient clipping; no clipping when None. Defaults to None.

  • grad (Optional[Union[np.ndarray, th.Tensor]], optional) – manually calculated gradients. Defaults to None.
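
An online-learning sketch with manually supplied gradients; the gradient here is a random placeholder standing in for, e.g., a policy gradient, and must match the model's output shape:

    grad = np.random.randn(256, 1).astype(np.float32)  # placeholder gradient
    model.step(X, max_grad_norm=1.0, grad=grad)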

tree_shap(tree_idx: int, features: ndarray | Tensor) ndarray | Tuple[ndarray, ndarray][source]

Calculates SHAP values for a single tree

Implementation based on https://github.com/yupbank/linear_tree_shap. See Linear TreeShap, Yu et al., 2023, https://arxiv.org/pdf/2209.08192.

Parameters:
  • tree_idx (int) – tree index

  • features (Union[np.ndarray, th.Tensor]) – input features

Returns:

SHAP values of shape [n_samples, number of input features, number of outputs]. The output is a tuple of per-model SHAP values only in the case of a separate (non-shared) actor-critic model.

Return type:

Union[np.ndarray, Tuple[np.ndarray, np.ndarray]]
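
A per-tree variant of the SHAP sketch above:

    vals = model.tree_shap(tree_idx=0, features=X)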