BaseGBT

Abstract base class for all GBRL models. It defines the core API and shared logic for managing GBT learners, including training steps, gradient handling, SHAP value computation, device control, saving/loading, and visualization utilities.

class gbrl.models.base.BaseGBT[source]

Bases: ABC
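
BaseGBT itself is abstract, so the sketch below assumes a hypothetical concrete subclass (called MyGBT here) with an illustrative constructor; only the methods documented on this page are exercised.

    import numpy as np

    # Hypothetical concrete subclass of gbrl.models.base.BaseGBT; the class name
    # and its construction are illustrative assumptions, not the documented API.
    from my_project.models import MyGBT

    model = MyGBT()  # assumption: default construction of the concrete learner

    X = np.random.randn(128, 8).astype(np.float32)      # input features
    grads = np.random.randn(128, 1).astype(np.float32)  # gradients w.r.t. the model outputs

    model.step(X, grads)               # assumption: this subclass' step() takes (features, gradients)
    print(model.get_num_trees())       # trees in the ensemble after one boosting step
    model.save_learner("/tmp/my_gbt")  # persist the learner to disk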

copy() BaseGBT[source]

Copies the class instance

export_learner(filename: str, modelname: str = None) None[source]

Exports the learner model as a C header file

Parameters:
  • filename (str) – Absolute path and name of the exported file.

  • modelname (str, optional)
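
A minimal export sketch, assuming the trained model instance from the class-level example above; the header path and model name are placeholders.

    # Write the trained ensemble as a C header; modelname is optional per the signature.
    model.export_learner("/tmp/my_gbt_model.h", modelname="my_gbt")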

fit(*args, **kwargs) float | Tuple[float, ...][source]

Fits multiple boosting iterations (as in supervised learning)

Returns:

final loss per learner over all examples.

Return type:

Union[float, Tuple[float, …]]
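
A hedged fit example: the positional arguments accepted by fit() depend on the concrete subclass, so the (features, targets) call below is an assumption.

    import numpy as np

    X = np.random.randn(256, 10).astype(np.float32)
    y = np.random.randn(256, 1).astype(np.float32)

    # assumption: this subclass' fit() accepts (features, targets) plus subclass-specific kwargs
    final_loss = model.fit(X, y)
    print("final loss per learner:", final_loss)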

get_device() str | Tuple[str, str][source]

Returns GBRL device/devices per learner

Returns:

GBRL device per model

Return type:

Union[str, Tuple[str, str]]

get_iteration() int | Tuple[int, ...][source]

Returns the number of boosting iterations per learner

Returns:

number of boosting iterations per learner.

Return type:

Union[int, Tuple[int, …]]

get_num_trees(*args, **kwargs) int | Tuple[int, ...][source]

Returns number of trees in the ensemble per learner

Returns:

number of trees in the ensemble

Return type:

Union[int, Tuple[int, …]]

get_params() Tuple[ndarray, ndarray][source]

Returns predicted model parameters and their respective gradients

Returns:

predicted model parameters and their respective gradients.

Return type:

Tuple[np.ndarray, np.ndarray]

get_schedule_learning_rates() float | Tuple[float, ...][source]

Gets learning rate values for the ensemble's optimizers according to their schedule. Constant schedule - no change in values. Linear schedule - learning rate value according to the number of trees in the ensemble.

Returns:

learning rate schedule per optimizer.

Return type:

Union[float, Tuple[float, …]]

get_total_iterations() int[source]

Returns the total number of boosting iterations

Returns:

total number of boosting iterations (sum of actor and critic iterations if they are not shared; otherwise equal to get_iteration()).

Return type:

int
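
Inspecting ensemble growth after a few boosting steps, reusing the model, X, and grads names from the class-level sketch (the step() call therefore remains an assumption about the concrete subclass).

    for _ in range(5):
        model.step(X, grads)               # one tree fitted per boosting step

    print(model.get_iteration())           # boosting iterations per learner
    print(model.get_num_trees())           # trees in the ensemble per learner
    print(model.get_total_iterations())    # sum over learners (equals get_iteration() if shared)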

classmethod load_learner(load_name: str, device: str) BaseGBT[source]

Loads a BaseGBT model from a file.

Parameters:
  • load_name (str) – Full path to the saved model file.

  • device (str) – Device to load the model onto (‘cpu’ or ‘cuda’).

Returns:

Loaded BaseGBT model.

Return type:

BaseGBT

plot_tree(tree_idx: int, filename: str, *args, **kwargs) None[source]

Plots a tree using graphviz (only works if GBRL was compiled with graphviz support)

Parameters:
  • tree_idx (int) – tree index to plot

  • filename (str) – .png filename to save

print_tree(tree_idx: int, *args, **kwargs) None[source]

Prints tree information

Parameters:

tree_idx (int) – tree index to print
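
Inspecting an individual tree; plot_tree only works in a GBRL build with graphviz support, and the tree index and output path below are placeholders.

    model.print_tree(0)                    # print the structure of the first tree
    model.plot_tree(0, "/tmp/tree_0.png")  # render the same tree to a .png file (requires graphviz)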

save_learner(save_path: str) None[source]

Saves model to file

Parameters:

save_path (str) – Absolute path and name of the save file.
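
A save/load round trip using the parameters documented above; MyGBT is the hypothetical concrete subclass from the class-level sketch.

    model.save_learner("/tmp/my_gbt")  # absolute path of the saved model

    # load_learner is a classmethod: call it on the subclass that was saved
    restored = MyGBT.load_learner("/tmp/my_gbt", device="cpu")
    print(restored.get_num_trees())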

set_bias(*args, **kwargs) None[source]

Sets GBRL bias

set_device(device: str)[source]

Sets GBRL device (either cpu or cuda)

Parameters:

device (str) – choices are [‘cpu’, ‘cuda’]
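
Moving the learner between devices, assuming GBRL was built with CUDA support.

    model.set_device("cuda")   # move the ensemble to the GPU (CUDA build required)
    print(model.get_device())  # 'cuda', or a tuple of devices for separate learners
    model.set_device("cpu")    # move it back to the CPU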

set_feature_weights(feature_weights: ndarray | Tensor) None[source]

Sets GBRL feature_weights

Parameters:

feature_weights (NumericalData)

shap(features: ndarray | Tensor, *args, **kwargs) ndarray | Tuple[ndarray, ndarray][source]

Calculates SHAP values for the entire ensemble

Implementation based on https://github.com/yupbank/linear_tree_shap. See Linear TreeShap, Yu et al., 2023, https://arxiv.org/pdf/2209.08192.

Parameters:

features (NumericalData)

Returns:

SHAP values of shape [n_samples, number of input features, number of outputs]. The output is a tuple of SHAP values per model only in the case of a separate actor-critic model.

Return type:

Union[np.ndarray, Tuple[np.ndarray, np.ndarray]]
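
Computing SHAP values for the whole ensemble, reusing model and X from the earlier sketches.

    shap_values = model.shap(X)

    if isinstance(shap_values, tuple):  # separate actor-critic: one array per model
        for values in shap_values:
            print(values.shape)         # [n_samples, n_input_features, n_outputs]
    else:
        print(shap_values.shape)        # [n_samples, n_input_features, n_outputs]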

abstractmethod step(*args, **kwargs) None[source]

Performs a boosting step (fits a single tree on the gradients)

tree_shap(tree_idx: int, features: ndarray | Tensor, *args, **kwargs) ndarray | Tuple[ndarray, ndarray][source]

Calculates SHAP values for a single tree

Implementation based on https://github.com/yupbank/linear_tree_shap. See Linear TreeShap, Yu et al., 2023, https://arxiv.org/pdf/2209.08192.

Parameters:
  • tree_idx (int) – tree index

  • features (NumericalData)

Returns:

SHAP values of shape [n_samples, number of input features, number of outputs]. The output is a tuple of SHAP values per model only in the case of a separate actor-critic model.

Return type:

Union[np.ndarray, Tuple[np.ndarray, np.ndarray]]
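
The per-tree variant restricted to a single tree index (here the first tree), again reusing model and X from the earlier sketches.

    single_tree_values = model.tree_shap(0, X)  # SHAP values attributed to tree index 0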