BaseGBT

Abstract base class for all GBRL models. It defines the core API and shared logic for managing GBT learners, including training steps, gradient handling, SHAP value computation, device control, saving/loading, and visualization utilities.
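
The usage sketches below illustrate this API with a hypothetical concrete subclass, called MyGBT here purely for illustration; its constructor arguments and the exact signatures of the implementation-dependent methods (fit, step, set_bias, …) are assumptions, not the actual GBRL API.

    import numpy as np

    # MyGBT is a stand-in for a concrete BaseGBT subclass; its constructor
    # signature is hypothetical and implementation dependent.
    model = MyGBT()

    X = np.random.randn(128, 10).astype(np.float32)  # 128 samples, 10 features

    # Typical lifecycle: boosting steps on gradients (or supervised-style fit),
    # then inspection, persistence and explanation.
    print(model.get_device())              # 'cpu' or 'cuda' (tuple per learner)
    print(model.get_num_trees())           # trees currently in the ensemble
    model.save_learner('/tmp/gbrl_model')  # persist; restore with load_learner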

class gbrl.models.base.BaseGBT[source]

Bases: ABC

copy() BaseGBT[source]

Creates a copy of the class instance

Returns:

Copy of the current instance

Return type:

BaseGBT

export_learner(filename: str, modelname: str | None = None) None[source]

Exports the learner model as a C header file

Parameters:
  • filename (str) – Absolute path and name of the exported file

  • modelname (Optional[str], optional) – Model name for export. Defaults to None.
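
A minimal sketch, assuming model is a trained instance of a concrete subclass (as in the overview sketch above); the path and model name below are placeholders.

    # Export the trained ensemble as a C header, e.g. for embedding the model
    # in C/C++ code. Path and model name are illustrative placeholders.
    model.export_learner('/tmp/exported_model.h', modelname='my_policy')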

fit(*args, **kwargs) float | Tuple[float, ...][source]

Fits multiple iterations (as in supervised learning)

Parameters:
  • *args – Variable length argument list (implementation dependent)

  • **kwargs – Arbitrary keyword arguments (implementation dependent)

Returns:

Final loss per learner over all examples

Return type:

Union[float, Tuple[float, …]]
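
A hedged sketch of supervised-style fitting; the positional arguments below (features, targets) are assumptions, since the accepted arguments are implementation dependent.

    import numpy as np

    X = np.random.randn(256, 8).astype(np.float32)  # features
    y = np.random.randn(256, 1).astype(np.float32)  # regression targets

    # Argument list is implementation dependent; (features, targets) is
    # assumed here for illustration.
    final_loss = model.fit(X, y)
    print(final_loss)  # float, or a tuple of floats with multiple learners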

get_device() str | Tuple[str, str][source]

Gets GBRL device/devices per learner

Returns:

GBRL device per learner

Return type:

Union[str, Tuple[str, str]]

get_iteration() int | Tuple[int, ...][source]

Gets the number of boosting iterations per learner

Returns:

Number of boosting iterations per learner

Return type:

Union[int, Tuple[int, …]]

get_num_trees(*args, **kwargs) int | Tuple[int, ...][source]

Gets the number of trees in the ensemble per learner

Parameters:
  • *args – Variable length argument list (implementation dependent)

  • **kwargs – Arbitrary keyword arguments (implementation dependent)

Returns:

Number of trees in the ensemble per learner

Return type:

Union[int, Tuple[int, …]]

get_params() Tuple[ndarray | Tensor | Tuple[ndarray | Tensor, ...], ndarray | Tensor | Tuple[ndarray | Tensor, ...] | None][source]

Gets predicted model parameters and their respective gradients

Returns:

Model parameters and their gradients

Return type:

Tuple[Union[NumericalData, Tuple[NumericalData, …]], Optional[Union[NumericalData, Tuple[NumericalData, …]]]]
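
The returned tuple can be unpacked as below; whether the gradient element is populated (not None) depends on the subclass and on the training state.

    params, grads = model.get_params()

    # Single learner: `params` is one array/tensor of predicted parameters.
    # Multiple learners: a tuple with one entry per learner. `grads` mirrors
    # that structure, or is None when no gradients are available.
    if grads is not None:
        print(type(params), type(grads))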

get_schedule_learning_rates() float | Tuple[float, ...][source]

Gets the learning rate values of the ensemble's optimizers according to their schedule

A constant schedule keeps the learning rate unchanged; a linear schedule sets the value according to the number of trees in the ensemble.

Returns:

Learning rate schedule per optimizer

Return type:

Union[float, Tuple[float, …]]
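
A small inspection sketch; the comment on the linear schedule paraphrases the description above rather than the implementation.

    lr = model.get_schedule_learning_rates()
    # Single optimizer -> float; multiple optimizers -> tuple of floats.
    # A constant schedule always returns the same value; a linear schedule's
    # value depends on the current number of trees in the ensemble.
    print(lr, model.get_num_trees())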

get_total_iterations() int[source]

Gets the total number of boosting iterations

Returns:

Total number of boosting iterations

(sum of actor and critic if they are not shared, otherwise equals get_iteration())

Return type:

int
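
A sketch of how the iteration counters relate, following the shared/separate actor-critic behaviour described above.

    per_learner = model.get_iteration()   # int, or tuple of ints per learner
    total = model.get_total_iterations()  # always a single int

    # Separate actor and critic ensembles: total is the sum over learners.
    # Shared ensemble: total equals the single get_iteration() value.
    if isinstance(per_learner, tuple):
        assert total == sum(per_learner)
    else:
        assert total == per_learner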

classmethod load_learner(load_name: str, device: str) BaseGBT[source]

Loads a BaseGBT model from a file

Parameters:
  • load_name (str) – Full path to the saved model file

  • device (str) – Device to load the model onto (‘cpu’ or ‘cuda’)

Returns:

Loaded BaseGBT model instance

Return type:

BaseGBT
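
A save/load roundtrip sketch, assuming MyGBT is the hypothetical concrete subclass from the overview above; the path is a placeholder.

    # Persist the trained model (see save_learner() below), then restore it
    # onto a chosen device.
    model.save_learner('/tmp/gbrl_model')
    restored = MyGBT.load_learner('/tmp/gbrl_model', device='cpu')
    print(restored.get_num_trees())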

plot_tree(tree_idx: int, filename: str, *args, **kwargs) None[source]

Plots tree using graphviz (only works if GBRL was compiled with graphviz)

Parameters:
  • tree_idx (int) – Tree index to plot

  • filename (str) – Name of the .png file to save

  • *args – Variable length argument list (implementation dependent)

  • **kwargs – Arbitrary keyword arguments (implementation dependent)
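
A quick sketch; this only works when GBRL was compiled with graphviz, and any extra arguments are implementation dependent.

    # Render the first tree of the ensemble to a .png file (placeholder path).
    model.plot_tree(0, '/tmp/tree_0.png')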

print_tree(tree_idx: int, *args, **kwargs) None[source]

Prints tree information

Parameters:
  • tree_idx (int) – Tree index to print

  • *args – Variable length argument list (implementation dependent)

  • **kwargs – Arbitrary keyword arguments (implementation dependent)

save_learner(save_path: str) None[source]

Saves model to file

Parameters:

save_path (str) – Absolute path and name of the saved model file

set_bias(*args, **kwargs) None[source]

Sets GBRL bias

Parameters:
  • *args – Variable length argument list (implementation dependent)

  • **kwargs – Arbitrary keyword arguments (implementation dependent)
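
A hedged sketch: initializing the bias to the mean target value is a common gradient-boosting convention, but the arguments actually accepted by set_bias are implementation dependent and assumed here.

    import numpy as np

    y = np.random.randn(256, 1).astype(np.float32)  # targets

    # A per-output bias vector is assumed here for illustration.
    model.set_bias(y.mean(axis=0))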

set_device(device: str)[source]

Sets GBRL device (either cpu or cuda)

Parameters:

device (str) – Device choice, must be in [‘cpu’, ‘cuda’]
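
A small sketch assuming a CUDA-enabled build; get_device() confirms the active device per learner.

    model.set_device('cuda')   # move the ensemble to the GPU
    print(model.get_device())  # 'cuda', or a tuple of devices per learner
    model.set_device('cpu')    # and back to the CPU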

set_feature_weights(feature_weights: ndarray | Tensor) None[source]

Sets GBRL feature weights

Parameters:

feature_weights (NumericalData) – Feature weights to set
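
A sketch assuming one weight per input feature; the expected shape and the exact effect of the weights on tree construction depend on the implementation.

    import numpy as np

    # One weight per input feature (10 features assumed here), e.g. to
    # emphasize or de-emphasize individual features.
    weights = np.ones(10, dtype=np.float32)
    weights[3] = 2.0
    model.set_feature_weights(weights)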

shap(features: ndarray | Tensor, *args, **kwargs) ndarray | Tuple[ndarray, ndarray][source]

Calculates SHAP values for the entire ensemble

Implementation based on https://github.com/yupbank/linear_tree_shap. See Linear TreeShap, Yu et al., 2023, https://arxiv.org/pdf/2209.08192.

Parameters:
  • features (NumericalData) – Input features

  • *args – Variable length argument list (implementation dependent)

  • **kwargs – Arbitrary keyword arguments (implementation dependent)

Returns:

SHAP values of shape [n_samples, number of input features, number of outputs]. The output is a tuple of SHAP values per model only in the case of a separate actor-critic model.

Return type:

Union[np.ndarray, Tuple[np.ndarray, np.ndarray]]
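
A sketch of computing ensemble SHAP values; the output shape follows the description above, and a separate actor-critic model yields a tuple instead of a single array.

    import numpy as np

    X = np.random.randn(32, 10).astype(np.float32)
    shap_values = model.shap(X)

    # Single model: array of shape [n_samples, n_features, n_outputs].
    # Separate actor-critic model: a tuple (actor_shap, critic_shap).
    if isinstance(shap_values, tuple):
        actor_shap, critic_shap = shap_values
    else:
        print(shap_values.shape)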

abstractmethod step(*args, **kwargs) None[source]

Performs a boosting step (fits a single tree on the gradients)

Parameters:
  • *args – Variable length argument list (implementation dependent)

  • **kwargs – Arbitrary keyword arguments (implementation dependent)
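
step() is abstract; concrete subclasses fit one tree per call on gradients supplied by the caller (e.g. from an RL objective). The sketch below shows that call pattern under the assumption that the subclass accepts (features, gradients); the actual argument list is implementation dependent.

    import numpy as np

    X = np.random.randn(64, 10).astype(np.float32)

    # Gradient of the loss w.r.t. the model outputs for this batch, computed
    # by the caller; shape [n_samples, n_outputs] is assumed here.
    grads = np.random.randn(64, 2).astype(np.float32)

    # One boosting step: fit a single tree on these gradients.
    model.step(X, grads)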

tree_shap(tree_idx: int, features: ndarray | Tensor, *args, **kwargs) ndarray | Tuple[ndarray, ndarray][source]

Calculates SHAP values for a single tree

Implementation based on https://github.com/yupbank/linear_tree_shap. See Linear TreeShap, Yu et al., 2023, https://arxiv.org/pdf/2209.08192.

Parameters:
  • tree_idx (int) – Tree index

  • features (NumericalData) – Input features

  • *args – Variable length argument list (implementation dependent)

  • **kwargs – Arbitrary keyword arguments (implementation dependent)

Returns:

SHAP values of shape [n_samples, number of input features, number of outputs]. The output is a tuple of SHAP values per model only in the case of a separate actor-critic model.

Return type:

Union[np.ndarray, Tuple[np.ndarray, np.ndarray]]
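
A sketch contrasting per-tree attributions with the full-ensemble shap() above; the shape convention and the actor-critic tuple case are as described.

    import numpy as np

    X = np.random.randn(32, 10).astype(np.float32)

    # SHAP values contributed by tree 0 alone, same shape convention as shap().
    tree0_shap = model.tree_shap(0, X)

    # For an additive tree ensemble, summing per-tree attributions over all
    # trees should recover the full-ensemble attribution (up to the bias term).
    # full_shap = sum(model.tree_shap(i, X) for i in range(model.get_num_trees()))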