GBTLearner

Concrete implementation of BaseLearner that wraps a single gradient boosting tree ensemble using the GBRL C++ backend. It supports training, prediction, saving/loading, and SHAP value computation.

class gbrl.learners.gbt_learner.GBTLearner(input_dim: int, output_dim: int, tree_struct: Dict, optimizers: Dict | List, params: Dict, verbose: int = 0, device: str = 'cpu')[source]

Bases: BaseLearner

GBTLearner is a gradient boosted tree learner that utilizes a C++ backend for efficient computation. It supports training, prediction, saving, loading, and SHAP value computation.
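
A minimal construction sketch is shown below. The dictionary keys used for tree_struct, the optimizer, and params are illustrative assumptions rather than an authoritative schema; consult the GBRL configuration documentation for the exact options supported by your version.

    from gbrl.learners.gbt_learner import GBTLearner

    # NOTE: the dictionary keys below are illustrative assumptions,
    # not a definitive configuration schema.
    tree_struct = {"max_depth": 4, "n_bins": 256, "min_data_in_leaf": 0, "par_th": 10}
    optimizer = {"algo": "SGD", "lr": 0.1, "start_idx": 0, "stop_idx": 1}
    params = {"split_score_func": "Cosine", "generator_type": "Quantile"}

    learner = GBTLearner(
        input_dim=8,            # number of input features
        output_dim=1,           # dimensionality of the predicted output
        tree_struct=tree_struct,
        optimizers=[optimizer],
        params=params,
        verbose=0,
        device="cpu",
    )
    # Depending on the GBRL version, reset() may need to be called to
    # (re)initialize the C++ model and optimizers before training.
    learner.reset()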

distil(obs: ndarray | Tensor, targets: ndarray, params: Dict, verbose: int = 0) Tuple[int, Dict][source]

Distills the model into a student model.

Parameters:
  • obs (NumericalData) – Input observations.

  • targets (np.ndarray) – Target values.

  • params (Dict) – Distillation parameters.

  • verbose (int, optional) – Verbosity level. Defaults to 0.

Returns:

The final loss and updated parameters.

Return type:

Tuple[int, Dict]

export(filename: str, modelname: str = None) None[source]

Exports the model to a C header file.

Parameters:
  • filename (str) – The filename to export the model to.

  • modelname (str, optional) – The name of the model in the C code. Defaults to None.
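
For example (the header filename and model name below are arbitrary choices):

    # Write the trained ensemble as a C header file; 'modelname' controls the
    # identifier used for the model in the generated C code.
    learner.export("gbt_model.h", modelname="gbt_model")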

fit(features: ndarray | Tensor, targets: ndarray | Tensor, iterations: int, shuffle: bool = True, loss_type: str = 'MultiRMSE') float[source]

Fits the model to the provided features and targets for a given number of iterations.

Parameters:
  • features (NumericalData) – Input features.

  • targets (NumericalData) – Target values.

  • iterations (int) – Number of training iterations.

  • shuffle (bool, optional) – Whether to shuffle the data. Defaults to True.

  • loss_type (str, optional) – Type of loss function. Defaults to ‘MultiRMSE’.

Returns:

The final loss value.

Return type:

float
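
A minimal supervised-fitting sketch with NumPy arrays, continuing the construction example above (the shapes match input_dim=8 and output_dim=1; the data is random and purely illustrative):

    import numpy as np

    X = np.random.randn(1000, 8).astype(np.float32)
    y = np.random.randn(1000, 1).astype(np.float32)

    final_loss = learner.fit(X, y, iterations=100, shuffle=True, loss_type="MultiRMSE")
    print(f"final loss: {final_loss:.4f}, trees in ensemble: {learner.get_num_trees()}")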

get_bias() ndarray[source]

Returns the bias of the model.

Returns:

The bias.

Return type:

np.ndarray

get_device() str[source]

Returns the device the model is running on.

Returns:

The device.

Return type:

str

get_feature_weights() ndarray[source]

Returns the feature weights of the model.

Returns:

The feature weights.

Return type:

np.ndarray

get_iteration() int[source]

Returns the current iteration number.

Returns:

The current iteration number.

Return type:

int

get_num_trees() int[source]

Returns the total number of trees in the ensemble.

Returns:

The total number of trees.

Return type:

int

get_schedule_learning_rates() int | Tuple[int, int][source]

Returns the learning rates of the schedulers.

Returns:

The learning rates.

Return type:

Union[int, Tuple[int, int]]

classmethod load(filename: str, device: str) GBTLearner[source]

Loads a GBTLearner model from a file.

Parameters:
  • filename (str) – The filename to load the model from.

  • device (str) – The device to load the model onto.

Returns:

The loaded GBTLearner instance.

Return type:

GBTLearner
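
Continuing the examples above, a save/load round trip might look as follows (the checkpoint name is arbitrary; see save() below):

    from gbrl.learners.gbt_learner import GBTLearner

    learner.save("gbt_checkpoint")
    restored = GBTLearner.load("gbt_checkpoint", device="cpu")
    assert restored.get_num_trees() == learner.get_num_trees()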

plot_tree(tree_idx: int, filename: str) None[source]

Plots the tree at the given index and saves it to a file.

Parameters:
  • tree_idx (int) – The index of the tree to plot.

  • filename (str) – The filename to save the plot to.

predict(features: ndarray | Tensor, requires_grad: bool = True, start_idx: int = 0, stop_idx: int = None, tensor: bool = True) ndarray[source]

Predicts the output for the given features.

Parameters:
  • features (NumericalData) – Input features.

  • requires_grad (bool, optional) – Whether to compute gradients. Defaults to True.

  • start_idx (int, optional) – Start index for prediction. Defaults to 0.

  • stop_idx (int, optional) – Stop index for prediction. Defaults to None.

  • tensor (bool, optional) – Whether to return a tensor. Defaults to True.

Returns:

The predicted output.

Return type:

np.ndarray
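
For example, continuing the fit() sketch above (this assumes start_idx/stop_idx index trees within the ensemble):

    # Full-ensemble prediction, returned as a NumPy array without gradient tracking.
    preds = learner.predict(X, requires_grad=False, tensor=False)

    # Prediction using only the first 10 trees of the ensemble.
    partial = learner.predict(X, requires_grad=False, start_idx=0, stop_idx=10, tensor=False)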

print_ensemble_metadata()[source]

Prints the metadata of the ensemble.

print_tree(tree_idx: int) None[source]

Prints the tree at the given index.

Parameters:

tree_idx (int) – The index of the tree to print.

reset() None[source]

Resets the learner to its initial state, reinitializing the C++ model and optimizers.

save(filename: str) None[source]

Saves the model to a file.

Parameters:

filename (str) – The filename to save the model to.

set_bias(bias: ndarray | float) None[source]

Sets the bias of the model.

Parameters:

bias (Union[np.ndarray, float]) – The bias value.

set_device(device: str | device) None[source]

Sets the device the model should run on.

Parameters:

device (Union[str, th.device]) – The device to set.

set_feature_weights(feature_weights: ndarray | float) None[source]

Sets the feature weights of the model.

Parameters:

feature_weights (Union[np.ndarray, float]) – The feature weights.

shap(features: ndarray | Tensor) ndarray[source]

Computes SHAP values for the entire ensemble.

Uses Linear TreeShap for each tree in the ensemble (sequentially). Implementation based on https://github.com/yupbank/linear_tree_shap. See Linear TreeShap, Yu et al., 2023, https://arxiv.org/pdf/2209.08192.

Parameters:

features (NumericalData) – Input features.

Returns:

shap values

Return type:

np.ndarray
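
For example, continuing the fit() sketch above:

    # One SHAP value per sample and feature, computed over the whole ensemble.
    shap_values = learner.shap(X)
    print(shap_values.shape)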

step(features: ndarray | Tensor | Tuple, grads: ndarray | Tensor) None[source]

Performs a single gradient update step (e.g., adding a single decision tree).

Parameters:
  • features (Union[np.ndarray, th.Tensor, Tuple]) – Input features.

  • grads (NumericalData) – Gradients.
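
A functional-gradient sketch, continuing the fit() sketch above: compute the gradient of a loss with respect to the current predictions and take one boosting step on it. The squared-error loss here is illustrative; any gradient of a differentiable loss with respect to the predictions can be used.

    # Current ensemble predictions as a NumPy array.
    preds = learner.predict(X, requires_grad=False, tensor=False)

    # Gradient of 0.5 * ||preds - y||^2 with respect to the predictions.
    grads = (preds - y).astype(np.float32)

    # Add a single boosting step (one tree) driven by these gradients.
    learner.step(X, grads)
    print(learner.get_iteration(), learner.get_num_trees())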

tree_shap(tree_idx: int, features: ndarray | Tensor) ndarray[source]

Computes SHAP values for a single tree.

Implementation based on https://github.com/yupbank/linear_tree_shap. See Linear TreeShap, Yu et al., 2023, https://arxiv.org/pdf/2209.08192.

Parameters:
  • tree_idx (int) – The index of the tree.

  • features (NumericalData) – Input features.

Returns:

shap values

Return type:

np.ndarray
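
For example, continuing the fit() sketch above:

    # SHAP values contributed by tree index 0 alone.
    tree_values = learner.tree_shap(0, X)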