GradientBoostedTreesLearner
GradientBoostedTreesLearner(
label: str,
task: Task = CLASSIFICATION,
*,
weights: Optional[str] = None,
ranking_group: Optional[str] = None,
uplift_treatment: Optional[str] = None,
features: Optional[ColumnDefs] = None,
include_all_columns: bool = False,
max_vocab_count: int = 2000,
min_vocab_frequency: int = 5,
discretize_numerical_columns: bool = False,
num_discretized_numerical_bins: int = 255,
max_num_scanned_rows_to_infer_semantic: int = 100000,
max_num_scanned_rows_to_compute_statistics: int = 100000,
data_spec: Optional[DataSpecification] = None,
extra_training_config: Optional[TrainingConfig] = None,
adapt_subsample_for_maximum_training_duration: bool = False,
allow_na_conditions: bool = False,
apply_link_function: bool = True,
categorical_algorithm: str = "CART",
categorical_set_split_greedy_sampling: float = 0.1,
categorical_set_split_max_num_items: int = -1,
categorical_set_split_min_item_frequency: int = 1,
compute_permutation_variable_importance: bool = False,
cross_entropy_ndcg_truncation: Optional[int] = None,
dart_dropout: Optional[float] = None,
early_stopping: str = "LOSS_INCREASE",
early_stopping_initial_iteration: int = 10,
early_stopping_num_trees_look_ahead: int = 30,
focal_loss_alpha: Optional[float] = None,
focal_loss_gamma: Optional[float] = None,
forest_extraction: str = "MART",
goss_alpha: float = 0.2,
goss_beta: float = 0.1,
growing_strategy: str = "LOCAL",
honest: bool = False,
honest_fixed_separation: bool = False,
honest_ratio_leaf_examples: float = 0.5,
in_split_min_examples_check: bool = True,
keep_non_leaf_label_distribution: bool = True,
l1_regularization: float = 0.0,
l2_categorical_regularization: float = 1.0,
l2_regularization: float = 0.0,
lambda_loss: float = 1.0,
loss: str = "DEFAULT",
max_depth: int = 6,
max_num_nodes: Optional[int] = None,
maximum_model_size_in_memory_in_bytes: float = -1.0,
maximum_training_duration_seconds: float = -1.0,
mhld_oblique_max_num_attributes: Optional[int] = None,
mhld_oblique_sample_attributes: Optional[bool] = None,
min_examples: int = 5,
missing_value_policy: str = "GLOBAL_IMPUTATION",
ndcg_truncation: Optional[int] = None,
num_candidate_attributes: Optional[int] = -1,
num_candidate_attributes_ratio: Optional[float] = None,
num_trees: int = 300,
numerical_vector_sequence_num_examples: int = 1000,
numerical_vector_sequence_num_random_anchors: int = 100,
pure_serving_model: bool = False,
random_seed: int = 123456,
sampling_method: str = "RANDOM",
selective_gradient_boosting_ratio: float = 0.01,
shrinkage: float = 0.1,
sorting_strategy: str = "PRESORT",
sparse_oblique_max_num_features: Optional[int] = None,
sparse_oblique_max_num_projections: Optional[int] = None,
sparse_oblique_normalization: Optional[str] = None,
sparse_oblique_num_projections_exponent: Optional[float] = None,
sparse_oblique_projection_density_factor: Optional[float] = None,
sparse_oblique_weights: Optional[str] = None,
sparse_oblique_weights_integer_maximum: Optional[int] = None,
sparse_oblique_weights_integer_minimum: Optional[int] = None,
sparse_oblique_weights_power_of_two_max_exponent: Optional[int] = None,
sparse_oblique_weights_power_of_two_min_exponent: Optional[int] = None,
split_axis: str = "AXIS_ALIGNED",
subsample: float = 1.0,
uplift_min_examples_in_treatment: int = 5,
uplift_split_score: str = "KULLBACK_LEIBLER",
use_hessian_gain: bool = False,
validation_interval_in_trees: int = 1,
validation_ratio: float = 0.1,
workers: Optional[Sequence[str]] = None,
resume_training: bool = False,
resume_training_snapshot_interval_seconds: int = 1800,
working_dir: Optional[str] = None,
num_threads: Optional[int] = None,
tuner: Optional[AbstractTuner] = None,
feature_selector: Optional[AbstractFeatureSelector] = None,
explicit_args: Optional[Set[str]] = None
)
Bases: GenericCCLearner
Gradient Boosted Trees learning algorithm.
A Gradient Boosted Trees (GBT) model, also known as Gradient Boosted Decision Trees (GBDT) or Gradient Boosted Machines (GBM), is a set of shallow decision trees trained sequentially. Each tree is trained to predict and then "correct" the errors of the previously trained trees (more precisely, each tree predicts the gradient of the loss relative to the model output).
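Schematically, with shrinkage (learning rate) η, the ensemble prediction after m trees is F_m(x) = F_{m-1}(x) + η · h_m(x), where the new tree h_m is fit to the negative gradient of the loss evaluated at the current prediction F_{m-1}(x).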
Usage example:
import ydf
import pandas as pd
dataset = pd.read_csv("project/dataset.csv")
model = ydf.GradientBoostedTreesLearner(label="my_label").train(dataset)
print(model.describe())
Hyperparameters are configured to give reasonable results for typical
datasets. Hyperparameters can also be modified manually (see the descriptions
below) or by applying the hyperparameter templates available with
GradientBoostedTreesLearner.hyperparameter_templates()
(see this function's documentation for details).
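For example, hyperparameters can be set directly in the constructor. The following sketch uses a placeholder label column ("my_label") and illustrative values; a smaller shrinkage usually calls for more trees:

import ydf
import pandas as pd

dataset = pd.read_csv("project/dataset.csv")

# Illustrative manual settings: slower shrinkage, more and deeper trees.
learner = ydf.GradientBoostedTreesLearner(
    label="my_label",   # placeholder label column
    num_trees=500,
    shrinkage=0.05,
    max_depth=8,
)
model = learner.train(dataset)
print(model.describe())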
Attributes:
Name | Description
---|---
label | Label of the dataset. The label column should not be identified as a feature in the `features` parameter.
task | Task to solve (e.g. Task.CLASSIFICATION, Task.REGRESSION, Task.RANKING, Task.CATEGORICAL_UPLIFT, Task.NUMERICAL_UPLIFT).
weights | Name of a feature that identifies the weight of each example. If weights are not specified, unit weights are assumed. The weight column should not be identified as a feature in the `features` parameter.
ranking_group | Only for `task=Task.RANKING`. Name of the feature that identifies the group of each example (e.g. the query in a query/document ranking problem). The ranking group column should not be identified as a feature in the `features` parameter.
uplift_treatment | Only for `task=Task.CATEGORICAL_UPLIFT` and `task=Task.NUMERICAL_UPLIFT`. Name of the feature that identifies the treatment of each example. The treatment column should not be identified as a feature in the `features` parameter.
features | If None, all columns are used as features. The semantic of the features is determined automatically. Otherwise, if `include_all_columns=False` (default), only the columns listed in `features` are used as features, with the specified semantics. If `include_all_columns=True`, all columns are used as features and `features` only overrides the semantics of the listed columns.
include_all_columns | See `features`.
max_vocab_count | Maximum size of the vocabulary of CATEGORICAL and CATEGORICAL_SET columns stored as strings. If more unique values exist, only the most frequent values are kept, and the remaining values are considered out-of-vocabulary.
min_vocab_frequency | Minimum number of occurrences of a value for CATEGORICAL and CATEGORICAL_SET columns. Values observed fewer than `min_vocab_frequency` times are considered out-of-vocabulary.
discretize_numerical_columns | If true, discretize all the numerical columns before training. Discretized numerical columns are faster to train with, but they can have a negative impact on the model quality. Using `discretize_numerical_columns=True` is equivalent to setting the DISCRETIZED_NUMERICAL semantic on all numerical columns.
num_discretized_numerical_bins | Number of bins used when discretizing numerical columns.
max_num_scanned_rows_to_infer_semantic | Number of rows to scan when inferring the column's semantic if it is not explicitly specified. Only used when reading from file; in-memory datasets are always read in full. Setting this to a lower number will speed up dataset reading, but might result in incorrect column semantics. Set to -1 to scan the entire dataset.
max_num_scanned_rows_to_compute_statistics | Number of rows to scan when computing a column's statistics. Only used when reading from file; in-memory datasets are always read in full. A column's statistics include the dictionary for categorical features and the mean / min / max for numerical features. Setting this to a lower number will speed up dataset reading, but skew statistics in the dataspec, which can hurt model quality (e.g. if an important category of a categorical feature is considered OOV). Set to -1 to scan the entire dataset.
data_spec | Dataspec to be used (advanced). If a data spec is given, the dataset-reading arguments above (e.g. `features`, `max_vocab_count`, `min_vocab_frequency`) are ignored.
extra_training_config | Training configuration proto (advanced). If set, this training configuration proto is merged with the one implicitly defined by the learner. Can be used to set internal or advanced parameters that are not exposed as constructor arguments. Parameters in extra_training_config have higher priority than the constructor arguments.
adapt_subsample_for_maximum_training_duration | Controls how the maximum training duration (if set) is applied. If false, training stops when the time budget is exhausted. If true, the size of the sampled datasets used to train individual trees is adapted dynamically so that all the trees are trained in time. Default: False.
allow_na_conditions | If true, the tree training evaluates conditions of the type `X is NA`, i.e. the attribute value is missing. Default: False.
apply_link_function | If true, applies the link function (a.k.a. activation function), if any, before returning the model prediction. If false, returns the pre-link function model output. For example, in the case of binary classification, the pre-link function output is a logit while the post-link function output is a probability. Default: True.
categorical_algorithm | How to learn splits on categorical attributes. Possible values include `CART` (default), `ONE_HOT` and `RANDOM`. Default: "CART".
categorical_set_split_greedy_sampling | For categorical set splits e.g. texts. Probability for a categorical value to be a candidate for the positive set. The sampling is applied once per node (i.e. not at every step of the greedy optimization). Default: 0.1.
categorical_set_split_max_num_items | For categorical set splits e.g. texts. Maximum number of items (prior to the sampling). If more items are available, the least frequent items are ignored. Changing this value is similar to changing "max_vocab_count" before loading the dataset. Default: -1.
categorical_set_split_min_item_frequency | For categorical set splits e.g. texts. Minimum number of occurrences of an item to be considered. Default: 1.
compute_permutation_variable_importance | If true, compute the permutation variable importance of the model at the end of the training using the validation dataset. Enabling this feature can increase the training time significantly. Default: False.
cross_entropy_ndcg_truncation | Truncation of the cross-entropy NDCG loss (default 5). Only used with the cross-entropy NDCG loss, i.e. `loss="XE_NDCG_MART"`. Default: None.
dart_dropout | Dropout rate applied when using DART, i.e. when `forest_extraction=DART`. Default: None.
early_stopping | Early stopping detects the overfitting of the model and halts its training using the validation dataset. If not provided directly, the validation dataset is extracted from the training dataset (see the "validation_ratio" parameter). Possible values include `NONE`, `MIN_LOSS_FINAL` and `LOSS_INCREASE`. Default: "LOSS_INCREASE".
early_stopping_initial_iteration | 0-based index of the first iteration considered for early stopping computation. Increasing this value prevents too early stopping due to noisy initial iterations of the learner. Default: 10.
early_stopping_num_trees_look_ahead | Rolling number of trees used to detect a validation loss increase and trigger early stopping. Default: 30.
focal_loss_alpha | EXPERIMENTAL, default 0.5. Weighting parameter for focal loss: positive samples are weighted by alpha, negative samples by (1-alpha). The default value of 0.5 means no active class-level weighting. Only used with focal loss, i.e. `loss="BINARY_FOCAL_LOSS"`. Default: None.
focal_loss_gamma | EXPERIMENTAL, default 2.0. Exponent of the misprediction term in focal loss; corresponds to the gamma parameter in https://arxiv.org/pdf/1708.02002.pdf. Only used with focal loss, i.e. `loss="BINARY_FOCAL_LOSS"`. Default: None.
forest_extraction | How to construct the forest. - MART: Multiple Additive Regression Trees, the "classical" way to build a GBDT, i.e. each tree tries to "correct" the mistakes of the previous trees. - DART: Dropout Additive Regression Trees, a modification of MART proposed in http://proceedings.mlr.press/v38/korlakaivinayak15.pdf, where each tree tries to "correct" the mistakes of a random subset of the previous trees. Default: "MART".
goss_alpha | Alpha parameter for the GOSS (Gradient-based One-Side Sampling; see "LightGBM: A Highly Efficient Gradient Boosting Decision Tree") sampling method. Default: 0.2.
goss_beta | Beta parameter for the GOSS (Gradient-based One-Side Sampling) sampling method. Default: 0.1.
growing_strategy | How to grow the tree. Possible values include `LOCAL` (default; the tree is grown greedily, layer by layer, up to `max_depth`) and `BEST_FIRST_GLOBAL` (the node with the best gain is split first, up to `max_num_nodes`). Default: "LOCAL".
honest | In honest trees, different training examples are used to infer the structure and the leaf values. This regularization technique trades examples for bias estimates. It might increase or reduce the quality of the model. See "Generalized Random Forests", Athey et al. In this paper, honest trees are trained with the Random Forest algorithm with sampling without replacement. Default: False.
honest_fixed_separation | For honest trees only, i.e. honest=true. If true, a new random separation is generated for each tree. If false, the same separation is used for all the trees (e.g., in Gradient Boosted Trees containing multiple trees). Default: False.
honest_ratio_leaf_examples | For honest trees only, i.e. honest=true. Ratio of examples used to set the leaf values. Default: 0.5.
in_split_min_examples_check | Whether to check the `min_examples` constraint in the split search (i.e. reject splits that would leave a child with fewer than `min_examples` examples) or only before the split search. Default: True.
keep_non_leaf_label_distribution | Whether to keep the node value (i.e. the distribution of the labels of the training examples) of non-leaf nodes. This information is not used during serving, but it can be used for model interpretation as well as hyper-parameter tuning. This can take lots of space, sometimes accounting for half of the model size. Default: True.
l1_regularization | L1 regularization applied to the training loss. Impacts the tree structures and leaf values. Default: 0.0.
l2_categorical_regularization | L2 regularization applied to the training loss for categorical features. Impacts the tree structures and leaf values. Default: 1.0.
l2_regularization | L2 regularization applied to the training loss for all features except the categorical ones. Default: 0.0.
lambda_loss | Lambda regularization applied to certain training loss functions. Only for NDCG loss. Default: 1.0.
loss | The loss optimized by the model. If not specified (DEFAULT), the loss is selected automatically according to the "task" and label statistics. For example, if task=CLASSIFICATION and the label has two possible values, the loss is set to BINOMIAL_LOG_LIKELIHOOD. Possible values include `DEFAULT`, `BINOMIAL_LOG_LIKELIHOOD`, `SQUARED_ERROR`, `MULTINOMIAL_LOG_LIKELIHOOD`, `LAMBDA_MART_NDCG`, `XE_NDCG_MART` and `BINARY_FOCAL_LOSS`. Default: "DEFAULT".
max_depth | Maximum depth of the tree. Default: 6.
max_num_nodes | Maximum number of nodes in the tree. Set to -1 to disable this limit. Only available for `growing_strategy=BEST_FIRST_GLOBAL`. Default: None.
maximum_model_size_in_memory_in_bytes | Limit the size of the model when stored in RAM. Different algorithms can enforce this limit differently. Note that when models are compiled into an inference engine, the size of the engine is generally much smaller than the original model. Default: -1.0.
maximum_training_duration_seconds | Maximum training duration of the model expressed in seconds. Each learning algorithm is free to use this parameter as it sees fit. Enabling maximum training duration makes the model training non-deterministic. Default: -1.0.
mhld_oblique_max_num_attributes | For MHLD oblique splits, i.e. `split_axis=MHLD_OBLIQUE`. Maximum number of attributes used in a projection. Default: None.
mhld_oblique_sample_attributes | For MHLD oblique splits, i.e. `split_axis=MHLD_OBLIQUE`. If true, the attributes used to build a projection are sampled. Default: None.
min_examples | Minimum number of examples in a node. Default: 5.
missing_value_policy | Method used to handle missing attribute values. Possible values include `GLOBAL_IMPUTATION` (default; missing values are replaced by the mean or most frequent value computed on the whole dataset), `LOCAL_IMPUTATION` and `RANDOM_LOCAL_IMPUTATION`. Default: "GLOBAL_IMPUTATION".
ndcg_truncation | Truncation of the NDCG loss (default 5). Only used with NDCG loss, i.e. `loss="LAMBDA_MART_NDCG"`. Default: None.
num_candidate_attributes | Number of unique valid attributes tested for each node. An attribute is valid if it has at least one valid split. With `num_candidate_attributes=-1` (default), all attributes are tested. Default: -1.
num_candidate_attributes_ratio | Ratio of attributes tested at each node. If set, it is equivalent to `num_candidate_attributes = number_of_input_features x num_candidate_attributes_ratio`. If not set or set to -1, `num_candidate_attributes` is used instead. Default: None.
num_trees | Maximum number of decision trees. The effective number of trained trees can be smaller if early stopping is enabled. Default: 300.
numerical_vector_sequence_num_examples | For datasets with NUMERICAL_VECTOR_SEQUENCE features (i.e., sequences of fixed-size numerical vectors). Maximum number of examples to use to find splits. A larger value can improve the model quality but takes longer to train. Default: 1000.
numerical_vector_sequence_num_random_anchors | For datasets with NUMERICAL_VECTOR_SEQUENCE features (i.e., sequences of fixed-size numerical vectors). The number of randomly generated anchor values. A larger value can improve the model quality but takes longer to train. Default: 100.
pure_serving_model | Clear the model from any information that is not required for model serving. This includes debugging, model interpretation and other metadata. The size of the serialized model can be reduced significantly (a 50% model size reduction is common). This parameter has no impact on the quality, serving speed or RAM usage of model serving. Default: False.
random_seed | Random seed for the training of the model. Learners are expected to be deterministic given the random seed. Default: 123456.
sampling_method | Controls the sampling of the datasets used to train individual trees. - NONE: No sampling is applied. This is equivalent to RANDOM sampling with "subsample=1". - RANDOM (default): Uniform random sampling. Automatically selected if "subsample" is set. - GOSS: Gradient-based One-Side Sampling. Automatically selected if "goss_alpha" or "goss_beta" is set. - SELGB: Selective Gradient Boosting. Automatically selected if "selective_gradient_boosting_ratio" is set. Only valid for ranking. Default: "RANDOM".
selective_gradient_boosting_ratio | Ratio of the dataset used to train individual trees for the Selective Gradient Boosting sampling method ("Selective Gradient Boosting for Effective Learning to Rank"; Lucchese et al; http://quickrank.isti.cnr.it/selective-data/selective-SIGIR2018.pdf). Default: 0.01.
shrinkage | Coefficient applied to each tree prediction. A small value (e.g. 0.02) tends to give more accurate results (assuming enough trees are trained), but results in larger models. Analogous to the learning rate in neural networks. Fixed to 1.0 for DART models. Default: 0.1.
sorting_strategy | How numerical features are sorted in order to find the splits. - AUTO: Selects the most efficient method among IN_NODE, FORCE_PRESORT, and LAYER. - IN_NODE: The features are sorted just before being used in the node. This solution is slow but consumes little memory. - FORCE_PRESORT: The features are pre-sorted at the start of the training. This solution is faster but consumes much more memory than IN_NODE. - PRESORT: Automatically choose between FORCE_PRESORT and IN_NODE. Default: "PRESORT".
sparse_oblique_max_num_features | For sparse oblique splits, i.e. `split_axis=SPARSE_OBLIQUE`. Maximum number of features in a projection. Set to -1 for no maximum. Default: None.
sparse_oblique_max_num_projections | For sparse oblique splits, i.e. `split_axis=SPARSE_OBLIQUE`. Maximum number of projections tested at each node. Increasing this value increases the training time. Default: None.
sparse_oblique_normalization | For sparse oblique splits, i.e. `split_axis=SPARSE_OBLIQUE`. Normalization applied to the features before computing the projections. Possible values include `NONE`, `STANDARD_DEVIATION` and `MIN_MAX`. Default: None.
sparse_oblique_num_projections_exponent | For sparse oblique splits, i.e. `split_axis=SPARSE_OBLIQUE`. Controls the number of random projections tested at each node as a power of the number of features. Default: None.
sparse_oblique_projection_density_factor | Density of the projections as an exponent of the number of features. Independently for each projection, each feature has a probability "projection_density_factor / num_features" to be considered in the projection. The paper "Sparse Projection Oblique Random Forests" (Tomita et al, 2020) calls this parameter `lambda`. Default: None.
sparse_oblique_weights | For sparse oblique splits, i.e. `split_axis=SPARSE_OBLIQUE`. Type of weights used in the projections. Possible values include `BINARY`, `CONTINUOUS`, `INTEGER` and `POWER_OF_TWO`. Default: None.
sparse_oblique_weights_integer_maximum | For sparse oblique splits with integer weights, i.e. `sparse_oblique_weights=INTEGER`. Maximum value of the weights. Default: None.
sparse_oblique_weights_integer_minimum | For sparse oblique splits with integer weights, i.e. `sparse_oblique_weights=INTEGER`. Minimum value of the weights. Default: None.
sparse_oblique_weights_power_of_two_max_exponent | For sparse oblique splits with power-of-two weights, i.e. `sparse_oblique_weights=POWER_OF_TWO`. Maximum exponent of the weights. Default: None.
sparse_oblique_weights_power_of_two_min_exponent | For sparse oblique splits with power-of-two weights, i.e. `sparse_oblique_weights=POWER_OF_TWO`. Minimum exponent of the weights. Default: None.
split_axis | What structure of split to consider for numerical features. Possible values include `AXIS_ALIGNED` (default; axis-aligned splits), `SPARSE_OBLIQUE` (sparse oblique splits) and `MHLD_OBLIQUE` (multi-class Hellinger linear discriminant oblique splits). Default: "AXIS_ALIGNED".
subsample | Ratio of the dataset (sampling without replacement) used to train individual trees for the random sampling method. If "subsample" is set and "sampling_method" is NOT set or set to "NONE", then "sampling_method" is implicitly set to "RANDOM". In other words, to enable random subsampling, you only need to set "subsample". Default: 1.0.
uplift_min_examples_in_treatment | For uplift models only. Minimum number of examples per treatment in a node. Default: 5.
uplift_split_score | For uplift models only. Splitter score, i.e. the score optimized by the splitters. The scores are introduced in "Decision trees for uplift modeling with single and multiple treatments", Rzepakowski et al. Possible values include `KULLBACK_LEIBLER` (default), `EUCLIDEAN_DISTANCE` and `CHI_SQUARED`. Default: "KULLBACK_LEIBLER".
use_hessian_gain | If true, uses a formulation of split gain with a hessian term, i.e. optimizes the splits to minimize the variance of "gradient / hessian". Available for all losses except regression. Default: False.
validation_interval_in_trees | Evaluate the model on the validation set every "validation_interval_in_trees" trees. Increasing this value reduces the cost of validation and can impact the early stopping policy (as early stopping is only tested during the validation). Default: 1.
validation_ratio | Fraction of the training dataset used for validation if no validation dataset is provided. The validation dataset, whether provided directly or extracted from the training dataset, is used to compute the validation loss, other validation metrics, and possibly trigger early stopping (if enabled). When early stopping is disabled, the validation dataset is only used for monitoring and does not influence the model directly. If "validation_ratio" is set to 0, early stopping is disabled (i.e., it implies setting early_stopping=NONE). Default: 0.1.
workers | If set, enable distributed training. "workers" is the list of IP addresses of the workers. A worker is a process running the YDF training worker code. Default: None.
resume_training | If true, the model training resumes from the checkpoint stored in the `working_dir` directory. If `working_dir` does not contain a model checkpoint, the training starts from the beginning. Default: False.
resume_training_snapshot_interval_seconds | Indicative number of seconds in between snapshots when `resume_training=True`. Default: 1800.
working_dir | Path to a directory available for the learning algorithm to store intermediate computation results. Depending on the learning algorithm and parameters, the working_dir might be optional, required, or ignored. For instance, distributed training algorithms always need a "working_dir", and the gradient boosted trees learner and hyper-parameter tuners will export artefacts to the "working_dir" if provided.
num_threads | Number of threads used to train the model. Different learning algorithms use multi-threading differently and with different degrees of efficiency. If not specified, the number of threads is determined automatically from the number of available CPU cores. Default: None.
tuner | If set, automatically select the best hyperparameters using the provided tuner. When using distributed training, the tuning is distributed.
feature_selector | If set, automatically select the input features of the model using the specified feature selector.
explicit_args | Helper argument for internal use. Throws if supplied explicitly by the user.
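As an illustration of the split and sampling hyperparameters above, the following sketch enables sparse oblique splits together with GOSS sampling. The label name and the train_ds dataset are placeholders, and the chosen values are examples rather than recommendations:

# Sparse oblique splits combined with GOSS sampling (illustrative values).
learner = ydf.GradientBoostedTreesLearner(
    label="my_label",                              # placeholder label column
    split_axis="SPARSE_OBLIQUE",
    sparse_oblique_normalization="STANDARD_DEVIATION",
    sampling_method="GOSS",
    goss_alpha=0.2,
    goss_beta=0.1,
    early_stopping="LOSS_INCREASE",
    validation_ratio=0.1,
)
model = learner.train(train_ds)                    # train_ds is a placeholder dataset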
hyperparameters (property)
A (mutable) dictionary of this learner's hyperparameters.
This object can be used to inspect or modify hyperparameters after creating
the learner. Modifying hyperparameters after constructing the learner is
suitable for some advanced use cases. Since this approach bypasses some
feasibility checks for the given set of hyperparameters, it is generally better
to re-create the learner for each model. The current set of hyperparameters
can be validated manually with validate_hyperparameters().
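A minimal sketch of inspecting and modifying the hyperparameters after construction (the label column name is a placeholder):

learner = ydf.GradientBoostedTreesLearner(label="my_label")
print(learner.hyperparameters)              # inspect the current values
learner.hyperparameters["num_trees"] = 50   # advanced: modify in place
learner.validate_hyperparameters()          # check the resulting configuration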
cross_validation
cross_validation(
ds: InputDataset,
folds: int = 10,
bootstrapping: Union[bool, int] = False,
parallel_evaluations: int = 1,
) -> Evaluation
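cross_validation trains and evaluates the learner over several folds of the dataset and returns the aggregated evaluation. A minimal usage sketch, assuming a pandas DataFrame train_ds with a "my_label" column:

learner = ydf.GradientBoostedTreesLearner(label="my_label")
evaluation = learner.cross_validation(train_ds, folds=10)
print(evaluation)   # aggregated metrics over the 10 folds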
hyperparameter_templates (classmethod)
Hyperparameter templates for this Learner.
Hyperparameter templates are sets of pre-defined hyperparameters for easy access to different variants of the learner. Each template is a mapping to a set of hyperparameters and can be applied directly on the learner.
Usage example:
templates = ydf.GradientBoostedTreesLearner.hyperparameter_templates()
better_defaultv1 = templates["better_defaultv1"]
# Print a description of the template
print(better_defaultv1.description)
# Apply the template's settings on the learner.
learner = ydf.GradientBoostedTreesLearner(label, **better_defaultv1)
Returns:
Type | Description
---|---
Dict[str, HyperparameterTemplate] | Dictionary of the available templates.
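To see which templates are available, the returned dictionary can simply be iterated:

templates = ydf.GradientBoostedTreesLearner.hyperparameter_templates()
for name, template in templates.items():
    print(name, "-", template.description)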
train
train(
ds: InputDataset,
valid: Optional[InputDataset] = None,
verbose: Optional[Union[int, bool]] = None,
) -> GradientBoostedTreesModel
Trains a model on the given dataset.
Options for dataset reading are given on the learner. Consult the documentation of the learner or ydf.create_vertical_dataset() for additional information on dataset reading in YDF.
Usage example:
import ydf
import pandas as pd
train_ds = pd.read_csv(...)
learner = ydf.GradientBoostedTreesLearner(label="label")
model = learner.train(train_ds)
print(model.summary())
If training is interrupted (for example, by interrupting the cell execution in Colab), the model will be returned to the state it was in at the moment of interruption.
Parameters:
Name | Type | Description | Default
---|---|---|---
ds | InputDataset | Training dataset. | required
valid | Optional[InputDataset] | Optional validation dataset. Some learners, such as Random Forest, do not need a validation dataset. Some learners, such as GradientBoostedTrees, automatically extract a validation dataset from the training dataset if the validation dataset is not provided. | None
verbose | Optional[Union[int, bool]] | Verbose level during training. If None, uses the global verbose level of `ydf.verbose()`. | None
Returns:
Type | Description
---|---
GradientBoostedTreesModel | A trained model.
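A sketch of training with an explicit validation dataset and a higher verbosity level (the file paths are placeholders); the provided validation dataset is then used for the validation metrics and early stopping instead of a split extracted from the training data:

import ydf
import pandas as pd

train_ds = pd.read_csv("project/train.csv")   # placeholder paths
valid_ds = pd.read_csv("project/valid.csv")

learner = ydf.GradientBoostedTreesLearner(label="label")
model = learner.train(train_ds, valid=valid_ds, verbose=2)
print(model.describe())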