
RandomForestLearner

RandomForestLearner

RandomForestLearner(label: str, task: Task = generic_learner.Task.CLASSIFICATION, weights: Optional[str] = None, ranking_group: Optional[str] = None, uplift_treatment: Optional[str] = None, features: ColumnDefs = None, include_all_columns: bool = False, max_vocab_count: int = 2000, min_vocab_frequency: int = 5, discretize_numerical_columns: bool = False, num_discretized_numerical_bins: int = 255, max_num_scanned_rows_to_infer_semantic: int = 10000, max_num_scanned_rows_to_compute_statistics: int = 10000, data_spec: Optional[DataSpecification] = None, adapt_bootstrap_size_ratio_for_maximum_training_duration: Optional[bool] = False, allow_na_conditions: Optional[bool] = False, bootstrap_size_ratio: Optional[float] = 1.0, bootstrap_training_dataset: Optional[bool] = True, categorical_algorithm: Optional[str] = 'CART', categorical_set_split_greedy_sampling: Optional[float] = 0.1, categorical_set_split_max_num_items: Optional[int] = -1, categorical_set_split_min_item_frequency: Optional[int] = 1, compute_oob_performances: Optional[bool] = True, compute_oob_variable_importances: Optional[bool] = False, growing_strategy: Optional[str] = 'LOCAL', honest: Optional[bool] = False, honest_fixed_separation: Optional[bool] = False, honest_ratio_leaf_examples: Optional[float] = 0.5, in_split_min_examples_check: Optional[bool] = True, keep_non_leaf_label_distribution: Optional[bool] = True, max_depth: Optional[int] = 16, max_num_nodes: Optional[int] = None, maximum_model_size_in_memory_in_bytes: Optional[float] = -1.0, maximum_training_duration_seconds: Optional[float] = -1.0, mhld_oblique_max_num_attributes: Optional[int] = None, mhld_oblique_sample_attributes: Optional[bool] = None, min_examples: Optional[int] = 5, missing_value_policy: Optional[str] = 'GLOBAL_IMPUTATION', num_candidate_attributes: Optional[int] = 0, num_candidate_attributes_ratio: Optional[float] = -1.0, num_oob_variable_importances_permutations: Optional[int] = 1, num_trees: Optional[int] = 300, pure_serving_model: Optional[bool] = False, random_seed: Optional[int] = 123456, sampling_with_replacement: Optional[bool] = True, sorting_strategy: Optional[str] = 'PRESORT', sparse_oblique_max_num_projections: Optional[int] = None, sparse_oblique_normalization: Optional[str] = None, sparse_oblique_num_projections_exponent: Optional[float] = None, sparse_oblique_projection_density_factor: Optional[float] = None, sparse_oblique_weights: Optional[str] = None, split_axis: Optional[str] = 'AXIS_ALIGNED', uplift_min_examples_in_treatment: Optional[int] = 5, uplift_split_score: Optional[str] = 'KULLBACK_LEIBLER', winner_take_all: Optional[bool] = True, num_threads: Optional[int] = None, working_dir: Optional[str] = None, resume_training: bool = False, resume_training_snapshot_interval_seconds: int = 1800, tuner: Optional[AbstractTuner] = None, workers: Optional[Sequence[str]] = None)

Bases: GenericLearner

Random Forest learning algorithm.

A Random Forest (https://www.stat.berkeley.edu/~breiman/randomforest2001.pdf) is a collection of deep CART decision trees trained independently and without pruning. Each tree is trained on a random subset of the original training dataset (sampled with replacement).

The algorithm is unique in that it is robust to overfitting, even in extreme cases e.g. when there are more features than training examples.

It is probably the most well-known of the Decision Forest training algorithms.

Usage example:

import ydf
import pandas as pd

dataset = pd.read_csv("project/dataset.csv")

model = ydf.RandomForestLearner(label="my_label").train(dataset)

print(model.summary())

Hyperparameters are configured to give reasonable results for typical datasets. Hyperparameters can also be modified manually (see the descriptions below) or by applying the hyperparameter templates available with RandomForestLearner.hyperparameter_templates() (see this function's documentation for details).
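
For example, hyperparameters can be overridden directly in the constructor. A minimal sketch, assuming a dataset with a label column named "my_label"; the values are illustrative:

import ydf
import pandas as pd

dataset = pd.read_csv("project/dataset.csv")

learner = ydf.RandomForestLearner(
    label="my_label",       # Name of the label column (illustrative).
    num_trees=500,          # More trees than the default of 300.
    winner_take_all=False,  # Each tree votes with a class distribution.
)
model = learner.train(dataset)
print(model.summary())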

Attributes:

Name Type Description
label

Label of the dataset. The label column should not be identified as a feature in the features parameter.

task

Task to solve (e.g. Task.CLASSIFICATION, Task.REGRESSION, Task.RANKING, Task.CATEGORICAL_UPLIFT, Task.NUMERICAL_UPLIFT).

weights

Name of a feature that identifies the weight of each example. If weights are not specified, unit weights are assumed. The weight column should not be identified as a feature in the features parameter.

ranking_group

Only for task=Task.RANKING. Name of a feature that identifies queries in a query/document ranking task. The ranking group should not be identified as a feature in the features parameter.

uplift_treatment

Only for task=Task.CATEGORICAL_UPLIFT and task=Task.NUMERICAL_UPLIFT. Name of a numerical feature that identifies the treatment in an uplift problem. The value 0 is reserved for the control treatment. Currently, only 0/1 binary treatments are supported.

features

If None, all columns are used as features. The semantic of the features is determined automatically. Otherwise, if include_all_columns=False (default), only the columns listed in features are imported. If include_all_columns=True, all the columns are imported as features and only the semantic of the columns NOT listed in features is determined automatically. If specified, defines the order of the features - any non-listed features are appended in-order after the specified features (if include_all_columns=True). The label, weights, uplift treatment and ranking_group columns should not be specified as features.
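
A minimal sketch of forcing the semantic of some columns while still importing every other column as a feature. The column names are illustrative and it assumes the ydf.Feature / ydf.Semantic column definition helpers:

import ydf

learner = ydf.RandomForestLearner(
    label="my_label",
    features=[
        ydf.Feature("age", ydf.Semantic.NUMERICAL),         # Force a numerical semantic.
        ydf.Feature("zip_code", ydf.Semantic.CATEGORICAL),   # Force a categorical semantic.
    ],
    include_all_columns=True,  # Import the remaining columns with automatic semantics.
)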

include_all_columns

See features.

max_vocab_count

Maximum size of the vocabulary of CATEGORICAL and CATEGORICAL_SET columns stored as strings. If more unique values exist, only the most frequent values are kept, and the remaining values are considered as out-of-vocabulary.

min_vocab_frequency

Minimum number of occurrences of a value for CATEGORICAL and CATEGORICAL_SET columns. Values observed fewer than min_vocab_frequency times are considered as out-of-vocabulary.

discretize_numerical_columns

If true, discretize all the numerical columns before training. Discretized numerical columns are faster to train with, but they can have a negative impact on the model quality. Using discretize_numerical_columns=True is equivalent to setting the column semantic DISCRETIZED_NUMERICAL in the column argument. See the definition of DISCRETIZED_NUMERICAL for more details.

num_discretized_numerical_bins

Number of bins used when discretizing numerical columns.

max_num_scanned_rows_to_infer_semantic

Number of rows to scan when inferring the column's semantic if it is not explicitly specified. Only used when reading from file, in-memory datasets are always read in full. Setting this to a lower number will speed up dataset reading, but might result in incorrect column semantics. Set to -1 to scan the entire dataset.

max_num_scanned_rows_to_compute_statistics

Number of rows to scan when computing a column's statistics. Only used when reading from file, in-memory datasets are always read in full. A column's statistics include the dictionary for categorical features and the mean / min / max for numerical features. Setting this to a lower number will speed up dataset reading, but skew statistics in the dataspec, which can hurt model quality (e.g. if an important category of a categorical feature is considered OOV). Set to -1 to scan the entire dataset.
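
A minimal sketch of training from a file path while scanning the whole dataset for semantics and statistics. The path and the "csv:" typed-path prefix are illustrative:

import ydf

learner = ydf.RandomForestLearner(
    label="my_label",
    max_num_scanned_rows_to_infer_semantic=-1,      # Scan all rows to infer semantics.
    max_num_scanned_rows_to_compute_statistics=-1,  # Scan all rows to compute statistics.
)
model = learner.train("csv:project/train.csv")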

data_spec

Dataspec to be used (advanced). If a data spec is given, columns, include_all_columns, max_vocab_count, min_vocab_frequency, discretize_numerical_columns and num_discretized_numerical_bins will be ignored.

adapt_bootstrap_size_ratio_for_maximum_training_duration

Control how the maximum training duration (if set) is applied. If false, the training stops when the time budget is exhausted. If true, adapts the size of the sampled dataset used to train each tree such that num_trees will train within maximum_training_duration. Has no effect if there is no maximum training duration specified. Default: False.

allow_na_conditions

If true, the tree training evaluates conditions of the type X is NA i.e. X is missing. Default: False.

bootstrap_size_ratio

Number of examples used to train each tree; expressed as a ratio of the training dataset size. Default: 1.0.

bootstrap_training_dataset

If true (default), each tree is trained on a separate dataset sampled with replacement from the original dataset. If false, all the trees are trained on the entire same dataset. If bootstrap_training_dataset:false, OOB metrics are not available. bootstrap_training_dataset=false is used in "Extremely randomized trees" (https://link.springer.com/content/pdf/10.1007%2Fs10994-006-6226-1.pdf). Default: True.
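
A minimal sketch combining the sampling-related parameters, e.g. training each tree on half of the examples sampled without replacement. The label name and ratio are illustrative:

import ydf

learner = ydf.RandomForestLearner(
    label="my_label",
    bootstrap_training_dataset=True,  # Keep per-tree sampling (and OOB metrics).
    sampling_with_replacement=False,  # Sample without replacement.
    bootstrap_size_ratio=0.5,         # Each tree sees 50% of the examples.
)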

categorical_algorithm

How to learn splits on categorical attributes. - CART: CART algorithm. Find categorical splits of the form "value \in mask". The solution is exact for binary classification, regression and ranking. It is approximated for multi-class classification. This is a good first algorithm to use. In case of overfitting (very small dataset, large dictionary), the "random" algorithm is a good alternative. - ONE_HOT: One-hot encoding. Find the optimal categorical split of the form "attribute == param". This method is similar to (but more efficient than) converting each possible categorical value into a boolean feature. This method is available for comparison purposes and generally performs worse than other alternatives. - RANDOM: Best splits among a set of random candidates. Find a categorical split of the form "value \in mask" using a random search. This solution can be seen as an approximation of the CART algorithm. This method is a strong alternative to CART. This algorithm is inspired from section "5.1 Categorical Variables" of "Random Forest", 2001. Default: "CART".

categorical_set_split_greedy_sampling

For categorical set splits e.g. texts. Probability for a categorical value to be a candidate for the positive set. The sampling is applied once per node (i.e. not at every step of the greedy optimization). Default: 0.1.

categorical_set_split_max_num_items

For categorical set splits e.g. texts. Maximum number of items (prior to the sampling). If more items are available, the least frequent items are ignored. Changing this value is similar to changing "max_vocab_count" before loading the dataset, with the following exception: With max_vocab_count, all the remaining items are grouped in a special Out-of-vocabulary item. With max_num_items, this is not the case. Default: -1.

categorical_set_split_min_item_frequency

For categorical set splits e.g. texts. Minimum number of occurrences of an item to be considered. Default: 1.

compute_oob_performances

If true, compute the Out-of-bag evaluation (then available in the summary and model inspector). This evaluation is a cheap alternative to cross-validation evaluation. Default: True.

compute_oob_variable_importances

If true, compute the Out-of-bag feature importance (then available in the summary and model inspector). Note that the OOB feature importance can be expensive to compute. Default: False.
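
A minimal sketch enabling OOB permutation variable importances and reading them back from the trained model. The dataset path and label name are illustrative:

import ydf
import pandas as pd

dataset = pd.read_csv("project/dataset.csv")

learner = ydf.RandomForestLearner(
    label="my_label",
    compute_oob_variable_importances=True,  # Slower training.
)
model = learner.train(dataset)
print(model.summary())               # OOB importances are reported in the summary.
print(model.variable_importances())  # And exposed programmatically.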

growing_strategy

How to grow the tree. - LOCAL: Each node is split independently of the other nodes. In other words, as long as a node satisfies the split constraints (e.g. maximum depth, minimum number of observations), the node will be split. This is the "classical" way to grow decision trees. - BEST_FIRST_GLOBAL: The node with the best loss reduction among all the nodes of the tree is selected for splitting. This method is also called "best first" or "leaf-wise growth". See "Best-first decision tree learning", Shi and "Additive logistic regression : A statistical view of boosting", Friedman for more details. Default: "LOCAL".

honest

In honest trees, different training examples are used to infer the structure and the leaf values. This regularization technique trades examples for bias estimates. It might increase or reduce the quality of the model. See "Generalized Random Forests", Athey et al. In this paper, Honest trees are trained with the Random Forest algorithm with a sampling without replacement. Default: False.

honest_fixed_separation

For honest trees only i.e. honest=true. If true, a new random separation is generated for each tree. If false, the same separation is used for all the trees (e.g., in Gradient Boosted Trees containing multiple trees). Default: False.

honest_ratio_leaf_examples

For honest trees only i.e. honest=true. Ratio of examples used to set the leaf values. Default: 0.5.

in_split_min_examples_check

Whether to check the min_examples constraint in the split search (i.e. splits leading to one child having less than min_examples examples are considered invalid) or before the split search (i.e. a node can be derived only if it contains more than min_examples examples). If false, there can be nodes with less than min_examples training examples. Default: True.

keep_non_leaf_label_distribution

Whether to keep the node value (i.e. the distribution of the labels of the training examples) of non-leaf nodes. This information is not used during serving, however it can be used for model interpretation as well as hyper parameter tuning. This can take lots of space, sometimes accounting for half of the model size. Default: True.

max_depth

Maximum depth of the tree. max_depth=1 means that all trees will be roots. max_depth=-1 means that tree depth is not restricted by this parameter. Values <= -2 will be ignored. Default: 16.

max_num_nodes

Maximum number of nodes in the tree. Set to -1 to disable this limit. Only available for growing_strategy=BEST_FIRST_GLOBAL. Default: None.
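
A minimal sketch of best-first growth bounded by a node count instead of a depth; the values are illustrative:

import ydf

learner = ydf.RandomForestLearner(
    label="my_label",
    growing_strategy="BEST_FIRST_GLOBAL",
    max_num_nodes=64,  # Cap each tree at 64 nodes.
    max_depth=-1,      # Do not additionally restrict the depth.
)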

maximum_model_size_in_memory_in_bytes

Limit the size of the model when stored in RAM. Different algorithms can enforce this limit differently. Note that when models are compiled into an inference engine, the size of the inference engine is generally much smaller than the original model. Default: -1.0.

maximum_training_duration_seconds

Maximum training duration of the model expressed in seconds. Each learning algorithm is free to use this parameter as it sees fit. Enabling maximum training duration makes the model training non-deterministic. Default: -1.0.

mhld_oblique_max_num_attributes

For MHLD oblique splits i.e. split_axis=MHLD_OBLIQUE. Maximum number of attributes in the projection. Increasing this value increases the training time. Decreasing this value acts as a regularization. The value should be in [2, num_numerical_features]. If the value is above the total number of numerical features, the value is capped automatically. The value 1 is allowed but results in ordinary (non-oblique) splits. Default: None.

mhld_oblique_sample_attributes

For MHLD oblique splits i.e. split_axis=MHLD_OBLIQUE. If true, applies the attribute sampling controlled by the "num_candidate_attributes" or "num_candidate_attributes_ratio" parameters. If false, all the attributes are tested. Default: None.

min_examples

Minimum number of examples in a node. Default: 5.

missing_value_policy

Method used to handle missing attribute values. - GLOBAL_IMPUTATION: Missing attribute values are imputed, with the mean (in case of numerical attribute) or the most-frequent-item (in case of categorical attribute) computed on the entire dataset (i.e. the information contained in the data spec). - LOCAL_IMPUTATION: Missing attribute values are imputed with the mean (numerical attribute) or most-frequent-item (in the case of categorical attribute) evaluated on the training examples in the current node. - RANDOM_LOCAL_IMPUTATION: Missing attribute values are imputed from randomly sampled values from the training examples in the current node. This method was proposed by Ishwaran et al. in "Random Survival Forests" (https://projecteuclid.org/download/pdfview_1/euclid.aoas/1223908043). Default: "GLOBAL_IMPUTATION".

num_candidate_attributes

Number of unique valid attributes tested for each node. An attribute is valid if it has at least a valid split. If num_candidate_attributes=0, the value is set to the classical default value for Random Forest: sqrt(number of input attributes) in case of classification and number_of_input_attributes / 3 in case of regression. If num_candidate_attributes=-1, all the attributes are tested. Default: 0.

num_candidate_attributes_ratio

Ratio of attributes tested at each node. If set, it is equivalent to num_candidate_attributes = number_of_input_features x num_candidate_attributes_ratio. The possible values are in ]0, 1] as well as -1. If not set or equal to -1, the num_candidate_attributes is used. Default: -1.0.

num_oob_variable_importances_permutations

Number of times the dataset is re-shuffled to compute the permutation variable importances. Increasing this value increases the training time (if "compute_oob_variable_importances:true") as well as the stability of the OOB variable importance metrics. Default: 1.

num_trees

Number of individual decision trees. Increasing the number of trees can increase the quality of the model at the expense of size, training speed, and inference latency. Default: 300.

pure_serving_model

Clear the model from any information that is not required for model serving. This includes debugging, model interpretation and other meta-data. The size of the serialized model can be reduced significantly (50% model size reduction is common). This parameter has no impact on the quality, serving speed or RAM usage of model serving. Default: False.

random_seed

Random seed for the training of the model. Learners are expected to be deterministic given the random seed. Default: 123456.

sampling_with_replacement

If true, the training examples are sampled with replacement. If false, the training samples are sampled without replacement. Only used when "bootstrap_training_dataset=true". If false (sampling without replacement) and if "bootstrap_size_ratio=1" (default), all the examples are used to train all the trees (you probably do not want that). Default: True.

sorting_strategy

How the numerical features are sorted in order to find the splits. - PRESORT: The features are pre-sorted at the start of the training. This solution is faster but consumes much more memory than IN_NODE. - IN_NODE: The features are sorted just before being used in the node. This solution is slow but consumes little memory. Default: "PRESORT".

sparse_oblique_max_num_projections

For sparse oblique splits i.e. split_axis=SPARSE_OBLIQUE. Maximum number of projections (applied after the num_projections_exponent). Oblique splits try out max(p^num_projections_exponent, max_num_projections) random projections for choosing a split, where p is the number of numerical features. Increasing "max_num_projections" increases the training time but not the inference time. In late stage model development, if every bit of accuracy is important, increase this value. The paper "Sparse Projection Oblique Random Forests" (Tomita et al, 2020) does not define this hyperparameter. Default: None.

sparse_oblique_normalization

For sparse oblique splits i.e. split_axis=SPARSE_OBLIQUE. Normalization applied on the features, before applying the sparse oblique projections. - NONE: No normalization. - STANDARD_DEVIATION: Normalize the feature by the estimated standard deviation on the entire train dataset. Also known as Z-Score normalization. - MIN_MAX: Normalize the feature by the range (i.e. max-min) estimated on the entire train dataset. Default: None.

sparse_oblique_num_projections_exponent

For sparse oblique splits i.e. split_axis=SPARSE_OBLIQUE. Controls the number of random projections to test at each node. Increasing this value very likely improves the quality of the model, drastically increases the training time, and does not impact the inference time. Oblique splits try out max(p^num_projections_exponent, max_num_projections) random projections for choosing a split, where p is the number of numerical features. Therefore, increasing this num_projections_exponent and possibly max_num_projections may improve model quality, but will also significantly increase training time. Note that the complexity of (classic) Random Forests is roughly proportional to num_projections_exponent=0.5, since it considers sqrt(num_features) for a split. The complexity of (classic) GBDT is roughly proportional to num_projections_exponent=1, since it considers all features for a split. The paper "Sparse Projection Oblique Random Forests" (Tomita et al, 2020) recommends values in [1/4, 2]. Default: None.

sparse_oblique_projection_density_factor

Density of the projections as an exponent of the number of features. Independently for each projection, each feature has a probability "projection_density_factor / num_features" to be considered in the projection. The paper "Sparse Projection Oblique Random Forests" (Tomita et al, 2020) calls this parameter lambda and recommends values in [1, 5]. Increasing this value increases training and inference time (on average). This value is best tuned for each dataset. Default: None.

sparse_oblique_weights

For sparse oblique splits i.e. split_axis=SPARSE_OBLIQUE. Possible values: - BINARY: The oblique weights are sampled in {-1,1} (default). - CONTINUOUS: The oblique weights are sampled in [-1,1]. Default: None.

split_axis

What structure of split to consider for numerical features. - AXIS_ALIGNED: Axis aligned splits (i.e. one condition at a time). This is the "classical" way to train a tree. Default value. - SPARSE_OBLIQUE: Sparse oblique splits (i.e. random splits on a small number of features) from "Sparse Projection Oblique Random Forests", Tomita et al., 2020. - MHLD_OBLIQUE: Multi-class Hellinger Linear Discriminant splits from "Classification Based on Multivariate Contrast Patterns", Canete-Sifuentes et al., 2019. Default: "AXIS_ALIGNED".
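
A minimal sketch enabling sparse oblique splits together with their companion hyperparameters; the values are illustrative:

import ydf

learner = ydf.RandomForestLearner(
    label="my_label",
    split_axis="SPARSE_OBLIQUE",
    sparse_oblique_normalization="MIN_MAX",
    sparse_oblique_num_projections_exponent=1.0,
    sparse_oblique_weights="BINARY",
)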

uplift_min_examples_in_treatment

For uplift models only. Minimum number of examples per treatment in a node. Default: 5.

uplift_split_score

For uplift models only. Splitter score i.e. score optimized by the splitters. The scores are introduced in "Decision trees for uplift modeling with single and multiple treatments", Rzepakowski et al. Notation: p probability / average value of the positive outcome, q probability / average value in the control group. - KULLBACK_LEIBLER or KL: - p log (p/q) - EUCLIDEAN_DISTANCE or ED: (p-q)^2 - CHI_SQUARED or CS: (p-q)^2/q Default: "KULLBACK_LEIBLER".

winner_take_all

Control how classification trees vote. If true, each tree votes for one class. If false, each tree votes for a distribution of classes. winner_take_all_inference=false is often preferable. Default: True.

num_threads

Number of threads used to train the model. Different learning algorithms use multi-threading differently and with different degree of efficiency. If None, num_threads will be automatically set to the number of processors (up to a maximum of 32; or set to 6 if the number of processors is not available). Making num_threads significantly larger than the number of processors can slow-down the training speed. The default value logic might change in the future.

resume_training

If true, the model training resumes from the checkpoint stored in the working_dir directory. If working_dir does not contain any model checkpoint, the training starts from the beginning. Resuming training is useful in the following situations: (1) The training was interrupted by the user (e.g. ctrl+c or "stop" button in a notebook) or rescheduled, or (2) the hyper-parameters of the learner were changed (e.g. increasing the number of trees).

working_dir

Path to a directory available for the learning algorithm to store intermediate computation results. Depending on the learning algorithm and parameters, the working_dir might be optional, required, or ignored. For instance, distributed training algorithm always need a "working_dir", and the gradient boosted tree and hyper-parameter tuners will export artefacts to the "working_dir" if provided.

resume_training_snapshot_interval_seconds

Indicative number of seconds in between snapshots when resume_training=True. Might be ignored by some learners.

tuner

If set, automatically select the best hyperparameters using the provided tuner. When using distributed training, the tuning is distributed.

workers

If set, enable distributed training. "workers" is the list of IP addresses of the workers. A worker is a process running ydf.start_worker(port).
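
A minimal sketch, assuming worker processes have already been started with ydf.start_worker on the listed machines and that the learner supports distributed training; the addresses, port and working_dir are illustrative:

import ydf

# On each worker machine (illustrative port):
#   ydf.start_worker(9001)

# On the chief:
learner = ydf.RandomForestLearner(
    label="my_label",
    working_dir="/tmp/ydf_working_dir",  # Required for distributed training.
    workers=["192.168.0.10:9001", "192.168.0.11:9001"],
)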

cross_validation

cross_validation(ds: InputDataset, folds: int = 10, bootstrapping: Union[bool, int] = False, parallel_evaluations: int = 1) -> Evaluation

Cross-validates the learner and returns the evaluation.

Usage example:

import pandas as pd
import ydf

dataset = pd.read_csv("my_dataset.csv")
learner = ydf.RandomForestLearner(label="label")
evaluation = learner.cross_validation(dataset)

# In a notebook, display an interactive evaluation
evaluation

# Print the evaluation
print(evaluation)

# Look at specific metrics
print(evaluation.accuracy)

Parameters:

Name Type Description Default
ds InputDataset

Dataset for the cross-validation.

required
folds int

Number of cross-validation folds.

10
bootstrapping Union[bool, int]

Controls whether bootstrapping is used to evaluate the confidence intervals and statistical tests (i.e., all the metrics ending with "[B]"). If set to false, bootstrapping is disabled. If set to true, bootstrapping is enabled and 2000 bootstrapping samples are used. If set to an integer, it specifies the number of bootstrapping samples to use. In this case, if the number is less than 100, an error is raised as bootstrapping will not yield useful results.

False
parallel_evaluations int

Number of models to train and evaluate in parallel using multi-threading. Note that each model is potentially already trained with multithreading (see the num_threads argument of the learner constructor).

1

Returns:

Type Description
Evaluation

The cross-validation evaluation.
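
A minimal sketch making the bootstrapping and parallelism parameters explicit, reusing the learner and dataset from the example above; the values are illustrative:

evaluation = learner.cross_validation(
    dataset,
    folds=10,                # Number of cross-validation folds.
    bootstrapping=2000,      # Bootstrap samples for the confidence intervals.
    parallel_evaluations=2,  # Train and evaluate two fold models in parallel.
)
print(evaluation.accuracy)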

hyperparameter_templates classmethod

hyperparameter_templates() -> Dict[str, HyperparameterTemplate]

Hyperparameter templates for this Learner.

Hyperparameter templates are sets of pre-defined hyperparameters for easy access to different variants of the learner. Each template is a mapping to a set of hyperparameters and can be applied directly on the learner.

Usage example:

templates = ydf.RandomForestLearner.hyperparameter_templates()
better_defaultv1 = templates["better_defaultv1"]
# Print a description of the template
print(better_defaultv1.description)
# Apply the template's settings on the learner.
learner = ydf.RandomForestLearner(label, **better_defaultv1)

Returns:

Type Description
Dict[str, HyperparameterTemplate]

Dictionary of the available templates

train

train(ds: InputDataset, valid: Optional[InputDataset] = None) -> RandomForestModel

Trains a model on the given dataset.

Options for dataset reading are given on the learner. Consult the documentation of the learner or ydf.create_vertical_dataset() for additional information on dataset reading in YDF.

Usage example:

import ydf
import pandas as pd

train_ds = pd.read_csv(...)

learner = ydf.RandomForestLearner(label="label")
model = learner.train(train_ds)
print(model.summary())

If training is interrupted (for example, by interrupting the cell execution in Colab), the model will be returned to the state it was in at the moment of interruption.

Parameters:

Name Type Description Default
ds InputDataset

Training dataset.

required
valid Optional[InputDataset]

Optional validation dataset. Some learners, such as Random Forest, do not need a validation dataset. Some learners, such as GradientBoostedTrees, automatically extract a validation dataset from the training dataset if a validation dataset is not provided.

None

Returns:

Type Description
RandomForestModel

A trained model.