CartLearner
```python
CartLearner(label: str, task: Task = generic_learner.Task.CLASSIFICATION, weights: Optional[str] = None, ranking_group: Optional[str] = None, uplift_treatment: Optional[str] = None, features: ColumnDefs = None, include_all_columns: bool = False, max_vocab_count: int = 2000, min_vocab_frequency: int = 5, discretize_numerical_columns: bool = False, num_discretized_numerical_bins: int = 255, max_num_scanned_rows_to_infer_semantic: int = 10000, max_num_scanned_rows_to_compute_statistics: int = 10000, data_spec: Optional[DataSpecification] = None, allow_na_conditions: Optional[bool] = False, categorical_algorithm: Optional[str] = 'CART', categorical_set_split_greedy_sampling: Optional[float] = 0.1, categorical_set_split_max_num_items: Optional[int] = -1, categorical_set_split_min_item_frequency: Optional[int] = 1, growing_strategy: Optional[str] = 'LOCAL', honest: Optional[bool] = False, honest_fixed_separation: Optional[bool] = False, honest_ratio_leaf_examples: Optional[float] = 0.5, in_split_min_examples_check: Optional[bool] = True, keep_non_leaf_label_distribution: Optional[bool] = True, max_depth: Optional[int] = 16, max_num_nodes: Optional[int] = None, maximum_model_size_in_memory_in_bytes: Optional[float] = -1.0, maximum_training_duration_seconds: Optional[float] = -1.0, mhld_oblique_max_num_attributes: Optional[int] = None, mhld_oblique_sample_attributes: Optional[bool] = None, min_examples: Optional[int] = 5, missing_value_policy: Optional[str] = 'GLOBAL_IMPUTATION', num_candidate_attributes: Optional[int] = 0, num_candidate_attributes_ratio: Optional[float] = -1.0, pure_serving_model: Optional[bool] = False, random_seed: Optional[int] = 123456, sorting_strategy: Optional[str] = 'PRESORT', sparse_oblique_max_num_projections: Optional[int] = None, sparse_oblique_normalization: Optional[str] = None, sparse_oblique_num_projections_exponent: Optional[float] = None, sparse_oblique_projection_density_factor: Optional[float] = None, sparse_oblique_weights: Optional[str] = None, split_axis: Optional[str] = 'AXIS_ALIGNED', uplift_min_examples_in_treatment: Optional[int] = 5, uplift_split_score: Optional[str] = 'KULLBACK_LEIBLER', validation_ratio: Optional[float] = 0.1, num_threads: Optional[int] = None, working_dir: Optional[str] = None, resume_training: bool = False, resume_training_snapshot_interval_seconds: int = 1800, tuner: Optional[AbstractTuner] = None, workers: Optional[Sequence[str]] = None)
```
Bases: GenericLearner
Cart learning algorithm.
A CART (Classification and Regression Trees) model is a decision tree. The non-leaf nodes contain conditions (also known as splits) while the leaf nodes contain prediction values. The training dataset is divided in two parts: the first is used to grow the tree while the second is used to prune the tree.
Usage example:
```python
import ydf
import pandas as pd

dataset = pd.read_csv("project/dataset.csv")
model = ydf.CartLearner(label="my_label").train(dataset)
print(model.summary())
```
Hyperparameters are configured to give reasonable results for typical datasets. Hyperparameters can also be modified manually (see the descriptions below) or by applying the hyperparameter templates available with `CartLearner.hyperparameter_templates()` (see this function's documentation for details).
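For example, a minimal sketch of overriding hyperparameters manually; the column name and values below are placeholders, not tuned recommendations:

```python
import ydf
import pandas as pd

dataset = pd.read_csv("project/dataset.csv")

# Override a few of the hyperparameters documented below.
# "my_label" is a placeholder column name.
learner = ydf.CartLearner(
    label="my_label",
    max_depth=8,           # default: 16
    min_examples=10,       # default: 5
    validation_ratio=0.2,  # hold out 20% of examples for pruning; default: 0.1
)
model = learner.train(dataset)
```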
Attributes:
Name | Description
---|---
`label` | Label of the dataset. The label column should not be identified as a feature in the `features` parameter.
`task` | Task to solve (e.g. Task.CLASSIFICATION, Task.REGRESSION, Task.RANKING, Task.CATEGORICAL_UPLIFT, Task.NUMERICAL_UPLIFT).
`weights` | Name of a feature that identifies the weight of each example. If weights are not specified, unit weights are assumed. The weight column should not be identified as a feature in the `features` parameter.
`ranking_group` | Only for `task=Task.RANKING`. Name of a feature that identifies queries in a query/document ranking task. The ranking group should not be identified as a feature in the `features` parameter.
`uplift_treatment` | Only for `task=Task.CATEGORICAL_UPLIFT` and `task=Task.NUMERICAL_UPLIFT`. Name of a numerical feature that identifies the treatment in an uplift problem. The value 0 is reserved for the control treatment.
`features` | If None, all columns are used as features. The semantic of the features is determined automatically. Otherwise, if `include_all_columns=False` (default), only the columns listed in `features` are imported. If `include_all_columns=True`, all the columns are imported as features and only the semantic of the columns not listed in `features` is determined automatically.
`include_all_columns` | See `features`.
`max_vocab_count` | Maximum size of the vocabulary of CATEGORICAL and CATEGORICAL_SET columns stored as strings. If more unique values exist, only the most frequent values are kept, and the remaining values are considered as out-of-vocabulary.
`min_vocab_frequency` | Minimum number of occurrences of a value for CATEGORICAL and CATEGORICAL_SET columns. Values observed less than `min_vocab_frequency` times are considered out-of-vocabulary.
`discretize_numerical_columns` | If true, discretize all the numerical columns before training. Discretized numerical columns are faster to train with, but they can have a negative impact on the model quality. Using `discretize_numerical_columns=True` is equivalent to setting the DISCRETIZED_NUMERICAL semantic on all the numerical columns.
`num_discretized_numerical_bins` | Number of bins used when discretizing numerical columns.
`max_num_scanned_rows_to_infer_semantic` | Number of rows to scan when inferring the column's semantic if it is not explicitly specified. Only used when reading from file; in-memory datasets are always read in full. Setting this to a lower number will speed up dataset reading, but might result in incorrect column semantics. Set to -1 to scan the entire dataset.
`max_num_scanned_rows_to_compute_statistics` | Number of rows to scan when computing a column's statistics. Only used when reading from file; in-memory datasets are always read in full. A column's statistics include the dictionary for categorical features and the mean / min / max for numerical features. Setting this to a lower number will speed up dataset reading, but skew statistics in the dataspec, which can hurt model quality (e.g. if an important category of a categorical feature is considered OOV). Set to -1 to scan the entire dataset.
`data_spec` | Dataspec to be used (advanced). If a data spec is given, `features`, `include_all_columns`, `max_vocab_count`, `min_vocab_frequency`, `discretize_numerical_columns` and `num_discretized_numerical_bins` are ignored.
`allow_na_conditions` | If true, the tree training evaluates conditions of the type `X is NA`, i.e. `X` is missing. Default: False.
`categorical_algorithm` | How to learn splits on categorical attributes. `CART`: CART algorithm; finds categorical splits of the form "value in mask". The solution is exact for binary classification, regression and ranking, and approximated for multi-class classification. `ONE_HOT`: one-hot encoding; finds the optimal categorical split of the form "attribute == param". `RANDOM`: best splits among a set of random candidates; an approximation of the CART algorithm. Default: "CART".
`categorical_set_split_greedy_sampling` | For categorical set splits e.g. texts. Probability for a categorical value to be a candidate for the positive set. The sampling is applied once per node (i.e. not at every step of the greedy optimization). Default: 0.1.
`categorical_set_split_max_num_items` | For categorical set splits e.g. texts. Maximum number of items (prior to the sampling). If more items are available, the least frequent items are ignored. Changing this value is similar to changing `max_vocab_count` before loading the dataset, except that with `max_vocab_count` the remaining items are grouped into a special out-of-vocabulary item, while here they are simply ignored. Default: -1.
`categorical_set_split_min_item_frequency` | For categorical set splits e.g. texts. Minimum number of occurrences of an item to be considered. Default: 1.
`growing_strategy` | How to grow the tree. `LOCAL`: each node is split independently of the other nodes; as long as a node satisfies the split constraints (e.g. maximum depth, minimum number of observations), it is split. This is the "classical" way to grow decision trees. `BEST_FIRST_GLOBAL`: the node with the best loss reduction among all the nodes of the tree is selected for splitting; also known as "best first" or "leaf-wise" growth. Default: "LOCAL".
`honest` | In honest trees, different training examples are used to infer the structure and the leaf values. This regularization technique trades examples for bias estimates. It might increase or reduce the quality of the model. See "Generalized Random Forests", Athey et al. In this paper, honest trees are trained with the Random Forest algorithm with sampling without replacement. Default: False.
`honest_fixed_separation` | For honest trees only i.e. honest=true. If true, a new random separation is generated for each tree. If false, the same separation is used for all the trees (e.g., in Gradient Boosted Trees containing multiple trees). Default: False.
`honest_ratio_leaf_examples` | For honest trees only i.e. honest=true. Ratio of examples used to set the leaf values. Default: 0.5.
`in_split_min_examples_check` | Whether to check the `min_examples` constraint in the split search (i.e. splits leading to one child having less than `min_examples` examples are considered invalid) or before the split search (i.e. a node can be derived only if it contains more than `min_examples` examples). If false, there can be nodes with less than `min_examples` training examples. Default: True.
`keep_non_leaf_label_distribution` | Whether to keep the node value (i.e. the distribution of the labels of the training examples) of non-leaf nodes. This information is not used during serving; however, it can be used for model interpretation as well as hyper-parameter tuning. This can take lots of space, sometimes accounting for half of the model size. Default: True.
`max_depth` | Maximum depth of the tree. Default: 16.
`max_num_nodes` | Maximum number of nodes in the tree. Set to -1 to disable this limit. Only available for `growing_strategy=BEST_FIRST_GLOBAL`. Default: None.
`maximum_model_size_in_memory_in_bytes` | Limit the size of the model when stored in RAM. Different algorithms can enforce this limit differently. Note that when models are compiled into an inference engine, the size of the engine is generally much smaller than the original model. Default: -1.0.
`maximum_training_duration_seconds` | Maximum training duration of the model expressed in seconds. Each learning algorithm is free to use this parameter as it sees fit. Enabling maximum training duration makes the model training non-deterministic. Default: -1.0.
`mhld_oblique_max_num_attributes` | For MHLD oblique splits i.e. `split_axis=MHLD_OBLIQUE`. Maximum number of attributes in the projection. Increasing this value increases the training time; decreasing it acts as regularization. Default: None.
`mhld_oblique_sample_attributes` | For MHLD oblique splits i.e. `split_axis=MHLD_OBLIQUE`. If true, applies the attribute sampling controlled by `num_candidate_attributes` or `num_candidate_attributes_ratio`. If false, all the attributes are tested. Default: None.
`min_examples` | Minimum number of examples in a node. Default: 5.
`missing_value_policy` | Method used to handle missing attribute values. `GLOBAL_IMPUTATION`: missing attribute values are imputed with the mean (numerical attribute) or the most-frequent item (categorical attribute) computed on the entire dataset. `LOCAL_IMPUTATION`: missing attribute values are imputed with the mean (numerical) or most-frequent item (categorical) evaluated on the training examples in the current node. `RANDOM_LOCAL_IMPUTATION`: missing attribute values are imputed from values randomly sampled from the training examples in the current node. Default: "GLOBAL_IMPUTATION".
`num_candidate_attributes` | Number of unique valid attributes tested for each node. An attribute is valid if it has at least a valid split. If `num_candidate_attributes=0`, the value is set to the classical default: `sqrt(number of input attributes)` for classification and `number of input attributes / 3` for regression. If `num_candidate_attributes=-1`, all the attributes are tested. Default: 0.
`num_candidate_attributes_ratio` | Ratio of attributes tested at each node. If set, it is equivalent to `num_candidate_attributes = number_of_input_features x num_candidate_attributes_ratio`. The possible values are in ]0, 1] as well as -1. If not set or equal to -1, `num_candidate_attributes` is used. Default: -1.0.
`pure_serving_model` | Clear the model from any information that is not required for model serving. This includes debugging, model interpretation and other meta-data. The size of the serialized model can be reduced significantly (a 50% model size reduction is common). This parameter has no impact on the quality, serving speed or RAM usage of model serving. Default: False.
`random_seed` | Random seed for the training of the model. Learners are expected to be deterministic given the random seed. Default: 123456.
`sorting_strategy` | How the numerical features are sorted in order to find the splits. `PRESORT`: the features are pre-sorted at the start of the training; faster but consumes much more memory than IN_NODE. `IN_NODE`: the features are sorted just before being used in the node; slower but consumes little memory. Default: "PRESORT".
`sparse_oblique_max_num_projections` | For sparse oblique splits i.e. `split_axis=SPARSE_OBLIQUE`. Maximum number of projections tested at each node (applied after `sparse_oblique_num_projections_exponent`). Default: None.
`sparse_oblique_normalization` | For sparse oblique splits i.e. `split_axis=SPARSE_OBLIQUE`. Normalization applied on the features before applying the sparse oblique projections. `NONE`: no normalization. `STANDARD_DEVIATION`: normalize the feature by its estimated standard deviation on the entire train dataset. `MIN_MAX`: normalize the feature by its range (i.e. max - min) estimated on the entire train dataset. Default: None.
`sparse_oblique_num_projections_exponent` | For sparse oblique splits i.e. `split_axis=SPARSE_OBLIQUE`. Controls the number of random projections to test at each node as `num_features^num_projections_exponent`. Default: None.
`sparse_oblique_projection_density_factor` | Density of the projections as an exponent of the number of features. Independently for each projection, each feature has a probability "projection_density_factor / num_features" to be considered in the projection. The paper "Sparse Projection Oblique Random Forests" (Tomita et al, 2020) calls this parameter `lambda`. Default: None.
`sparse_oblique_weights` | For sparse oblique splits i.e. `split_axis=SPARSE_OBLIQUE`. Possible values: `BINARY`, the oblique weights are sampled in {-1, 1} (default); `CONTINUOUS`, the oblique weights are sampled in [-1, 1]. Default: None.
`split_axis` | What structure of split to consider for numerical features. `AXIS_ALIGNED`: axis-aligned splits (i.e. one condition at a time); the "classical" way to train a tree. `SPARSE_OBLIQUE`: sparse oblique splits (i.e. splits on a small number of features) from "Sparse Projection Oblique Random Forests", Tomita et al., 2020. `MHLD_OBLIQUE`: multi-class Hellinger Linear Discriminant splits from "Classification Based on Multivariate Contrast Patterns", Canete-Sifuentes et al., 2019. Default: "AXIS_ALIGNED".
`uplift_min_examples_in_treatment` | For uplift models only. Minimum number of examples per treatment in a node. Default: 5.
`uplift_split_score` | For uplift models only. Splitter score i.e. score optimized by the splitters. The scores are introduced in "Decision trees for uplift modeling with single and multiple treatments", Rzepakowski et al. Notation: `p` is the probability / average value of the positive outcome and `q` is the probability / average value in the control group. `KULLBACK_LEIBLER` or `KL`: p log(p/q). `EUCLIDEAN_DISTANCE` or `ED`: (p-q)^2. `CHI_SQUARED` or `CS`: (p-q)^2/q. Default: "KULLBACK_LEIBLER".
`validation_ratio` | Ratio of the training dataset used to create the validation dataset for pruning the tree. If set to 0, the entire dataset is used for training, and the tree is not pruned. Default: 0.1.
`num_threads` | Number of threads used to train the model. Different learning algorithms use multi-threading differently and with different degrees of efficiency. If `None`, `num_threads` is automatically set to the number of available processors. Making `num_threads` significantly larger than the number of processors can slow down training. Default: None.
`resume_training` | If true, the model training resumes from the checkpoint stored in the `working_dir` directory. If `working_dir` does not contain any model checkpoint, the training starts from the beginning. Default: False.
`working_dir` | Path to a directory available for the learning algorithm to store intermediate computation results. Depending on the learning algorithm and parameters, the `working_dir` might be optional, required, or ignored. For instance, distributed training algorithms always need a `working_dir`, and the gradient boosted trees and hyper-parameter tuners will export artefacts to the `working_dir` if provided. Default: None.
`resume_training_snapshot_interval_seconds` | Indicative number of seconds between snapshots when `resume_training=True`. Might be ignored by some learners. Default: 1800.
`tuner` | If set, automatically select the best hyperparameters using the provided tuner. When using distributed training, the tuning is distributed. Default: None.
`workers` | If set, enable distributed training. `workers` is the list of IP addresses of the workers. A worker is a process running `ydf.start_worker(port)`. Default: None.
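As an illustration of the split-related attributes above, a minimal sketch enabling sparse oblique splits; the label name and parameter values are placeholders, not recommendations:

```python
import ydf

# Hypothetical configuration: sparse oblique splits with z-score
# normalization, per the `split_axis` and `sparse_oblique_*` attributes.
learner = ydf.CartLearner(
    label="my_label",
    split_axis="SPARSE_OBLIQUE",
    sparse_oblique_normalization="STANDARD_DEVIATION",
    sparse_oblique_num_projections_exponent=1.0,
)
```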
cross_validation

```python
cross_validation(ds: InputDataset, folds: int = 10, bootstrapping: Union[bool, int] = False, parallel_evaluations: int = 1) -> Evaluation
```
Cross-validates the learner and returns the evaluation.
Usage example:
```python
import pandas as pd
import ydf

dataset = pd.read_csv("my_dataset.csv")
learner = ydf.RandomForestLearner(label="label")
evaluation = learner.cross_validation(dataset)

# In a notebook, display an interactive evaluation
evaluation

# Print the evaluation
print(evaluation)

# Look at specific metrics
print(evaluation.accuracy)
```
Parameters:
Name | Type | Description | Default
---|---|---|---
`ds` | InputDataset | Dataset for the cross-validation. | required
`folds` | int | Number of cross-validation folds. | 10
`bootstrapping` | Union[bool, int] | Controls whether bootstrapping is used to evaluate the confidence intervals and statistical tests (i.e., all the metrics ending with "[B]"). If set to false, bootstrapping is disabled. If set to true, bootstrapping is enabled and 2000 bootstrapping samples are used. If set to an integer, it specifies the number of bootstrapping samples to use. In this case, if the number is less than 100, an error is raised as bootstrapping will not yield useful results. | False
`parallel_evaluations` | int | Number of models to train and evaluate in parallel using multi-threading. Note that each model is potentially already trained with multi-threading (see `num_threads`). | 1
Returns:
Type | Description
---|---
Evaluation | The cross-validation evaluation.
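A short sketch combining the parameters above; the fold and sample counts are illustrative:

```python
import pandas as pd
import ydf

dataset = pd.read_csv("my_dataset.csv")
learner = ydf.CartLearner(label="label")

# 5 folds; 500 bootstrapping samples for the "[B]" metrics
# (values below 100 raise an error).
evaluation = learner.cross_validation(dataset, folds=5, bootstrapping=500)
print(evaluation)
```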
hyperparameter_templates
classmethod
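A minimal sketch of querying the templates mentioned earlier; the returned mapping may be empty for this learner:

```python
import ydf

# Inspect the hyperparameter templates available for CartLearner.
templates = ydf.CartLearner.hyperparameter_templates()
print(templates)
```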
train

```python
train(ds: InputDataset, valid: Optional[InputDataset] = None) -> RandomForestModel
```
Trains a model on the given dataset.
Options for dataset reading are given on the learner. Consult the documentation of the learner or ydf.create_vertical_dataset() for additional information on dataset reading in YDF.
Usage example:
```python
import ydf
import pandas as pd

train_ds = pd.read_csv(...)
learner = ydf.CartLearner(label="label")
model = learner.train(train_ds)
print(model.summary())
```
If training is interrupted (for example, by interrupting the cell execution in Colab), the model will be returned to the state it was in at the moment of interruption.
Parameters:
Name | Type | Description | Default
---|---|---|---
`ds` | InputDataset | Training dataset. | required
`valid` | Optional[InputDataset] | Optional validation dataset. Some learners, such as Random Forest, do not need a validation dataset. Some learners, such as GradientBoostedTrees, automatically extract a validation dataset from the training dataset if a validation dataset is not provided. | None
Returns:
Type | Description
---|---
RandomForestModel | A trained model.
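A sketch of training with an explicit validation dataset; the file paths are placeholders, and whether CART uses `valid` for pruning in place of `validation_ratio` is an assumption based on the parameter descriptions above:

```python
import pandas as pd
import ydf

train_ds = pd.read_csv("train.csv")  # placeholder path
valid_ds = pd.read_csv("valid.csv")  # placeholder path

learner = ydf.CartLearner(label="label")
# Assumption: the explicit validation dataset replaces the fraction
# otherwise held out via `validation_ratio`.
model = learner.train(train_ds, valid=valid_ds)
print(model.summary())
```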