Pandas DataFrame¶
Setup¶
pip install ydf pandas -U
import ydf
import pandas as pd
import numpy as np
# Create a small dataframe with different column types.
df = pd.DataFrame(
    {
        "feature_1": [1, 2, 3, 1] * 20,  # A numerical feature
        "feature_2": ["X", "X", "Y", "Y"] * 20,  # A categorical feature
        "feature_3": [True, False, True, False] * 20,  # A boolean feature
        "label": [True, True, False, False] * 20,  # The labels
    }
)
df.head()
| | feature_1 | feature_2 | feature_3 | label |
|---|---|---|---|---|
| 0 | 1 | X | True | True |
| 1 | 2 | X | False | True |
| 2 | 3 | Y | True | False |
| 3 | 1 | Y | False | False |
| 4 | 1 | X | True | True |
We can directly train a model on this dataframe.
# Train a model.
model = ydf.RandomForestLearner(label="label").train(df)
Train model on 80 examples
Model trained in 0:00:00.003959
model.describe()
Task : CLASSIFICATION
Label : label
Features (3) : feature_1 feature_2 feature_3
Weights : None
Trained with tuner : No
Model size : 257 kB
Number of records: 80
Number of columns: 4

Number of columns by type:
    CATEGORICAL: 2 (50%)
    BOOLEAN: 1 (25%)
    NUMERICAL: 1 (25%)

Columns:

CATEGORICAL: 2 (50%)
    0: "label" CATEGORICAL has-dict vocab-size:3 zero-ood-items most-frequent:"false" 40 (50%) dtype:DTYPE_BOOL
    2: "feature_2" CATEGORICAL has-dict vocab-size:3 zero-ood-items most-frequent:"X" 40 (50%) dtype:DTYPE_BYTES

BOOLEAN: 1 (25%)
    3: "feature_3" BOOLEAN true_count:40 false_count:40 dtype:DTYPE_BOOL

NUMERICAL: 1 (25%)
    1: "feature_1" NUMERICAL mean:1.75 min:1 max:3 sd:0.829156 dtype:DTYPE_FLOAT64

Terminology:
    nas: Number of non-available (i.e. missing) values.
    ood: Out of dictionary.
    manually-defined: Attribute whose type is manually defined by the user, i.e., the type was not automatically inferred.
    tokenized: The attribute value is obtained through tokenization.
    has-dict: The attribute is attached to a string dictionary e.g. a categorical attribute stored as a string.
    vocab-size: Number of unique values.
The following evaluation is computed on the validation or out-of-bag dataset.
Number of predictions (without weights): 80
Number of predictions (with weights): 80
Task: CLASSIFICATION
Label: label

Accuracy: 1  CI95[W][0.963246 1]
LogLoss: 0
ErrorRate: 0

Default Accuracy: 0.5
Default LogLoss: 0.693147
Default ErrorRate: 0.5

Confusion Table:
truth\prediction  false  true
           false     40     0
            true      0    40
Total: 80
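This out-of-bag evaluation can also be accessed programmatically. A minimal sketch, assuming the model trained above:

# Retrieve the out-of-bag (self) evaluation computed during training.
evaluation = model.self_evaluation()
# The evaluation object exposes the metrics shown above.
print("OOB accuracy:", evaluation.accuracy)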
Variable importances measure the importance of an input feature for a model.
1. "feature_2" 1.000000
1. "feature_2" 300.000000
1. "feature_2" 300.000000
1. "feature_2" 16479.940276
These variable importances are computed during training. Additional, and often more informative, variable importances are available when analyzing the model on a test dataset.
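The training-time importances can be queried programmatically, and a model analysis computes the additional, dataset-dependent ones. A minimal sketch; replace df with a held-out test dataset for a meaningful analysis:

# Variable importances computed during training, keyed by importance name.
for name, importances in model.variable_importances().items():
    print(name, importances)

# Dataset-dependent importances are part of the model analysis.
# Here "df" is reused for illustration only.
analysis = model.analyze(df)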
Only printing the first tree.
Tree #0:
    "feature_2" is in [BITMAP] {X} [s:0.692835 n:80 np:39 miss:0] ; val:"false" prob:[0.5125, 0.4875]
        ├─(pos)─ val:"true" prob:[0, 1]
        └─(neg)─ val:"false" prob:[1, 0]
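Trees can also be printed or inspected programmatically. A minimal sketch, assuming the random forest trained above:

# Print the first tree of the forest.
model.print_tree(tree_idx=0)
# The number of trees in the forest.
print("Number of trees:", model.num_trees())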
Train a model on a subset of features¶
By default, all the available columns are used by the model. You can instead restrict YDF to only use a subset of the features.
Train a model on feature_1 and feature_2 only.
model = ydf.RandomForestLearner(
    label="label",
    features=["feature_1", "feature_2"],
).train(df)
print("Model input features:", model.input_feature_names())
Train model on 80 examples
Model trained in 0:00:00.003908
Model input features: ['feature_1', 'feature_2']
Override the feature semantics¶
To consume a feature, the model needs to know how to interpret it. This is called the feature "semantic". YDF supports four types of feature semantics (a declaration sketch follows the list):
- Numerical: For quantities or measures.
- Categorical: For categories or enums.
- Boolean: A special type of categorical with only two categories True and False.
- Categorical-set: For sets of categories, tags, or bag of words.
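For illustration, here is how each semantic could be declared explicitly with ydf.Feature; the column names below are hypothetical:

features = [
    ydf.Feature("age", ydf.Semantic.NUMERICAL),  # A quantity or measure.
    ydf.Feature("country", ydf.Semantic.CATEGORICAL),  # A category or enum.
    ydf.Feature("is_member", ydf.Semantic.BOOLEAN),  # True or False.
    ydf.Feature("tags", ydf.Semantic.CATEGORICAL_SET),  # A set of categories.
]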
YDF automatically determines the semantic of a feature according to its representation. For example, float and int values are automatically detected as numerical.
For example, here are the semantics of the model trained above:
model.input_features()
[InputFeature(name='feature_1', semantic=<Semantic.NUMERICAL: 1>, column_idx=0), InputFeature(name='feature_2', semantic=<Semantic.CATEGORICAL: 2>, column_idx=1)]
In some cases, it is useful to force a specific semantic. For instance, if an enum value is represented with integers, it is important to force the feature to be categorical:
model = ydf.RandomForestLearner(
    label="label",
    features=[ydf.Feature("feature_1", ydf.Semantic.CATEGORICAL)],
    include_all_columns=True,  # Use all the features; not just the ones in "features".
).train(df)
model.input_features()
Train model on 80 examples
Model trained in 0:00:00.004236
[InputFeature(name='feature_1', semantic=<Semantic.CATEGORICAL: 2>, column_idx=0), InputFeature(name='feature_2', semantic=<Semantic.CATEGORICAL: 2>, column_idx=2), InputFeature(name='feature_3', semantic=<Semantic.BOOLEAN: 5>, column_idx=3)]
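The trained model consumes a Pandas dataframe for inference as well. A minimal sketch using the same df:

# For this binary classification model, "predict" returns the probability
# of the positive class for each example.
predictions = model.predict(df)
print(predictions[:4])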