pip install ydf -U
import ydf
import pandas as pd
import os
For a general introduction to distributed training with YDF, see the YDF Distributed Training tutorial.
YDF internal examples available at go/ydf/examples demonstrate how to use distributed training on Google infrastructure.
Introduction¶
By default, YDF trains a model on a single computer. This works well for datasets with up to a few million examples, but not for datasets with billions of examples. YDF distributed training solves this problem by dividing the computation over multiple machines. As a rule of thumb, start using distributed training once the dataset exceeds 100M examples.
Vertex AI is a Google Cloud service to train ML models (and more) on many computers. This tutorial shows how to train a YDF model on Vertex AI, both with and without distributed training.
If you are unfamiliar with YDF, make sure to read the Getting Started tutorial first.
Log in and set up Google Cloud and Vertex AI¶
In this tutorial, we use the gcloud CLI. Make sure it is installed.
The commands from this tutorial can be typed in a shell or in a Colab cell (with the ! or %%bash prefix).
Note that the gcloud auth login command does not always work in Jupyter Notebooks. In this case, typing it in a shell is better.
The first step is to login and set our project. In a shell, use the command:
gcloud auth login
gcloud config set project <PROJECT_ID>
In a Google Colab, you can do the following instead:
from google.colab import auth

PROJECT_ID = "custom-oasis-452410-c2"  # Replace with your own project id.
auth.authenticate_user(project_id=PROJECT_ID)
In this example, the project id is custom-oasis-452410-c2, but to run this example you need to create your own project.
Next, we need to enable two cloud services: Vertex AI (previously known as AI Platform) and Cloud Build (to build the Docker images).
!gcloud services enable cloudbuild.googleapis.com
!gcloud services enable aiplatform.googleapis.com
Google Cloud automatically creates a "service account" named 282665763673-compute@developer.gserviceaccount.com. You can find it in the Google Cloud console or by typing:
!gcloud projects describe custom-oasis-452410-c2 --format="value(projectNumber)"
282665763673
Note: The project ID and project number are two different identifiers.
The service account is responsible for building the Docker image and running the training job. For this, you need to grant it the following permissions:
%%bash
gcloud projects add-iam-policy-binding custom-oasis-452410-c2 \
--member="serviceAccount:282665763673-compute@developer.gserviceaccount.com" \
--role="roles/storage.objectViewer"
gcloud projects add-iam-policy-binding custom-oasis-452410-c2 \
--member="serviceAccount:282665763673-compute@developer.gserviceaccount.com" \
--role="roles/run.builder"
gcloud projects add-iam-policy-binding custom-oasis-452410-c2 \
--member="serviceAccount:282665763673-compute@developer.gserviceaccount.com" \
--role="roles/artifactregistry.createOnPushWriter"
Now that Google Cloud is configured, we can focus on the model :).
Preparing the data¶
First, we need a dataset. A good option is to use CSV, Avro, or TensorFlow Record files in a bucket. We will use CSV in this example.
For the training to be efficient, the dataset needs to be divided into several files (also known as "sharding"). In this section, we download the "adult" dataset, divide it into pieces, and save them in a bucket.
Ideally, the number of shards should be ~10x the number of workers. So if you train with 20 workers, splitting the data into 200 pieces is a good idea.
The adult dataset is a small dataset with only ~30k examples. It does not need distributed training, but it is good for the demonstration.
First, let's create a Bucket where we will store the dataset, model, and temporary files.
!gcloud storage buckets create gs://ydf_bucket --location=us-east1
Creating gs://ydf_bucket/...
Then, let's download a dataset.
Note: For a truly large dataset, you will likely export the data from a system such as Google Bigtable or generate it with Apache Beam.
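To illustrate the Apache Beam option, here is a minimal, hypothetical sketch (not used in this tutorial) that writes sharded CSV files to a bucket. It assumes the apache_beam package (with GCP support) is installed, uses a toy in-memory source, and encodes rows naively; a real pipeline would read from your actual data source and handle headers and quoting.
# A minimal, hypothetical Apache Beam sketch that writes sharded CSV files to a bucket.
# Assumes apache_beam (with GCP support) is installed; writing to gs:// also requires
# valid Google Cloud credentials.
import apache_beam as beam

rows = [{"age": 39, "income": "<=50K"}, {"age": 52, "income": ">50K"}]  # Toy data.

with beam.Pipeline() as pipeline:
  _ = (
      pipeline
      | "Create" >> beam.Create(rows)  # Replace with your real data source.
      | "ToCsv" >> beam.Map(lambda row: ",".join(str(v) for v in row.values()))  # Naive CSV encoding.
      | "Write" >> beam.io.WriteToText(
          "gs://ydf_bucket/train_dataset/shard",  # Output prefix in the bucket.
          file_name_suffix=".csv",
          num_shards=200,  # Rule of thumb: ~10x the number of training workers.
      )
  )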
ds_path = "https://raw.githubusercontent.com/google/yggdrasil-decision-forests/main/yggdrasil_decision_forests/test_data/dataset"
train_ds = pd.read_csv(f"{ds_path}/adult_train.csv")
test_ds = pd.read_csv(f"{ds_path}/adult_test.csv")
print("The dataset has",len(train_ds),"training examples")
The dataset has 22792 training examples
Let's split the dataset and upload it to our bucket.
def split_dataset(
    dataset: pd.DataFrame, tmp_dir: str, num_shards: int
) -> list[str]:
  """Splits a csv file into multiple csv files."""
  os.makedirs(tmp_dir, exist_ok=True)
  num_row_per_shard = (dataset.shape[0] + num_shards - 1) // num_shards
  paths = []
  for shard_idx in range(num_shards):
    begin_idx = shard_idx * num_row_per_shard
    end_idx = (shard_idx + 1) * num_row_per_shard
    shard_dataset = dataset.iloc[begin_idx:end_idx]
    shard_path = os.path.join(tmp_dir, f"shard_{shard_idx}.csv")
    paths.append(shard_path)
    shard_dataset.to_csv(shard_path, index=False)
  return paths
split_dataset(train_ds, "gs://ydf_bucket/train_dataset", 10)
['gs://ydf_bucket/train_dataset/shard_0.csv', 'gs://ydf_bucket/train_dataset/shard_1.csv', 'gs://ydf_bucket/train_dataset/shard_2.csv', 'gs://ydf_bucket/train_dataset/shard_3.csv', 'gs://ydf_bucket/train_dataset/shard_4.csv', 'gs://ydf_bucket/train_dataset/shard_5.csv', 'gs://ydf_bucket/train_dataset/shard_6.csv', 'gs://ydf_bucket/train_dataset/shard_7.csv', 'gs://ydf_bucket/train_dataset/shard_8.csv', 'gs://ydf_bucket/train_dataset/shard_9.csv']
Using the gcloud storage ls command, we can make sure the dataset is there.
!gcloud storage ls gs://ydf_bucket/train_dataset
gs://ydf_bucket/train_dataset/shard_0.csv gs://ydf_bucket/train_dataset/shard_1.csv gs://ydf_bucket/train_dataset/shard_2.csv gs://ydf_bucket/train_dataset/shard_3.csv gs://ydf_bucket/train_dataset/shard_4.csv gs://ydf_bucket/train_dataset/shard_5.csv gs://ydf_bucket/train_dataset/shard_6.csv gs://ydf_bucket/train_dataset/shard_7.csv gs://ydf_bucket/train_dataset/shard_8.csv gs://ydf_bucket/train_dataset/shard_9.csv
Let's also save the testing dataset.
We will use it for validation.
split_dataset(test_ds, "gs://ydf_bucket/valid_dataset", 10)
['gs://ydf_bucket/valid_dataset/shard_0.csv', 'gs://ydf_bucket/valid_dataset/shard_1.csv', 'gs://ydf_bucket/valid_dataset/shard_2.csv', 'gs://ydf_bucket/valid_dataset/shard_3.csv', 'gs://ydf_bucket/valid_dataset/shard_4.csv', 'gs://ydf_bucket/valid_dataset/shard_5.csv', 'gs://ydf_bucket/valid_dataset/shard_6.csv', 'gs://ydf_bucket/valid_dataset/shard_7.csv', 'gs://ydf_bucket/valid_dataset/shard_8.csv', 'gs://ydf_bucket/valid_dataset/shard_9.csv']
Create the Docker image¶
To run on Vertex AI, the code cannot be executed in a notebook. Instead, the training code needs to be packaged in a Docker image.
To pass the dataset path and other options to the training program, we use the argparse library. We also add an option to enable or disable distributed training, which is useful to test the trainer quickly.
%%writefile train.py

import argparse
import dataclasses
import json
import os
from typing import Any, Dict, List, Optional, Sequence, Tuple, Union

import ydf

parser = argparse.ArgumentParser()

# Path to training dataset. Should be prefixed with the dataset type e.g. 'csv:'.
# See the supported formats at https://ydf.readthedocs.io/en/latest/dataset_formats/
parser.add_argument("--train_ds", type=str, required=True)

# Path to validation dataset. If empty, the model is trained without validation.
parser.add_argument("--valid_ds", type=str)

# Path to test dataset. If empty, the model is not evaluated.
parser.add_argument("--test_ds", type=str)

# Path to save the model.
parser.add_argument("--model", type=str, required=True)

# Work directory containing the temporary working data. Only used for distributed training.
parser.add_argument("--work_dir", default="", type=str)

# Label column to predict.
parser.add_argument("--label", type=str, required=True)

# Is the training distributed, or on a single machine?
parser.add_argument("--distributed", action="store_true")

args = parser.parse_args()


def main():
  print("Arguments:\n", args)
  if args.distributed:
    main_distributed()
  else:
    main_in_process()


def main_in_process():
  ydf.verbose(2)

  print("Train model in process on", args.train_ds)
  learner = ydf.GradientBoostedTreesLearner(label=args.label)
  model = learner.train(args.train_ds, valid=args.valid_ds)

  print("Save model in", args.model)
  model.save(args.model)

  if args.test_ds:
    print("Evaluate model on", args.test_ds)
    evaluation = model.evaluate(args.test_ds)
    print(evaluation)


def main_distributed():
  ydf.verbose(2)

  # Gather the manager and workers configuration.
  cluster_config = ydf.util.get_vertex_ai_cluster_spec()
  print("cluster_config:\n", cluster_config)

  if cluster_config.is_worker:
    # This machine is running a worker.
    ydf.start_worker(cluster_config.port)
    return

  print("Train model with distribution on", args.train_ds)
  learner = ydf.DistributedGradientBoostedTreesLearner(
      label=args.label,
      workers=cluster_config.workers,
      working_dir=args.work_dir,
      resume_training=True,
  )
  model = learner.train(args.train_ds, valid=args.valid_ds)

  print("Save model in", args.model)
  model.save(args.model)

  if args.test_ds:
    print("Evaluate model on", args.test_ds)
    evaluation = model.evaluate(args.test_ds)
    print(evaluation)


if __name__ == "__main__":
  main()
Starting a job on Vertex AI takes a few minutes. To iterate quickly, it is therefore a good idea to first run the training script locally on a subset of the data.
The following command runs our trainer locally without distributed training.
Note: In YDF, dataset paths always specify the dataset format with a prefix (csv: in this example). To use another format, change the prefix accordingly. Here is the list of supported formats.
%%bash
python train.py --train_ds=csv:gs://ydf_bucket/train_dataset/shard_0.csv \
--valid_ds=csv:gs://ydf_bucket/valid_dataset/shard_0.csv \
--model=gs://ydf_bucket/model \
--label=income
To run on Vertex AI, the trainer needs to be packaged in a Docker image. Let's create it.
%%writefile Dockerfile
FROM python:3.12
WORKDIR /root
RUN apt-get update && apt-get -y install sudo
RUN rm -rf /usr/share/keyrings/cloud.google.gpg
RUN rm -rf /etc/apt/sources.list.d/google-cloud-sdk.list
RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
RUN echo "deb https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
# Install YDF from Pip
RUN python3 -m pip install ydf
# OR, install your own copy of YDF.
# COPY ydf-0.10.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl .
# RUN python3 -m pip install ydf-0.10.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl --upgrade --no-cache-dir --force-reinstall
RUN echo '[GoogleCompute]\nservice_account = default' > /etc/boto.cfg
COPY train.py /root/train.py
ENTRYPOINT ["python3", "train.py"]
Overwriting Dockerfile
We can now build the Docker image and upload it to Google Cloud.
%%bash
gcloud builds submit --tag gcr.io/custom-oasis-452410-c2/train-ydf
Finally, we can start a custom Vertex AI training job with our Docker image.
A few remarks:
You need to create two worker pools. The first worker pool contains the "manager" and does very little computation. The second worker pool contains the machines that train and evaluate the model. When training on a larger dataset, increase the number of machines with the replica-count parameter.
%%bash
gcloud ai custom-jobs create \
--region=us-east1 \
--project=custom-oasis-452410-c2 \
--worker-pool-spec=replica-count=1,machine-type='n1-highmem-2',container-image-uri='gcr.io/custom-oasis-452410-c2/train-ydf' \
--worker-pool-spec=replica-count=5,machine-type='n1-highmem-2',container-image-uri='gcr.io/custom-oasis-452410-c2/train-ydf' \
--display-name=train-ydf-job \
--args=\
--train_ds=csv:gs://ydf_bucket/train_dataset/shard_*.csv,\
--valid_ds=csv:gs://ydf_bucket/valid_dataset/shard_*.csv,\
--model=gs://ydf_bucket/model,\
--work_dir=gs://ydf_bucket/work_dir,\
--distributed,\
--label=income
Using endpoint [https://us-east1-aiplatform.googleapis.com/]
CustomJob [projects/282665763673/locations/us-east1/customJobs/7048111014285410304] is submitted successfully.

Your job is still active. You may view the status of your job with the command

  $ gcloud ai custom-jobs describe projects/282665763673/locations/us-east1/customJobs/7048111014285410304

or continue streaming the logs with the command

  $ gcloud ai custom-jobs stream-logs projects/282665763673/locations/us-east1/customJobs/7048111014285410304
You can monitor the training in the Vertex AI Custom Job console, or in your shell by running the printed command e.g.:
!gcloud ai custom-jobs stream-logs projects/282665763673/locations/us-east1/customJobs/8426212500260782080
Note: This command does not stop when the training is done. You need to stop it manually.
Load and test the model¶
Now that your training is done, you can look at the model:
model = ydf.load_model("gs://ydf_bucket/model")
model.describe()
Task : CLASSIFICATION
Label : income
Features (14) : age workclass fnlwgt education education_num marital_status occupation relationship race sex capital_gain capital_loss hours_per_week native_country
Weights : None
Trained with tuner : No
Model size : 833 kB
Number of records: 2280 Number of columns: 15 Number of columns by type: CATEGORICAL: 9 (60%) NUMERICAL: 6 (40%) Columns: CATEGORICAL: 9 (60%) 1: "workclass" CATEGORICAL num-nas:123 (5.39474%) has-dict vocab-size:7 num-oods:2 (0.0927214%) most-frequent:"Private" 1586 (73.528%) 3: "education" CATEGORICAL has-dict vocab-size:17 zero-ood-items most-frequent:"HS-grad" 764 (33.5088%) 5: "marital_status" CATEGORICAL has-dict vocab-size:7 num-oods:3 (0.131579%) most-frequent:"Married-civ-spouse" 1052 (46.1404%) 6: "occupation" CATEGORICAL num-nas:123 (5.39474%) has-dict vocab-size:14 zero-ood-items most-frequent:"Craft-repair" 323 (14.9745%) 7: "relationship" CATEGORICAL has-dict vocab-size:7 zero-ood-items most-frequent:"Husband" 935 (41.0088%) 8: "race" CATEGORICAL has-dict vocab-size:6 zero-ood-items most-frequent:"White" 1968 (86.3158%) 9: "sex" CATEGORICAL has-dict vocab-size:3 zero-ood-items most-frequent:"Male" 1543 (67.6754%) 13: "native_country" CATEGORICAL num-nas:39 (1.71053%) has-dict vocab-size:17 num-oods:49 (2.18652%) most-frequent:"United-States" 2043 (91.1647%) 14: "income" CATEGORICAL manually-defined has-dict vocab-size:3 zero-ood-items most-frequent:"<=50K" 1731 (75.9211%) NUMERICAL: 6 (40%) 0: "age" NUMERICAL mean:39.0351 min:17 max:90 sd:13.7531 2: "fnlwgt" NUMERICAL mean:190565 min:19914 max:1.22658e+06 sd:106876 4: "education_num" NUMERICAL mean:10.0732 min:1 max:16 sd:2.55387 10: "capital_gain" NUMERICAL mean:1054 min:0 max:99999 sd:7620.32 11: "capital_loss" NUMERICAL mean:79.7206 min:0 max:3683 sd:387.973 12: "hours_per_week" NUMERICAL mean:40.5614 min:1 max:99 sd:12.3992 Terminology: nas: Number of non-available (i.e. missing) values. ood: Out of dictionary. manually-defined: Attribute whose type is manually defined by the user, i.e., the type was not automatically inferred. tokenized: The attribute value is obtained through tokenization. has-dict: The attribute is attached to a string dictionary e.g. a categorical attribute stored as a string. vocab-size: Number of unique values.
The following evaluation is computed on the validation or out-of-bag dataset.
Task: CLASSIFICATION
Label: income
Loss (BINOMIAL_LOG_LIKELIHOOD): 0.643922
Accuracy: 0.860798  CI95[W][0 1]
ErrorRate: 0.139202
Confusion Table:
truth\prediction  <=50K  >50K
           <=50K    678    49
            >50K     87   163
Total: 977
Variable importances measure the importance of an input feature for a model.
1. "relationship" 0.278801 ################ 2. "capital_gain" 0.260187 ############# 3. "occupation" 0.228313 ######## 4. "education" 0.224940 ####### 5. "age" 0.217297 ###### 6. "hours_per_week" 0.212136 ##### 7. "fnlwgt" 0.199807 ### 8. "capital_loss" 0.199368 ### 9. "workclass" 0.197884 ### 10. "marital_status" 0.190656 ## 11. "education_num" 0.188540 # 12. "native_country" 0.183821 13. "race" 0.182820 14. "sex" 0.177710
1. "relationship" 29.000000 ################ 2. "capital_gain" 20.000000 ########## 3. "hours_per_week" 7.000000 ### 4. "age" 6.000000 ## 5. "capital_loss" 6.000000 ## 6. "marital_status" 3.000000 # 7. "workclass" 2.000000 8. "fnlwgt" 2.000000 9. "race" 2.000000 10. "native_country" 1.000000
1. "occupation" 281.000000 ################ 2. "education" 223.000000 ############ 3. "fnlwgt" 220.000000 ############ 4. "age" 187.000000 ########## 5. "capital_gain" 148.000000 ######## 6. "hours_per_week" 133.000000 ####### 7. "relationship" 113.000000 ###### 8. "workclass" 93.000000 #### 9. "marital_status" 87.000000 #### 10. "capital_loss" 74.000000 ### 11. "education_num" 41.000000 # 12. "native_country" 32.000000 # 13. "sex" 18.000000 14. "race" 12.000000
1. "relationship" 473.506931 ################ 2. "capital_gain" 203.793310 ###### 3. "education" 178.370454 ##### 4. "education_num" 133.982814 #### 5. "occupation" 108.582365 ### 6. "age" 89.963566 ## 7. "fnlwgt" 77.287484 ## 8. "hours_per_week" 62.848679 ## 9. "capital_loss" 59.407506 # 10. "workclass" 36.896641 # 11. "marital_status" 22.519862 12. "native_country" 12.217495 13. "race" 3.595908 14. "sex" 3.356631
Those variable importances are computed during training. More, and possibly more informative, variable importances are available when analyzing a model on a test dataset.
Only printing the first tree.
Tree #0: "relationship" is in [BITMAP] {<OOD>, Husband, Wife} [s:0.0382383 n:2280 np:1042 miss:1] ; pred:3.43208e-10 ├─(pos)─ "education" is in [BITMAP] {Bachelors, Masters, Prof-school, Doctorate} [s:0.0403089 n:1042 np:302 miss:0] ; pred:0.116594 | ├─(pos)─ "age">=28.5 [s:0.00988429 n:302 np:287 miss:1] ; pred:0.288509 | | ├─(pos)─ "occupation" is in [BITMAP] {Exec-managerial, Prof-specialty, Sales, Other-service, Tech-support} [s:0.00590419 n:287 np:247 miss:0] ; pred:0.300942 | | | ├─(pos)─ "hours_per_week">=41 [s:0.00534521 n:247 np:118 miss:0] ; pred:0.317856 | | | | ├─(pos)─ pred:0.359672 | | | | └─(neg)─ pred:0.279607 | | | └─(neg)─ "fnlwgt">=281926 [s:0.0228571 n:40 np:5 miss:0] ; pred:0.196494 | | | ├─(pos)─ pred:-0.0223125 | | | └─(neg)─ pred:0.227752 | | └─(neg)─ "occupation" is in [BITMAP] {<OOD>, Prof-specialty, Sales, Adm-clerical, Other-service, Machine-op-inspct, Transport-moving, Handlers-cleaners, Farming-fishing, Tech-support, ...[2 left]} [s:0.031746 n:15 np:8 miss:0] ; pred:0.050623 | | ├─(pos)─ pred:0.141792 | | └─(neg)─ pred:-0.0535706 | └─(neg)─ "capital_gain">=5095.5 [s:0.0186913 n:740 np:32 miss:0] ; pred:0.0464341 | ├─(pos)─ "fnlwgt">=112632 [s:0.00527344 n:32 np:27 miss:1] ; pred:0.398206 | | ├─(pos)─ pred:0.415301 | | └─(neg)─ pred:0.305897 | └─(neg)─ "capital_loss">=1791.5 [s:0.00843925 n:708 np:30 miss:0] ; pred:0.0305348 | ├─(pos)─ "capital_loss">=1989.5 [s:0.0938889 n:30 np:10 miss:0] ; pred:0.26943 | | ├─(pos)─ pred:0.0323891 | | └─(neg)─ pred:0.38795 | └─(neg)─ "education" is in [BITMAP] {<OOD>, HS-grad, Some-college, Bachelors, Masters, Assoc-voc, Assoc-acdm, Prof-school, 12th, Doctorate} [s:0.00686782 n:678 np:577 miss:1] ; pred:0.0199643 | ├─(pos)─ pred:0.0389306 | └─(neg)─ pred:-0.0883878 └─(neg)─ "capital_gain">=4718.5 [s:0.0153776 n:1238 np:29 miss:0] ; pred:-0.0981348 ├─(pos)─ "occupation" is in [BITMAP] {Craft-repair, Exec-managerial, Prof-specialty, Sales, Transport-moving} [s:0.0444808 n:29 np:24 miss:1] ; pred:0.33985 | ├─(pos)─ "hours_per_week">=53.5 [s:0.00659722 n:24 np:5 miss:0] ; pred:0.392508 | | ├─(pos)─ pred:0.305897 | | └─(neg)─ pred:0.415301 | └─(neg)─ pred:0.0870908 └─(neg)─ "education_num">=12.5 [s:0.00245124 n:1209 np:240 miss:0] ; pred:-0.108641 ├─(pos)─ "age">=30.5 [s:0.00913471 n:240 np:147 miss:1] ; pred:-0.0542218 | ├─(pos)─ "occupation" is in [BITMAP] {<OOD>, Exec-managerial, Prof-specialty, Adm-clerical, Machine-op-inspct, Protective-serv, Priv-house-serv} [s:0.00900639 n:147 np:110 miss:0] ; pred:-0.0126374 | | ├─(pos)─ pred:0.0174705 | | └─(neg)─ pred:-0.102147 | └─(neg)─ "fnlwgt">=276854 [s:0.00222569 n:93 np:16 miss:0] ; pred:-0.119952 | ├─(pos)─ pred:-0.0633387 | └─(neg)─ pred:-0.131716 └─(neg)─ "hours_per_week">=42.5 [s:0.000376868 n:969 np:166 miss:0] ; pred:-0.122119 ├─(pos)─ "occupation" is in [BITMAP] {<OOD>, Prof-specialty, Sales, Transport-moving} [s:0.00418084 n:166 np:40 miss:0] ; pred:-0.098763 | ├─(pos)─ pred:-0.0359879 | └─(neg)─ pred:-0.118692 └─(neg)─ "occupation" is in [BITMAP] {<OOD>, Prof-specialty} [s:9.2485e-05 n:803 np:39 miss:0] ; pred:-0.126947 ├─(pos)─ pred:-0.103664 └─(neg)─ pred:-0.128136
We can also generate some predictions.
model.predict(test_ds)
array([0.00404093, 0.35932407, 0.8662793 , ..., 0.01358805, 0.04585141, 0.00885384], shape=(9769,), dtype=float32)
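Although the training job already evaluated the model, we can also evaluate the loaded model directly on the in-memory test dataset:
# Evaluate the loaded model on the test dataset loaded earlier with pandas.
evaluation = model.evaluate(test_ds)
print(evaluation)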
Deploying the model¶
The model can now be deployed. YDF offers several options: C++, FastAPI, TensorFlow Serving, etc. See the "Deploying a model" section on the left for more details.
To give an example, let's deploy the model with FastAPI:
model.to_docker("/tmp/docker_model")
!ls -l /tmp/docker_model
total 24
-rw-r----- 1 gbm primarygroup  288 Mar  6 14:22 deploy_in_google_cloud.sh
-rw-r----- 1 gbm primarygroup  211 Mar  6 14:22 Dockerfile
-rw-r----- 1 gbm primarygroup 1313 Mar  6 14:22 main.py
drwxr-x--- 2 gbm primarygroup  140 Mar  6 14:22 model
-rw-r----- 1 gbm primarygroup  360 Mar  6 14:22 readme.txt
-rw-r----- 1 gbm primarygroup   26 Mar  6 14:22 requirements.txt
-rw-r----- 1 gbm primarygroup  485 Mar  6 14:22 test_locally.sh
Deploy the model in Google Cloud Run.
The results will be available in the Google Cloud Run console.
%%bash
# Enable Google Cloud Run
gcloud services enable run.googleapis.com
# Deploy the model as a service
gcloud run deploy ydf-predict --source /tmp/docker_model --region=us-east1
Deploying from source requires an Artifact Registry Docker repository to store built containers. A repository named [cloud-run-source-deploy] in region [us-east1] will be created.
Do you want to continue (Y/n)?
Allow unauthenticated invocations to [ydf-predict] (y/N)?
Building using Dockerfile and deploying container to Cloud Run service [ydf-predict] in project [custom-oasis-452410-c2] region [us-east1]
Building and deploying new service...
Creating Container Repository...done
Uploading sources...Creating temporary archive of 11 file(s) totalling 228.1 KiB before compression. Uploading zipfile of [/tmp/docker_model] to [gs://run-sources-custom-oasis-452410-c2-us-east1/services/ydf-predict/1741267781.721195-c0f972bb00c345d2ae780b88d7a4f65d.zip]...done
Building Container...done
Creating Revision...done
Routing traffic...done
Done.
Service [ydf-predict] revision [ydf-predict-00001-8dc] has been deployed and is serving 100 percent of traffic.
Service URL: https://ydf-predict-282665763673.us-east1.run.app