pip install ydf tensorflow_hub tensorflow_datasets tensorflow==2.13.1 -U
import ydf # To train the model
import tensorflow_datasets # To download the movie review dataset
import tensorflow_hub # To download the pre-trained embedding
What is a pre-trained embedding?
Pre-trained embeddings are models trained on a large corpus of data that can be used to improve the quality of your model when you do not have a lot of training data. Unlike a model that is trained for a specific task and outputs predictions for that task, a pre-trained embedding model outputs "embeddings": fixed-size numerical vectors that can be used as input features for a second model (e.g., a YDF model) to solve a variety of tasks. Pre-trained embeddings are also useful for applying a model to complex or unstructured data: with an image, text, audio, or video pre-trained embedding, you can apply a YDF model to image, text, audio, or video data, respectively.
In this notebook, we will classify movie reviews as either "positive" or "negative". For instance, the review beginning with "This is the kind of film for a snowy Sunday afternoon when the rest of the world can go ahead with its own business as you descend into a big arm-chair and mellow for a couple of hours. Wonderful performances from Cher and Nicolas ..." is a positive review. Our dataset contains 25000 reviews, but because 25000 reviews are NOT enough to train a good text model, and because configuring a text model is complicated, we will simply use the Universal Sentence Encoder pre-trained embedding.
Downloading dataset
We download the dataset from the TensorFlow Dataset repository.
raw_train_ds = tensorflow_datasets.load(name="imdb_reviews", split="train")
raw_test_ds = tensorflow_datasets.load(name="imdb_reviews", split="test")
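The train and test splits each contain 25,000 reviews. As an optional sanity check (assuming the split sizes are known to TensorFlow; otherwise the cardinality is reported as unknown), we can confirm this directly:
# Optional check: each split of "imdb_reviews" contains 25,000 examples.
print(raw_train_ds.cardinality().numpy())
print(raw_test_ds.cardinality().numpy())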
Let's look at the first 200 characters of the first 3 examples:
for example in raw_train_ds.take(3):
    print(f"""\
text: {example['text'].numpy()[:200]}
label: {example['label']}
=========================""")
Downloading embedding
embed = tensorflow_hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
We can test the embedding on any text. It returns a vector of numbers. While those values do not have inherent meaning to us, YDF is very good at consuming them.
embeddings = embed([
    "The little blue dog eats a piece of ham.",
    "It is raining today."]).numpy()
print(embeddings)
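Each piece of text is mapped to a vector of 512 floats, regardless of its length. As a small optional check, we can look at the shape of the returned array:
# Two input sentences, each encoded as a 512-dimensional vector.
print(embeddings.shape)  # (2, 512)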
Applying the embedding to the dataset
We can apply the embedding to our dataset. Since the dataset and the embedding are both created with TensorFlow, we will prepare a TensorFlow Dataset and feed it directly into YDF. YDF natively consumes TensorFlow Datasets.
def apply_embedding(batch):
    batch["text"] = embed(batch["text"])
    return batch
# The batch size (256) has no impact on the YDF model. However,
# reading a TensorFlow dataset with a small (<50) batch size can
# be slow, while a large batch size increases memory usage.
train_ds = raw_train_ds.batch(256).map(apply_embedding)
test_ds = raw_test_ds.batch(256).map(apply_embedding)
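To check what YDF will receive, we can inspect the element spec of the mapped dataset: the "text" feature is now a float tensor with 512 values per example instead of a string (an optional check):
# Optional: inspect the structure of a batch after the embedding is applied.
print(train_ds.element_spec)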
Let's show the first 10 dimensions of the embedding for the first 3 examples of the first batch:
for example in train_ds.take(1):
    print(f"""\
text: {example['text'].numpy()[:3, :10]}
label: {example['label'].numpy()[:3]}
=========================""")
Training a model on the pre-trained embedding
model = ydf.GradientBoostedTreesLearner(label="label").train(train_ds)
In the model description below, we can see the 512 dimensions of the embedding. The "variable importance" tab shows that not all dimensions of the embedding are equally useful; for example, the feature text.111_of_512 is very useful for the model.
model.describe()
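The variable importances shown in the report can also be retrieved programmatically, for example to list the most important embedding dimensions. Here is a minimal sketch; the available importance measures depend on the model:
# Variable importances: a dictionary mapping an importance name to (value, feature) pairs.
for name, values in model.variable_importances().items():
    print(name, values[:3])  # Show the top 3 features for each measure.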
Evaluating model
We evaluate the model on the test dataset.
model.evaluate(test_ds)
The model accuracy is ~85%. Not too bad for a model trained in a few seconds with default hyper-parameters :)
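As a possible next step, we could try non-default hyper-parameters and compare the test accuracy. The sketch below uses a hypothetical value for num_trees, not a tuned one:
# A sketch: train a second model with more trees (hypothetical value) and compare.
tuned_model = ydf.GradientBoostedTreesLearner(label="label", num_trees=500).train(train_ds)
print(tuned_model.evaluate(test_ds).accuracy)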