Pickling Models for Persistence#

This notebook demonstrates simple pickling of both single-GPU and multi-GPU cuML models for persistence.

[ ]:
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)

Single GPU Model Pickling#

All single-GPU estimators are pickleable. The following example demonstrates creating a synthetic dataset, training a model, and pickling the resulting model for storage. Trained single-GPU models can also be used to distribute inference across a Dask cluster, as the Distributed Model Pickling section below demonstrates.

[ ]:
from cuml.datasets import make_blobs

X, y = make_blobs(n_samples=50,
                  n_features=10,
                  centers=5,
                  cluster_std=0.4,
                  random_state=0)
[ ]:
from cuml.cluster import KMeans

model = KMeans(n_clusters=5)

model.fit(X)
[ ]:
import pickle

with open("kmeans_model.pkl", "wb") as f:
    pickle.dump(model, f)
[ ]:
with open("kmeans_model.pkl", "rb") as f:
    model = pickle.load(f)
[ ]:
model.cluster_centers_
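
Loading the model restores its trained state, so the unpickled estimator can run inference just as the original did:

[ ]:
model.predict(X)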

Distributed Model Pickling#

The distributed estimator wrappers inside the cuml.dask module are not intended to be pickled directly. Instead, the Dask cuML estimators provide a get_combined_model() function, which returns the trained single-GPU model for pickling. The combined model can be used for inference on a single GPU, and the ParallelPostFit wrapper from the Dask-ML library can be used to perform distributed inference on a Dask cluster, as shown at the end of this section.

[ ]:
from dask.distributed import Client
from dask_cuda import LocalCUDACluster

cluster = LocalCUDACluster()
client = Client(cluster)
client
[ ]:
from cuml.dask.datasets import make_blobs

# Number of Dask workers in the cluster
n_workers = len(client.scheduler_info()["workers"])

X, y = make_blobs(n_samples=5000,
                  n_features=30,
                  centers=5,
                  cluster_std=0.4,
                  random_state=0,
                  n_parts=n_workers*5)

X = X.persist()
y = y.persist()
[ ]:
from cuml.dask.cluster import KMeans

dist_model = KMeans(n_clusters=5)
[ ]:
dist_model.fit(X)
[ ]:
import pickle

single_gpu_model = dist_model.get_combined_model()
with open("kmeans_model.pkl", "wb") as f:
    pickle.dump(single_gpu_model, f)
[ ]:
with open("kmeans_model.pkl", "rb") as f:
    single_gpu_model = pickle.load(f)
[ ]:
single_gpu_model.cluster_centers_
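
As noted above, the combined single-GPU model can also power distributed inference. Below is a minimal sketch, assuming the dask-ml package is installed, that wraps the model in ParallelPostFit so predictions run in parallel across the Dask workers:

[ ]:
# A minimal sketch, assuming the dask-ml package is installed
from dask_ml.wrappers import ParallelPostFit

# Wrap the already-trained single-GPU model; no re-fitting is needed
wrapped_model = ParallelPostFit(estimator=single_gpu_model)

# predict() is applied lazily to each partition of the Dask array
predictions = wrapped_model.predict(X)
predictions.compute()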

Exporting cuML Random Forest models for inference on machines without GPUs#

Starting with cuML version 21.06, you can export cuML Random Forest models and run predictions with them on machines without an NVIDIA GPU. The Treelite package defines an efficient exchange format that lets you portably move cuML Random Forest models to other machines. We will refer to this exchange format as “checkpoints.”

Here are the steps to export the model:

  1. Call to_treelite_checkpoint() to obtain the checkpoint file from the cuML Random Forest model.

[ ]:
from cuml.ensemble import RandomForestClassifier as cumlRandomForestClassifier
from sklearn.datasets import load_iris
import numpy as np

X, y = load_iris(return_X_y=True)
X, y = X.astype(np.float32), y.astype(np.int32)
clf = cumlRandomForestClassifier(max_depth=3, random_state=0, n_estimators=10)
clf.fit(X, y)

checkpoint_path = './checkpoint.tl'
# Export cuML RF model as Treelite checkpoint
clf.convert_to_treelite_model().to_treelite_checkpoint(checkpoint_path)
  2. Copy the generated checkpoint file checkpoint.tl to another machine on which you’d like to run predictions.

  3. On the target machine, install Treelite by running pip install treelite or conda install -c conda-forge treelite. The machine does not need to have an NVIDIA GPU and does not need to have cuML installed.

  4. You can now load the model from the checkpoint by running the following on the target machine:

[ ]:
import treelite

# The checkpoint file has been copied over
checkpoint_path = './checkpoint.tl'
tl_model = treelite.Model.deserialize(checkpoint_path)
out_prob = treelite.gtil.predict(tl_model, X, pred_margin=True)
print(out_prob)
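
GTIL returns one score per class for each row; to recover class labels, take the argmax over the class axis. A minimal sketch (the exact output shape can differ between Treelite versions):

[ ]:
import numpy as np

# The class axis is last in both (rows, classes) and
# (rows, targets, classes) output layouts
pred_class = np.argmax(out_prob, axis=-1)
print(pred_class)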