HPO for Random Forest with Ray Tune and cuML#
This notebook demonstrates how to perform hyperparameter optimization (HPO) for a Random Forest classifier using Ray Tune and cuML. We’ll use Ray Tune to efficiently search through hyperparameter combinations while leveraging cuML’s GPU-accelerated Random Forest implementation for faster training.
Problem Overview#
We’re solving a binary classification problem using the airline dataset, where we predict flight delays. The goal is to find the optimal hyperparameters (number of estimators, max depth, and max features) that maximize the model’s accuracy. Ray Tune will orchestrate multiple training trials in parallel, each testing different hyperparameter combinations, while cuML provides GPU acceleration for each individual model training.
Setup Instructions#
Brev#
See Documentation
For the purpose of this example, follow Option 1 (Setting up your Brev GPU Environment) in the Brev Instance Setup section:
Create a GPU environment with 4 L4 GPUs
Make sure to include Jupyter in your setup
Wait until the “Open Notebook” button is flashing
Open the Notebook and navigate to a Jupyter terminal
Environment Setup#
Check Your CUDA Version in the Jupyter terminal
Before installing dependencies, verify your CUDA version (shown in the top right corner of the output):
nvidia-smi
Create a file named pyproject.toml and copy the content below.
Based on the CUDA version you have, modify the cuML package:
CUDA 12.x: use cuml-cu12==26.2.*
CUDA 13.x: change to cuml-cu13==26.2.*
The pyproject.toml file should look like this:
[project]
name = "ray-cuml"
version = "0.1.0"
requires-python = "==3.13.*"
dependencies = [
    "ray[default]==2.53.0",
    "ray[data]==2.53.0",
    "ray[train]==2.53.0",
    "ray[tune]==2.53.0",
    "cuml-cu12==26.2.*",  # Change cu12 to cu13 if you have CUDA 13.x
    "jupyterlab-nvdashboard",
    "ipykernel",
    "ipywidgets",
]
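If you want to double-check which dependency line to use before editing the file, a small helper like the following (hypothetical, not part of the setup) maps the CUDA version reported by nvidia-smi to the matching cuML package spec:

```python
# Hypothetical helper: pick the cuML wheel spec that matches your CUDA version.
# Assumes the version string is taken from nvidia-smi output (e.g. "12.8" or "13.0").

def cuml_package_for(cuda_version: str) -> str:
    major = int(cuda_version.split(".")[0])
    if major == 12:
        return "cuml-cu12==26.2.*"
    if major == 13:
        return "cuml-cu13==26.2.*"
    raise ValueError(f"Unsupported CUDA major version: {major}")

print(cuml_package_for("12.8"))  # cuml-cu12==26.2.*
print(cuml_package_for("13.0"))  # cuml-cu13==26.2.*
```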
Install Dependencies
uv sync
Enable Jupyter nvdashboard
We can use the jupyterlab-nvdashboard extension to monitor GPU usage in Jupyter.
The extension was installed as part of the setup; to enable it, restart Jupyter:
sudo systemctl restart jupyter.service
Exit and reopen the notebook, or refresh your browser.
When installing libraries with conda, each individual CUDA library can be installed as a conda package, so we don’t need to ensure that any of the CUDA libraries already exist in /usr/local/cuda.
Install JupyterLab nvdashboard Extension
Important: Even though you’re using conda for this setup, the JupyterLab nvdashboard extension must be installed using uv (which is already available in the system). This is because JupyterLab extensions need to be installed where the JupyterLab server runs, not where individual kernels run. In the current setup, the JupyterLab server runs from /home/ubuntu/.venv/ (system uv environment), so we need to install the extension using uv:
uv pip install jupyterlab_nvdashboard
sudo systemctl restart jupyter.service
Exit and reopen the notebook, and go back to a Jupyter terminal.
Install Miniforge
If you prefer to use conda, you need to install it first:
curl -L -O "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
bash Miniforge3-$(uname)-$(uname -m).sh # Follow the prompts and choose yes to update your shell profile to automatically initialize conda
Note
You’ll need to source your .bashrc to make conda available in your current shell:
source ~/.bashrc
Check Your CUDA Version
Check the CUDA version available on your system:
nvidia-smi
Create Environment File
Create a file named env.yaml and copy the content below. Modify the cuda-version to match your CUDA version (e.g., 12.8 or 13.0):
name: ray-cuml
channels:
  - rapidsai
  - conda-forge
dependencies:
  - python=3.13
  - "ray-default=2.53.0"
  - "ray-data=2.53.0"
  - "ray-train=2.53.0"
  - "ray-tune=2.53.0"
  - cuml=26.02
  - "cuda-version=12.8" # Change to match your CUDA version (e.g., 12.8 or 13.0)
  - ipykernel
  - ipywidgets
Create and Activate Conda Environment
Create a new conda environment using the env.yaml file:
conda env create -f env.yaml
conda activate ray-cuml
Install Jupyter Kernel
Install the Jupyter kernel for this environment:
python -m ipykernel install --user --name ray-cuml --display-name "Python (ray-cuml)" --env PATH "$CONDA_PREFIX/bin:$PATH"
After running this, refresh your browser, open a new notebook and select the “Python (ray-cuml)” kernel.
Getting Started#
Download this notebook and the get_data.py script from the side panel and upload them to Jupyter, then run through the notebook.
You should now see a button on the left panel that looks like a GPU, which will give you several dashboards to choose from. For the sake of this example, we will look at GPU memory and GPU Utilization.

Data Preparation#
Make sure the get_data.py script is in the current Jupyter working directory. We will use this script to get the airline dataset.
The script supports both a small dataset (for quick testing) and a full dataset (20M rows). By default, it downloads the small dataset. Use the --full-dataset flag for the complete dataset.
! python get_data.py --full-dataset ## for a smaller dataset remove --full-dataset
import pandas as pd
import ray
from cuml.ensemble import RandomForestClassifier
from cuml.metrics import accuracy_score
from ray import tune
from ray.tune import RunConfig, TuneConfig
from sklearn.model_selection import train_test_split
def train_rf(config, data_dict):
    """
    Training function for Ray Tune.

    Args:
        config: Dictionary of hyperparameters from Ray Tune
        data_dict: Dictionary containing training and test data (NumPy arrays)
    """
    # Extract data
    X_train = data_dict["X_train"]
    X_test = data_dict["X_test"]
    y_train = data_dict["y_train"]
    y_test = data_dict["y_test"]

    # Initialize cuML Random Forest with hyperparameters from config
    rf = RandomForestClassifier(
        n_estimators=config["n_estimators"],
        max_depth=config["max_depth"],
        max_features=config["max_features"],
        random_state=42,
    )

    # Train the model
    rf.fit(X_train, y_train)

    # Evaluate on test set
    predictions = rf.predict(X_test)

    # Calculate accuracy using cuML's metric function
    score = accuracy_score(y_test, predictions)

    # Report metrics back to Ray Tune
    return {"accuracy": score}
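cuML’s RandomForestClassifier intentionally mirrors scikit-learn’s estimator API, so the logic inside train_rf can be smoke-tested on a CPU-only machine by swapping in scikit-learn’s implementation. The sketch below (with a synthetic dataset, not the airline data) is purely illustrative and is not part of the notebook:

```python
# CPU-only smoke test of the train_rf logic, using scikit-learn's
# RandomForestClassifier and accuracy_score as stand-ins for cuML's.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Small synthetic binary classification problem
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# One hyperparameter combination, shaped like a Ray Tune `config` dict
config = {"n_estimators": 50, "max_depth": 20, "max_features": 0.5}
rf = RandomForestClassifier(
    n_estimators=config["n_estimators"],
    max_depth=config["max_depth"],
    max_features=config["max_features"],
    random_state=42,
)
rf.fit(X_train, y_train)
score = accuracy_score(y_test, rf.predict(X_test))

# Same shape as what train_rf reports back to Ray Tune
print({"accuracy": score})
```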
Ray Tune Hyperparameter Search#
Now we’ll set up Ray Tune to search for optimal hyperparameters. Ray Tune will run multiple trials in parallel, each testing different combinations of hyperparameters. Each trial will train a cuML Random Forest model on a GPU and evaluate its performance.
Important: Modify the following according to your setup:
ray.init() parameters: adjust num_cpus and num_gpus based on your available resources if you are not using the Brev instance indicated.
storage_path in RunConfig: set a valid local path where Ray Tune results will be saved.
resources in tune.with_resources(): configure the CPU and GPU allocation per trial.
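A quick back-of-the-envelope check (a sketch, using the Brev instance numbers from this notebook) shows how the grid size and the per-trial resources determine how many trials run at once:

```python
# Trial count and parallelism for this run, computed by hand.
from itertools import product

# tune.grid_search over three 2-value lists expands to the full
# Cartesian product: one trial per combination.
grid = {
    "n_estimators": [50, 100],
    "max_depth": [20, 40],
    "max_features": [0.5, 1.0],
}
total_trials = len(list(product(*grid.values())))
print(total_trials)  # 8

# With ray.init(num_cpus=8, num_gpus=4) and 2 CPUs + 1 GPU per trial,
# Ray Tune schedules as many trials in parallel as fit in BOTH budgets.
max_concurrent = min(8 // 2, 4 // 1)
print(max_concurrent)  # 4, so the 8 trials run in roughly 2 waves
```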
# Initialize Ray with resource constraints
# Note: If you see a FutureWarning about RAY_ACCEL_ENV_VAR_OVERRIDE_ON_ZERO, that's okay -
# it's just informing you about future Ray behavior changes and doesn't affect functionality.
ray.init(num_cpus=8, num_gpus=4)
# use airlines_small.parquet if you downloaded the small dataset
df = pd.read_parquet("data/airlines.parquet")
# Define the target label
label = "ArrDelayBinary"
# Prepare features and target
X = df.drop(columns=[label]) # All columns except the target
y = df[label] # Just the target column
# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
# Store data in a dictionary to pass to training function
data_dict = {"X_train": X_train, "X_test": X_test, "y_train": y_train, "y_test": y_test}
Access Ray Dashboard: The dashboard is available at http://127.0.0.1:8265 on the Brev instance. To access it from your local machine, run in your local terminal:
If you haven’t already, make sure to run brev login in your terminal before executing the port-forward command below.
brev port-forward <your-instance-name> -p 8265:8265
Note
Before running the code below, make sure to modify the storage_path in the RunConfig to your desired location where Ray Tune results will be saved.
import os
# Define hyperparameter search space
search_space = {
    "n_estimators": tune.grid_search([50, 100]),
    "max_depth": tune.grid_search([20, 40]),
    "max_features": tune.grid_search([0.5, 1.0]),
}
# Using the default search algorithm, which enumerates every grid combination
tune_config = TuneConfig(
    metric="accuracy",
    mode="max",
)
run_config = RunConfig(
    name="rf_hyperparameter_tuning_real_data",
    storage_path=os.path.abspath("output/ray_results"),
)
# Create a trainable with resources
trainable = tune.with_resources(
    tune.with_parameters(train_rf, data_dict=data_dict),
    resources={"cpu": 2, "gpu": 1},  # Each trial uses 1 GPU and 2 CPUs
)
# Run the hyperparameter tuning
tuner = tune.Tuner(
    trainable,
    param_space=search_space,
    tune_config=tune_config,
    run_config=run_config,
)
results = tuner.fit()
# Get the best result
best_result = results.get_best_result(metric="accuracy", mode="max")
Dashboard action#
While the hyperparameter tuning is running, you should see activity on the nvdashboard in the notebook:

and if you check the Ray dashboard, on the cluster tab you’ll see:

When it completes, you will notice that all trial statuses are marked as TERMINATED. For the example above, the whole HPO run took ~13 min.

Note
When running this notebook with a Conda environment, you may see messages like the following appear in your output while Ray hyperparameter trials are running:
(raylet) I0000 00:00:1770938640.198717 34590 chttp2_transport.cc:1182] ipv4:10.128.0.35:33125: Got goaway [2]
err=UNAVAILABLE:GOAWAY received; Error code: 2; Debug Text: Cancelling all calls {grpc_status:14, http2_error:2,
created_time:"2026-02-12T23:24:00.198711281+00:00"}
These messages can safely be ignored; they do not affect the end result of the notebook or the hyperparameter tuning process.
# Display results
print("Best hyperparameters found:")
print(f" n_estimators: {best_result.config['n_estimators']}")
print(f" max_depth: {best_result.config['max_depth']}")
print(f" max_features: {best_result.config['max_features']}")
print(f"Best test accuracy: {best_result.metrics['accuracy']:.4f}")
Best hyperparameters found:
n_estimators: 100
max_depth: 40
max_features: 0.5
Best test accuracy: 0.8855
Clean up Ray results directory#
import os
import shutil
ray_results_path = "output/ray_results"
if os.path.exists(ray_results_path):
    print(f"Cleaning Ray results directory: {ray_results_path}")
    shutil.rmtree(ray_results_path)
# Shutdown the Ray cluster
ray.shutdown()