L4 GPUs on a Google Cloud Platform (GCP)#

L4 GPUs are a more energy- and compute-efficient option than T4 GPUs, and they are generally available on GCP for running your RAPIDS workflows.

Compute Engine Instance#

Create the Virtual Machine#

To create a VM instance with an L4 GPU to run RAPIDS:

  1. Open Compute Engine.

  2. Select Create Instance.

  3. Under the Machine configuration section, select GPUs and then select NVIDIA L4 in the GPU type dropdown.

  4. Under the Boot Disk section, click CHANGE and select Deep Learning on Linux in the Operating System dropdown.

  5. We also recommend increasing the default boot disk size to at least 100GB.

  6. Once you have customized other attributes of the instance, click CREATE.
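If you prefer the command line, the console steps above can be sketched with the `gcloud` CLI. The instance name, zone, and image family below are illustrative assumptions; check the Deep Learning VM image list in your project for a current image family.

```shell
# Hypothetical example: create a VM with one L4 GPU. G2 machine types
# include an L4 GPU by default. The instance name, zone, and image
# family are assumptions; adjust them for your project.
gcloud compute instances create rapids-vm \
    --zone=us-central1-a \
    --machine-type=g2-standard-4 \
    --image-family=common-gpu \
    --image-project=deeplearning-platform-release \
    --boot-disk-size=100GB \
    --maintenance-policy=TERMINATE
```

`--maintenance-policy=TERMINATE` is needed because GPU instances cannot be live-migrated during host maintenance.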

Allow network access#

To access Jupyter and Dask we need to set up firewall rules that open the relevant ports.

Create the firewall rule#

  1. Open VPC Network.

  2. Select Firewall and then Create firewall rule.

  3. Give the rule a name like rapids and ensure the network matches the one you selected for the VM.

  4. Add a tag like rapids which we will use to assign the rule to our VM.

  5. Set your source IP range. We recommend restricting this to your own IP address or your corporate network rather than 0.0.0.0/0, which would allow anyone to access your VM.

  6. Under Protocols and ports, allow TCP connections on ports 22, 8786, 8787, and 8888.
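The firewall rule above can also be created in one step from the CLI. The rule name matches the steps above; the source range shown is a placeholder from the documentation address block and should be replaced with your own IP range.

```shell
# Hypothetical example: create the firewall rule from the CLI.
# Replace 203.0.113.0/24 with your own IP range rather than 0.0.0.0/0.
gcloud compute firewall-rules create rapids \
    --network=default \
    --allow=tcp:22,tcp:8786,tcp:8787,tcp:8888 \
    --source-ranges=203.0.113.0/24 \
    --target-tags=rapids
```

Because `--target-tags=rapids` is set here, the rule only applies to VMs carrying that network tag.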

Assign it to the VM#

  1. Open Compute Engine.

  2. Select your VM and press Edit.

  3. Scroll down to Networking and add the rapids network tag you gave your firewall rule.

  4. Select Save.
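The same tagging step can be done from the CLI; the instance name and zone below are assumptions matching the earlier example.

```shell
# Attach the "rapids" network tag so the firewall rule applies to this VM.
# Instance name and zone are illustrative assumptions.
gcloud compute instances add-tags rapids-vm \
    --zone=us-central1-a \
    --tags=rapids
```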

Connect to the VM#

Next we need to connect to the VM.

  1. Open Compute Engine.

  2. Locate your VM and press the SSH button which will open a new browser tab with a terminal.
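If you have the `gcloud` CLI configured locally, you can also SSH in from your own terminal instead of the browser. The instance name and zone are assumptions matching the earlier example.

```shell
# Open an SSH session to the VM from a local terminal.
gcloud compute ssh rapids-vm --zone=us-central1-a
```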

Install CUDA and NVIDIA Container Toolkit#

Since GCP recommends CUDA 12 for L4 VMs, we will upgrade CUDA first.

  1. Install CUDA Toolkit 12 on your VM with the following commands, accepting the default prompts.

$ wget https://developer.download.nvidia.com/compute/cuda/12.1.1/local_installers/cuda_12.1.1_530.30.02_linux.run
$ sudo sh cuda_12.1.1_530.30.02_linux.run
  2. Install the NVIDIA Container Toolkit with the following commands.

$ sudo apt-get update
$ sudo apt-get install -y nvidia-container-toolkit
$ sudo nvidia-ctk runtime configure --runtime=docker
$ sudo systemctl restart docker
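To confirm both installs worked, run `nvidia-smi` on the host to check that the driver sees the L4, then run it inside a CUDA container to check that Docker is wired up to the GPU. The CUDA image tag below is an illustrative choice; any recent `nvidia/cuda` base image works.

```shell
# Check the driver on the host; the L4 GPU should be listed.
nvidia-smi

# Check that Docker can see the GPU via the container toolkit.
sudo docker run --rm --gpus all nvidia/cuda:12.1.1-base-ubuntu22.04 nvidia-smi
```

If both commands print the same GPU table, the container toolkit is configured correctly.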

Install RAPIDS#

There are several methods you can use to install RAPIDS; see the RAPIDS release selector for the full list.

For this example we are going to run the RAPIDS Docker container, so we need to know the name of the most recent container. On the release selector, choose Docker in the Method column.

Then copy the commands shown:

docker pull rapidsai/notebooks:24.12a-cuda12.5-py3.12
docker run --gpus all --rm -it \
    --shm-size=1g --ulimit memlock=-1 \
    -p 8888:8888 -p 8787:8787 -p 8786:8786 \
    rapidsai/notebooks:24.12a-cuda12.5-py3.12

Note

If you see a “docker socket permission denied” error while running these commands try closing and reconnecting your SSH window. This happens because your user was added to the docker group only after you signed in.

Test RAPIDS#

To access Jupyter, navigate to <VM ip>:8888 in the browser.
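If you restricted the firewall to your own IP, or prefer not to expose ports publicly at all, you can instead forward the ports over SSH and browse to localhost. The instance name and zone below are assumptions matching the earlier example.

```shell
# Forward Jupyter (8888) and the Dask dashboard (8787) to localhost.
# Arguments after "--" are passed straight to the underlying ssh command.
gcloud compute ssh rapids-vm --zone=us-central1-a -- \
    -L 8888:localhost:8888 -L 8787:localhost:8787
```

With the tunnel open, Jupyter is available at localhost:8888.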

In a Python notebook, check that you can import and use RAPIDS libraries like cudf.

In [1]: import cudf
In [2]: df = cudf.datasets.timeseries()
In [3]: df.head()
Out[3]:
                       id     name         x         y
timestamp
2000-01-01 00:00:00  1020    Kevin  0.091536  0.664482
2000-01-01 00:00:01   974    Frank  0.683788 -0.467281
2000-01-01 00:00:02  1000  Charlie  0.419740 -0.796866
2000-01-01 00:00:03  1019    Edith  0.488411  0.731661
2000-01-01 00:00:04   998    Quinn  0.651381 -0.525398

Open cudf/10min.ipynb and execute the cells to explore more of how cudf works.

When running a Dask cluster you can also visit <VM ip>:8787 to monitor the Dask cluster status.

Clean up#

Once you are finished, head back to Compute Engine, select the instance you created, and delete it.