RAPIDS Installation Guide

RAPIDS has several installation methods, depending on the preferred environment and version. New users should review the system and environment prerequisites first.

Install RAPIDS with Release Selector

System Requirements

Environment Setup

Next Steps


Install RAPIDS

Use the selector tool below to choose your preferred method, packages, and environment for installing RAPIDS. Combinations that are not possible are dimmed automatically.



Installation Troubleshooting

Conda Issues

The dependency solver takes too long or never resolves:
Update conda to use the new libmamba solver or use Mamba directly.


Docker Issues

Jupyter Lab is not accessible:
If the server has not started or needs to be restarted or stopped, use the included start/stop scripts. Note that this behavior may change in future releases.


pip Issues

InfiniBand is not supported yet.
These packages are not compatible with TensorFlow pip packages. Please use the NGC containers or conda packages instead.
If you experience a “Failed to import CuPy” error, please uninstall any existing versions of cupy and install cupy-cuda11x. For example:

pip uninstall cupy-cuda115; pip install cupy-cuda11x


The following error message indicates a problem with your environment:

ERROR: Could not find a version that satisfies the requirement cudf-cu11 (from versions: 0.0.1, 22.10.0)
ERROR: No matching distribution found for cudf-cu11

Check the suggestions below for possible resolutions:

  • The pip index has moved from the initial experimental release! Ensure the correct index is used: --extra-index-url=https://pypi.nvidia.com
  • Only Python versions 3.8, 3.9, or 3.10 are supported
  • RAPIDS pip packages require a recent version of pip that supports PEP 600. Some users may need to update pip: pip install -U pip
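To check which interpreter version pip will install against (the supported set listed above is the authoritative one):

```shell
# Print the interpreter's major.minor version to compare against the
# supported list above (3.8, 3.9, or 3.10 for these wheels).
python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])'
```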


Dask / Jupyter / Tornado 6.2 dependency conflicts can occur. Install jupyter-client 7.3.4 (pip install jupyter-client==7.3.4) if the error below occurs:

    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behavior is the source of the following dependency conflicts.
    jupyter-client 7.4.2 requires tornado>=6.2, but you have tornado 6.1 which is incompatible.


WSL2 Issues

See the WSL2 setup troubleshooting section.


System Requirements

OS / GPU Driver / CUDA Versions

All provisioned systems need to be RAPIDS capable. Here’s what is required:

GPU: NVIDIA Pascal™ or better with compute capability 6.0+

OS: One of the following OS versions:

  • Ubuntu 20.04/22.04 or CentOS 7 / Rocky Linux 8 with gcc/g++ 9.0+
  • Windows 11 using a WSL2 specific install
  • RHEL 7/8 support is provided through CentOS 7 / Rocky Linux 8 builds/installs

CUDA & NVIDIA Drivers: One of the following supported versions:

Note: RAPIDS is tested with and officially supports the versions listed above. Newer CUDA and driver versions may also work with RAPIDS. See CUDA compatibility for details.
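A quick way to check the installed driver is nvidia-smi (guarded here so the snippet also exits cleanly on machines without an NVIDIA driver):

```shell
# Report the driver version and GPU name; the banner that plain `nvidia-smi`
# prints also shows the highest CUDA version the driver supports.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=driver_version,name --format=csv
else
    echo "nvidia-smi not found (no NVIDIA driver on PATH)"
fi
```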


System Recommendations

Aside from the system requirements, other considerations for best performance include:

  • SSD drive (NVMe preferred)
  • Approximately 2:1 ratio of system memory to total GPU memory (especially useful for Dask)
  • NVLink with 2 or more GPUs


Cloud Instance GPUs

If you do not have access to GPU hardware, several cloud service providers (CSPs) are RAPIDS enabled. Learn how to deploy RAPIDS on AWS, Azure, GCP, and IBM Cloud on our Cloud Deployment Page.

Several services also offer free, limited trials with GPU resources.


Environment Setup

For most installations, you will need a Conda or Docker environment installed for RAPIDS. Note that these examples are structured for installing on Ubuntu; please modify them appropriately for CentOS / Rocky Linux. Windows 11 has a WSL2-specific install.


Conda

RAPIDS works with several conda distributions. Below is a quick installation guide using Miniconda.

1. Download and Run the Install Script. Copy the command below to download and run the Miniconda install script:

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh

2. Customize Conda and Run the Install. Use the terminal window to finish the installation. Note that we recommend enabling conda init.

3. Start Conda. Open a new terminal window, which should now show Conda initialized.


Docker

RAPIDS requires both Docker CE v19.03+ and the nvidia-container-toolkit to be installed.

1. Download and Install. Copy the command below to download and install the latest Docker CE Edition:

curl https://get.docker.com | sh

2. Install Latest NVIDIA Docker. Select the appropriate supported distribution:

curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt-get update
sudo apt-get install nvidia-container-runtime

3. Start Docker. In a new terminal window, run:

sudo service docker stop
sudo service docker start

4a. Test NVIDIA Docker. In a terminal window run:

docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark

4b. Legacy Docker Users. Docker CE v18 & nvidia-docker2 users will need to replace docker run --gpus all with docker run --runtime=nvidia for compatibility.



JupyterLab. By default, JupyterLab runs on your host machine at port 8888.

Running Multi-Node / Multi-GPU (MNMG) Environment. To start the container in an MNMG environment:

docker run -t -d --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -v $PWD:/ws <container label>

The standard docker command may be sufficient, but the additional arguments ensure more stability. See the NCCL docs and UCX docs for more details on MNMG usage.

Start / Stop Jupyter Lab Notebooks. Either the standard single GPU or the modified MNMG Docker command above should auto-run a Jupyter Lab Notebook server. If it does not, or a restart is needed, run the following command within the Docker container to launch the notebook server:

bash /rapids/utils/start-jupyter.sh

If, for whatever reason, you need to shut down the Jupyter Lab server, use:

bash /rapids/utils/stop-jupyter.sh

Custom Datasets. See the RAPIDS Container README for more information about using custom datasets. Docker Hub and NVIDIA GPU Cloud host RAPIDS containers with a full list of available tags.


pip

Beginning with release 23.04, the cuDF, dask-cuDF, cuML, cuGraph, RMM, and RAFT CUDA 11 pip packages are available on the NVIDIA Index.

pip Additional Prerequisites

  • x86_64 wheels require glibc >= 2.17.
  • ARM architecture (aarch64) wheels require glibc >= 2.31 (only ARM Server Base System Architecture is supported).
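The installed glibc version can be checked with ldd, which ships with glibc and reports its version:

```shell
# Print the glibc version; compare against the minimums above
# (>= 2.17 for x86_64 wheels, >= 2.31 for aarch64 wheels).
ldd --version | head -n 1
```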


Windows WSL2

Windows users can now tap into GPU accelerated data science on their local machines using RAPIDS on Windows Subsystem for Linux 2. WSL2 is a Windows feature that enables users to run native Linux command line tools directly on Windows. Using this feature does not require a dual boot environment, removing complexity and saving you time.

WSL2 Additional Prerequisites

OS: Windows 11 with Ubuntu 22.04 instance for WSL2.
WSL Version: WSL2 (WSL1 not supported).
GPU: GPUs with Compute Capability 7.0 or higher (16GB+ GPU RAM is recommended).

Limitations

Only a single GPU is supported.
GPU Direct Storage is not supported.

Troubleshooting

When installing with conda, if an HTTP 000 connection error occurs when accessing the repository data, run wsl --shutdown and then restart the WSL instance.

When installing with Docker Desktop, if the container pull command is successful, but the run command hangs indefinitely, ensure you’re on Docker Desktop >= 4.18.


WSL2 Conda Install (Preferred Method)

  1. Install WSL2 and the Ubuntu 22.04 package using Microsoft’s instructions.
  2. Install the latest NVIDIA Drivers on the Windows host.
  3. Log in to the WSL2 Linux instance.
  4. Install Conda in the WSL2 Linux Instance using our Conda instructions.
  5. Install RAPIDS via Conda, using the RAPIDS Release Selector.
  6. Run this code to check that the RAPIDS installation is working:
     import cudf
     print(cudf.Series([1, 2, 3]))
    


WSL2 Docker Desktop Install

  1. Install WSL2 and the Ubuntu 22.04 package using Microsoft’s instructions.
  2. Install the latest NVIDIA Drivers on the Windows host.
  3. Install the latest Docker Desktop for Windows.
  4. Log in to the WSL2 Linux instance.
  5. Generate and run the RAPIDS docker pull and docker run commands based on your desired configuration using the RAPIDS Release Selector.
  6. Inside the Docker instance, run this code to check that the RAPIDS installation is working:
     import cudf
     print(cudf.Series([1, 2, 3]))
    


WSL2 pip Install

  1. Install WSL2 and the Ubuntu 22.04 package using Microsoft’s instructions.
  2. Install the latest NVIDIA Drivers on the Windows host.
  3. Log in to the WSL2 Linux instance.
  4. Follow this helpful developer guide and then install the CUDA Toolkit without drivers into the WSL2 instance. It’s important to execute sudo apt-get -y install cuda-toolkit instead of sudo apt-get -y install cuda to avoid installing a GPU driver into WSL2. The Windows host system provides the driver to WSL2.
  5. Install RAPIDS pip packages on the WSL2 Linux Instance using the release selector commands.
  6. Run this code to check that the RAPIDS installation is working:
     import cudf
     print(cudf.Series([1, 2, 3]))
    


Build from Source

To build from source, check each RAPIDS GitHub README, such as cuDF's source environment setup and build instructions. Further links are provided in the selector tool. If additional help is needed, reach out on our Slack Channel.


Next Steps

After installing the RAPIDS libraries, the best place to get started is our User Guide. Our RAPIDS.ai home page also provides a great deal of information, as do our Blog Page and the NVIDIA Developer Blog. We are also always available on our RAPIDS GoAi Slack Channel.