Building RAPIDS containers from a custom base image#
This guide provides instructions to add RAPIDS and CUDA to your existing Docker images. This approach allows you to integrate RAPIDS libraries into containers that must start from a specific base image, such as application-specific containers.
The CUDA installation steps are sourced from the official NVIDIA CUDA Container Images Repository.
Warning
We strongly recommend that you use the official CUDA container images published by NVIDIA. This guide is intended for those extreme situations where you cannot use the CUDA images as the base and need to manually install CUDA components on your containers. This approach introduces significant complexity and potential issues that can be difficult to debug. We cannot provide support for users beyond what is on this page.
If you have the flexibility to choose your base image, see the Custom RAPIDS Docker Guide which starts from NVIDIA’s official CUDA images for a simpler setup.
Overview#
If you cannot use NVIDIA’s CUDA container images, you will need to manually install CUDA components in your existing Docker image. The components you need depend on the package manager used to install RAPIDS:
- For conda installations: You need the components from the NVIDIA base CUDA images
- For pip installations: You need the components from the NVIDIA runtime CUDA images
Understanding CUDA Image Components#
NVIDIA provides three tiers of CUDA container images, each building on the previous:
Base Components (Required for RAPIDS on conda)#
The base images provide the minimal CUDA runtime environment:
| Component | Package Name | Purpose |
|---|---|---|
| CUDA Runtime | cuda-cudart | Core CUDA runtime library (libcudart) |
| CUDA Compatibility | cuda-compat | Forward compatibility libraries for older drivers |
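For reference, installing these components with apt on an Ubuntu base looks like the following excerpt from the full conda example later on this page (package names are per CUDA minor version, here 12.9):

```dockerfile
# Base CUDA components for CUDA 12.9 (pin NV_CUDA_CUDART_VERSION to your release)
RUN apt-get update && apt-get install -y --no-install-recommends \
    cuda-cudart-12-9=${NV_CUDA_CUDART_VERSION} \
    cuda-compat-12-9 && \
    rm -rf /var/lib/apt/lists/*
```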
Runtime Components (Required for RAPIDS on pip)#
The runtime images include all the base components plus additional CUDA packages such as:
| Component | Package Name | Purpose |
|---|---|---|
| All Base Components | (see above) | Core CUDA runtime |
| CUDA Libraries | cuda-libraries | Comprehensive CUDA library collection |
| CUDA Math Libraries | libcublas | Basic Linear Algebra Subprograms (BLAS) |
| NVIDIA Performance Primitives | libnpp | Image, signal and video processing primitives |
| Sparse Matrix Library | libcusparse | Sparse matrix operations |
| Profiling Tools | cuda-nvtx | NVIDIA Tools Extension for profiling |
| Communication Library | libnccl2 | Multi-GPU and multi-node collective communications |
Development Components (Optional)#
The devel images add development tools to runtime images such as:
- CUDA development headers and static libraries
- CUDA compiler (nvcc)
- Debugger and profiler tools
- Additional development utilities
Note
Development components are typically not needed for RAPIDS usage unless you plan to compile CUDA code within your container. For the complete and up-to-date list of runtime and devel components, see the respective Dockerfiles in the NVIDIA CUDA Container Images Repository.
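If you do need to compile CUDA code inside the container, the devel packages can be layered on top of the runtime components. A minimal sketch, assuming the CUDA apt repository is already configured as in the examples below; the package name here is an assumption, so confirm it against the devel Dockerfile for your CUDA version and distribution:

```dockerfile
# Assumed devel package for the CUDA 12.9 compiler; verify against the devel Dockerfile
RUN apt-get update && apt-get install -y --no-install-recommends \
    cuda-nvcc-12-9 && \
    rm -rf /var/lib/apt/lists/*
```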
Getting the Right Components for Your Setup#
The NVIDIA CUDA Container Images repository contains a dist/
directory with pre-built Dockerfiles organized by CUDA version, Linux distribution, and container type (base, runtime, devel).
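For example, the Dockerfiles referenced in this guide sit at paths like the following (directory names are illustrative; browse the repository for your exact CUDA version and distribution):

```text
dist/12.9.1/ubuntu2404/base/Dockerfile
dist/12.9.1/ubuntu2404/runtime/Dockerfile
dist/12.9.1/rockylinux9/runtime/Dockerfile
```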
Supported Distributions#
CUDA components are available for most popular Linux distributions. For the complete and current list of supported distributions for your desired version, check the repository linked above.
Key Differences by Distribution Type#
Ubuntu/Debian distributions:

- Use apt-get install commands
- Repository setup uses GPG keys and .list files

RHEL/CentOS/Rocky Linux distributions:

- Use yum install or dnf install commands
- Repository setup uses .repo configuration files
- Include repository files: cuda.repo-x86_64, cuda.repo-arm64 (see the sketch after this list)
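As a rough sketch of the RHEL-style setup (the file and package names follow the patterns above but are assumptions; copy the exact contents from the Dockerfile for your CUDA version and distribution):

```dockerfile
# Copy the repository definition shipped alongside the RHEL-based Dockerfiles
COPY cuda.repo-x86_64 /etc/yum.repos.d/cuda.repo

# Install the base CUDA components with dnf (pin package versions in practice)
RUN dnf install -y cuda-cudart-12-9 cuda-compat-12-9 && \
    dnf clean all
```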
Installing CUDA components on your container#
1. Navigate to dist/{cuda_version}/{your_os}/base/ or runtime/ in the repository
2. Open the Dockerfile for your target distribution
3. Copy all ENV variables for package versioning and NVIDIA Container Toolkit support (see the Essential Environment Variables section below)
4. Copy the RUN commands for installing the packages
5. If you are using the runtime components, make sure to copy the ENV and RUN commands from the base Dockerfile as well
6. For RHEL-based systems, also copy any .repo configuration files needed
Note
Package versions change between CUDA releases. Always check the specific Dockerfile for your desired CUDA version and distribution to get the correct versions.
Installing RAPIDS libraries on your container#
Refer to the Docker Templates in the Custom RAPIDS Docker Guide to configure your RAPIDS installation, adding the conda or pip installation commands after the CUDA components are installed.
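As a rough sketch, the final installation step takes one of the following forms; the RAPIDS release, Python version, and CUDA version shown here are illustrative, so take the exact packages and versions from the Custom RAPIDS Docker Guide:

```dockerfile
# conda route (after the base CUDA components are installed)
RUN conda install -y -c rapidsai -c conda-forge -c nvidia \
        rapids=25.08 python=3.12 cuda-version=12.9 && \
    conda clean --all --yes

# pip route (after the runtime CUDA components are installed)
RUN pip install --no-cache-dir --extra-index-url=https://pypi.nvidia.com \
        "cudf-cu12==25.8.*" "dask-cudf-cu12==25.8.*"
```

The complete examples below install from an env.yaml or requirements.txt file instead, which keeps the package list out of the Dockerfile.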
Essential Environment Variables#
These environment variables are required when building CUDA containers, as they control GPU access and CUDA functionality through the NVIDIA Container Toolkit:
| Variable | Purpose |
|---|---|
| NVIDIA_VISIBLE_DEVICES | Specifies which GPUs are visible |
| NVIDIA_DRIVER_CAPABILITIES | Required driver capabilities |
| NVIDIA_REQUIRE_CUDA | Driver version constraints |
| PATH | Include CUDA binaries |
| LD_LIBRARY_PATH | Include CUDA libraries |
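In Dockerfile form, these are the values used by the examples below; NVIDIA_REQUIRE_CUDA is a long driver-constraint string copied verbatim from the upstream Dockerfile for your CUDA version:

```dockerfile
# NVIDIA Container Toolkit configuration
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
# Make the CUDA binaries and libraries discoverable
ENV PATH=/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64
```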
Complete Integration Examples#
Here are complete examples showing how to build a RAPIDS container with CUDA 12.9.1 components on an Ubuntu 24.04 base image:
Tip
These examples must be built with Docker v28+.
RAPIDS with conda (Base Components)
Create an env.yaml
file alongside your Dockerfile with your desired RAPIDS packages following the configuration described in the Custom RAPIDS Docker Guide. Set the TARGETARCH
build argument to match your target architecture (amd64
for x86_64 or arm64
for ARM processors).
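A minimal env.yaml sketch; the RAPIDS release, Python version, and CUDA version are illustrative, so use the versions recommended by the Custom RAPIDS Docker Guide:

```yaml
# env.yaml -- example only; adjust packages and versions to your needs
channels:
  - rapidsai
  - conda-forge
  - nvidia
dependencies:
  - rapids=25.08
  - python=3.12
  - cuda-version=12.9
```

The Dockerfile itself: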
FROM ubuntu:24.04
# Build arguments
ARG TARGETARCH=amd64
# Architecture detection and setup
ENV NVARCH=${TARGETARCH/amd64/x86_64}
ENV NVARCH=${NVARCH/arm64/sbsa}
SHELL ["/bin/bash", "-euo", "pipefail", "-c"]
# NVIDIA Repository Setup (Ubuntu 24.04)
RUN apt-get update && apt-get install -y --no-install-recommends \
gnupg2 curl ca-certificates && \
curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/${NVARCH}/3bf863cc.pub | apt-key add - && \
echo "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/${NVARCH} /" > /etc/apt/sources.list.d/cuda.list && \
apt-get purge --autoremove -y curl && \
rm -rf /var/lib/apt/lists/*
# CUDA Base Package Versions (from CUDA 12.9.1 base image)
ENV NV_CUDA_CUDART_VERSION=12.9.79-1
ENV CUDA_VERSION=12.9.1
# NVIDIA driver constraints
ENV NVIDIA_REQUIRE_CUDA="cuda>=12.9 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551 brand=unknown,driver>=560,driver<561 brand=grid,driver>=560,driver<561 brand=tesla,driver>=560,driver<561 brand=nvidia,driver>=560,driver<561 brand=quadro,driver>=560,driver<561 brand=quadrortx,driver>=560,driver<561 brand=nvidiartx,driver>=560,driver<561 brand=vapps,driver>=560,driver<561 brand=vpc,driver>=560,driver<561 brand=vcs,driver>=560,driver<561 brand=vws,driver>=560,driver<561 brand=cloudgaming,driver>=560,driver<561 brand=unknown,driver>=565,driver<566 brand=grid,driver>=565,driver<566 brand=tesla,driver>=565,driver<566 brand=nvidia,driver>=565,driver<566 brand=quadro,driver>=565,driver<566 brand=quadrortx,driver>=565,driver<566 brand=nvidiartx,driver>=565,driver<566 brand=vapps,driver>=565,driver<566 brand=vpc,driver>=565,driver<566 brand=vcs,driver>=565,driver<566 brand=vws,driver>=565,driver<566 brand=cloudgaming,driver>=565,driver<566 brand=unknown,driver>=570,driver<571 brand=grid,driver>=570,driver<571 brand=tesla,driver>=570,driver<571 brand=nvidia,driver>=570,driver<571 brand=quadro,driver>=570,driver<571 brand=quadrortx,driver>=570,driver<571 brand=nvidiartx,driver>=570,driver<571 brand=vapps,driver>=570,driver<571 brand=vpc,driver>=570,driver<571 brand=vcs,driver>=570,driver<571 brand=vws,driver>=570,driver<571 brand=cloudgaming,driver>=570,driver<571"
# Install Base CUDA Components (from base image)
RUN apt-get update && apt-get install -y --no-install-recommends \
cuda-cudart-12-9=${NV_CUDA_CUDART_VERSION} \
cuda-compat-12-9 && \
rm -rf /var/lib/apt/lists/*
# CUDA Environment Configuration
ENV PATH=/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64
# NVIDIA Container Runtime Configuration
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
# Required for nvidia-docker v1
RUN echo "/usr/local/cuda/lib64" >> /etc/ld.so.conf.d/nvidia.conf
# Install system dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends \
wget \
curl \
git \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
# Install Miniforge (map the Docker TARGETARCH to the Miniforge installer architecture)
RUN MINIFORGE_ARCH="$([ "${TARGETARCH}" = "arm64" ] && echo aarch64 || echo x86_64)" && \
    wget -qO /tmp/miniforge.sh "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-${MINIFORGE_ARCH}.sh" && \
    bash /tmp/miniforge.sh -b -p /opt/conda && \
    rm /tmp/miniforge.sh && \
    /opt/conda/bin/conda clean --all --yes
# Add conda to PATH and activate base environment
ENV PATH="/opt/conda/bin:${PATH}"
ENV CONDA_DEFAULT_ENV=base
ENV CONDA_PREFIX=/opt/conda
# Create conda group and rapids user
RUN groupadd -g 1001 conda && \
useradd -rm -d /home/rapids -s /bin/bash -g conda -u 1001 rapids && \
chown -R rapids:conda /opt/conda
USER rapids
WORKDIR /home/rapids
# Copy the environment file template
COPY --chmod=644 env.yaml /home/rapids/env.yaml
# Update the base environment with user's packages from env.yaml
# Note: The -n base flag ensures packages are installed to the base environment
# overriding any 'name:' specified in the env.yaml file
RUN /opt/conda/bin/conda env update -n base -f env.yaml && \
/opt/conda/bin/conda clean --all --yes
CMD ["bash"]
RAPIDS with pip (Runtime Components)
Create a requirements.txt
file alongside your Dockerfile with your desired RAPIDS packages following the configuration described in the Custom RAPIDS Docker Guide. Set the TARGETARCH
build argument to match your target architecture (amd64
for x86_64 or arm64
for ARM processors). You can also customize the Python version by changing the PYTHON_VER
build argument.
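A minimal requirements.txt sketch; the package names and versions are illustrative, so use the packages recommended by the Custom RAPIDS Docker Guide:

```text
--extra-index-url=https://pypi.nvidia.com
cudf-cu12==25.8.*
dask-cudf-cu12==25.8.*
cuml-cu12==25.8.*
```

The Dockerfile itself: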
FROM ubuntu:24.04
# Build arguments
ARG PYTHON_VER=3.12
ARG TARGETARCH=amd64
# Architecture detection and setup
ENV NVARCH=${TARGETARCH/amd64/x86_64}
ENV NVARCH=${NVARCH/arm64/sbsa}
SHELL ["/bin/bash", "-euo", "pipefail", "-c"]
# NVIDIA Repository Setup (Ubuntu 24.04)
RUN apt-get update && apt-get install -y --no-install-recommends \
gnupg2 curl ca-certificates && \
curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/${NVARCH}/3bf863cc.pub | apt-key add - && \
echo "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/${NVARCH} /" > /etc/apt/sources.list.d/cuda.list && \
apt-get purge --autoremove -y curl && \
rm -rf /var/lib/apt/lists/*
# CUDA Package Versions (from CUDA 12.9.1 base and runtime images)
ENV NV_CUDA_CUDART_VERSION=12.9.79-1
ENV NV_CUDA_LIB_VERSION=12.9.1-1
ENV NV_NVTX_VERSION=12.9.79-1
ENV NV_LIBNPP_VERSION=12.4.1.87-1
ENV NV_LIBNPP_PACKAGE=libnpp-12-9=${NV_LIBNPP_VERSION}
ENV NV_LIBCUSPARSE_VERSION=12.5.10.65-1
ENV NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-9
ENV NV_LIBCUBLAS_VERSION=12.9.1.4-1
ENV NV_LIBCUBLAS_PACKAGE=${NV_LIBCUBLAS_PACKAGE_NAME}=${NV_LIBCUBLAS_VERSION}
ENV NV_LIBNCCL_PACKAGE_NAME=libnccl2
ENV NV_LIBNCCL_PACKAGE_VERSION=2.27.3-1
ENV NCCL_VERSION=2.27.3-1
ENV NV_LIBNCCL_PACKAGE=${NV_LIBNCCL_PACKAGE_NAME}=${NV_LIBNCCL_PACKAGE_VERSION}+cuda12.9
ENV CUDA_VERSION=12.9.1
# NVIDIA driver constraints
ENV NVIDIA_REQUIRE_CUDA="cuda>=12.9 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551 brand=unknown,driver>=560,driver<561 brand=grid,driver>=560,driver<561 brand=tesla,driver>=560,driver<561 brand=nvidia,driver>=560,driver<561 brand=quadro,driver>=560,driver<561 brand=quadrortx,driver>=560,driver<561 brand=nvidiartx,driver>=560,driver<561 brand=vapps,driver>=560,driver<561 brand=vpc,driver>=560,driver<561 brand=vcs,driver>=560,driver<561 brand=vws,driver>=560,driver<561 brand=cloudgaming,driver>=560,driver<561 brand=unknown,driver>=565,driver<566 brand=grid,driver>=565,driver<566 brand=tesla,driver>=565,driver<566 brand=nvidia,driver>=565,driver<566 brand=quadro,driver>=565,driver<566 brand=quadrortx,driver>=565,driver<566 brand=nvidiartx,driver>=565,driver<566 brand=vapps,driver>=565,driver<566 brand=vpc,driver>=565,driver<566 brand=vcs,driver>=565,driver<566 brand=vws,driver>=565,driver<566 brand=cloudgaming,driver>=565,driver<566 brand=unknown,driver>=570,driver<571 brand=grid,driver>=570,driver<571 brand=tesla,driver>=570,driver<571 brand=nvidia,driver>=570,driver<571 brand=quadro,driver>=570,driver<571 brand=quadrortx,driver>=570,driver<571 brand=nvidiartx,driver>=570,driver<571 brand=vapps,driver>=570,driver<571 brand=vpc,driver>=570,driver<571 brand=vcs,driver>=570,driver<571 brand=vws,driver>=570,driver<571 brand=cloudgaming,driver>=570,driver<571"
# Install Base CUDA Components
RUN apt-get update && apt-get install -y --no-install-recommends \
cuda-cudart-12-9=${NV_CUDA_CUDART_VERSION} \
cuda-compat-12-9 && \
rm -rf /var/lib/apt/lists/*
# Install Runtime CUDA Components
RUN apt-get update && apt-get install -y --no-install-recommends \
cuda-libraries-12-9=${NV_CUDA_LIB_VERSION} \
${NV_LIBNPP_PACKAGE} \
cuda-nvtx-12-9=${NV_NVTX_VERSION} \
libcusparse-12-9=${NV_LIBCUSPARSE_VERSION} \
${NV_LIBCUBLAS_PACKAGE} \
${NV_LIBNCCL_PACKAGE} && \
rm -rf /var/lib/apt/lists/*
# Keep apt from auto upgrading the cublas and nccl packages
RUN apt-mark hold ${NV_LIBCUBLAS_PACKAGE_NAME} ${NV_LIBNCCL_PACKAGE_NAME}
# CUDA Environment Configuration
ENV PATH=/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64
# NVIDIA Container Runtime Configuration
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
# Required for nvidia-docker v1
RUN echo "/usr/local/cuda/lib64" >> /etc/ld.so.conf.d/nvidia.conf
# Install system dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends \
python${PYTHON_VER} \
python${PYTHON_VER}-venv \
python3-pip \
wget \
curl \
git \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
# Create symbolic links for python and pip
RUN ln -sf /usr/bin/python${PYTHON_VER} /usr/bin/python && \
ln -sf /usr/bin/python${PYTHON_VER} /usr/bin/python3
# Create rapids user
RUN groupadd -g 1001 rapids && \
useradd -rm -d /home/rapids -s /bin/bash -g rapids -u 1001 rapids
USER rapids
WORKDIR /home/rapids
# Create and activate virtual environment
RUN python -m venv /home/rapids/venv
ENV PATH="/home/rapids/venv/bin:$PATH"
ENV VIRTUAL_ENV="/home/rapids/venv"
# Upgrade pip
RUN pip install --no-cache-dir --upgrade pip setuptools wheel
# Copy the requirements file
COPY --chmod=644 requirements.txt /home/rapids/requirements.txt
# Install all packages
RUN pip install --no-cache-dir -r requirements.txt
CMD ["bash"]
Verifying Your Installation#
After starting your container, you can quickly test that RAPIDS is installed and running correctly. The container launches directly into a bash
shell where you can install the RAPIDS CLI command line utility to verify your installation.
Run the Container Interactively
These commands build the container images and then start them with GPU access, dropping you directly into a bash shell.

# Build the conda-based container (requires env.yaml in build context)
docker build -f conda-rapids.Dockerfile -t rapids-conda-cuda .

# Build the pip-based container (requires requirements.txt in build context)
docker build -f pip-rapids.Dockerfile -t rapids-pip-cuda .

# Run conda container with GPU access
docker run --gpus all -it rapids-conda-cuda

# Run pip container with GPU access
docker run --gpus all -it rapids-pip-cuda
Install RAPIDS CLI
Inside the containers, install the RAPIDS CLI:
pip install rapids-cli
Test the installation using the Doctor subcommand
Once RAPIDS CLI is installed, you can use the rapids doctor subcommand to perform health checks:

rapids doctor
Expected Output
If your installation is successful, you will see output similar to this:
🧑‍⚕️ Performing REQUIRED health check for RAPIDS
Running checks
All checks passed!
For more on using RAPIDS with Docker, see the Custom RAPIDS Docker Guide and the RAPIDS installation guide.