docs: 📝 Update Docker commands to use NVIDIA runtime for GPU support (#22052)

Signed-off-by: Onuralp SEZER <onuralp@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
This commit is contained in:
Onuralp SEZER 2025-09-12 13:07:50 +03:00 committed by GitHub
parent 6e42c8a66c
commit 9394adfca4
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
6 changed files with 98 additions and 32 deletions


@@ -98,8 +98,8 @@ Run the image:
 ```bash
 # Run the Ultralytics image in a container with GPU support
-sudo docker run -it --ipc=host --gpus all $t # all GPUs
-sudo docker run -it --ipc=host --gpus '"device=2,3"' $t # specify GPUs
+sudo docker run -it --ipc=host --runtime=nvidia --gpus all $t # all GPUs
+sudo docker run -it --ipc=host --runtime=nvidia --gpus '"device=2,3"' $t # specify GPUs
 ```
 ## Speeding Up Installation with Libmamba
@@ -170,7 +170,8 @@ Using Ultralytics Docker images ensures a consistent and reproducible environmen
 ```bash
 sudo docker pull ultralytics/ultralytics:latest-conda
-sudo docker run -it --ipc=host --gpus all ultralytics/ultralytics:latest-conda
+sudo docker run -it --ipc=host --runtime=nvidia --gpus all ultralytics/ultralytics:latest-conda # all GPUs
+sudo docker run -it --ipc=host --runtime=nvidia --gpus '"device=2,3"' ultralytics/ultralytics:latest-conda # specify GPUs
 ```
 This approach is ideal for deploying applications in production or running complex workflows without manual configuration. Learn more about [Ultralytics Conda Docker Image](../quickstart.md).


@@ -308,7 +308,7 @@ Using Ultralytics Docker images ensures a consistent environment across differen
 First, ensure that the [NVIDIA Container Toolkit](#installing-nvidia-container-toolkit) is installed and configured. Then, use the following command to run Ultralytics YOLO with GPU support:
 ```bash
-sudo docker run -it --ipc=host --runtime=nvidia --gpus all ultralytics/ultralytics:latest
+sudo docker run -it --ipc=host --runtime=nvidia --gpus all ultralytics/ultralytics:latest # all GPUs
 ```
 This command sets up a Docker container with GPU access. For additional details, see the Docker Quickstart Guide.


@@ -161,7 +161,7 @@ subprocess.call(f"docker pull {tag}", shell=True)
 # Run the Triton server and capture the container ID
 container_id = (
     subprocess.check_output(
-        f"docker run -d --rm --gpus 0 -v {triton_repo_path}:/models -p 8000:8000 {tag} tritonserver --model-repository=/models",
+        f"docker run -d --rm --runtime=nvidia --gpus 0 -v {triton_repo_path}:/models -p 8000:8000 {tag} tritonserver --model-repository=/models",
         shell=True,
     )
     .decode("utf-8")
@@ -277,7 +277,7 @@ Setting up [Ultralytics YOLO11](../models/yolo11.md) with [NVIDIA Triton Inferen
 container_id = (
     subprocess.check_output(
-        f"docker run -d --rm --gpus 0 -v {triton_repo_path}:/models -p 8000:8000 {tag} tritonserver --model-repository=/models",
+        f"docker run -d --rm --runtime=nvidia --gpus 0 -v {triton_repo_path}:/models -p 8000:8000 {tag} tritonserver --model-repository=/models",
         shell=True,
     )
     .decode("utf-8")
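The `docker run` string passed to `subprocess.check_output` above can be factored into a small helper, which makes it easy to verify (for example in a unit test) that the `--runtime=nvidia` flag this commit introduces is actually present before shelling out. This is an illustrative sketch: `triton_run_cmd` and the placeholder image tag are hypothetical, not part of the Ultralytics codebase.

```python
def triton_run_cmd(tag: str, triton_repo_path: str) -> str:
    """Assemble the Triton launch command used in the docs; builds a string only, executes nothing."""
    return (
        f"docker run -d --rm --runtime=nvidia --gpus 0 "
        f"-v {triton_repo_path}:/models -p 8000:8000 {tag} "
        f"tritonserver --model-repository=/models"
    )

# Example with placeholder values for the image tag and model repository path
cmd = triton_run_cmd("tritonserver-image:tag", "/tmp/triton_repo")
print(cmd)
```

One could then pass `cmd` to `subprocess.check_output(cmd, shell=True)` exactly as the documented snippet does.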


@@ -76,8 +76,8 @@ Ultralytics offers a variety of installation methods, including pip, conda, and
 sudo docker pull $t
 # Run the ultralytics image in a container with GPU support
-sudo docker run -it --ipc=host --gpus all $t # all GPUs
-sudo docker run -it --ipc=host --gpus '"device=2,3"' $t # specify GPUs
+sudo docker run -it --ipc=host --runtime=nvidia --gpus all $t # all GPUs
+sudo docker run -it --ipc=host --runtime=nvidia --gpus '"device=2,3"' $t # specify GPUs
 ```
 === "Git clone"
@@ -122,8 +122,8 @@ Ultralytics offers a variety of installation methods, including pip, conda, and
 sudo docker pull $t
 # Run the ultralytics image in a container with GPU support
-sudo docker run -it --ipc=host --gpus all $t # all GPUs
-sudo docker run -it --ipc=host --gpus '"device=2,3"' $t # specify GPUs
+sudo docker run -it --ipc=host --runtime=nvidia --gpus all $t # all GPUs
+sudo docker run -it --ipc=host --runtime=nvidia --gpus '"device=2,3"' $t # specify GPUs
 ```
 The above command initializes a Docker container with the latest `ultralytics` image. The `-it` flags assign a pseudo-TTY and keep stdin open, allowing interaction with the container. The `--ipc=host` flag sets the IPC (Inter-Process Communication) namespace to the host, which is essential for sharing memory between processes. The `--gpus all` flag enables access to all available GPUs inside the container, crucial for tasks requiring GPU computation.
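The flag combination described in that context paragraph can be sketched as a small command builder, which shows how the pieces (`-it`, `--ipc=host`, `--runtime=nvidia`, `--gpus`) compose into the final command line. `docker_run_cmd` is a hypothetical helper for illustration only, not an Ultralytics API.

```python
def docker_run_cmd(image: str, gpus: str = "all") -> str:
    """Assemble the docker run command described in the docs (string only, nothing is executed)."""
    flags = [
        "-it",               # pseudo-TTY + open stdin for interactive use
        "--ipc=host",        # share the host IPC namespace (shared memory)
        "--runtime=nvidia",  # select the NVIDIA container runtime
        # "all" exposes every GPU; otherwise quote a device list like 2,3
        "--gpus all" if gpus == "all" else f"--gpus '\"device={gpus}\"'",
    ]
    return "sudo docker run " + " ".join(flags) + f" {image}"

print(docker_run_cmd("ultralytics/ultralytics:latest"))
print(docker_run_cmd("ultralytics/ultralytics:latest", gpus="2,3"))
```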
@@ -541,7 +541,7 @@ Docker provides an isolated, consistent environment for Ultralytics YOLO, ensuri
 sudo docker pull ultralytics/ultralytics:latest
 # Run the ultralytics image in a container with GPU support
-sudo docker run -it --ipc=host --gpus all ultralytics/ultralytics:latest
+sudo docker run -it --ipc=host --runtime=nvidia --gpus all ultralytics/ultralytics:latest
 ```
 For detailed Docker instructions, see the [Docker quickstart guide](guides/docker-quickstart.md).


@@ -28,27 +28,92 @@ nvidia-smi
 This command should display information about your GPU(s) and the installed driver version.
-Next, install the NVIDIA Container Toolkit. The commands below are typical for Debian-based systems like Ubuntu, but refer to the official guide linked above for instructions specific to your distribution:
+Next, install the NVIDIA Container Toolkit. The commands below are typical for Debian-based systems like Ubuntu and RHEL-based systems like Fedora/CentOS, but refer to the official guide linked above for instructions specific to your distribution:
-```bash
-# Add NVIDIA package repositories (refer to official guide for latest setup)
-curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
-  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
-  | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
-  | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
+=== "Ubuntu/Debian"
-# Update package list and install the toolkit
-sudo apt-get update
-sudo apt-get install -y nvidia-container-toolkit
+    ```bash
+    curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
+      && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
+      | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
+      | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
+    ```
+    Update the package lists and install the nvidia-container-toolkit package:
-# Configure Docker to use the NVIDIA runtime
-sudo nvidia-ctk runtime configure --runtime=docker
+    ```bash
+    sudo apt-get update
+    ```
-# Restart Docker service to apply changes
-sudo systemctl restart docker
-```
+    Install the latest version of nvidia-container-toolkit:
-Finally, verify that the NVIDIA runtime is configured and available to Docker:
+    ```bash
+    sudo apt-get install -y nvidia-container-toolkit \
+        nvidia-container-toolkit-base libnvidia-container-tools \
+        libnvidia-container1
+    ```
+    ??? info "Optional: Install specific version of nvidia-container-toolkit"
+        Optionally, you can install a specific version of the nvidia-container-toolkit by setting the `NVIDIA_CONTAINER_TOOLKIT_VERSION` environment variable:
+        ```bash
+        export NVIDIA_CONTAINER_TOOLKIT_VERSION=1.17.8-1
+        sudo apt-get install -y \
+            nvidia-container-toolkit=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
+            nvidia-container-toolkit-base=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
+            libnvidia-container-tools=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
+            libnvidia-container1=${NVIDIA_CONTAINER_TOOLKIT_VERSION}
+        ```
+    ```bash
+    sudo nvidia-ctk runtime configure --runtime=docker
+    sudo systemctl restart docker
+    ```
+=== "RHEL/CentOS/Fedora/Amazon Linux"
+    ```bash
+    curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo \
+      | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
+    ```
+    Update the package lists and install the nvidia-container-toolkit package:
+    ```bash
+    sudo dnf clean expire-cache
+    sudo dnf check-update
+    ```
+    ```bash
+    sudo dnf install \
+        nvidia-container-toolkit \
+        nvidia-container-toolkit-base \
+        libnvidia-container-tools \
+        libnvidia-container1
+    ```
+    ??? info "Optional: Install specific version of nvidia-container-toolkit"
+        Optionally, you can install a specific version of the nvidia-container-toolkit by setting the `NVIDIA_CONTAINER_TOOLKIT_VERSION` environment variable:
+        ```bash
+        export NVIDIA_CONTAINER_TOOLKIT_VERSION=1.17.8-1
+        sudo dnf install -y \
+            nvidia-container-toolkit-${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
+            nvidia-container-toolkit-base-${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
+            libnvidia-container-tools-${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
+            libnvidia-container1-${NVIDIA_CONTAINER_TOOLKIT_VERSION}
+        ```
+    ```bash
+    sudo nvidia-ctk runtime configure --runtime=docker
+    sudo systemctl restart docker
+    ```
+### Verify NVIDIA Runtime with Docker
+Run `docker info | grep -i runtime` to ensure that `nvidia` appears in the list of runtimes:
 ```bash
 docker info | grep -i runtime
 ```
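The same check can be performed programmatically. The sketch below (a hypothetical `nvidia_runtime_available` helper, not part of the docs) asks Docker for its registered runtimes via a Go template and returns `False` when Docker is absent or unresponsive:

```python
import shutil
import subprocess


def nvidia_runtime_available() -> bool:
    """Return True if Docker reports an 'nvidia' runtime, False otherwise (sketch)."""
    if shutil.which("docker") is None:
        return False  # Docker CLI is not installed on this machine
    try:
        out = subprocess.run(
            # Print the keys of the Runtimes map, e.g. "io.containerd.runc.v2 nvidia runc"
            ["docker", "info", "--format", "{{range $k, $v := .Runtimes}}{{$k}} {{end}}"],
            capture_output=True,
            text=True,
            timeout=30,
        )
    except (OSError, subprocess.TimeoutExpired):
        return False
    return out.returncode == 0 and "nvidia" in out.stdout.split()


print(nvidia_runtime_available())
```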
@@ -80,7 +145,7 @@ To run an interactive container instance using only the CPU, use the `-it` flag.
 ```bash
 # Run an interactive container instance using CPU
-sudo docker run -it --ipc=host $t
+sudo docker run -it --runtime=nvidia --ipc=host $t
 ```
 ### Using GPU
@@ -89,10 +154,10 @@ To enable GPU access within the container, use the `--gpus` flag. This requires
 ```bash
 # Run with access to all available GPUs
-sudo docker run -it --ipc=host --gpus all $t
+sudo docker run -it --runtime=nvidia --ipc=host --gpus all $t
 # Run with access to specific GPUs (e.g., GPUs 2 and 3)
-sudo docker run -it --ipc=host --gpus '"device=2,3"' $t
+sudo docker run -it --runtime=nvidia --ipc=host --gpus '"device=2,3"' $t
 ```
 Refer to the [Docker run reference](https://docs.docker.com/engine/containers/run/) for more details on command options.
@@ -103,7 +168,7 @@ To work with your local files (datasets, model weights, etc.) inside the contain
 ```bash
 # Mount /path/on/host (your local machine) to /path/in/container (inside the container)
-sudo docker run -it --ipc=host --gpus all -v /path/on/host:/path/in/container $t
+sudo docker run -it --runtime=nvidia --ipc=host --gpus all -v /path/on/host:/path/in/container $t
 ```
 Replace `/path/on/host` with the actual path on your machine and `/path/in/container` with the desired path inside the Docker container (e.g., `/usr/src/datasets`).
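The bind-mount argument described above can be built with a tiny helper that expands `~` in the host path before joining the two sides with a colon. `volume_flag` is a hypothetical illustration, not part of the documented tooling:

```python
from pathlib import Path


def volume_flag(host_path: str, container_path: str) -> str:
    """Build the -v host:container bind-mount flag for docker run (illustrative)."""
    host = Path(host_path).expanduser()  # expand ~ to the user's home directory
    return f"-v {host}:{container_path}"


# e.g. mount a local datasets folder at /usr/src/datasets inside the container
print(volume_flag("~/datasets", "/usr/src/datasets"))
```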


@@ -134,7 +134,7 @@ DDP profiling results on an [AWS EC2 P4d instance](../environments/aws_quickstar
 ```bash
 # prepare
-t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus all -v "$(pwd)"/coco:/usr/src/coco $t
+t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --runtime=nvidia --ipc=host --gpus all -v "$(pwd)"/coco:/usr/src/coco $t
 pip3 install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
 cd .. && rm -rf app && git clone https://github.com/ultralytics/yolov5 -b master app && cd app
 cp data/coco.yaml data/coco_profile.yaml