YoloDotNet.ExecutionProvider.Cuda 1.0.0

.NET CLI

dotnet add package YoloDotNet.ExecutionProvider.Cuda --version 1.0.0

Package Manager

NuGet\Install-Package YoloDotNet.ExecutionProvider.Cuda -Version 1.0.0

This command is intended to be used within the Package Manager Console in Visual Studio, as it uses the NuGet module's version of Install-Package.

PackageReference

<PackageReference Include="YoloDotNet.ExecutionProvider.Cuda" Version="1.0.0" />

For projects that support PackageReference, copy this XML node into the project file to reference the package.

Central Package Management (CPM)

For projects that support Central Package Management (CPM), copy the PackageVersion node into the solution Directory.Packages.props file to version the package, and the PackageReference node into the project file.

Directory.Packages.props:
<PackageVersion Include="YoloDotNet.ExecutionProvider.Cuda" Version="1.0.0" />

Project file:
<PackageReference Include="YoloDotNet.ExecutionProvider.Cuda" />

Paket CLI

paket add YoloDotNet.ExecutionProvider.Cuda --version 1.0.0

Script & Interactive

#r "nuget: YoloDotNet.ExecutionProvider.Cuda, 1.0.0"

The #r directive can be used in F# Interactive and Polyglot Notebooks. Copy this into the interactive tool or the source code of a script to reference the package.

#:package YoloDotNet.ExecutionProvider.Cuda@1.0.0

The #:package directive can be used in C# file-based apps starting in .NET 10 preview 4. Copy this into a .cs file before any lines of code to reference the package.

Cake

#addin nuget:?package=YoloDotNet.ExecutionProvider.Cuda&version=1.0.0
(Install as a Cake Addin)

#tool nuget:?package=YoloDotNet.ExecutionProvider.Cuda&version=1.0.0
(Install as a Cake Tool)

Information

YoloDotNet uses modular execution providers to run inference on different hardware backends. Each provider targets a specific platform or accelerator and may require additional system-level dependencies such as runtimes, drivers, or SDKs.

Installing the NuGet package alone is not always sufficient — proper setup depends on the selected provider and the target system.
This document describes the installation, requirements, and usage of the CUDA & TensorRT execution provider.

Core Library Requirement

All execution providers require the core YoloDotNet package, which contains the shared inference pipeline, models, and APIs.

NuGet Package

dotnet add package YoloDotNet

Execution Provider - CUDA and TensorRT

The CUDA & TensorRT execution provider enables GPU-accelerated inference on NVIDIA GPUs using ONNX Runtime’s CUDA backend.
Optionally, NVIDIA TensorRT can be enabled to further optimize models for maximum throughput and ultra-low latency.

⚠️ Note
This execution provider is supported on Windows and Linux only.
CUDA and TensorRT are not available on macOS.

Requirements

Important
This execution provider depends on native CUDA and cuDNN libraries.
Installing the NuGet package alone is not sufficient — system-level dependencies must be installed correctly.

Installation (Windows)

  • CUDA

    Download and install the following from NVIDIA’s official websites:

      - CUDA Toolkit 12.x
      - cuDNN 9.x for CUDA 12.x

    After installing cuDNN, locate the folder containing the cuDNN DLL files. This is typically:

    C:\Program Files\NVIDIA\CUDNN\v9.x\bin\v12.x
    

    (Replace v9.x and v12.x with the versions installed on your system)

    Add cuDNN to the System PATH

    1. Copy the full folder path to your cuDNN bin\v12.x folder

    2. Search Edit the system environment variables in Windows search and select it.

    3. Click Environment Variables.

    4. Under System variables, select Path and click Edit.

    5. Click New and paste the copied cuDNN path.

    6. Click OK to save and close all dialogs.

    7. Reboot your system.
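
    The PATH steps above can also be performed from an elevated PowerShell prompt. This is a sketch only; the folder below is the example path from this guide, so adjust v9.x and v12.x to your installed versions:

    ```powershell
    # Append the cuDNN bin folder to the machine-level PATH (run as Administrator).
    # The path is an example; adjust it to your installed cuDNN/CUDA versions.
    $cudnnBin = 'C:\Program Files\NVIDIA\CUDNN\v9.x\bin\v12.x'
    $machinePath = [Environment]::GetEnvironmentVariable('Path', 'Machine')
    if ($machinePath -notlike "*$cudnnBin*") {
        [Environment]::SetEnvironmentVariable('Path', "$machinePath;$cudnnBin", 'Machine')
    }
    # Reboot (or sign out and back in) so all processes pick up the new PATH.
    ```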

  • TensorRT (optional)

    TensorRT is NVIDIA’s high-performance inference engine and can significantly improve performance by optimizing models for your specific GPU.

    1. Download the TensorRT 10.13.3 release for CUDA 12.x.

    2. Extract the archive to a folder on your system.

    3. Locate the lib folder inside the extracted TensorRT folder.

    4. Copy the full path to this lib folder.

    5. Add the path to your system's PATH environment variable (same process as described in the CUDA installation steps).

    6. Reboot your system.
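
    After rebooting, you can quickly confirm that Windows resolves the TensorRT libraries from PATH. The wildcard below is an assumption; exact DLL names vary between TensorRT releases:

    ```powershell
    # Search every PATH entry for TensorRT runtime DLLs (names vary by version).
    # Prints matching paths, or an informational message if none are found.
    where.exe nvinfer*
    ```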

Installation (Linux)

Install the CUDA Toolkit, cuDNN, and (optionally) TensorRT for your distribution by following NVIDIA’s official installation instructions, and make sure the shared libraries are resolvable by the dynamic linker. Then install the NuGet package:

NuGet Package

dotnet add package YoloDotNet.ExecutionProvider.Cuda
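
A quick sanity check before running inference is to confirm that the driver, toolkit, and libraries are visible on the system. This is a hedged sketch: the library names (libcudnn, libcublas) are the typical ones for CUDA 12.x / cuDNN 9.x, and the script only reports what is missing rather than failing:

```shell
# Report whether the NVIDIA driver, CUDA toolkit, cuDNN and cuBLAS are visible.
checked=0

for tool in nvidia-smi nvcc; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT found"
  fi
  checked=$((checked + 1))
done

# cuDNN and cuBLAS must be resolvable by the dynamic linker:
for lib in libcudnn libcublas; do
  if ldconfig -p 2>/dev/null | grep -qi "$lib"; then
    echo "$lib: on linker path"
  else
    echo "$lib: NOT on linker path"
  fi
  checked=$((checked + 1))
done
```

If any component is reported missing, ONNX Runtime's CUDA execution provider will fail to load at runtime, so fix the system setup before troubleshooting the .NET side.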

Usage Example:

using YoloDotNet;
using YoloDotNet.ExecutionProvider.Cuda;

using var yolo = new Yolo(new YoloOptions
{
    ExecutionProvider = new CudaExecutionProvider(
        model: "path/to/model.onnx",

        // GPU device index (default: 0)
        gpuId: 0,

        // Optional TensorRT configuration for maximum performance
        trtConfig: new TensorRt
        {
            Precision = TrtPrecision.FP16,
            EngineCachePath = "path/to/cache/folder",
            EngineCachePrefix = "MyCachePrefix"
        }
    ),

    // ...other options
});

// See the TensorRT demo project for advanced configuration options.

Notes & Recommendations

  • Use CUDA alone if you want simple GPU acceleration with minimal setup.
  • Enable TensorRT if you need maximum performance and are comfortable managing engine caches.
  • TensorRT engine generation happens once per model and configuration and is cached for subsequent runs.
  • CUDA and TensorRT are not supported on macOS.
Compatible and additional computed target framework versions

  • .NET: net8.0 is compatible.
  • Computed: net8.0-android, net8.0-browser, net8.0-ios, net8.0-maccatalyst, net8.0-macos, net8.0-tvos, net8.0-windows, net9.0, net9.0-android, net9.0-browser, net9.0-ios, net9.0-maccatalyst, net9.0-macos, net9.0-tvos, net9.0-windows, net10.0, net10.0-android, net10.0-browser, net10.0-ios, net10.0-maccatalyst, net10.0-macos, net10.0-tvos, net10.0-windows.

Learn more about Target Frameworks and .NET Standard.

NuGet packages

This package is not used by any NuGet packages.

GitHub repositories

This package is not used by any popular GitHub repositories.

Version | Downloads | Last Updated
1.0.0   | 248       | 12/14/2025

This is the first standalone release of the CUDA execution provider for YoloDotNet following the introduction of the new modular architecture.

The CUDA execution provider enables GPU-accelerated inference using ONNX Runtime’s CUDA backend and supports optional NVIDIA TensorRT integration for maximum performance, lower latency, and optimized execution on supported NVIDIA GPUs.

This provider targets high-performance and real-time inference workloads on Windows and Linux systems and requires the CUDA Toolkit and cuDNN to be installed on the host system. It is fully compatible with the YoloDotNet core library and follows the new execution-provider-agnostic design.