Microsoft.ML.OnnxRuntime.Gpu.Windows
1.26.0
Prefix Reserved
.NET CLI:
dotnet add package Microsoft.ML.OnnxRuntime.Gpu.Windows --version 1.26.0
Package Manager:
NuGet\Install-Package Microsoft.ML.OnnxRuntime.Gpu.Windows -Version 1.26.0
PackageReference:
<PackageReference Include="Microsoft.ML.OnnxRuntime.Gpu.Windows" Version="1.26.0" />
Central Package Management:
<PackageVersion Include="Microsoft.ML.OnnxRuntime.Gpu.Windows" Version="1.26.0" />
<PackageReference Include="Microsoft.ML.OnnxRuntime.Gpu.Windows" />
Paket CLI:
paket add Microsoft.ML.OnnxRuntime.Gpu.Windows --version 1.26.0
F# Interactive:
#r "nuget: Microsoft.ML.OnnxRuntime.Gpu.Windows, 1.26.0"
File-based apps:
#:package Microsoft.ML.OnnxRuntime.Gpu.Windows@1.26.0
Cake:
#addin nuget:?package=Microsoft.ML.OnnxRuntime.Gpu.Windows&version=1.26.0
#tool nuget:?package=Microsoft.ML.OnnxRuntime.Gpu.Windows&version=1.26.0
About

ONNX Runtime is a cross-platform machine-learning inferencing accelerator.
ONNX Runtime can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms.
Learn more in the ONNX Runtime documentation.
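As a minimal sketch of what inference with this package looks like in C# (assuming the Microsoft.ML.OnnxRuntime.Gpu package is installed, a CUDA-capable GPU is available, and a hypothetical model file `model.onnx` with an input named "input" of shape 1x3x224x224; substitute your model's actual names and shape):

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// Create a session that runs on the CUDA Execution Provider (device 0),
// falling back to the CPU Execution Provider for unsupported operators.
using var options = SessionOptions.MakeSessionOptionWithCudaProvider(deviceId: 0);
using var session = new InferenceSession("model.onnx", options);

// Build a dummy input tensor; the name and shape are placeholders for
// whatever the model actually expects.
var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 });
var inputs = new List<NamedOnnxValue>
{
    NamedOnnxValue.CreateFromTensor("input", input)
};

// Run inference and read the first output tensor as a flat float array.
using var results = session.Run(inputs);
float[] output = results.First().AsEnumerable<float>().ToArray();
```

The same `InferenceSession` API works across all the native packages below; only the execution-provider setup differs.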
NuGet Packages
ONNX Runtime Native packages
Microsoft.ML.OnnxRuntime
- Native libraries for all supported platforms
- CPU Execution Provider
- CoreML Execution Provider on macOS/iOS
- XNNPACK Execution Provider on Android/iOS
Microsoft.ML.OnnxRuntime.Gpu
- Windows and Linux
- TensorRT Execution Provider
- CUDA Execution Provider
- CPU Execution Provider
Microsoft.ML.OnnxRuntime.DirectML
- Windows
- DirectML Execution Provider
- CPU Execution Provider
Microsoft.ML.OnnxRuntime.QNN
- 64-bit Windows
- QNN Execution Provider
- CPU Execution Provider
Intel.ML.OnnxRuntime.OpenVino
- 64-bit Windows
- OpenVINO Execution Provider
- CPU Execution Provider
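Within a package that bundles several execution providers, the providers are tried in the order they are appended to the `SessionOptions`. A sketch of prioritizing TensorRT over CUDA with the Gpu package (assumes TensorRT and CUDA are installed; the CPU provider is the implicit final fallback):

```csharp
using Microsoft.ML.OnnxRuntime;

// Providers are consulted in registration order for each graph node:
// TensorRT first, then CUDA, then the built-in CPU provider.
using var options = new SessionOptions();
options.AppendExecutionProvider_Tensorrt(deviceId: 0);
options.AppendExecutionProvider_CUDA(deviceId: 0);

// "model.onnx" is a placeholder path.
using var session = new InferenceSession("model.onnx", options);
```

Omitting the TensorRT line gives a plain CUDA-plus-CPU configuration; the DirectML, QNN, and OpenVINO packages expose analogous `AppendExecutionProvider_*` calls.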
Other packages
Microsoft.ML.OnnxRuntime.Managed
- C# language bindings
Microsoft.ML.OnnxRuntime.Extensions
- Custom operators for pre/post processing on all supported platforms.
Dependencies
.NETCoreApp 0.0
- Microsoft.ML.OnnxRuntime.Managed (>= 1.26.0)
.NETFramework 0.0
- Microsoft.ML.OnnxRuntime.Managed (>= 1.26.0)
.NETStandard 0.0
- Microsoft.ML.OnnxRuntime.Managed (>= 1.26.0)
NuGet packages (4)
Showing the top 4 NuGet packages that depend on Microsoft.ML.OnnxRuntime.Gpu.Windows:
- Microsoft.ML.OnnxRuntime.Gpu: This package contains native shared library artifacts for all supported platforms of ONNX Runtime.
- KokoroSharp.GPU.Windows: The Gpu.Windows runtime for KokoroSharp, an inference engine for Kokoro TTS with ONNX runtime, enabling fast and flexible local text-to-speech (fp/quanted) purely via C#. It features segment streaming, voice mixing, linear job scheduling, and optional playback.
- TensorStack.Providers.CUDA: CUDA GPU backend for ONNX tensor computation.
- YoloSharpDeploGPU: Runs YOLO model inference via ONNX (currently supports object classification).
GitHub repositories (1)
Showing the top 1 popular GitHub repository that depends on Microsoft.ML.OnnxRuntime.Gpu.Windows:
- Lyrcaxis/KokoroSharp: Fast local TTS inference engine in C# with ONNX runtime. Multi-speaker, multi-platform and multilingual. Integrate it into your .NET projects using a plug-and-play NuGet package, complete with all voices.
Version History
| Version | Downloads | Last Updated |
|---|---|---|
| 1.26.0 | 156 | 5/8/2026 |
| 1.25.1 | 1,801 | 4/27/2026 |
| 1.25.0 | 657 | 4/24/2026 |
| 1.25.0-rc.1 | 58 | 4/24/2026 |
| 1.24.4 | 8,554 | 3/20/2026 |
| 1.24.3 | 4,959 | 3/5/2026 |
| 1.24.2 | 3,633 | 2/19/2026 |
| 1.24.1 | 4,921 | 2/5/2026 |
| 1.24.0-rc.2 | 107 | 1/28/2026 |
| 1.23.2 | 148,480 | 10/25/2025 |
| 1.23.1 | 6,118 | 10/8/2025 |
| 1.23.0 | 4,617 | 9/26/2025 |
| 1.23.0-rc.2 | 229 | 9/21/2025 |
| 1.22.1 | 54,041 | 7/1/2025 |
| 1.22.0 | 21,958 | 5/9/2025 |
| 1.21.2 | 5,886 | 4/24/2025 |
| 1.21.1 | 4,966 | 4/21/2025 |
| 1.21.0 | 43,806 | 3/8/2025 |
| 1.20.1 | 132,257 | 11/21/2024 |
| 1.20.0 | 30,169 | 10/31/2024 |
Release definition:
Branch: refs/heads/rel-1.26.0
Commit: 8c546c37b43caaca1fa25db430dab94b901cf277
Build: https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=1209449