Microsoft.ML.OnnxRuntime.Gpu.Windows
1.21.1
Prefix Reserved
Microsoft.ML.OnnxRuntime.Gpu.Windows 1.21.2
Additional Details: 1.21.1 has CUDA 11 dependencies. Use 1.21.2 instead, which has CUDA 12 dependencies.
See the version list below for details.
dotnet add package Microsoft.ML.OnnxRuntime.Gpu.Windows --version 1.21.1
NuGet\Install-Package Microsoft.ML.OnnxRuntime.Gpu.Windows -Version 1.21.1
<PackageReference Include="Microsoft.ML.OnnxRuntime.Gpu.Windows" Version="1.21.1" />
<PackageVersion Include="Microsoft.ML.OnnxRuntime.Gpu.Windows" Version="1.21.1" />
<PackageReference Include="Microsoft.ML.OnnxRuntime.Gpu.Windows" />
paket add Microsoft.ML.OnnxRuntime.Gpu.Windows --version 1.21.1
#r "nuget: Microsoft.ML.OnnxRuntime.Gpu.Windows, 1.21.1"
#:package Microsoft.ML.OnnxRuntime.Gpu.Windows@1.21.1
#addin nuget:?package=Microsoft.ML.OnnxRuntime.Gpu.Windows&version=1.21.1
#tool nuget:?package=Microsoft.ML.OnnxRuntime.Gpu.Windows&version=1.21.1
About
ONNX Runtime is a cross-platform machine-learning inferencing accelerator.
ONNX Runtime can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms.
Learn more at https://onnxruntime.ai/.
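As a quick orientation, the sketch below shows the basic C# inference flow with this package's managed API: open an `InferenceSession`, feed it a tensor, and read the outputs. The model path `model.onnx` and the `[1, 3, 224, 224]` input shape are placeholder assumptions for illustration; substitute your own model and shapes.

```csharp
using System;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

class InferenceDemo
{
    static void Main()
    {
        // "model.onnx" is a placeholder path; point this at your own model.
        using var session = new InferenceSession("model.onnx");

        // Build a zero-filled float input with a hypothetical image-like shape.
        var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 });

        // Use the model's first declared input name rather than hard-coding it.
        var inputName = session.InputMetadata.Keys.First();

        using var results = session.Run(new[]
        {
            NamedOnnxValue.CreateFromTensor(inputName, input)
        });

        // Report the first output's name and element count.
        var output = results.First();
        Console.WriteLine($"{output.Name}: {output.AsTensor<float>().Length} values");
    }
}
```

This uses only the CPU execution provider by default; GPU providers are opted into via `SessionOptions`, as shown in the package list below.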
NuGet Packages
ONNX Runtime Native packages
Microsoft.ML.OnnxRuntime
- Native libraries for all supported platforms
- CPU Execution Provider
- CoreML Execution Provider on macOS/iOS
- XNNPACK Execution Provider on Android/iOS
Microsoft.ML.OnnxRuntime.Gpu
- Windows and Linux
- TensorRT Execution Provider
- CUDA Execution Provider
- CPU Execution Provider
Microsoft.ML.OnnxRuntime.DirectML
- Windows
- DirectML Execution Provider
- CPU Execution Provider
Microsoft.ML.OnnxRuntime.QNN
- 64-bit Windows
- QNN Execution Provider
- CPU Execution Provider
Intel.ML.OnnxRuntime.OpenVino
- 64-bit Windows
- OpenVINO Execution Provider
- CPU Execution Provider
Other packages
Microsoft.ML.OnnxRuntime.Managed
- C# language bindings
Microsoft.ML.OnnxRuntime.Extensions
- Custom operators for pre/post processing on all supported platforms.
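To use the CUDA Execution Provider that the Gpu.Windows package ships, register it on a `SessionOptions` before creating the session. A minimal sketch, assuming a placeholder `model.onnx` and GPU device 0; operators the CUDA provider cannot handle fall back to the CPU provider automatically.

```csharp
using Microsoft.ML.OnnxRuntime;

class GpuSessionDemo
{
    static void Main()
    {
        // Register the CUDA execution provider on GPU device 0. ONNX Runtime
        // assigns any unsupported operators to the CPU provider automatically.
        using var options = new SessionOptions();
        options.AppendExecutionProvider_CUDA(0);

        // "model.onnx" is a placeholder path; substitute your own model.
        using var session = new InferenceSession("model.onnx", options);
    }
}
```

Note that this requires the matching CUDA runtime to be installed on the machine (CUDA 12 for version 1.21.2 and later, per the notice above).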
Learn more about Target Frameworks and .NET Standard.
.NETCoreApp
- Microsoft.ML.OnnxRuntime.Managed (>= 1.21.1)
.NETFramework
- Microsoft.ML.OnnxRuntime.Managed (>= 1.21.1)
.NETStandard
- Microsoft.ML.OnnxRuntime.Managed (>= 1.21.1)
NuGet packages (3)
Showing the top 3 NuGet packages that depend on Microsoft.ML.OnnxRuntime.Gpu.Windows:
Package | Description
---|---
Microsoft.ML.OnnxRuntime.Gpu | This package contains native shared library artifacts for all supported platforms of ONNX Runtime.
KokoroSharp.GPU.Windows | The Gpu.Windows runtime for KokoroSharp: an inference engine for Kokoro TTS with ONNX Runtime, enabling fast and flexible local text-to-speech (fp/quantized) purely via C#. It features segment streaming, voice mixing, linear job scheduling, and optional playback.
YoloSharpDeploGPU | Runs inference on YOLO models with ONNX (currently supports object classification).
GitHub repositories
This package is not used by any popular GitHub repositories.
Version | Downloads | Last Updated
---|---|---
1.22.1 | 2,908 | 7/1/2025
1.22.0 | 7,996 | 5/9/2025
1.21.2 | 5,276 | 4/24/2025
1.21.1 | 1,486 | 4/21/2025
1.21.0 | 33,193 | 3/8/2025
1.20.1 | 107,737 | 11/21/2024
1.20.0 | 28,373 | 10/31/2024
1.19.2 | 117,202 | 9/3/2024
1.19.1 | 18,534 | 8/21/2024
1.19.0 | 14,706 | 8/17/2024
1.19.0-dev-20240812-1833-cc... | 1,778 | 8/13/2024
1.18.1 | 37,943 | 6/27/2024
1.18.0 | 26,881 | 5/17/2024
1.17.3 | 46,765 | 4/10/2024
1.17.1 | 39,057 | 2/25/2024
1.17.0 | 32,820 | 1/31/2024
Release Def:
Branch: refs/heads/rel-1.21.1
Commit: 8f7cce3a49fdbdac96e0868b75b7d0159db7ac7f
Build: https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=756022