Intel.ML.OnnxRuntime.OpenVino 1.24.1 (Prefix Reserved)

Install via your preferred package manager:

- .NET CLI: `dotnet add package Intel.ML.OnnxRuntime.OpenVino --version 1.24.1`
- Package Manager: `NuGet\Install-Package Intel.ML.OnnxRuntime.OpenVino -Version 1.24.1`
- PackageReference: `<PackageReference Include="Intel.ML.OnnxRuntime.OpenVino" Version="1.24.1" />`
- Central package management: `<PackageVersion Include="Intel.ML.OnnxRuntime.OpenVino" Version="1.24.1" />` with `<PackageReference Include="Intel.ML.OnnxRuntime.OpenVino" />`
- Paket: `paket add Intel.ML.OnnxRuntime.OpenVino --version 1.24.1`
- F# Interactive: `#r "nuget: Intel.ML.OnnxRuntime.OpenVino, 1.24.1"`
- File-based apps: `#:package Intel.ML.OnnxRuntime.OpenVino@1.24.1`
- Cake Addin: `#addin nuget:?package=Intel.ML.OnnxRuntime.OpenVino&version=1.24.1`
- Cake Tool: `#tool nuget:?package=Intel.ML.OnnxRuntime.OpenVino&version=1.24.1`
About

ONNX Runtime is a cross-platform machine-learning inferencing accelerator.
ONNX Runtime can enable faster customer experiences and lower costs. It supports models from deep learning frameworks such as PyTorch and TensorFlow/Keras, as well as classical machine learning libraries such as scikit-learn, LightGBM, and XGBoost. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable, alongside graph optimizations and transforms.
Learn more at https://onnxruntime.ai.
NuGet Packages
ONNX Runtime Native packages
Microsoft.ML.OnnxRuntime
- Native libraries for all supported platforms
- CPU Execution Provider
- CoreML Execution Provider on macOS/iOS
- XNNPACK Execution Provider on Android/iOS
Microsoft.ML.OnnxRuntime.Gpu
- Windows and Linux
- TensorRT Execution Provider
- CUDA Execution Provider
- CPU Execution Provider
Microsoft.ML.OnnxRuntime.DirectML
- Windows
- DirectML Execution Provider
- CPU Execution Provider
Microsoft.ML.OnnxRuntime.QNN
- 64-bit Windows
- QNN Execution Provider
- CPU Execution Provider
Intel.ML.OnnxRuntime.OpenVino
- 64-bit Windows
- OpenVINO Execution Provider
- CPU Execution Provider
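Once this package is referenced, the managed API (from the Microsoft.ML.OnnxRuntime.Managed dependency) can route inference through the OpenVINO Execution Provider when creating a session. A minimal sketch, assuming the `AppendExecutionProvider_OpenVINO` overload; the model path is a placeholder and the device string is an assumption, so check the OpenVINO EP documentation for the values your version accepts:

```csharp
using Microsoft.ML.OnnxRuntime;

// Minimal sketch: select the OpenVINO Execution Provider for a session.
// "model.onnx" is a placeholder path; the device string "CPU" is an
// assumption — other values (e.g. "GPU") target Intel integrated GPUs.
using var options = new SessionOptions();
options.AppendExecutionProvider_OpenVINO("CPU");
using var session = new InferenceSession("model.onnx", options);
```

Operators not supported by OpenVINO fall back to the CPU Execution Provider bundled in this package.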
Other packages
Microsoft.ML.OnnxRuntime.Managed
- C# language bindings
Microsoft.ML.OnnxRuntime.Extensions
- Custom operators for pre/post processing on all supported platforms.
Learn more about Target Frameworks and .NET Standard.
Dependencies:
- .NETCoreApp: Microsoft.ML.OnnxRuntime.Managed (>= 1.24.1)
- .NETFramework: Microsoft.ML.OnnxRuntime.Managed (>= 1.24.1)
- .NETStandard: Microsoft.ML.OnnxRuntime.Managed (>= 1.24.1)
NuGet packages (1)
Showing the top 1 NuGet packages that depend on Intel.ML.OnnxRuntime.OpenVino:
| Package | Downloads |
|---|---|
| **YoloDotNet.ExecutionProvider.OpenVino** — The YoloDotNet OpenVINO Execution Provider enables optimized inference using Intel® OpenVINO™ on supported Intel CPUs, integrated GPUs, and accelerators. It integrates ONNX Runtime with Intel OpenVINO to deliver high-performance, low-latency inference on Intel hardware across Windows and Linux. It is ideal for CPU-focused deployments, edge systems, and environments where Intel hardware acceleration is preferred over CUDA-based solutions. The provider is fully modular and designed to work with the execution-provider-agnostic YoloDotNet core library introduced in v4.0. Only one execution provider should be referenced per project. | |
GitHub repositories
This package is not used by any popular GitHub repositories.