Intel.ML.OnnxRuntime.OpenVino 1.24.1

Prefix Reserved
.NET CLI:
dotnet add package Intel.ML.OnnxRuntime.OpenVino --version 1.24.1

Package Manager:
NuGet\Install-Package Intel.ML.OnnxRuntime.OpenVino -Version 1.24.1
This command is intended to be used within the Package Manager Console in Visual Studio, as it uses the NuGet module's version of Install-Package.

PackageReference:
<PackageReference Include="Intel.ML.OnnxRuntime.OpenVino" Version="1.24.1" />
For projects that support PackageReference, copy this XML node into the project file to reference the package.

Central Package Management (CPM):
Directory.Packages.props:
<PackageVersion Include="Intel.ML.OnnxRuntime.OpenVino" Version="1.24.1" />
Project file:
<PackageReference Include="Intel.ML.OnnxRuntime.OpenVino" />
For projects that support Central Package Management (CPM), copy the PackageVersion node into the solution Directory.Packages.props file and the versionless PackageReference node into the project file.

Paket CLI:
paket add Intel.ML.OnnxRuntime.OpenVino --version 1.24.1

Script & Interactive:
#r "nuget: Intel.ML.OnnxRuntime.OpenVino, 1.24.1"
The #r directive can be used in F# Interactive and Polyglot Notebooks. Copy this into the interactive tool or into the source code of the script to reference the package.

File-based apps:
#:package Intel.ML.OnnxRuntime.OpenVino@1.24.1
The #:package directive can be used in C# file-based apps starting in .NET 10 preview 4. Copy this into a .cs file before any lines of code to reference the package.

Cake Addin:
#addin nuget:?package=Intel.ML.OnnxRuntime.OpenVino&version=1.24.1

Cake Tool:
#tool nuget:?package=Intel.ML.OnnxRuntime.OpenVino&version=1.24.1

About


ONNX Runtime is a cross-platform machine-learning inferencing accelerator.

ONNX Runtime can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms.

Learn more on the ONNX Runtime website.
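As a minimal sketch of how this package is typically consumed (assumptions: a local "model.onnx" file, an input named "input" with shape 1×3×224×224, and an OpenVINO device string appropriate for your hardware — none of these come from the package description), a C# program can register the OpenVINO execution provider on a SessionOptions instance before creating an InferenceSession:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

using var options = new SessionOptions();
// Route supported subgraphs to Intel OpenVINO; the device string
// ("CPU", "GPU", "NPU", ...) depends on the hardware available.
options.AppendExecutionProvider_OpenVINO("CPU");

// "model.onnx", the input name, and its shape are placeholders.
using var session = new InferenceSession("model.onnx", options);

var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 });
var inputs = new List<NamedOnnxValue>
{
    NamedOnnxValue.CreateFromTensor("input", input)
};

using var results = session.Run(inputs);
foreach (var r in results)
    Console.WriteLine($"{r.Name}: {r.AsTensor<float>().Length} values");
```

Nodes the OpenVINO provider cannot handle fall back to the default CPU execution provider, so the same model file runs unchanged with or without this package.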

NuGet Packages

ONNX Runtime Native packages

Microsoft.ML.OnnxRuntime
Microsoft.ML.OnnxRuntime.Gpu
Microsoft.ML.OnnxRuntime.DirectML
Microsoft.ML.OnnxRuntime.QNN
Intel.ML.OnnxRuntime.OpenVino

Other packages

Microsoft.ML.OnnxRuntime.Managed
  • C# language bindings
Microsoft.ML.OnnxRuntime.Extensions
Frameworks

There are no supported framework assets in this package.

NuGet packages (2)

Showing the top 2 NuGet packages that depend on Intel.ML.OnnxRuntime.OpenVino:

YoloDotNet.ExecutionProvider.OpenVino

YoloDotNet OpenVINO Execution Provider enables optimized inference using Intel® OpenVINO™ on supported Intel CPUs, integrated GPUs, and accelerators. This execution provider integrates ONNX Runtime with Intel OpenVINO to deliver high-performance, low-latency inference on Intel hardware across Windows and Linux. It is ideal for CPU-focused deployments, edge systems, and environments where Intel hardware acceleration is preferred over CUDA-based solutions. The provider is fully modular and designed to work with the execution-provider-agnostic YoloDotNet core library introduced in v4.0. Only one execution provider should be referenced per project.

ElBruno.LocalEmbeddings.Npu.Intel

Intel Core Ultra NPU-accelerated local embedding generation using OpenVINO and ONNX Runtime

GitHub repositories

This package is not used by any popular GitHub repositories.

Version  Downloads  Last Updated
1.24.1   991        2/26/2026
1.23.0   3,352      10/15/2025
1.22.0   2,963      5/26/2025   (deprecated: no longer maintained)
1.21.0   747        4/9/2025
1.20.0   3,387      12/10/2024