Fluid.OpenVINO.GenAI 2025.2.0.1

.NET CLI

dotnet add package Fluid.OpenVINO.GenAI --version 2025.2.0.1

Package Manager

NuGet\Install-Package Fluid.OpenVINO.GenAI -Version 2025.2.0.1

This command is intended to be used within the Package Manager Console in Visual Studio, as it uses the NuGet module's version of Install-Package.

PackageReference

<PackageReference Include="Fluid.OpenVINO.GenAI" Version="2025.2.0.1" />

For projects that support PackageReference, copy this XML node into the project file to reference the package.

Central Package Management (CPM)

For projects that support Central Package Management, copy these XML nodes into the solution Directory.Packages.props file and the project file, respectively:

Directory.Packages.props
<PackageVersion Include="Fluid.OpenVINO.GenAI" Version="2025.2.0.1" />

Project file
<PackageReference Include="Fluid.OpenVINO.GenAI" />

Paket CLI

paket add Fluid.OpenVINO.GenAI --version 2025.2.0.1

Script & Interactive

#r "nuget: Fluid.OpenVINO.GenAI, 2025.2.0.1"

The #r directive can be used in F# Interactive and Polyglot Notebooks. Copy this into the interactive tool or the source code of a script to reference the package.

#:package Fluid.OpenVINO.GenAI@2025.2.0.1

The #:package directive can be used in C# file-based apps starting in .NET 10 preview 4. Copy this into a .cs file before any lines of code to reference the package.

Cake

Install as a Cake Addin:
#addin nuget:?package=Fluid.OpenVINO.GenAI&version=2025.2.0.1

Install as a Cake Tool:
#tool nuget:?package=Fluid.OpenVINO.GenAI&version=2025.2.0.1

OpenVINO.NET

A comprehensive C# wrapper for OpenVINO and OpenVINO GenAI, providing idiomatic .NET APIs for AI inference and generative AI tasks.

**This project is very much a work in progress; please do not use it yet 😃. If it might be helpful to you, open an issue describing your use case and we will try to incorporate it as needed; otherwise the main focus will remain Windows .NET apps.**

Features

  • OpenVINO.NET.Core: Core OpenVINO functionality for model inference
  • OpenVINO.NET.GenAI: Generative AI capabilities including LLM pipelines
  • OpenVINO.NET.Native: Native library management and deployment
  • Modern C# API: Async/await, IAsyncEnumerable, SafeHandle resource management
  • Windows x64 Support: Optimized for Windows deployment scenarios

Requirements

  • .NET 8.0 or later
  • Windows x64 or Linux x64
  • OpenVINO GenAI 2025.2.0.0 runtime

Quick Start

The easiest way to get started is with the QuickDemo application that automatically downloads a model:

By default the script downloads the runtime for Ubuntu 24; if you are on another version, change it in the script.

 scripts/download-openvino-runtime.sh 
 OPENVINO_RUNTIME_PATH=/home/brandon/OpenVINO.GenAI.NET/build/native/runtimes/linux-x64/native dotnet run --project samples/QuickDemo/ --configuration Release -- --device CPU

For Windows:

.\scripts\download-openvino-runtime.ps1
$env:OPENVINO_RUNTIME_PATH = "C:\Users\brand\code\OpenVINO.GenAI.NET\build\native\runtimes\win-x64\native"
dotnet run --project samples/QuickDemo/ --configuration Release -- --device CPU

Sample Output:

OpenVINO.NET Quick Demo
=======================
Model: Qwen3-0.6B-fp16-ov
Temperature: 0.7, Max Tokens: 100

✓ Model found at: ./Models/Qwen3-0.6B-fp16-ov
Device: CPU

Prompt 1: "Explain quantum computing in simple terms:"
Response: "Quantum computing is a revolutionary technology that uses quantum mechanics principles..."
Performance: 12.4 tokens/sec, First token: 450ms

Code Integration

For integrating into your own applications:

using OpenVINO.NET.GenAI;

using var pipeline = new LLMPipeline("path/to/model", "CPU");
var config = GenerationConfig.Default.WithMaxTokens(100).WithTemperature(0.7f);

string result = await pipeline.GenerateAsync("Hello, world!", config);
Console.WriteLine(result);

Streaming Generation

using OpenVINO.NET.GenAI;

using var pipeline = new LLMPipeline("path/to/model", "CPU");
var config = GenerationConfig.Default.WithMaxTokens(100);

await foreach (var token in pipeline.GenerateStreamAsync("Tell me a story", config))
{
    Console.Write(token);
}

Projects

  • OpenVINO.NET.Core - Core OpenVINO wrapper
  • OpenVINO.NET.GenAI - GenAI functionality
  • OpenVINO.NET.Native - Native library management
  • QuickDemo - Quick start demo with automatic model download
  • TextGeneration.Sample - Basic text generation example
  • StreamingChat.Sample - Streaming chat application

Architecture

Three-Layer Design

┌─────────────────────────────────────────────────────────────┐
│                    Your Application                         │
└─────────────────────────────────────────────────────────────┘
                                ↓
┌─────────────────────────────────────────────────────────────┐
│                OpenVINO.NET.GenAI                          │
│  • LLMPipeline (High-level API)                            │
│  • GenerationConfig (Fluent configuration)                 │
│  • ChatSession (Conversation management)                   │
│  • IAsyncEnumerable streaming                              │
└─────────────────────────────────────────────────────────────┘
                                ↓
┌─────────────────────────────────────────────────────────────┐
│                OpenVINO.NET.Core                           │
│  • Core OpenVINO functionality                             │
│  • Model loading and inference                             │
└─────────────────────────────────────────────────────────────┘
                                ↓
┌─────────────────────────────────────────────────────────────┐
│               OpenVINO.NET.Native                          │
│  • P/Invoke declarations                                    │
│  • SafeHandle resource management                          │
│  • MSBuild targets for DLL deployment                      │
└─────────────────────────────────────────────────────────────┘
                                ↓
┌─────────────────────────────────────────────────────────────┐
│            OpenVINO GenAI C API                            │
│  • Native OpenVINO GenAI runtime                           │
│  • Version: 2025.2.0.0                                     │
└─────────────────────────────────────────────────────────────┘
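
The ChatSession component named in the GenAI layer above manages multi-turn conversations. Its exact surface is not documented in this README, so the following is only an illustrative sketch; StartChat and SendAsync are assumed names, not confirmed API:

using System;
using OpenVINO.NET.GenAI;

// Hypothetical sketch: ChatSession appears in the architecture diagram,
// but StartChat/SendAsync below are invented names for illustration.
using var pipeline = new LLMPipeline("path/to/model", "CPU");
using var chat = pipeline.StartChat();                      // assumed factory method
string reply = await chat.SendAsync("Hello, who are you?"); // assumed method
Console.WriteLine(reply);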

Key Features

  • Memory Safe: SafeHandle pattern for automatic resource cleanup
  • Async/Await: Full async support with cancellation tokens (see the sketch after this list)
  • Streaming: Real-time token generation with IAsyncEnumerable<string>
  • Fluent API: Chainable configuration methods
  • Error Handling: Comprehensive exception handling and device fallbacks
  • Performance: Optimized for both throughput and latency
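
The cancellation support called out above can be combined with the streaming API from the Quick Start. A minimal sketch, assuming GenerateStreamAsync is the same method shown earlier; the timeout plumbing is standard BCL, not library-specific:

using System;
using System.Threading;
using System.Threading.Tasks;
using OpenVINO.NET.GenAI;

// Cancel generation after 30 seconds using a standard CancellationTokenSource.
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
using var pipeline = new LLMPipeline("path/to/model", "CPU");
var config = GenerationConfig.Default.WithMaxTokens(100);

try
{
    // WithCancellation is the standard IAsyncEnumerable extension; it flows
    // the token into the enumerator rather than into the library itself.
    await foreach (var token in pipeline
        .GenerateStreamAsync("Summarize OpenVINO:", config)
        .WithCancellation(cts.Token))
    {
        Console.Write(token);
    }
}
catch (OperationCanceledException)
{
    Console.WriteLine();
    Console.WriteLine("[generation cancelled]");
}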

Installation

Prerequisites

  1. Install .NET 9.0 SDK or later

  2. Install OpenVINO GenAI Runtime 2025.2.0.0

Build from Source

# Clone the repository
git clone https://github.com/your-repo/OpenVINO-C-Sharp.git
cd OpenVINO-C-Sharp

# Build the solution
dotnet build OpenVINO.NET.sln

# Run the quick demo
dotnet run --project samples/QuickDemo

Performance Benchmarks

Expected Performance (Qwen3-0.6B-fp16-ov)

Device | Tokens/Second | First Token Latency | Notes
-------|---------------|---------------------|------------------------
CPU    | 12-15         | 400-600 ms          | Always available
GPU    | 20-30         | 200-400 ms          | Requires compatible GPU
NPU    | 15-25         | 300-500 ms          | Intel NPU required

Benchmark Command

# Compare all available devices
dotnet run --project samples/QuickDemo -- --benchmark

Troubleshooting

Common Issues

1. "OpenVINO runtime not found"
Error: The specified module could not be found. (Exception from HRESULT: 0x8007007E)

Solution: Ensure OpenVINO GenAI runtime DLLs are in your PATH or application directory.

2. "Device not supported"
Error: Failed to create LLM pipeline on GPU: Device GPU is not supported

Solutions:

  • Check device availability: dotnet run --project samples/QuickDemo -- --benchmark
  • Use CPU fallback: dotnet run --project samples/QuickDemo -- --device CPU
  • Install appropriate drivers (Intel GPU driver for GPU support, Intel NPU driver for NPU)
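
A programmatic fallback looks like the sketch below; the concrete exception type thrown for an unsupported device is not documented here, so the catch is deliberately broad:

using System;
using OpenVINO.NET.GenAI;

// Try the GPU first and fall back to CPU if pipeline creation fails.
LLMPipeline CreatePipeline(string modelPath)
{
    try
    {
        return new LLMPipeline(modelPath, "GPU");
    }
    catch (Exception ex) // exact exception type is an assumption
    {
        Console.WriteLine($"GPU unavailable ({ex.Message}); falling back to CPU.");
        return new LLMPipeline(modelPath, "CPU");
    }
}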
3. "Model download fails"
Error: Failed to download model files from HuggingFace

Solutions:

  • Check internet connectivity
  • Verify HuggingFace is accessible
  • Manually download model files to ./Models/Qwen3-0.6B-fp16-ov/
4. "Out of memory during inference"
Error: Insufficient memory to load model

Solutions:

  • Use a smaller model
  • Reduce max_tokens parameter
  • Close other memory-intensive applications
  • Consider using INT4 quantized models

Debug Mode

Enable detailed logging by setting an environment variable:

# Windows
set OPENVINO_LOG_LEVEL=DEBUG

# Linux/macOS
export OPENVINO_LOG_LEVEL=DEBUG

Contributing

Development Setup

  1. Install Prerequisites

    • Visual Studio 2022 or VS Code with C# extension
    • .NET 9.0 SDK
    • OpenVINO GenAI runtime
  2. Build and Test

    dotnet build OpenVINO.NET.sln
    dotnet test tests/OpenVINO.NET.GenAI.Tests/
    
  3. Code Style

    • Follow Microsoft C# coding conventions
    • Use async/await patterns
    • Implement proper resource disposal (using statements)
    • Add XML documentation for public APIs (see the short example after this list)
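
For instance, a small public API shaped by these rules might look like the sketch below; TextUtils and CompleteAsync are invented names, not part of the library:

using System.Threading.Tasks;
using OpenVINO.NET.GenAI;

/// <summary>
/// Illustrative only: demonstrates XML docs, async/await, and
/// using-based disposal as described in the style rules above.
/// </summary>
public static class TextUtils
{
    /// <summary>Generates a completion for <paramref name="prompt"/>.</summary>
    /// <param name="prompt">The input prompt to complete.</param>
    /// <returns>The generated text.</returns>
    public static async Task<string> CompleteAsync(string prompt)
    {
        using var pipeline = new LLMPipeline("path/to/model", "CPU");
        var config = GenerationConfig.Default.WithMaxTokens(100);
        return await pipeline.GenerateAsync(prompt, config);
    }
}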

Adding New Features

  1. P/Invoke Layer: Add native method declarations in GenAINativeMethods.cs
  2. SafeHandle: Create appropriate handle classes for resource management
  3. High-level API: Implement user-friendly wrapper classes
  4. Tests: Add comprehensive unit tests
  5. Documentation: Update README and XML docs
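
To make steps 1 and 2 concrete, here is a hedged sketch; the native library name and entry points below are assumptions for illustration only (the real declarations belong in GenAINativeMethods.cs):

using System;
using System.Runtime.InteropServices;

// Step 1 (P/Invoke layer): native declarations. Entry-point names are
// invented for this sketch, not taken from the actual C API.
internal static class NativeSketch
{
    [DllImport("openvino_genai_c", CallingConvention = CallingConvention.Cdecl)]
    internal static extern IntPtr example_pipeline_create(string modelPath, string device);

    [DllImport("openvino_genai_c", CallingConvention = CallingConvention.Cdecl)]
    internal static extern void example_pipeline_free(IntPtr pipeline);
}

// Step 2 (SafeHandle): the runtime guarantees ReleaseHandle runs exactly once,
// even if the owning object is finalized, so the native resource cannot leak.
internal sealed class ExamplePipelineHandle : SafeHandle
{
    public ExamplePipelineHandle() : base(IntPtr.Zero, ownsHandle: true) { }

    public override bool IsInvalid => handle == IntPtr.Zero;

    protected override bool ReleaseHandle()
    {
        NativeSketch.example_pipeline_free(handle);
        return true;
    }
}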

License

This project is licensed under the MIT License - see the LICENSE file for details.

