elbruno.LocalEmbeddings 1.0.2


LocalEmbeddings

License: MIT

A .NET library for generating text embeddings locally using ONNX Runtime and Microsoft.Extensions.AI abstractions—no external API calls required.

Features

  • Local Embedding Generation - Run inference entirely on your machine using ONNX Runtime
  • Microsoft.Extensions.AI Integration - Implements IEmbeddingGenerator<string, Embedding<float>> for seamless ecosystem compatibility
  • HuggingFace Model Support - Use popular sentence transformer models from HuggingFace Hub
  • Automatic Model Caching - Models are downloaded once and cached locally for fast subsequent loads
  • Dependency Injection Support - First-class integration with IServiceCollection and the Options pattern
  • Thread-Safe - Concurrent embedding generation from multiple threads
  • Batched Inference - Efficient processing of multiple texts in a single call

Installation

dotnet add package elbruno.LocalEmbeddings

Or via the NuGet Package Manager:

Install-Package elbruno.LocalEmbeddings

Quick Start

Direct Usage

using LocalEmbeddings;
using LocalEmbeddings.Options;

// Create the generator with default settings
var generator = new LocalEmbeddingGenerator(new LocalEmbeddingsOptions());

// Generate a single embedding
var result = await generator.GenerateAsync(["Hello, world!"]);
float[] embedding = result[0].Vector.ToArray();

// Generate multiple embeddings (batched)
var texts = new[] { "First document", "Second document", "Third document" };
var embeddings = await generator.GenerateAsync(texts);

// Don't forget to dispose when done
generator.Dispose();
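Embedding vectors like the ones returned above are typically compared with cosine similarity. A minimal helper in plain C# (independent of this library, shown here for illustration) could look like:

```csharp
using System;

// Cosine similarity: dot(a, b) / (|a| * |b|).
// Scores near 1 indicate semantically similar texts; near 0, unrelated texts.
static double CosineSimilarity(float[] a, float[] b)
{
    if (a.Length != b.Length)
        throw new ArgumentException("Vectors must have the same length.");

    double dot = 0, normA = 0, normB = 0;
    for (int i = 0; i < a.Length; i++)
    {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
}
```

With the batched example above, `CosineSimilarity(embeddings[0].Vector.ToArray(), embeddings[1].Vector.ToArray())` would score the first two documents against each other.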

With Custom Model

var options = new LocalEmbeddingsOptions
{
    ModelName = "sentence-transformers/all-MiniLM-L6-v2",
    MaxSequenceLength = 256
};

using var generator = new LocalEmbeddingGenerator(options);
var embeddings = await generator.GenerateAsync(["Your text here"]);

Configuration Options

The LocalEmbeddingsOptions class provides the following configuration:

| Property | Type | Default | Description |
|----------|------|---------|-------------|
| ModelName | string | "sentence-transformers/all-MiniLM-L6-v2" | HuggingFace model identifier |
| ModelPath | string? | null | Path to a local model directory (bypasses download) |
| CacheDirectory | string? | null | Custom directory for model cache |
| MaxSequenceLength | int | 512 | Maximum token sequence length |
| EnsureModelDownloaded | bool | true | Download model on startup if not cached |

Example:

var options = new LocalEmbeddingsOptions
{
    ModelName = "sentence-transformers/all-MiniLM-L6-v2",
    CacheDirectory = @"C:\models\cache",
    MaxSequenceLength = 256,
    EnsureModelDownloaded = true
};

Dependency Injection

LocalEmbeddings provides multiple overloads of AddLocalEmbeddings() for flexible DI registration:

Basic Registration

using LocalEmbeddings.Extensions;

services.AddLocalEmbeddings();

With Configuration Action

services.AddLocalEmbeddings(options =>
{
    options.ModelName = "sentence-transformers/all-MiniLM-L6-v2";
    options.MaxSequenceLength = 256;
});

With Model Name Only

services.AddLocalEmbeddings("sentence-transformers/all-MiniLM-L6-v2");

With Pre-configured Options

var options = new LocalEmbeddingsOptions
{
    ModelName = "sentence-transformers/all-MiniLM-L6-v2",
    MaxSequenceLength = 256
};
services.AddLocalEmbeddings(options);

From Configuration (appsettings.json)

{
  "LocalEmbeddings": {
    "ModelName": "sentence-transformers/all-MiniLM-L6-v2",
    "MaxSequenceLength": 256,
    "CacheDirectory": "/path/to/cache"
  }
}
services.AddLocalEmbeddings(configuration.GetSection("LocalEmbeddings"));
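For context, the section-bound overload above might be wired into a generic host like this (a sketch; it assumes the Microsoft.Extensions.Hosting package and that your appsettings.json contains the section shown):

```csharp
using LocalEmbeddings.Extensions;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

// Bind the "LocalEmbeddings" section of appsettings.json to LocalEmbeddingsOptions
builder.Services.AddLocalEmbeddings(builder.Configuration.GetSection("LocalEmbeddings"));

using var host = builder.Build();
host.Run();
```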

Injecting the Generator

public class MyService
{
    private readonly IEmbeddingGenerator<string, Embedding<float>> _embeddings;

    public MyService(IEmbeddingGenerator<string, Embedding<float>> embeddings)
    {
        _embeddings = embeddings;
    }

    public async Task<float[]> GetEmbeddingAsync(string text)
    {
        var result = await _embeddings.GenerateAsync([text]);
        return result[0].Vector.ToArray();
    }
}

Supported Models

Default Model

The default model is sentence-transformers/all-MiniLM-L6-v2, which produces 384-dimensional embeddings. It offers an excellent balance of quality and performance.

Custom ONNX Models

You can use any HuggingFace sentence transformer model that has ONNX exports:

// Use a different HuggingFace model
var options = new LocalEmbeddingsOptions
{
    ModelName = "sentence-transformers/paraphrase-MiniLM-L6-v2"
};

Local Model Path

For offline scenarios or custom models, specify a local directory:

var options = new LocalEmbeddingsOptions
{
    ModelPath = @"C:\models\my-custom-model",
    EnsureModelDownloaded = false
};

The model directory must contain:

  • model.onnx - The ONNX model file
  • tokenizer.json or vocab.txt - Tokenizer files
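Before pointing ModelPath at a directory, a quick pre-flight check for the required files (plain C#; the file names come from the list above) could be:

```csharp
using System.IO;

// Pre-flight check: model.onnx plus at least one tokenizer file must be present
static bool LooksLikeModelDirectory(string path) =>
    Directory.Exists(path)
    && File.Exists(Path.Combine(path, "model.onnx"))
    && (File.Exists(Path.Combine(path, "tokenizer.json"))
        || File.Exists(Path.Combine(path, "vocab.txt")));
```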

Cache Locations

Models are automatically cached in platform-specific locations:

| Platform | Cache Directory |
|----------|-----------------|
| Windows | %LOCALAPPDATA%\LocalEmbeddings\models |
| Linux | $XDG_DATA_HOME/LocalEmbeddings/models or ~/.local/share/LocalEmbeddings/models |
| macOS | ~/.local/share/LocalEmbeddings/models |
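These locations can be reproduced with standard .NET APIs; the sketch below mirrors the table (the library itself may resolve paths differently internally):

```csharp
using System;
using System.IO;

static string GetDefaultCacheDirectory()
{
    if (OperatingSystem.IsWindows())
    {
        // %LOCALAPPDATA%\LocalEmbeddings\models
        return Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
            "LocalEmbeddings", "models");
    }

    // Linux and macOS: $XDG_DATA_HOME/LocalEmbeddings/models,
    // falling back to ~/.local/share/LocalEmbeddings/models
    string dataHome = Environment.GetEnvironmentVariable("XDG_DATA_HOME")
        ?? Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
            ".local", "share");
    return Path.Combine(dataHome, "LocalEmbeddings", "models");
}
```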

Override with the CacheDirectory option:

var options = new LocalEmbeddingsOptions
{
    CacheDirectory = "/custom/cache/path"
};

API Reference

LocalEmbeddingGenerator

The main class for generating embeddings. Implements IEmbeddingGenerator<string, Embedding<float>>.

public sealed class LocalEmbeddingGenerator : IEmbeddingGenerator<string, Embedding<float>>
{
    // Constructor
    public LocalEmbeddingGenerator(LocalEmbeddingsOptions options);
    
    // Properties
    public EmbeddingGeneratorMetadata Metadata { get; }
    
    // Methods
    public Task<GeneratedEmbeddings<Embedding<float>>> GenerateAsync(
        IEnumerable<string> values,
        EmbeddingGenerationOptions? options = null,
        CancellationToken cancellationToken = default);
    
    public TService? GetService<TService>(object? key = null) where TService : class;
    
    public void Dispose();
}

LocalEmbeddingsOptions

Configuration options for the embedding generator.

public sealed class LocalEmbeddingsOptions
{
    public string ModelName { get; set; }
    public string? ModelPath { get; set; }
    public string? CacheDirectory { get; set; }
    public int MaxSequenceLength { get; set; }
    public bool EnsureModelDownloaded { get; set; }
}

ServiceCollectionExtensions

Extension methods for DI registration in LocalEmbeddings.Extensions namespace.

public static class ServiceCollectionExtensions
{
    public static IServiceCollection AddLocalEmbeddings(
        this IServiceCollection services,
        Action<LocalEmbeddingsOptions>? configure = null);
    
    public static IServiceCollection AddLocalEmbeddings(
        this IServiceCollection services,
        LocalEmbeddingsOptions options);
    
    public static IServiceCollection AddLocalEmbeddings(
        this IServiceCollection services,
        string modelName);
    
    public static IServiceCollection AddLocalEmbeddings(
        this IServiceCollection services,
        IConfiguration configuration);
}

Building from Source

Prerequisites

  • .NET 10.0 SDK or later

Build

git clone https://github.com/elbruno/elbruno.localembeddings.git
cd elbruno.localembeddings
dotnet build

Run Tests

dotnet test

Requirements

  • .NET 10.0 or later
  • ONNX Runtime compatible platform (Windows, Linux, macOS)

License

This project is licensed under the MIT License - see the LICENSE file for details.

Target Frameworks

.NET net10.0 is compatible. Additional computed compatible frameworks: net10.0-android, net10.0-browser, net10.0-ios, net10.0-maccatalyst, net10.0-macos, net10.0-tvos, net10.0-windows.

NuGet packages (4)

Showing the top 4 NuGet packages that depend on elbruno.LocalEmbeddings:

  • ElBruno.LocalEmbeddings.VectorData - Microsoft.Extensions.VectorData integration for ElBruno.LocalEmbeddings
  • ElBruno.LocalEmbeddings.KernelMemory - Kernel Memory integration for ElBruno.LocalEmbeddings: provides an ITextEmbeddingGenerator adapter and KernelMemoryBuilder extensions
  • ElBruno.LocalEmbeddings.ImageEmbeddings - CLIP-based image embedding generation using ONNX Runtime for local multimodal search and RAG scenarios
  • ElBruno.ModelContextProtocol.MCPToolRouter - Semantic routing for Model Context Protocol (MCP) tool definitions using local embeddings. Indexes MCP tools and returns the most relevant tools for a given prompt via vector search.

GitHub repositories

This package is not used by any popular GitHub repositories.

| Version | Downloads | Last Updated |
|---------|-----------|--------------|
| 1.1.5 | 443 | 3/12/2026 |
| 1.1.4 | 389 | 2/28/2026 |
| 1.0.14 | 150 | 2/23/2026 |
| 1.0.12 | 234 | 2/16/2026 |
| 1.0.11 | 129 | 2/14/2026 |
| 1.0.10 | 133 | 2/14/2026 |
| 1.0.9-preview | 124 | 2/14/2026 |
| 1.0.8 | 96 | 2/14/2026 |
| 1.0.7 | 90 | 2/13/2026 |
| 1.0.6 | 98 | 2/13/2026 |
| 1.0.5 | 101 | 2/13/2026 |
| 1.0.4 | 99 | 2/13/2026 |
| 1.0.2 | 103 | 2/13/2026 |
| 1.0.1 | 95 | 2/13/2026 |
| 1.0.0 | 107 | 2/12/2026 |
| 0.5.0-preview | 93 | 2/12/2026 |