AiGeekSquad.AIContext.MEAI
A Microsoft Extensions AI Abstractions adapter for the AiGeekSquad.AIContext semantic chunking library.
Overview
This package provides seamless integration between Microsoft's AI abstractions (Microsoft.Extensions.AI.Abstractions) and the AiGeekSquad.AIContext library. It allows you to use any embedding generator that implements Microsoft's IEmbeddingGenerator<TInput, TEmbedding> interface with AIContext's semantic chunking functionality.
Purpose
The MicrosoftExtensionsAIEmbeddingGenerator class acts as an adapter that:
- Implements the AiGeekSquad.AIContext.Chunking.IEmbeddingGenerator interface
- Wraps any Microsoft Extensions AI embedding generator
- Converts between Microsoft's Embedding<float> format and Math.NET's Vector<double> format
- Enables seamless integration with AIContext's semantic text chunking capabilities
Installation
dotnet add package AiGeekSquad.AIContext.MEAI
Usage
Basic Usage
using AiGeekSquad.AIContext.MEAI;
using AiGeekSquad.AIContext.Chunking;
using Microsoft.Extensions.AI;
// Initialize your Microsoft Extensions AI embedding generator
// This could be OpenAI, Azure OpenAI, or any other provider
IEmbeddingGenerator<string, Embedding<float>> microsoftEmbeddingGenerator =
CreateYourEmbeddingGenerator(); // Your specific implementation
// Wrap it with the adapter
IEmbeddingGenerator aiContextEmbeddingGenerator =
new MicrosoftExtensionsAIEmbeddingGenerator(microsoftEmbeddingGenerator);
// Create additional required components
var tokenCounter = new MLTokenCounter();
var similarityCalculator = new MathNetSimilarityCalculator();
var textSplitter = new SentenceTextSplitter();
// Use with AIContext semantic chunking
var chunker = new SemanticTextChunker(
embeddingGenerator: aiContextEmbeddingGenerator,
tokenCounter: tokenCounter,
similarityCalculator: similarityCalculator,
textSplitter: textSplitter
);
var text = "Your long document text that needs to be chunked into semantic segments...";
var chunks = await chunker.ChunkTextAsync(text);
// Process the results
foreach (var chunk in chunks)
{
Console.WriteLine($"Chunk ({chunk.Text.Length} chars): {chunk.Text[..Math.Min(50, chunk.Text.Length)]}...");
}
With Dependency Injection
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using AiGeekSquad.AIContext.MEAI;
using AiGeekSquad.AIContext.Chunking;
using Microsoft.Extensions.AI;
var builder = Host.CreateApplicationBuilder(args);
// Register your Microsoft Extensions AI embedding generator
// Example: Register OpenAI embedding generator
builder.Services.AddSingleton<IEmbeddingGenerator<string, Embedding<float>>>(provider =>
{
// Your specific embedding generator implementation
return CreateYourEmbeddingGenerator(); // Replace with actual implementation
});
// Register AIContext dependencies
builder.Services.AddSingleton<ITokenCounter, MLTokenCounter>();
builder.Services.AddSingleton<ISimilarityCalculator, MathNetSimilarityCalculator>();
builder.Services.AddSingleton<ITextSplitter, SentenceTextSplitter>();
// Register the adapter
builder.Services.AddSingleton<IEmbeddingGenerator, MicrosoftExtensionsAIEmbeddingGenerator>();
// Register semantic chunker with all dependencies
builder.Services.AddSingleton<SemanticTextChunker>();
var app = builder.Build();
// Use the chunker
var chunker = app.Services.GetRequiredService<SemanticTextChunker>();
var chunks = await chunker.ChunkTextAsync("Your document text...");
Advanced Example with Custom Configuration
using AiGeekSquad.AIContext.MEAI;
using AiGeekSquad.AIContext.Chunking;
using Microsoft.Extensions.AI;
// Initialize your Microsoft Extensions AI embedding generator
IEmbeddingGenerator<string, Embedding<float>> microsoftGenerator =
CreateYourEmbeddingGenerator(); // Your implementation
// Create the adapter
var embeddingGenerator = new MicrosoftExtensionsAIEmbeddingGenerator(microsoftGenerator);
// Configure chunking options for optimal performance
var chunkOptions = new ChunkOptions
{
MaxChunkSize = 1000, // Maximum tokens per chunk
OverlapSize = 100, // Overlap between chunks
SimilarityThreshold = 0.75, // Semantic similarity threshold
MinChunkSize = 50 // Minimum viable chunk size
};
// Create required components
var tokenCounter = new MLTokenCounter();
var similarityCalculator = new MathNetSimilarityCalculator();
var textSplitter = new SentenceTextSplitter();
// Create semantic chunker with all dependencies
var chunker = new SemanticTextChunker(
embeddingGenerator: embeddingGenerator,
tokenCounter: tokenCounter,
similarityCalculator: similarityCalculator,
textSplitter: textSplitter,
options: chunkOptions
);
// Process a long document
var text = @"Your long document text here. This could be a research paper,
technical documentation, or any lengthy content that needs to be
semantically chunked for better processing...";
var chunks = await chunker.ChunkTextAsync(text);
// Display results with detailed information
Console.WriteLine($"Document chunked into {chunks.Count} semantic segments:");
for (int i = 0; i < chunks.Count; i++)
{
var chunk = chunks[i];
Console.WriteLine($"\n--- Chunk {i + 1} ---");
Console.WriteLine($"Length: {chunk.Text.Length} characters");
Console.WriteLine($"Text: {chunk.Text[..Math.Min(100, chunk.Text.Length)]}...");
Console.WriteLine($"Embedding dimensions: {chunk.Embedding.Count}");
Console.WriteLine($"First 5 embedding values: [{string.Join(", ", chunk.Embedding.Take(5).Select(v => v.ToString("F4")))}...]");
}
Integration Benefits
By using this adapter, you gain several key advantages:
- Leverage Microsoft's AI Ecosystem: Use any embedding generator that follows Microsoft's AI abstractions, including OpenAI, Azure OpenAI, and other providers
- Maintain Compatibility: Keep your existing AIContext semantic chunking code unchanged while upgrading your embedding provider
- Future-Proof Architecture: Benefit from updates to both Microsoft's AI abstractions and AIContext libraries without breaking changes
- Optimized Performance: Take advantage of Microsoft's optimized embedding implementations and async patterns
- Provider Flexibility: Switch between different embedding providers without changing your chunking logic
- Type Safety: Enjoy full compile-time type checking and IntelliSense support
Supported Operations
The adapter supports both single and batch embedding generation with full async support:
Single Embedding Generation
// Generate embedding for a single text
var embedding = await embeddingGenerator.GenerateEmbeddingAsync("Your text here");
Batch Embedding Generation
// Generate embeddings for multiple texts efficiently
var texts = new[] { "First text", "Second text", "Third text" };
var embeddings = await embeddingGenerator.GenerateBatchEmbeddingsAsync(texts);
Both methods automatically convert from Microsoft's Embedding<float> format to Math.NET's Vector<double> format, as required by the AIContext library.
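As a rough sketch of what this conversion involves (illustrative only; the actual adapter works with Embedding<float>.Vector and MathNet.Numerics types, and ToDoubleComponents is not part of the public API):

```csharp
using System;

// Illustrative sketch: widen a provider's float[] embedding to the double[]
// components backing a Math.NET Vector<double>. float -> double is a lossless
// implicit widening, so no precision is lost in this direction.
static double[] ToDoubleComponents(float[] embedding)
{
    if (embedding is null) throw new ArgumentNullException(nameof(embedding));
    if (embedding.Length == 0)
        throw new InvalidOperationException("Embedding vector must be non-empty.");

    var components = new double[embedding.Length];
    for (var i = 0; i < embedding.Length; i++)
        components[i] = embedding[i]; // implicit float -> double widening
    return components;
}
```

A Math.NET dense vector can then be created from the resulting array, e.g. via Vector<double>.Build.Dense.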
Error Handling
The adapter provides comprehensive error handling and validation:
- Null Argument Validation: Throws ArgumentNullException for null inputs, with descriptive parameter names
- Operation Failures: Wraps underlying exceptions in InvalidOperationException with detailed error messages
- Embedding Validation: Ensures embedding vectors are valid and non-empty before conversion
- Graceful Degradation: Handles provider-specific errors and provides meaningful feedback
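A minimal sketch of this validate-then-wrap pattern (generateRawAsync is a hypothetical stand-in for the wrapped Microsoft Extensions AI call, not the adapter's real method):

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical sketch of the validation and wrapping behavior described above.
static async Task<double[]> GenerateEmbeddingAsync(
    string text, Func<string, Task<float[]>> generateRawAsync)
{
    if (text is null) throw new ArgumentNullException(nameof(text));
    if (generateRawAsync is null) throw new ArgumentNullException(nameof(generateRawAsync));

    float[] raw;
    try
    {
        raw = await generateRawAsync(text).ConfigureAwait(false);
    }
    catch (Exception ex)
    {
        // Wrap provider-specific failures in a single, predictable exception type.
        throw new InvalidOperationException("Embedding generation failed.", ex);
    }

    if (raw is null || raw.Length == 0)
        throw new InvalidOperationException("Provider returned an empty embedding.");

    return Array.ConvertAll(raw, v => (double)v);
}
```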
Threading and Cancellation
The adapter is designed for high-performance async operations:
- Full Async Support: All methods are properly async and use ConfigureAwait(false) for optimal performance
- Cancellation Token Support: Accepts a CancellationToken for long-running operations and graceful shutdown
- Thread Safety: Safe for concurrent use across multiple threads (inherits thread safety characteristics from the underlying generator)
- Resource Management: Properly manages resources and disposes of them when appropriate
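The cancellation support can be exercised with the standard CancellationTokenSource pattern; SlowEmbedAsync below is a stand-in for any long-running embedding call, not part of the adapter's API:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Stand-in for a long-running embedding call that observes the token.
static async Task<float[]> SlowEmbedAsync(string text, CancellationToken ct)
{
    await Task.Delay(TimeSpan.FromSeconds(5), ct).ConfigureAwait(false);
    return new[] { 0.1f, 0.2f };
}

// Cancel after 50 ms, e.g. for a request timeout or application shutdown.
using var cts = new CancellationTokenSource(TimeSpan.FromMilliseconds(50));
try
{
    var embedding = await SlowEmbedAsync("some long document...", cts.Token);
}
catch (OperationCanceledException)
{
    Console.WriteLine("Embedding generation was cancelled.");
}
```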
Dependencies
This package has the following dependencies:
- Microsoft.Extensions.AI.Abstractions (>= 9.8.0) - Core AI abstractions from Microsoft
- AiGeekSquad.AIContext - Main semantic chunking library (included as project reference)
- MathNet.Numerics - Mathematical operations and vector handling (transitively via AIContext)
Performance Considerations
- Batch Processing: Use GenerateBatchEmbeddingsAsync() for multiple texts to leverage provider optimizations
- Memory Efficiency: The adapter minimizes memory allocations during vector conversions
- Async Patterns: Designed to work efficiently with async/await patterns and high-concurrency scenarios
Contributing
This package is part of the AiGeekSquad.AIContext project. We welcome contributions!
- Issues: Report bugs or request features in the main repository
- Pull Requests: Submit improvements following the project's coding standards
- Documentation: Help improve documentation and examples
License
This project is licensed under the MIT License - see the main project repository for full license details.
Supported Frameworks
This package targets .NET Standard 2.1 and is therefore usable from .NET Core 3.0+, .NET 5 through .NET 10 (including platform-specific targets such as Android, iOS, macOS, and Windows), Mono, Xamarin, and Tizen.
Package dependencies (.NET Standard 2.1):
- AiGeekSquad.AIContext (>= 1.0.42)
- Microsoft.Extensions.AI.Abstractions (>= 9.8.0)
Version | Downloads | Last Updated |
---|---|---|
1.0.42 | 11 | 8/19/2025 |
1.0.41 | 90 | 8/14/2025 |
1.0.39 | 90 | 8/14/2025 |
1.0.38 | 93 | 8/14/2025 |
1.0.37 | 91 | 8/14/2025 |
1.0.35 | 195 | 8/6/2025 |
1.0.33 | 185 | 8/4/2025 |
1.0.32 | 108 | 7/29/2025 |
1.0.31 | 109 | 7/29/2025 |
1.0.30 | 108 | 7/29/2025 |
1.0.27 | 107 | 7/29/2025 |
1.0.26 | 478 | 7/22/2025 |
1.0.25 | 477 | 7/22/2025 |
1.0.24 | 473 | 7/22/2025 |
1.0.21 | 476 | 7/22/2025 |
1.0.20 | 476 | 7/21/2025 |
1.0.19 | 64 | 7/11/2025 |
1.0.18 | 58 | 7/11/2025 |
1.0.17 | 71 | 7/11/2025 |
1.0.16 | 70 | 7/11/2025 |
1.0.15 | 74 | 7/11/2025 |