Vivet.AI
0.8.0-preview
dotnet add package Vivet.AI --version 0.8.0-preview
Unlock the full power of AI in your .NET applications with a comprehensive library for chat, embeddings, memory, knowledge, metadata, and summarization. Instantly enrich conversations and documents with context, structured metadata, and insights, including real-time streaming, multimodal content (images, audio, video), advanced text chunking, and context deduplication. Track usage, override configurations on the fly, and plug in custom implementations with ease. Build smarter, faster, and context-aware AI experiences with minimal boilerplate.
The library supports all major orchestration frameworks and a variety of vector stores for memory and knowledge management. Every service follows a request/response pattern, includes token and performance tracking, and allows per-request configuration overrides.
Table of Contents
🗂️ Orchestrations
🔹 OpenAI
🔹 Azure OpenAI
🔹 Azure AI Inference
🔹 HuggingFace
🔹 Ollama
🔹 Google Gemini
🔹 Amazon Bedrock
🗄️ Vector Stores
🔹 Qdrant
🔹 Pinecone
🔹 Weaviate
🔹 Postgres (pgvector)
🔹 Azure AI Search
✨ Services
🗨️ Chat
🧩 Embedding
🧠 Memory
📚 Knowledge
🏷️ Metadata Service
✂️ Summarization Service
⚡ Core Service Concepts
📩 Request/Response Pattern
🧰 Request Configuration Overrides
❗ Error Handling
💰 Token & Performance Tracking
🛠️ Extensible Implementations
💓 Health Checks
📊 Observability
💡 Other Highlighted Features
📄 Advanced Text Chunking
🧹 Context Deduplication
📎 Appendix
📜 Licensing
⚙️ Complete Configuration
<br /><br /><br />
🗂️ Orchestrations
The library provides a unified orchestration layer across multiple AI providers, allowing you to integrate, configure, and switch between them with minimal effort.
Instead of writing provider-specific code, you work against a consistent abstraction that keeps your business logic clean and portable.
This makes it easy to:
- Swap between providers (e.g., OpenAI → Azure OpenAI) without refactoring.
- Experiment with different backends to optimize cost, performance, or capability.
- Standardize advanced features like chat parameters, streaming, and error handling across all orchestrations.
The following sections describe each supported orchestration in detail, including how to register it and which chat model parameters are available.
⚙️ Configuration
Orchestrations are configured under the top-level "Ai" section in your appsettings.json, as shown below.
{
"Ai": {
"Endpoint": null,
"ApiKey": null,
"ApiKeyId": null,
"Chat": { },
"Embedding": { },
"Metadata": { },
"Summarization": { }
}
}
📋 Configuration Details
This is the main appsettings configuration. The configuration of Chat, Embedding, Metadata, and Summarization is detailed under their respective sections.
Setting | Type | Default | Description |
---|---|---|---|
Endpoint | string | null | The endpoint (or AWS region) of the AI provider. Can be null if not required. |
ApiKey | string | null | The API key of the AI provider. Can be null if not required. |
ApiKeyId | string | null | The API key identifier, depending on the provider. Can be null if not required. |
Chat | | | See Chat Configuration. |
Embedding | | | See Embedding Configuration. |
Metadata | | | See Metadata Configuration. |
Summarization | | | See Summarization Configuration. |
The table below shows the required configuration values (Endpoint, ApiKey, and ApiKeyId) for each supported orchestration provider.
This helps you quickly identify which settings need to be provided for each backend before integrating it into your application.
Use this as a reference when setting up your Ai section in appsettings.json.
Setting | OpenAI | Azure OpenAI | Azure AI Inference | HuggingFace | Ollama | Google Gemini | Amazon Bedrock |
---|---|---|---|---|---|---|---|
Endpoint | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ℹ️ |
ApiKey | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ |
ApiKeyId | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
ℹ️ Consult the individual provider sections below for details on support and usage of the configuration values.
🛠️ Supported Chat Model Parameters
Chat models are used across multiple services and can be configured individually.
The table summarizes parameter support for each provider.
Chat Model Parameter | OpenAI | Azure OpenAI | Azure AI Inference | HuggingFace | Ollama | Google Gemini | Amazon Bedrock |
---|---|---|---|---|---|---|---|
MaxOutputTokens | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
Temperature | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
StopSequences | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ |
Seed | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ℹ️ |
PresencePenalty | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ℹ️ |
FrequencyPenalty | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ℹ️ |
RepetitionPenalty | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ |
TopP | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
TopK | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ℹ️ |
ReasoningEffort | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
ℹ️ Consult the individual provider sections below for details on support for chat model parameters.
🔹 OpenAI
OpenAI provides access to the GPT-family models.
Register using appsettings.json
services
.AddVivetOpenAi();
Register using inline configuration
services
.AddVivetOpenAi(options =>
{
options.ApiKey = "<your-api-key>";
options.Endpoint = "<your-endpoint>";
// Configure additional options for chat, embedding, etc
});
🔹 Azure OpenAI
Azure OpenAI provides access to the GPT-family models through a secure, enterprise-ready platform on Azure.
Register using appsettings.json
services
.AddVivetAzureOpenAi();
Register using inline configuration
services
.AddVivetAzureOpenAi(options =>
{
options.ApiKey = "<your-api-key>";
options.Endpoint = "<your-endpoint>";
// Configure additional options for chat, embedding, etc
});
🔹 Azure AI Inference
Azure AI Inference allows inference on various LLMs via Azure endpoints with enterprise features.
Register using appsettings.json
services
.AddVivetAzureAIInference();
Register using inline configuration
services
.AddVivetAzureAIInference(options =>
{
options.ApiKey = "<your-api-key>";
options.Endpoint = "<your-endpoint>";
// Configure additional options for chat, embedding, etc
});
🔹 HuggingFace
HuggingFace models can be used directly via this library for custom inference workflows.
Register using appsettings.json
services
.AddVivetHuggingFace();
Register using inline configuration
services
.AddVivetHuggingFace(options =>
{
options.ApiKey = "<your-api-key>";
options.Endpoint = "<your-endpoint>";
// Configure additional options for chat, embedding, etc
});
🔹 Ollama
Ollama provides local model inference and supports temperature-based sampling.
Register using appsettings.json
services
.AddVivetOllama();
Register using inline configuration
services
.AddVivetOllama(options =>
{
options.Endpoint = "<your-host>";
// Configure additional options for chat, embedding, etc
});
🔹 Google Gemini
Google Gemini allows structured and generative responses via its LLM APIs.
Register using appsettings.json
services
.AddVivetGoogleGemini();
Register using inline configuration
services
.AddVivetGoogleGemini(options =>
{
options.ApiKey = "<your-api-key>";
// Configure additional options for chat, embedding, etc
});
🔹 Amazon Bedrock
Amazon Bedrock supports multiple models: Claude, Cohere Command, Cohere Command-R, AI21 Labs Jamba/Jurassic, Mistral, Titan, Llama.
Register using appsettings.json
services
.AddVivetAmazonBedrock();
Register using inline configuration
services
.AddVivetAmazonBedrock(options =>
{
options.Endpoint = "<your-aws-region>";
options.ApiKey = "<your-access-key>";
options.ApiKeyId = "<your-secret-key>";
// Configure additional options for chat, embedding, etc
});
ℹ️ Specify your AWS region as the Endpoint. Amazon Bedrock maps it internally instead of using a full endpoint.
Amazon Bedrock Model-Specific Chat Model Parameters
Different Amazon Bedrock models support different sets of chat parameters. The table summarizes parameter support across the available models.
Parameter | Claude | Cohere Command | Cohere Command-R | AI21 Jamba | AI21 Jurassic | Mistral | Titan | Llama3 |
---|---|---|---|---|---|---|---|---|
MaxOutputTokens | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
Temperature | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
StopSequences | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
Seed | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
PresencePenalty | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ |
FrequencyPenalty | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ |
RepetitionPenalty | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
TopP | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
TopK | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ |
ReasoningEffort | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
<br /><br />
🗄️ Vector Stores
Vector stores are specialized databases designed for storing and searching embeddings.
In this library, they are used with the Embedding Memory and Embedding Knowledge services to enable semantic search and context retrieval.
🔹 Qdrant
Qdrant is a high-performance open-source vector database optimized for semantic search and recommendation systems.
Start with Docker
docker run -p 6333:6333 -p 6334:6334 `
-v qdrant_storage:/qdrant/storage `
-e QDRANT__SERVICE__API_KEY=secret `
qdrant/qdrant
Dashboard:
http://localhost:6333/dashboard <br /><br />
🔹 Pinecone
Pinecone is a fully managed, cloud-native vector database with a focus on scalability and production-readiness. It does not run locally with Docker; you must create an account and use the hosted API.
Access
https://app.pinecone.io <br /><br />
🔹 Weaviate
Weaviate is an open-source vector search engine with a strong plugin ecosystem and GraphQL-based API.
Start with Docker
docker run -p 8080:8080 `
-e AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=true `
semitechnologies/weaviate
Dashboard / API Explorer
http://localhost:8080 <br /><br />
🔹 Postgres (pgvector)
pgvector is a PostgreSQL extension that adds vector similarity search, combining the reliability of Postgres with embedding capabilities.
Start with Docker
docker run -p 5432:5432 `
-e POSTGRES_PASSWORD=secret `
ankane/pgvector
Admin UI
You can connect with any Postgres client or use pgAdmin at http://localhost:5050 (see the sketch below).
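pgAdmin is not included in the pgvector container above; as a sketch, it can be started separately with Docker (the email and password values below are illustrative placeholders):
docker run -p 5050:80 `
 -e PGADMIN_DEFAULT_EMAIL=admin@example.com `
 -e PGADMIN_DEFAULT_PASSWORD=secret `
 dpage/pgadmin4
<br /><br />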
🔹 Azure AI Search
Azure AI Search (formerly Cognitive Search) supports hybrid search with both text and vector embeddings, fully managed on Azure.
Access
Provision an Azure AI Search resource in the Azure portal.
Dashboard:
https://portal.azure.com <br /><br />
✨ Services
The library provides a rich set of services including Chat, Embedding, Embedding Memory, Embedding Knowledge, Metadata, and Summarization. Each service is designed to be modular, configurable, and optimized for advanced AI workflows. They can be used independently or combined to build powerful orchestration pipelines. New services and AI model integrations are continuously being added to expand the functionality of the library and keep pace with the AI ecosystem.
Detailed explanations and usage examples for each service are provided in the following sections. <br /><br />
🗨️ Chat Service
The IChatService
combines LLMs, memory, knowledge bases, and multimodal context into a single conversational API. It supports plain text and typed JSON responses, real-time streaming, and automatic memory + knowledge enrichment. Developers can attach blobs (documents, images, audio, video), and the service automatically extracts summary and description metadata to ground the conversation. With built-in support for reasoning transparency, token usage tracking, and automatic memory indexing, ChatService
provides everything needed to build intelligent, context-aware chat applications on .NET.
Methods
- ChatAsync returns a plain string answer plus metadata (reasoning, thinking trace, token usage, raw output, elapsed execution time, and reconstructed input prompt).
- ChatAsync<T> supports typed responses, where the LLM is instructed in the prompt to return JSON matching the specified type. The service automatically deserializes that JSON into your .NET type.
  ⚠️ Note: The model will automatically output JSON that matches the type T in the response. There is no need to manually add the JSON schema to the system message or question of the chat request.
- ChatStreamingAsync allows real-time streaming of the model's output, returning content token-by-token (or chunk-by-chunk) as it is generated. At the end of the stream, the service automatically saves the conversation to memory and optionally invokes a completion callback. Supports the same features as ChatAsync.
Memory & Knowledge Integration (Plugin)
- Through optional built-in plugins, requests can be enriched with long-term memories and knowledge entries retrieved using approximate nearest neighbor (ANN) search for efficient similarity matching.
- Both memory and knowledge support multi-dimensional segmentation to scope retrieval:
  - Memory segmentation: ScopeId, UserId, AgentId, and ThreadId ensure the most relevant user- and thread-specific context is used.
  - Knowledge segmentation: ScopeId, TenantId, and SubTenantId allow fine-grained retrieval from organizational knowledge bases.
- Built-in deduplication ensures only the most relevant and unique context is injected into the prompt.
- Thread-awareness boosts relevance by prioritizing memories from the active conversation.
- The chat model determines if and when to include memory and knowledge in the context, based on the user's query.
Web Search (Plugin)
- Enables the chat model to perform external web searches through a configurable provider (Google, Bing, etc.).
- Web search is used when additional or updated context is required that is not available in the model's training data or memory.
Blob Metadata Enrichment
- You can attach blobs (e.g., PDFs, images, videos, audio files) to a ChatRequest.
- The service automatically extracts and indexes summary and description metadata, making it available to the model as part of the prompt without preprocessing. This requires metadata processing to be enabled and configured in appsettings; otherwise, metadata must be passed alongside the blob in the ChatRequest.
Reasoning Transparency
When supported by the provider (e.g., DeepSeek R1), the service exposes:
- Reasoning: a concise explanation of why an answer was provided.
- Thinking: a detailed breakdown of the model's step-by-step thought process.
Automatic Asynchronous Memory Indexing
- Questions and answers are persisted to memory using the IEmbeddingMemoryService (if memory embedding is configured in appsettings).
- Optional callbacks (onMemoryIndexed) allow you to hook into the lifecycle for logging or analytics.
Custom Plugins
Custom plugins extend the chat model with your own functionality (see the sketch after this list). They can be added in two ways:
- Configuration (global): Registered in appsettings.json. Always available to the chat model.
- Per request (scoped): Passed with a specific ChatRequest, giving fine-grained control. The caller is responsible for instantiating and wiring up dependencies.
You can combine both approaches; for example, register global plugins for core features and add request-specific plugins for special scenarios.
When plugins are available, the chat model automatically decides whether to invoke them based on the user's query. This is by design: the model plans and decides when and how to use plugins.
- For custom plugins, if you require a plugin to always be invoked, call it manually in your application and include its result in the system message of the request.
- Custom plugin parameters should be passed in the SystemMessage of the ChatRequest, or derived from existing context in the request (UserId, TenantId, etc.).
📖 More details: Semantic Kernel Plugins (C#)
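A minimal sketch of a custom plugin, following the Semantic Kernel plugin conventions the library builds on; the class, function, and order lookup are illustrative assumptions:

using System.ComponentModel;
using Microsoft.SemanticKernel;

// Hypothetical plugin exposing one function the chat model may choose to invoke.
public class OrderStatusPlugin
{
    [KernelFunction, Description("Returns the current status of an order.")]
    public string GetOrderStatus([Description("The order identifier.")] string orderId)
    {
        // Replace with a real lookup (database, API, etc.).
        return $"Order {orderId} has been shipped.";
    }
}

To register it globally, add its fully qualified type name to Chat.Plugins.CustomPlugins in appsettings.json, e.g. "MyApp.Plugins.OrderStatusPlugin, MyApp".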
Filters
Filters in IChatService act like middleware for your chat pipeline. They allow you to intercept, inspect, modify, or augment requests and responses as they flow through the system.
- Registration: Add filters to your IServiceCollection in the order you want them to execute. The service will transfer them to the Kernel in the same order, ensuring predictable execution (see the sketch after this list).
- Use cases:
  - Logging: Capture request and response data for auditing or analytics.
  - Validation: Ensure inputs meet specific criteria before being sent to the LLM.
  - Enrichment: Automatically inject context, metadata, or additional prompts into requests.
This design allows you to customize the chat workflow, apply cross-cutting concerns, and extend behavior without modifying core service logic.
📖 More details: Filters (C#)
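As a sketch, a logging filter can implement Semantic Kernel's IFunctionInvocationFilter and be registered in DI; the filter class below is illustrative:

using System;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

// Logs every function invocation flowing through the chat pipeline.
public class LoggingFunctionFilter : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(FunctionInvocationContext context, Func<FunctionInvocationContext, Task> next)
    {
        Console.WriteLine($"Invoking: {context.Function.Name}");
        await next(context); // continue the pipeline
        Console.WriteLine($"Completed: {context.Function.Name}");
    }
}

services.AddSingleton<IFunctionInvocationFilter, LoggingFunctionFilter>();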
⚙️ Chat Configuration
Example appsettings.json snippet showing how to configure IChatService under the "Ai" section:
"Ai": {
"Chat": {
"Model": {
"Name": "<your-chat-model>",
"UseHealthCheck": true,
"Parameters": {
"MaxOutputTokens": 2048,
"Temperature": null,
"StopSequences": [],
"Seed": null,
"PresencePenalty": null,
"FrequencyPenalty": null,
"RepetitionPenalty": null,
"TopP": null,
"TopK": null,
"ReasoningEffort": null
}
},
"Timeout": "00:01:00",
"Plugins": {
"CustomPlugins": [
],
"BuiltInPlugins": { }
}
}
}
📋 Chat Configuration Details
Setting | Type | Default | Description |
---|---|---|---|
Chat | | | Chat configuration. |
Chat.Model | | | The chat model configuration. |
Chat.Model.Name | string | null | Specifies the chat model to use (e.g., GPT-4.1). Must be configured in the chosen AI provider. The configured model may be overridden for individual requests. |
Chat.Model.UseHealthCheck | bool | true | Whether to perform a health check on the model before use. |
Chat.Model.Parameters | | | The chat model parameters. |
Chat.Model.Parameters.MaxOutputTokens | int | 2048 | Maximum number of output tokens to generate. |
Chat.Model.Parameters.Temperature | float? | null | Sampling temperature (0-1), controlling randomness. |
Chat.Model.Parameters.StopSequences | string[] | [] | Text sequences that will stop generation. |
Chat.Model.Parameters.Seed | long? | null | Optional seed for deterministic output. |
Chat.Model.Parameters.PresencePenalty | float? | null | Penalty for generating tokens already present in the text. |
Chat.Model.Parameters.FrequencyPenalty | float? | null | Penalty for generating tokens repeatedly. |
Chat.Model.Parameters.RepetitionPenalty | float? | null | Penalizes repeated token usage within the generation. |
Chat.Model.Parameters.TopP | float? | null | Nucleus sampling probability mass. |
Chat.Model.Parameters.TopK | int? | null | Limits candidate tokens considered per generation step. |
Chat.Model.Parameters.ReasoningEffort | ReasoningEffort? | null | Effort level to reduce reasoning complexity or token usage. |
Chat.Timeout | TimeSpan | 00:01:00 | Maximum time allowed for a chat request. |
Chat.Plugins | | | Options for configuring chat plugins. Plugins (also called tools) are sets of related functions that can be exposed to a chat model. They allow the model to integrate with external services and invoke custom functionality. |
Chat.Plugins.CustomPlugins | string[] | [] | Fully qualified type names ("{namespace}.{name}, {assembly}"). Plugins configured here are always included in chat requests and cannot be disabled. For optional usage, register them per request. |
Chat.Plugins.BuiltInPlugins | | | Built-in plugins that can be enabled for the chat model. To disable a plugin, simply omit its configuration section. See the configuration below. |
📋 Chat Built-in Plugin Configuration
🧠 Memory
"BuiltInPlugins": {
"Memory": {
"RetentionInDays": 180,
"ContextQueryLimit": 3,
"CounterpartContextQueryLimit": 2,
"UseQueryDeduplication": true,
"DeduplicationMatchScoreThreshold": 0.90
}
}
Setting | Type | Default | Description |
---|---|---|---|
BuiltInPlugins.Memory | | | Chat memory configuration. Requires Embedding Memory to be configured. |
BuiltInPlugins.Memory.RetentionInDays | int | 180 | How far back memories will be included in queries. |
BuiltInPlugins.Memory.ContextQueryLimit | int | 3 | Maximum number of memory entries retrieved per query. |
BuiltInPlugins.Memory.CounterpartContextQueryLimit | int | 2 | Maximum number of counterpart (Q/A pair) entries retrieved. |
BuiltInPlugins.Memory.UseQueryDeduplication | bool | true | Deduplicate similar memory entries before building context. |
BuiltInPlugins.Memory.DeduplicationMatchScoreThreshold | double | 0.90 | Fuzzy similarity threshold for deduplication. |
📚 Knowledge
"BuiltInPlugins": {
"Knowledge": {
"ContextQueryLimit": 3,
"UseQueryDeduplication": true,
"DeduplicationMatchScoreThreshold": 0.90
}
}
Setting | Type | Default | Description |
---|---|---|---|
BuiltInPlugins.Knowledge | | | Chat knowledge configuration. Requires Embedding Knowledge to be configured. |
BuiltInPlugins.Knowledge.ContextQueryLimit | int | 3 | Maximum number of knowledge entries retrieved per query. |
BuiltInPlugins.Knowledge.UseQueryDeduplication | bool | true | Deduplicate similar knowledge entries before building context. |
BuiltInPlugins.Knowledge.DeduplicationMatchScoreThreshold | double | 0.90 | Fuzzy similarity threshold for knowledge deduplication. |
🌐 Web Search
"BuiltInPlugins": {
"WebSearch": {
"Provider": "Google",
"Id": null,
"ApiKey": null,
"Limit": 5
}
}
Setting | Type | Default | Description |
---|---|---|---|
BuiltInPlugins.WebSearch | | null | Web search plugin. Default: null (not enabled). |
BuiltInPlugins.WebSearch.Provider | WebSearchProvider | Google | The provider for the plugin to use when searching the web. |
BuiltInPlugins.WebSearch.Id | string | null | The identifier used for web search. Only used by some providers. |
BuiltInPlugins.WebSearch.ApiKey | string | null | The API key of the web search provider. |
BuiltInPlugins.WebSearch.Limit | int | 5 | Number of search results to return for the web search. |
The table below shows the supported providers and their required configuration values (Id, ApiKey):
Setting | Google | Bing |
---|---|---|
Id | ✅ (Search Engine ID) | ❌ |
ApiKey | ✅ | ✅ |
🚀 Example Usage
Resolve the service from DI
var chatService = serviceProvider.GetService<IChatService>();
Chat request with explicit blob metadata
var request = new ChatRequest
{
Question = "Summarize the attached document in 3 bullet points.",
UserId = "user-id",
CurrentThreadId = "thread-id",
Blobs =
[
new ImageBlob
{
Data = new BlobDataBase64 { Base64 = "base64" }, // or File, Uri, Stream, etc.
MimeType = ImageMimeType.Png,
Metadata = new Metadata // If Metadata is null, it will be fetched from the blob when configured in appsettings
{
Title = "Quarterly Report Graph",
Description = "Q2 financial summary graph"
}
}
],
// optional: SystemMessage, TenantId, SubTenantId, ScopeId, AgentId, Language, Config Overrides, etc.
};
var onMemoryIndexedTask = new TaskCompletionSource<bool>();
var response = await chatService
.ChatAsync(request, memoryResponse =>
{
try
{
// Handle callback.
onMemoryIndexedTask.SetResult(true);
}
catch (Exception ex)
{
onMemoryIndexedTask.SetException(ex);
}
return Task.CompletedTask;
});
Console.WriteLine($"Answer: {response.Answer}");
Console.WriteLine($"Reasoning: {response.Reasoning}");
await onMemoryIndexedTask.Task;
Typed response (the service instructs the model to return JSON matching the specified type)
public class WeatherForecast
{
public string Location { get; set; }
public string Condition { get; set; }
public int TemperatureC { get; set; }
}
var typedRequest = new ChatRequest
{
Question = """
Provide a weather forecast as JSON matching this schema:
{ "Location": string, "Condition": string, "TemperatureC": int }
""",
UserId = "user-id",
CurrentThreadId = "thread-id",
};
var typedResponse = await chatService
.ChatAsync<WeatherForecast>(typedRequest);
Console.WriteLine($"{typedResponse.Answer.Location}: {typedResponse.Answer.Condition}, {typedResponse.Answer.TemperatureC}");
Streaming request and response
await foreach (var chunk in chatService
.ChatStreamingAsync(request, memoryResponse => { /* Handle memory indexed callback */ }, chatResponse => { /* Handle chat completed callback */ }))
{
Console.Write(chunk);
}
<br /><br />
🧩 Embedding
The Embedding configuration contains settings shared by both Memory and Knowledge, including the embedding model, vector size, match score threshold, and timeout. Memory and Knowledge also define settings specific to each, documented separately below.
⚙️ Configuration
"Ai": {
"Embedding": {
"Model": {
"Name": "<your-embedding-model>",
"UseHealthCheck": true
},
"VectorSize": 1536,
"MatchScoreThreashold": 0.86,
"Timeout": "00:01:00",
"Memory": {
"TextChunking": { },
"Scoring": { },
"VectorStore": { }
},
"Knowledge": {
"TextChunking": { },
"Scoring": { },
"VectorStore": { }
}
}
}
📋 Common Embedding Configuration Details
Setting | Type | Default | Description |
---|---|---|---|
Model | | | Embedding model configuration. |
Model.Name | string | null | Name of the embedding model (must be supported by the chosen AI provider). The configured model may be overridden for individual requests; use with caution, as different models generate embeddings differently, which may lead to misalignment with existing embeddings. |
Model.UseHealthCheck | bool | true | Whether to validate the embedding model on startup. |
VectorSize | int | 1536 | Embedding dimension size. Depends entirely on the model used. |
MatchScoreThreashold | float | 0.86 | Cosine similarity threshold for semantic matches (see below for recommended ranges). |
Timeout | TimeSpan | 00:01:00 | Timeout for embedding operations. |
Recommended Match Score Thresholds
- 0.00 - 0.70: Often noise, unless domain is very narrow.
- 0.70 - 0.80: Related but not identical (looser recall, brainstorming).
- 0.80 - 0.85: Good semantic match (typical retrieval threshold).
- 0.90+: Very strong / near-duplicate matches.
📄 Text Chunking
Defines how documents are split into smaller segments before embedding.
"TextChunking": {
"MinTokens": 20,
"MaxTokens": 60,
"NeighborContext": {
"ContextWindow": 1,
"RestrictToSameParagraph": true
}
}
⚠️ Note: Read more about Text Chunking
Setting | Type | Default | Description |
---|---|---|---|
MinTokens | int | 20 | Minimum number of tokens per chunk (approximate). |
MaxTokens | int | 60 | Maximum number of tokens per chunk. Sentences are merged until this limit is reached (approximate). |
NeighborContext | | | Neighbor context configuration. |
NeighborContext.ContextWindow | int | 1 | How many chunks before/after are stored as contextual neighbors. |
NeighborContext.RestrictToSameParagraph | bool | true | Whether neighbors must belong to the same paragraph. |
📊 Match Scoring
Defines the weight configuration for approximate nearest neighbor search (ANN) ranking.
"Scoring": {
"RecencyDecayStrategy": "Linear",
"RecencyBoostMax": 0.1,
"RecencyDecayDays": 30,
"RecencySigmoidSteepness": 1.0
}
Setting | Type | Default | Description |
---|---|---|---|
RecencyDecayStrategy | enum | Linear | How recency scores decay over time (Linear, Exponential, Sigmoid). |
RecencyBoostMax | double | 0.1 | Max boost applied to the newest entries. |
RecencyDecayDays | double | 30 | Days until the recency boost becomes negligible. |
RecencySigmoidSteepness | double | 1.0 | Steepness of the curve (only used for Sigmoid). |
🗄️ Vector Store
Defines which vector database to use for embedding storage and retrieval.
"VectorStore": {
"Provider": "None",
"Host": "localhost",
"Port": 6334,
"Username": null,
"ApiKey": null,
"Timeout": "00:00:30",
"UseHealthCheck": true
}
Setting | Type | Default | Description |
---|---|---|---|
Provider | enum | None | Vector DB provider (Qdrant, Pinecone, etc.). See Supported Vector Stores. |
Host | string | localhost | Vector DB host. |
Port | int | 6334 | Vector DB port. |
Username | string | null | Optional username. Used by some providers. |
ApiKey | string | null | Required if authentication is enabled. |
Timeout | TimeSpan | 00:00:30 | Query timeout. |
UseHealthCheck | bool | true | Whether to check connectivity on startup. |
<br /><br />
🧠 Embedding Memory Service
The IEmbeddingMemoryService
provides semantic memory storage and retrieval built on embeddings.
It allows you to persist question-answer pairs, blobs, and metadata as vectorized memories, and later recall them using semantic search, filters, and contextual scoring.
Indexing
- IndexAsync<T> stores question/answer pairs (structured or unstructured), optional blobs, and metadata.
- Supports automatic summarization (via ISummarizationService) to reduce verbosity and improve retrieval quality.
- Splits text into chunks, generates embeddings, and links related question/answer contexts for richer semantic connections.
- Automatically attaches blob metadata, either provided explicitly or auto-retrieved by IMetadataService.
Semantic Search
- SearchAsync retrieves the most relevant memories using vector similarity.
- Enhances retrieval with recency scoring and same-thread boosting, so newer or contextually relevant memories are prioritized.
- Supports advanced filtering through MemoryCriteria.
Querying
- QueryAsync retrieves memories based on structured criteria (user, agent, thread, question/answer flags, date ranges).
- Provides pagination support with Limit and Skip.
- Returns raw memory entries with their content, context, and size (in bytes).
Deletion
- DeleteAsync removes memories by ID(s) from the vector store.
- Ensures full control over memory lifecycle.
⚙️ Embedding Memory Configuration
The Embedding Memory configuration contains settings specific to memory handling that are not shared with Knowledge.
All TextChunking, Scoring, and VectorStore options are already documented in the Common Embedding Configuration section.
The memory-specific settings are:
"Memory": {
"UseExtendedMemoryContext": true,
"UseAutomaticSummarization": false,
"UseAutomaticMetadataRetrieval": true,
"SummarizationDegree": 0,
"TextChunking": { },
"Scoring": {
"ThreadMatchBoost": 0.2
}
"VectorStore": { }
}
📋 Embedding Memory Configuration Details
Setting | Type | Default | Description |
---|---|---|---|
UseExtendedMemoryContext | bool | true | Enables counterpart lookups so the LLM can reference previous answers to similar questions. |
UseAutomaticSummarization | bool | false | Enable or disable automatic summarization of memories. |
UseAutomaticMetadataRetrieval | bool | true | Automatically retrieve metadata for indexed items. |
TextChunking | | | Memory text chunking configuration. See Text Chunking Configuration. |
Scoring | | | Memory scoring configuration. See Match Scoring Configuration. |
Scoring.ThreadMatchBoost | double | 0.2 | Boosts the score of memories that match the current conversation thread. Only applicable to Memory. |
VectorStore | | | Memory vector store configuration. See Vector Store Configuration. |
🚀 Example Usage
Resolve the service from DI
var embeddingMemoryService = serviceProvider.GetService<IEmbeddingMemoryService>();
Index a memory entry
var indexRequest = new IndexMemoryRequest<string>
{
ThreadId = "thread-id",
UserId = "user-id",
Question = "What is the customer's preferred communication channel?",
Answer = "Email",
Blobs = new BaseBlobMetadata[] { } // optional
// optional: Language, Config Overrides, etc.
};
var indexResponse = await embeddingMemoryService
.IndexAsync(indexRequest);
Console.WriteLine($"Indexed embeddings: {indexResponse.TotalEmbeddings}");
Console.WriteLine($"Indexed embeddings size: {indexResponse.TotalEmbeddingsSize}");
Index a typed memory entry (JSON embedding)
public class Customer
{
public string Name { get; set; }
public string Email { get; set; }
public string PreferredChannel { get; set; }
}
var indexRequest = new IndexMemoryRequest<Customer>
{
ThreadId = "thread-id",
UserId = "user-id",
Question = "Customer details",
Answer = new Customer { Name = "Alice Johnson", Email = "alice@example.com", PreferredChannel = "Email" },
Blobs = new BaseBlobMetadata[] { } // optional
// optional: Language, Config Overrides, etc.
};
var indexResponse = await embeddingMemoryService
.IndexAsync(indexRequest);
Console.WriteLine($"Indexed embeddings: {indexResponse.TotalEmbeddings}");
Console.WriteLine($"Elapsed time: {indexResponse.ElapsedTime}");
Search for memories based on a query
var searchRequest = new SearchMemoryRequest
{
Query = "Preferred communication channel",
Criteria = new MemoryCriteria
{
UserId = "user-id"
ThreadId = "thread-id",
// additional criteria
},
Limit = 5,
CurrentThreadId = "current-thread" // optional: For boosting results of the current thread.
};
var searchResponse = await embeddingMemoryService
.SearchAsync(searchRequest);
foreach (var result in searchResponse.Results)
{
Console.WriteLine($"Score: {result.Score:0.00} | Text: {result.Result.Content}");
}
Query memories directly with filtering and paging
var queryRequest = new QueryMemoryRequest
{
Criteria = new MemoryCriteria
{
UserId = "user-id"
ThreadId = "thread-id",
// additional criteria
},
Limit = 5,
Skip = 0
};
var queryResponse = await embeddingMemoryService
.QueryAsync(queryRequest);
foreach (var memory in queryResponse.Results)
{
Console.WriteLine($"Text: {memory.Result.Content}");
}
Delete specific memories by ID
var deleteRequest = new DeleteRequest
{
Ids = ["id"]
};
await embeddingMemoryService
.DeleteAsync(deleteRequest);
<br /><br />
📚 Embedding Knowledge Service
The IEmbeddingKnowledgeService
provides semantic knowledge storage and retrieval built on embeddings.
It allows you to persist structured and unstructured knowledge (text, documents, images, audio, video, blobs, and metadata) into a vector store and later retrieve them using semantic similarity, filters, and contextual scoring.
Indexing
- IndexAsync<T> supports text, documents, images, audio, and video.
- Automatically serializes complex objects into JSON before embedding.
- Splits text into chunks, generates embeddings, and attaches neighboring context for richer semantic connections.
- Supports automatic metadata retrieval (via IMetadataService) when blob metadata is not provided.
- Returns detailed indexing results including total embeddings, size, and token usage.
Semantic Search
- SearchAsync retrieves the most relevant knowledge entries using vector similarity.
- Enhances scoring with recency decay so fresher knowledge is prioritized.
- Supports advanced filtering through KnowledgeCriteria (tenant, sub-tenant, scope, user, language, tags, and content type).
Querying
- QueryAsync retrieves knowledge entries directly from the vector store using structured filters and ordering.
- Does not apply semantic similarity scoring; useful for exact lookups.
- Provides pagination via Limit and Skip.
- Returns raw knowledge entries with their content, context, and size (in bytes).
Deletion
- DeleteAsync removes knowledge entries by ID(s) from the vector store.
- Ensures full control over knowledge lifecycle.
⚙️ Embedding Knowledge Configuration
The knowledge-specific settings are:
"Knowledge": {
"UseAutomaticMetadataRetrieval": true,
"TextChunking": { },
"Scoring": { },
"VectorStore": { }
}
📋 Embedding Knowledge Configuration Details
Setting | Type | Default | Description |
---|---|---|---|
UseAutomaticMetadataRetrieval | bool | true | If enabled, metadata is automatically extracted from documents/blobs (via IMetadataService) when not explicitly provided. |
TextChunking | | | Knowledge text chunking configuration. See Text Chunking Configuration. |
Scoring | | | Knowledge scoring configuration. See Match Scoring Configuration. |
VectorStore | | | Knowledge vector store configuration. See Vector Store Configuration. |
🚀 Example Usage
Resolve the service from DI
var embeddingKnowledgeService = serviceProvider.GetService<IEmbeddingKnowledgeService>();
Index plain text
var indexRequest = new IndexTextRequest
{
Text = "This device supports Bluetooth 5.3 and WiFi 6E."
// optional: TenantId, SubTenantId, ScopeId, Source, CreatedBy, Tags, Config Overrides, etc
};
var indexResponse = await embeddingKnowledgeService
.IndexAsync(indexRequest);
Console.WriteLine($"Total embeddings: {indexResponse.TotalEmbeddings}");
Console.WriteLine($"Total size: {indexResponse.TotalEmbeddingsSize}");
Index a typed knowledge entry (JSON embedding)
public class Product
{
public string Name { get; set; }
public string[] Features { get; set; }
}
var indexRequest = new IndexTextRequest<Product>
{
    Text = new Product { Name = "SmartSensor 3000", Features = new[] { "Bluetooth 5.3", "WiFi 6E", "10-year battery" } }
    // optional: TenantId, SubTenantId, ScopeId, Source, CreatedBy, Tags, Config Overrides, etc.
};
var indexResponse = await embeddingKnowledgeService
.IndexAsync(indexRequest);
Console.WriteLine($"Total embeddings (typed): {indexResponse.TotalEmbeddings}");
Index a blob (document/audio/image/video)
var indexRequest = new IndexImageRequest
{
Blob = new ImageBlob
{
Data = new BlobDataBase64 { Base64 = "base64" }, // or File, Uri, Stream, etc.
MimeType = ImageMimeType.Png,
Metadata = new Metadata // If Metadata is null, it will be automatically retrieved from the blob if Metadata is configured in appsettings.
{
Title = "Quarterly Report Graph",
Description = "Q2 financial summary graph"
}
}
// optional: TenantId, SubTenantId, ScopeId, Source, CreatedBy, Tags, etc.
};
var indexResponse = await embeddingKnowledgeService
.IndexAsync(indexRequest);
Console.WriteLine($"Indexed blob embeddings: {indexResponse.TotalEmbeddings}");
Console.WriteLine($"Metadata token usage: {indexResponse.MetadataTokenUsage?.InputTokens ?? 0}");
Search knowledge (semantic similarity)
var searchRequest = new SearchKnowledgeRequest
{
Query = "Which devices support WiFi 6E?",
Criteria = new KnowledgeCriteria
{
TenantId = "tenant-id",
// additional criteria
},
Limit = 5,
};
var searchResponse = await embeddingKnowledgeService
.SearchAsync(searchRequest);
foreach (var result in searchResponse.Results)
{
Console.WriteLine($"Score: {result.Score:0.00} | Content: {result.Result.Content}");
}
Query knowledge (filtering/paging, no semantic scoring)
var queryRequest = new QueryKnowledgeRequest
{
Criteria = new KnowledgeCriteria
{
TenantId = "tenant-id",
// additional criteria
},
Limit = 10,
Skip = 0
};
var queryResponse = await embeddingKnowledgeService
.QueryAsync(queryRequest);
foreach (var result in queryResponse.Results)
{
Console.WriteLine($"Id: {result.Result.Id} | Content size: {result.Size} bytes");
}
Delete specific knowledge by ID
var deleteRequest = new DeleteRequest
{
Ids = ["id"]
};
await embeddingKnowledgeService
.DeleteAsync(deleteRequest);
<br /><br />
🏷️ Metadata Service
The IMetadataService
provides structured metadata extraction from binary blob content such as images, audio, video, and documents. It uses a chat completion model with prompt templates to retrieve metadata automatically. The service supports both basic metadata (summary and description) and strongly-typed additional metadata. Every response also includes elapsed time, token usage, and internal error information, making it easy to track usage and performance.
You don't need to invoke the metadata service manually; if configured, it is invoked automatically when indexing memories and knowledge.
Flexible Metadata API
- GetAsync(GetMetadataRequest request, CancellationToken cancellationToken)
  Returns basic metadata only (Summary and Description) inside MetadataResponse. Wraps the generic overload with dynamic.
- GetAsync<T>(GetMetadataRequest request, CancellationToken cancellationToken) where T : class, new()
  Generic overload that returns strongly-typed additional metadata inside MetadataResponse<T>. Always includes:
  - ElapsedTime: total processing time
  - TokenUsage: input/output token counts
  - ErrorMessage: internal error message, if any
  - Metadata: extracted summary and description
  - AdditionalMetadata: strongly-typed metadata when T is provided
Blob Metadata Enrichment
- Attach blobs (PDFs, images, audio, video) to a metadata request.
- The service extracts summary and description automatically and, when using the generic overload, additional metadata according to your type T.
- Works out of the box if the Metadata service is configured in appsettings.json; otherwise, you must provide blobs in the request.
Usage Notes
- All blob processing is asynchronous.
- Ensure that your type T has nullable properties for optional metadata fields.
⚙️ Metadata Configuration
Example appsettings.json snippet showing how to configure IMetadataService under the "Ai" section:
"Ai": {
"Metadata": {
"Model": {
"Name": "<your-metadata-chat-model>",
}
"SummaryMaxWords": 30,
"DescriptionMaxWords": 90,
"Timeout": "00:01:00"
}
}
}
📋 Metadata Configuration Details
Setting | Type | Default | Description |
---|---|---|---|
Metadata | | | Metadata service configuration. |
Metadata.Model | | | Chat model configuration for metadata extraction. The model configuration is identical to the Chat Model Configuration. The configured model may be overridden for individual requests. |
Metadata.SummaryMaxWords | int | 30 | The maximum number of words in the metadata summary. |
Metadata.DescriptionMaxWords | int | 90 | The maximum number of words in the metadata description. |
Metadata.Timeout | TimeSpan | 00:01:00 | Maximum time allowed for a metadata request. |
🚀 Example Usage
Resolve the service from DI
var metadataService = serviceProvider.GetService<IMetadataService>();
Get metadata
public class InvoiceMetadata
{
public string InvoiceNumber { get; set; }
public DateTime? InvoiceDate { get; set; }
public decimal? TotalAmount { get; set; }
}
var metadataRequest = new GetMetadataRequest
{
Blob = new ImageBlob
{
Data = new BlobDataBase64
{
Base64 = "base64"
},
MimeType = ImageMimeType.Jpg
}
};
var response = await metadataService
.GetAsync<InvoiceMetadata>(metadataRequest);
Console.WriteLine($"Summary: {response.Metadata.Summary}");
Console.WriteLine($"Description: {response.Metadata.Description}");
Console.WriteLine($"Invoice Number: {response.AdditionalMetadata.InvoiceNumber}");
Console.WriteLine($"Invoice Date: {response.AdditionalMetadata.InvoiceDate}");
Console.WriteLine($"Total Amount: {response.AdditionalMetadata.TotalAmount}");
<br /><br />
✂️ Summarization Service
The ISummarizationService
provides memory summarization for questions and answers using an LLM chat completion service. It supports custom summarization degrees, leaving inline JSON or XML untouched. Every response includes elapsed time, token usage, and internal error information, making it easy to track performance and usage.
You don't need to invoke summarization manually; if configured, it is invoked automatically when embedding memories.
⚠️ Note: Currently, summarization is only supported for memory embeddings.
Flexible Summarization API
- SummarizeMemoryAsync(SummarizeMemoryRequest request, CancellationToken cancellationToken)
  Summarizes a memory consisting of a question and answer. Returns a SummarizationMemoryResponse containing:
  - QuestionSummarized: the summarized question
  - AnswerSummarized: the summarized answer
  - ElapsedTime: total processing time
  - TokenUsage: input/output token counts
  - ErrorMessage: internal error message, if any

SummarizationDegree
Controls the compression level:
- 0: No summarization
- 25: Preserve nearly all details
- 50: Keep core meaning, concise
- 75: Summarize concisely, remove fluff
- 100: Compress to the most essential ideas only
Usage Notes
- All processing is asynchronous.
- Inline JSON or XML is preserved during summarization.
- Model parameters can be overridden in each request via ChatModelParameters.
⚙️ Summarization Configuration
Example appsettings.json snippet showing how to configure ISummarizationService under the "Ai" section:
"Ai": {
"Summarization": {
"Model": {
"Name": "<your-summarization-chat-model>",
}
"SummarizationDegree": 25,
"Timeout": "00:01:00"
}
}
📋 Summarization Configuration Details
Setting | Type | Default | Description |
---|---|---|---|
Summarization | | | Summarization service configuration. |
Summarization.Model | | | Chat model configuration for summarization. The model configuration is identical to the Chat Model Configuration. The configured model may be overridden for individual requests. |
Summarization.SummarizationDegree | int | 25 | Controls how aggressively content is summarized (0-100). |
Summarization.Timeout | TimeSpan | 00:01:00 | Maximum time allowed for a summarization request. |
🚀 Example Usage
Resolve the service from DI
var summarizationService = serviceProvider.GetService<ISummarizationService>();
Summarize a memory
var summarizationRequest = new SummarizeMemoryRequest
{
Question = "What were the main points of the meeting?",
Answer = "We discussed the quarterly financials, the upcoming project deadlines, and team restructuring.",
SummarizationDegree = 50
};
var response = await summarizationService
.SummarizeMemoryAsync(summarizationRequest);
Console.WriteLine($"Question Summarized: {response.QuestionSummarized}");
Console.WriteLine($"Answer Summarized: {response.AnswerSummarized}");
<br /><br />
⚡ Core Service Concepts
📩 Request/Response Pattern
- All services follow a request/response pattern, where requests contain input data and optional configuration, and responses return structured results along with metadata such as elapsed time and token usage.
- Responses may include additional strongly-typed data depending on the service (e.g., additional metadata or summarized content).
- Asynchronous processing is supported throughout to ensure non-blocking operations.
<br />
🧰 Request Configuration Overrides
- While global defaults are configured in appsettings.json, certain configuration values can be overridden directly in a request.
- This allows fine-grained control over individual operations without modifying the global configuration.
- Overrides can affect model parameters, timeouts, or other service-specific behavior depending on the request (see the sketch below).
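As an illustration, per-request overrides of the chat model parameters might look like the sketch below; ChatModelParameters is referenced in the Summarization section above, but the ModelParameters property name on ChatRequest is a hypothetical assumption for this example:

var request = new ChatRequest
{
    Question = "Give me a one-line status update.",
    UserId = "user-id",
    // Hypothetical override property; consult the ChatRequest API for the exact member names.
    ModelParameters = new ChatModelParameters
    {
        Temperature = 0.2f,
        MaxOutputTokens = 256
    }
};
<br />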
❗ Error Handling
- Errors encountered during request processing (e.g., AI model failures, validation issues, or deserialization errors) are surfaced consistently across all services.
- When an error occurs in the AI model, an AiException is thrown containing the error message.
- Developers can catch these exceptions to handle failures programmatically and log error details (see the sketch below).
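A minimal sketch of handling a failed request; AiException is named above, while its namespace is an assumption:

try
{
    var response = await chatService.ChatAsync(request);
    Console.WriteLine(response.Answer);
}
catch (AiException ex) // thrown when the AI model reports an error
{
    Console.Error.WriteLine($"AI request failed: {ex.Message}");
}
<br />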
💰 Token & Performance Tracking
- Every response includes the elapsed execution time for the request.
- Token usage is tracked for input and output operations across all services, including embeddings, metadata extraction, and summarization. For example, the ChatResponse returns the tokens used for the chat request itself, as well as any tokens used for memory summarization, embedding, and blob metadata retrieval, giving full token-usage transparency (see the sketch below).
- Token and performance tracking helps with cost monitoring and provides transparency for automated operations.
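A sketch of reading the tracking data from a response; ElapsedTime and TokenUsage are referenced throughout this README, while the InputTokens/OutputTokens member names follow the metadata example above and are otherwise assumptions:

var response = await chatService.ChatAsync(request);
Console.WriteLine($"Elapsed: {response.ElapsedTime}");
Console.WriteLine($"Input tokens: {response.TokenUsage?.InputTokens ?? 0}");
Console.WriteLine($"Output tokens: {response.TokenUsage?.OutputTokens ?? 0}");
<br />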
🛠️ Extensible Implementations
- All services are implemented via interfaces, allowing developers to provide custom implementations if desired (see the sketch below).
- Users can omit the default configuration section entirely and inject their own service logic while maintaining the same request/response patterns.
- This design ensures flexibility and extensibility for advanced or specialized use cases.
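For example, a custom summarizer can be swapped in; the method signature follows the Summarization section above, and settable response properties are assumed:

public class PassThroughSummarizationService : ISummarizationService
{
    public Task<SummarizationMemoryResponse> SummarizeMemoryAsync(SummarizeMemoryRequest request, CancellationToken cancellationToken = default)
    {
        // Custom logic goes here; this sketch simply returns the input unchanged.
        return Task.FromResult(new SummarizationMemoryResponse
        {
            QuestionSummarized = request.Question,
            AnswerSummarized = request.Answer
        });
    }
}

services.AddSingleton<ISummarizationService, PassThroughSummarizationService>();
<br />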
💓 Health Checks
- Health checks can be enabled for all services (models) in configuration. When enabled, and the ASP.NET Core health-check middleware is configured in your application, each service periodically issues a health request to its model to verify it is alive. The request simply sends the prompt "ping" and expects a single token back on success (see the sketch below).
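A minimal ASP.NET Core wiring sketch; AddHealthChecks and MapHealthChecks are the standard middleware calls, and the model checks are registered by the library when UseHealthCheck is enabled in configuration:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddVivetOpenAi(); // models with "UseHealthCheck": true register their checks
builder.Services.AddHealthChecks(); // standard ASP.NET Core health-check services

var app = builder.Build();
app.MapHealthChecks("/health"); // exposes the aggregated health endpoint
app.Run();
<br />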
📊 Observability
- All services integrate with the registered ILoggerFactory, ensuring that any logging performed by underlying components is consistent with your application's logging configuration and routed through your preferred providers.
- This integration allows developers to capture logs, metrics, and diagnostic information provided by the underlying services without modifying the library.
- By leveraging the application's logging infrastructure, you get centralized monitoring, performance tracking, and diagnostic insights across all services. <br /> <br />
💡 Other Highlighted Features
📄 Advanced Text Chunking
When storing embeddings in a vector store, the quality of retrieval depends heavily on how the original text is chunked.
This library includes an advanced text-chunking engine that goes far beyond simple paragraph or sentence splitting.
Key Features
- Paragraph-aware splitting: Text is first divided into paragraphs to keep logical boundaries intact.
- Mixed content handling: Embedded JSON or XML blocks are detected and treated as atomic units, preventing them from being broken into invalid fragments.
- Smart sentence detection: Sentences are split carefully, accounting for edge cases like abbreviations (e.g., U.S.), decimals (3.14), and initials (J.R.R.), so chunks don't split in the wrong places.
- Dynamic token-based merging: Sentences are merged into chunks based on configurable min/max token thresholds. This ensures chunks are neither too small (losing context) nor too large (exceeding embedding model limits). Oversized blocks (like large JSON/XML) are preserved as standalone chunks.
- Context-aware retrieval: Neighboring chunks can be retrieved alongside a target chunk, optionally restricted to the same paragraph, providing more coherent context for embeddings and downstream LLM calls.
Benefits
- Produces high-quality, semantically coherent chunks optimized for embeddings.
- Works reliably with mixed structured/unstructured content.
- Reduces duplicate or fragmented embeddings, improving retrieval accuracy.
- Easy to configure with the MinTokens and MaxTokens settings.
<br /><br />
🧹 Context Deduplication
When working with embeddings and vector search, it's common to retrieve highly similar or duplicate results.
This library includes a context deduplication engine that automatically merges or removes near-duplicate results,
ensuring cleaner and more meaningful responses.
Key Features
- Semantic deduplication: Results with highly similar text (similarityThreshold, default 0.90) are merged into a single entry.
- Blob-aware detection: If results reference the same underlying blob (file, document, etc.), they are automatically deduplicated by hash.
- Recency preference: When duplicates are found, the most recent result is kept while older context is merged into it.
- Memory question/answer pair collapsing: Questions and their corresponding answers are recognized and merged together, reducing redundancy.
- Configurable thresholds: Fine-tune the similarity threshold for different use cases (memory recall vs. knowledge retrieval).
Benefits
- Prevents duplicate or repetitive answers in retrieval.
- Keeps question/answer pairs clean and consistent.
- Improves retrieval accuracy by reducing noise in memory and knowledge results.
- Ensures the freshest and most relevant context is always retained. <br /><br /><br />
📎 Appendix
📜 Licensing
Vivet.AI has a dual-license model with a community license for noncommercial use: Polyform Noncommercial 1.0.0. Under this license, Vivet.AI is free for personal/noncommercial use. A commercial license, which includes support, is required for commercial use and can be purchased by sending a request to licensing@vivetonline.com.
You can read the full Vivet.AI License here.
For guidance on setting up and using a commercial license, see Licensing.
<br /><br />
⚙️ Complete Configuration
Most settings have sensible defaults that work out of the box.
For a minimal configuration, you only need to provide the endpoint, the API key, a vector store, and the model names to use.
Minimal Configuration without default values
{
    "Ai": {
        "Endpoint": "<your-endpoint>",
        "ApiKey": "<your-apikey>",
        "Chat": {
            "Model": {
                "Name": "<your-chat-model>"
            }
        },
        "Embedding": {
            "Model": {
                "Name": "<your-embedding-model>"
            },
            "Memory": {
                "VectorStore": {
                    "Provider": "Qdrant",
                    "ApiKey": "secret"
                }
            },
            "Knowledge": {
                "VectorStore": {
                    "Provider": "Qdrant",
                    "ApiKey": "secret"
                }
            }
        },
        "Metadata": {
            "Model": {
                "Name": "<your-chat-model>"
            }
        },
        "Summarization": {
            "Model": {
                "Name": "<your-chat-model>"
            }
        }
    }
}
Full Configuration with default values
{
    "Ai": {
        "Endpoint": "<your-endpoint>",
        "ApiKey": "<your-apikey>",
        "ApiKeyId": null,
        "Chat": {
            "Model": {
                "Name": "<your-chat-model>",
                "UseHealthCheck": true,
                "Parameters": {
                    "MaxOutputTokens": 2048,
                    "Temperature": null,
                    "StopSequences": [],
                    "Seed": null,
                    "PresencePenalty": null,
                    "FrequencyPenalty": null,
                    "RepetitionPenalty": null,
                    "TopP": null,
                    "TopK": null,
                    "ReasoningEffort": null
                }
            },
            "Timeout": "00:01:00",
            "Plugins": {
                "CustomPlugins": [],
                "BuiltInPlugins": {
                    "Memory": {
                        "RetentionInDays": 180,
                        "ContextQueryLimit": 3,
                        "CounterpartContextQueryLimit": 2,
                        "UseQueryDeduplication": true,
                        "DeduplicationMatchScoreThreshold": 0.90
                    },
                    "Knowledge": {
                        "ContextQueryLimit": 3,
                        "UseQueryDeduplication": true,
                        "DeduplicationMatchScoreThreshold": 0.90
                    },
                    "WebSearch": null
                }
            }
        },
        "Embedding": {
            "Model": {
                "Name": "<your-embedding-model>",
                "UseHealthCheck": true
            },
            "VectorSize": 1536,
            "MatchScoreThreashold": 0.86,
            "Timeout": "00:01:00",
            "Memory": {
                "UseExtendedMemoryContext": true,
                "UseAutomaticSummarization": false,
                "UseAutomaticMetadataRetrieval": true,
                "TextChunking": {
                    "MinTokens": 20,
                    "MaxTokens": 60,
                    "NeighborContext": {
                        "ContextWindow": 1,
                        "RestrictToSameParagraph": true
                    }
                },
                "Scoring": {
                    "RecencyDecayStrategy": "Linear",
                    "RecencyBoostMax": 0.1,
                    "RecencyDecayDays": 30,
                    "RecencySigmoidSteepness": 1.0,
                    "ThreadMatchBoost": 0.2
                },
                "VectorStore": {
                    "Provider": "None",
                    "Host": "localhost",
                    "Port": 6334,
                    "Username": null,
                    "ApiKey": null,
                    "Timeout": "00:00:30",
                    "UseHealthCheck": true
                }
            },
            "Knowledge": {
                "UseAutomaticMetadataRetrieval": true,
                "TextChunking": {
                    "MinTokens": 20,
                    "MaxTokens": 60,
                    "NeighborContext": {
                        "ContextWindow": 1,
                        "RestrictToSameParagraph": true
                    }
                },
                "Scoring": {
                    "RecencyDecayStrategy": "Linear",
                    "RecencyBoostMax": 0.1,
                    "RecencyDecayDays": 30,
                    "RecencySigmoidSteepness": 1.0
                },
                "VectorStore": {
                    "Provider": "None",
                    "Host": "localhost",
                    "Port": 6334,
                    "Username": null,
                    "ApiKey": null,
                    "Timeout": "00:00:30",
                    "UseHealthCheck": true
                }
            }
        },
        "Metadata": {
            "Model": {
                "Name": "<your-chat-model>",
                "UseHealthCheck": true,
                "Parameters": {
                    "MaxOutputTokens": 2048,
                    "Temperature": null,
                    "StopSequences": [],
                    "Seed": null,
                    "PresencePenalty": null,
                    "FrequencyPenalty": null,
                    "RepetitionPenalty": null,
                    "TopP": null,
                    "TopK": null,
                    "ReasoningEffort": null
                }
            },
            "SummaryMaxWords": 30,
            "DescriptionMaxWords": 90,
            "Timeout": "00:01:00",
            "Plugins": {
                "CustomPlugins": []
            }
        },
        "Summarization": {
            "Model": {
                "Name": "<your-chat-model>",
                "UseHealthCheck": true,
                "Parameters": {
                    "MaxOutputTokens": 2048,
                    "Temperature": null,
                    "StopSequences": [],
                    "Seed": null,
                    "PresencePenalty": null,
                    "FrequencyPenalty": null,
                    "RepetitionPenalty": null,
                    "TopP": null,
                    "TopK": null,
                    "ReasoningEffort": null
                }
            },
            "SummarizationDegree": 25,
            "Timeout": "00:01:00",
            "Plugins": {
                "CustomPlugins": []
            }
        }
    }
}
Compatible frameworks
.NET: net8.0 and net9.0 are compatible; platform-specific and net10.0 targets are computed.
Dependencies (net8.0 and net9.0)
- FuzzySharp (>= 2.0.2)
- Microsoft.SemanticKernel (>= 1.65.0)
- Microsoft.SemanticKernel.Connectors.Amazon (>= 1.65.0-alpha)
- Microsoft.SemanticKernel.Connectors.AzureAIInference (>= 1.65.0-beta)
- Microsoft.SemanticKernel.Connectors.AzureAISearch (>= 1.65.0-preview)
- Microsoft.SemanticKernel.Connectors.AzureOpenAI (>= 1.65.0)
- Microsoft.SemanticKernel.Connectors.Google (>= 1.65.0-alpha)
- Microsoft.SemanticKernel.Connectors.HuggingFace (>= 1.65.0-preview)
- Microsoft.SemanticKernel.Connectors.Ollama (>= 1.65.0-alpha)
- Microsoft.SemanticKernel.Connectors.OpenAI (>= 1.65.0)
- Microsoft.SemanticKernel.Connectors.PgVector (>= 1.65.0-preview)
- Microsoft.SemanticKernel.Connectors.Pinecone (>= 1.65.0-preview)
- Microsoft.SemanticKernel.Connectors.Qdrant (>= 1.65.0-preview)
- Microsoft.SemanticKernel.Connectors.Weaviate (>= 1.65.0-preview)
- Microsoft.SemanticKernel.Plugins.Web (>= 1.65.0-alpha)
- Newtonsoft.Json (>= 13.0.4-beta1)
- System.Linq.Async (>= 6.0.3)
Release Notes
- Preview release
- Free for non-commercial use; commercial use requires a license.