vTSafeKernelInvoker 1.11.14

.NET CLI:
    dotnet add package vTSafeKernelInvoker --version 1.11.14

Package Manager:
    NuGet\Install-Package vTSafeKernelInvoker -Version 1.11.14
Run this in the Visual Studio Package Manager Console, which provides the NuGet module's version of Install-Package.

PackageReference:
    <PackageReference Include="vTSafeKernelInvoker" Version="1.11.14" />
For projects that support PackageReference, copy this XML node into the project file.

Central Package Management (CPM):
For projects that use CPM, add the version to the solution's Directory.Packages.props file and reference the package without a version in the project file.
    Directory.Packages.props:
    <PackageVersion Include="vTSafeKernelInvoker" Version="1.11.14" />
    Project file:
    <PackageReference Include="vTSafeKernelInvoker" />

Paket CLI:
    paket add vTSafeKernelInvoker --version 1.11.14

Script & Interactive:
    #r "nuget: vTSafeKernelInvoker, 1.11.14"
The #r directive can be used in F# Interactive and Polyglot Notebooks; copy it into the interactive tool or the script's source to reference the package.

File-based apps:
    #:package vTSafeKernelInvoker@1.11.14
The #:package directive can be used in C# file-based apps starting with .NET 10 preview 4; place it in a .cs file before any lines of code.

Cake Addin:
    #addin nuget:?package=vTSafeKernelInvoker&version=1.11.14

Cake Tool:
    #tool nuget:?package=vTSafeKernelInvoker&version=1.11.14

vTSafeKernelInvoker

vTSafeKernelInvoker is a lightweight .NET extension for Semantic Kernel that introduces the method InvokePromptFunctionUsingCustomizedKernelAsync. This method helps reduce AI service token usage and cost by avoiding unnecessary AI post-processing of plugin results.

Curious about Semantic Kernel? Explore the official overview here: https://learn.microsoft.com/en-us/semantic-kernel/overview/

Why Use This Package?

When using standard Semantic Kernel methods such as InvokePromptAsync or GetChatMessageContentAsync (via IChatCompletionService), the plugin result is sent back to the AI for additional processing such as formatting, filtering, or styling. This increases output tokens and raises costs, especially for large responses.

In contrast, InvokePromptFunctionUsingCustomizedKernelAsync bypasses this extra step. It returns the plugin output directly to the user, keeping output token usage minimal regardless of the data size. This makes it ideal for performance-critical or cost-sensitive applications.
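The package does not document its internals, but the same direct-return idea can be sketched with standard Semantic Kernel 1.x primitives: disable auto-invocation so the model only selects a function, then run the plugin locally and hand its raw output straight to the caller. A conceptual sketch only (not this package's actual implementation; the OpenAI connector and a built `kernel` with plugins registered, as in the Example Code section below, are assumed):

```csharp
// Conceptual sketch: ask the model only to *choose* a function, then run it
// locally and return the raw result, skipping the second completion round.
using System;
using System.Linq;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var settings = new OpenAIPromptExecutionSettings
{
    // Let the model pick a function, but do NOT auto-invoke it and feed
    // the result back for a second (token-expensive) AI pass.
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(autoInvoke: false)
};

var chat = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory();
history.AddUserMessage("Get top 10 employees");

var reply = await chat.GetChatMessageContentAsync(history, settings, kernel);

// Invoke the selected plugin function ourselves and return the raw output.
var call = reply.Items.OfType<FunctionCallContent>().FirstOrDefault();
if (call is not null)
{
    var functionResult = await call.InvokeAsync(kernel);
    Console.WriteLine(functionResult.Result); // raw plugin output, no AI formatting
}
```

Because the plugin result never re-enters the chat, output tokens are spent only on the function-call decision itself.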

How It Works

Standard Semantic Kernel Flow:

  1. Send prompt to AI
  2. AI plans which plugin to call
  3. Plugin executes and returns data
  4. Plugin result sent back to AI for formatting (Extra tokens!)
  5. AI returns formatted response

vTSafeKernelInvoker Flow:

  1. Send prompt to AI
  2. AI plans which plugin to call
  3. Plugin executes and returns data
  4. Result returned directly to user (No extra tokens!)

Token Usage: Input and output tokens are charged only for deciding which plugin function to call. Once the plugin executes, no additional tokens are consumed regardless of the data size returned.

Note: Token usage for planning will increase if you have more plugins, functions, or parameters in your kernel, as the AI needs more context to make decisions.
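A practical consequence of this note: when cost matters, register only the plugins a given request actually needs, since every registered function signature and description is serialized into the planning payload. A minimal sketch (the plugin type names here are hypothetical):

```csharp
// Keeping the kernel lean keeps the planning payload small: every
// registered plugin's functions, descriptions, and parameters are sent
// to the model as input tokens on each planning call.
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder().AddAzureOpenAIChatCompletion(
    deploymentName: "YOUR_DEPLOYMENT_NAME",
    endpoint: "YOUR_AZURE_ENDPOINT",
    apiKey: "YOUR_API_KEY"
);

builder.Plugins.AddFromType<EmployeePlugin>();   // needed for this prompt
// builder.Plugins.AddFromType<ReportsPlugin>(); // omit plugins this prompt won't use

var kernel = builder.Build();
```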

Key Benefits

  • Lower Costs: Up to 85% reduction in token usage
  • Faster Response: No extra AI processing step
  • Same Functionality: Works with your existing Semantic Kernel plugins
  • Predictable Costs: Tokens are only charged for plugin planning, not for data size returned

Real Cost Comparison (using GPT-3.5-Turbo pricing)

Example: "Get top 10 employees"

Using the kernel's standard InvokePromptAsync method:

  • Input tokens: 5635
  • Output tokens: 733
  • Total tokens: 6368
  • Estimated charge: $0.00392

Using this package's InvokePromptFunctionUsingCustomizedKernelAsync method:

  • Input tokens: 1021
  • Output tokens: 73
  • Total tokens: 1094
  • Estimated charge: $0.00062

Example: "Get top 20 employees"

Using the kernel's standard InvokePromptAsync method:

  • Input tokens: 5635
  • Output tokens: 1462
  • Total tokens: 7097
  • Estimated charge: $0.00501

Using this package's InvokePromptFunctionUsingCustomizedKernelAsync method:

  • Input tokens: 1021
  • Output tokens: 73
  • Total tokens: 1094
  • Estimated charge: $0.00062

Note: with larger datasets, the savings increase dramatically, because the standard flow's output tokens grow with the data while this package's stay constant.

Installation

dotnet add package vTSafeKernelInvoker

Basic Usage

using vT.SafeKernelInvoker;

// Instead of this expensive method:
// var result = await kernel.InvokePromptAsync("Get top 10 employees");

// Use this cost-effective method:
var result = await kernel.InvokePromptFunctionUsingCustomizedKernelAsync(
    "Get top 10 employees"
);

// Result is raw plugin output - no AI formatting, maximum savings!

When Should You Use This?

Perfect for:

  • Data retrieval operations
  • Database queries
  • Report generation
  • API calls that return structured data
  • Cost-sensitive applications
  • Large data responses

Not ideal for:

  • When you need AI to format or style the output
  • Creative writing tasks
  • Simple conversational responses

Getting Started

  1. Install the package
  2. Add using vT.SafeKernelInvoker;
  3. Replace InvokePromptAsync with InvokePromptFunctionUsingCustomizedKernelAsync
  4. Enjoy dramatically lower costs!

Example Code


using vT.SafeKernelInvoker;
using Microsoft.SemanticKernel;
using Microsoft.Extensions.DependencyInjection;

var builder = Kernel.CreateBuilder().AddAzureOpenAIChatCompletion(
    deploymentName: "YOUR_DEPLOYMENT_NAME",
    endpoint: "YOUR_AZURE_ENDPOINT",
    apiKey: "YOUR_API_KEY"
);

builder.Plugins.AddFromType<YourPlugin>();
// builder.Services.AddScoped<IYourService, YourService>(); // Optional: if your plugin uses a service layer

var kernel = builder.Build();

var result = await kernel.InvokePromptFunctionUsingCustomizedKernelAsync(
    "Get top 10 employees"
);

Console.WriteLine(result);

To create a Kernel Plugin class, as shown in the code snippet builder.Plugins.AddFromType<YourPlugin>(), refer to the official documentation here: https://learn.microsoft.com/en-us/semantic-kernel/concepts/plugins/?pivots=programming-language-csharp
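For quick orientation before reading those docs, a minimal plugin class might look like the following (all names here are illustrative, not part of this package). Each [KernelFunction] name, [Description] string, and parameter becomes part of the tool definitions the AI sees when planning which function to call:

```csharp
// Hypothetical plugin for illustration. Descriptions matter: they are what
// the model reads when deciding which function to invoke and with what
// arguments, and they count toward planning input tokens.
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class YourPlugin
{
    [KernelFunction, Description("Returns the top N employees by performance rating.")]
    public string GetTopEmployees(
        [Description("Number of employees to return.")] int count)
    {
        // In a real plugin this would query a database or a service layer.
        return $"Employee list ({count} rows) ...";
    }
}
```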

License

MIT License


Perfect for: Developers who want to optimize AI costs without sacrificing functionality.

Compatible and additional computed target framework versions
.NET: net6.0 and net8.0 are compatible. net7.0, net9.0, and net10.0 were computed, along with the -android, -ios, -maccatalyst, -macos, -tvos, and -windows variants of net6.0 through net10.0 (and the -browser variant of net8.0 through net10.0).

NuGet packages

This package is not used by any NuGet packages.

GitHub repositories

This package is not used by any popular GitHub repositories.

Version    Downloads    Last Updated
1.11.14    214          8/5/2025