feiyun0112.SemanticKernel.Connectors.OnnxRuntimeGenAI.DirectML
1.0.0
dotnet add package feiyun0112.SemanticKernel.Connectors.OnnxRuntimeGenAI.DirectML --version 1.0.0
NuGet\Install-Package feiyun0112.SemanticKernel.Connectors.OnnxRuntimeGenAI.DirectML -Version 1.0.0
This command is intended to be used within the Package Manager Console in Visual Studio, as it uses the NuGet module's version of Install-Package.
<PackageReference Include="feiyun0112.SemanticKernel.Connectors.OnnxRuntimeGenAI.DirectML" Version="1.0.0" />
For projects that support PackageReference, copy this XML node into the project file to reference the package.
paket add feiyun0112.SemanticKernel.Connectors.OnnxRuntimeGenAI.DirectML --version 1.0.0
The NuGet Team does not provide support for this client. Please contact its maintainers for support.
#r "nuget: feiyun0112.SemanticKernel.Connectors.OnnxRuntimeGenAI.DirectML, 1.0.0"
#r directive can be used in F# Interactive and Polyglot Notebooks. Copy this into the interactive tool or source code of the script to reference the package.
// Install feiyun0112.SemanticKernel.Connectors.OnnxRuntimeGenAI.DirectML as a Cake Addin
#addin nuget:?package=feiyun0112.SemanticKernel.Connectors.OnnxRuntimeGenAI.DirectML&version=1.0.0

// Install feiyun0112.SemanticKernel.Connectors.OnnxRuntimeGenAI.DirectML as a Cake Tool
#tool nuget:?package=feiyun0112.SemanticKernel.Connectors.OnnxRuntimeGenAI.DirectML&version=1.0.0
The NuGet Team does not provide support for this client. Please contact its maintainers for support.
SemanticKernel.Connectors.OnnxRuntimeGenAI
Semantic Kernel connector for ONNX models.
How to use
Prerequisites
You need a downloaded ONNX model, for example Phi-3 Mini-4K-Instruct:
git lfs install
git clone https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx
Code
Create a new console app and add the NuGet package that matches your hardware:
-- for CPU
feiyun0112.SemanticKernel.Connectors.OnnxRuntimeGenAI.CPU
-- for CUDA
feiyun0112.SemanticKernel.Connectors.OnnxRuntimeGenAI.CUDA
-- for DirectML
feiyun0112.SemanticKernel.Connectors.OnnxRuntimeGenAI.DirectML
Then change Program.cs to:
Kernel kernel = Kernel.CreateBuilder()
    .AddOnnxRuntimeGenAIChatCompletion(
        modelPath: @"d:\Phi-3-mini-4k-instruct-onnx\cpu_and_mobile\cpu-int4-rtn-block-32-acc-level-4")
    .Build();

string prompt = "Write a joke";

await foreach (string text in kernel.InvokePromptStreamingAsync<string>(prompt,
    new KernelArguments(new OnnxRuntimeGenAIPromptExecutionSettings() { MaxLength = 2048 })))
{
    Console.Write(text);
}
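If you want the full completion in one piece rather than streaming tokens, the standard Semantic Kernel `InvokePromptAsync` API works with the same connector registration. The following is a minimal sketch based on the streaming sample above; it assumes the same local model path and that the connector's namespace is already imported via your `using` directives:

```csharp
using Microsoft.SemanticKernel;

// Register the ONNX Runtime GenAI chat completion connector, as in the streaming sample.
Kernel kernel = Kernel.CreateBuilder()
    .AddOnnxRuntimeGenAIChatCompletion(
        modelPath: @"d:\Phi-3-mini-4k-instruct-onnx\cpu_and_mobile\cpu-int4-rtn-block-32-acc-level-4")
    .Build();

// InvokePromptAsync buffers the whole response instead of yielding tokens as they arrive.
FunctionResult result = await kernel.InvokePromptAsync(
    "Write a joke",
    new KernelArguments(new OnnxRuntimeGenAIPromptExecutionSettings { MaxLength = 2048 }));

Console.WriteLine(result.GetValue<string>());
```

Streaming is usually preferable for interactive console output; the buffered form is simpler when you need the complete string, for example to post-process or store it.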
How to use
Prerequisites
You need to download the required ONNX model, for example Phi-3 Mini-4K-Instruct:
git lfs install
git clone https://hf-mirror.com/microsoft/Phi-3-mini-4k-instruct-onnx
Sample code
Create a new console app and add the NuGet package that matches your hardware:
-- for CPU
feiyun0112.SemanticKernel.Connectors.OnnxRuntimeGenAI.CPU
-- for CUDA
feiyun0112.SemanticKernel.Connectors.OnnxRuntimeGenAI.CUDA
-- for DirectML
feiyun0112.SemanticKernel.Connectors.OnnxRuntimeGenAI.DirectML
Then, with just a few lines of code, you can build the Kernel and start generating chat completions:
Kernel kernel = Kernel.CreateBuilder()
    .AddOnnxRuntimeGenAIChatCompletion(
        modelPath: @"d:\Phi-3-mini-4k-instruct-onnx\cpu_and_mobile\cpu-int4-rtn-block-32-acc-level-4")
    .Build();

string prompt = "Write a joke";

await foreach (string text in kernel.InvokePromptStreamingAsync<string>(prompt,
    new KernelArguments(new OnnxRuntimeGenAIPromptExecutionSettings() { MaxLength = 2048 })))
{
    Console.Write(text);
}
Product | Compatible and additional computed target framework versions |
---|---|
.NET | net5.0 was computed. net5.0-windows was computed. net6.0 was computed. net6.0-android was computed. net6.0-ios was computed. net6.0-maccatalyst was computed. net6.0-macos was computed. net6.0-tvos was computed. net6.0-windows was computed. net7.0 was computed. net7.0-android was computed. net7.0-ios was computed. net7.0-maccatalyst was computed. net7.0-macos was computed. net7.0-tvos was computed. net7.0-windows was computed. net8.0 was computed. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed. |
.NET Core | netcoreapp3.0 was computed. netcoreapp3.1 was computed. |
.NET Standard | netstandard2.1 is compatible. |
MonoAndroid | monoandroid was computed. |
MonoMac | monomac was computed. |
MonoTouch | monotouch was computed. |
Tizen | tizen60 was computed. |
Xamarin.iOS | xamarinios was computed. |
Xamarin.Mac | xamarinmac was computed. |
Xamarin.TVOS | xamarintvos was computed. |
Xamarin.WatchOS | xamarinwatchos was computed. |
.NETStandard 2.1
- Microsoft.ML.OnnxRuntimeGenAI.DirectML (>= 0.2.0-rc4)
- Microsoft.SemanticKernel.Core (>= 1.10.0)
NuGet packages
This package is not used by any NuGet packages.
GitHub repositories (1)
Showing the top 1 popular GitHub repositories that depend on feiyun0112.SemanticKernel.Connectors.OnnxRuntimeGenAI.DirectML:
Repository | Stars |
---|---|
lindexi/lindexi_gd (code used in the blog) | |
Version | Downloads | Last updated |
---|---|---|
1.0.0 | 202 | 5/9/2024 |