MistralSharp 1.0.3
dotnet add package MistralSharp --version 1.0.3
NuGet\Install-Package MistralSharp -Version 1.0.3
<PackageReference Include="MistralSharp" Version="1.0.3" />
paket add MistralSharp --version 1.0.3
#r "nuget: MistralSharp, 1.0.3"
// Install MistralSharp as a Cake Addin
#addin nuget:?package=MistralSharp&version=1.0.3

// Install MistralSharp as a Cake Tool
#tool nuget:?package=MistralSharp&version=1.0.3
MistralSharp
About
MistralSharp is an unofficial .NET SDK for the Mistral AI Platform. Great for building AI-enhanced apps!
Features
- Implements all Mistral AI Platform REST API endpoints.
- Targets .NET Standard 2.0 and .NET 8.
Usage
Start by downloading the NuGet package and importing it into your project.
Check out the Sample project to see an example of how to use the library in a simple console application.
To access the API endpoints, create a new instance of the MistralClient class and pass in your API key:
var mistralClient = new MistralClient(apiKey);
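In practice, the API key should come from configuration rather than a hard-coded literal. A minimal sketch (the `MISTRAL_API_KEY` variable name is a convention chosen here, not something the library requires):

```csharp
using System;
using MistralSharp;

// Read the key from an environment variable so it stays out of source control.
var apiKey = Environment.GetEnvironmentVariable("MISTRAL_API_KEY")
             ?? throw new InvalidOperationException("MISTRAL_API_KEY is not set.");

var mistralClient = new MistralClient(apiKey);
```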
Endpoints
GetAvailableModelsAsync()
This endpoint returns a list of available AI models on the Mistral platform.
var models = await mistralClient.GetAvailableModelsAsync();
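To inspect the result, you can iterate over the returned models. The property names below (`Data`, `Id`) are assumptions mirroring the REST API's JSON response shape; check the SDK's response types (or the Sample project) for the actual names:

```csharp
var models = await mistralClient.GetAvailableModelsAsync();

// Each entry's Id can be used as the model for chat or embedding requests.
foreach (var model in models.Data)
{
    Console.WriteLine(model.Id);
}
```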
ChatAsync()
This method allows you to chat with an AI model of your choice. To start a chat, first create a new ChatRequest
object (note: only Model and Messages are required, the other parameters are optional and will default to the values
specified below):
var chatRequest = new ChatRequest()
{
    // The ID of the model to use. You can use GetAvailableModelsAsync() to get the list of available models.
    Model = ModelType.MistralMedium,

    // Pass a list of messages to the model.
    // The role can be either "user" or "assistant".
    // Content is the message content.
    Messages =
    [
        new Message()
        {
            Role = "user",
            Content = "How can Mistral AI assist programmers?"
        }
    ],

    // The maximum number of tokens to generate in the completion.
    // The token count of your prompt plus max_tokens cannot exceed the model's context length.
    MaxTokens = 64,

    // Default: 0.7
    // The sampling temperature to use, between 0.0 and 2.0.
    // Higher values like 0.8 make the output more random, while lower values like 0.2 make
    // it more focused and deterministic.
    Temperature = 0.7,

    // Default: 1
    // Nucleus sampling: the model considers only the tokens comprising the top_p probability mass,
    // so 0.1 means only the tokens in the top 10% probability mass are considered.
    // Mistral generally recommends altering this or Temperature, but not both.
    TopP = 1,

    // Default: false
    // Whether to stream back partial progress. If set, tokens are sent as data-only server-sent events
    // as they become available, with the stream terminated by a data: [DONE] message. Otherwise, the
    // server holds the request open until the timeout or until completion, and the response contains
    // the full result as JSON.
    Stream = false,

    // Default: false
    // Whether to inject a safety prompt before all conversations.
    SafePrompt = false,

    // Default: null
    // The seed to use for random sampling. If set, repeated calls with the same seed produce
    // deterministic results.
    RandomSeed = null
};
Finally, call the ChatAsync() method and pass the ChatRequest object:
var sampleChat = await mistralClient.ChatAsync(chatRequest);
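The Mistral REST API returns completions under `choices[].message.content`. Assuming the SDK's `ChatResponse` mirrors that shape (the property names here are illustrative; check the SDK's response types for the actual ones), reading the reply might look like:

```csharp
using System;
using System.Linq;

var sampleChat = await mistralClient.ChatAsync(chatRequest);

// Take the first (usually only) completion choice and print its content.
var reply = sampleChat.Choices.First().Message.Content;
Console.WriteLine(reply);
```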
ChatStreamAsync()
Operates the same as ChatAsync() except with support for streaming back partial progress (ChatRequest.Stream set to true). Returns an IAsyncEnumerable<ChatResponse>.
NOTE: This will be implemented in an upcoming release as it's still being worked on.
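Since the method is not yet released, the following is a purely hypothetical sketch of how an IAsyncEnumerable<ChatResponse> would typically be consumed in C# (the chunk property names are guesses based on the streaming REST API's shape):

```csharp
chatRequest.Stream = true;

// await foreach is the idiomatic way to consume an IAsyncEnumerable<T>:
// each ChatResponse would carry a partial piece of the completion.
await foreach (var chunk in mistralClient.ChatStreamAsync(chatRequest))
{
    Console.Write(chunk.Choices.First().Message.Content);
}
```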
CreateEmbeddingsAsync()
The embeddings API allows you to embed sentences and can be used to power a RAG application. To use it, first create a new EmbeddingRequest object:
var embeddings = new EmbeddingRequest()
{
    // The ID of the model to use for this request.
    Model = ModelType.MistralEmbed,

    // The format of the output data.
    EncodingFormat = "float",

    // The list of strings to embed.
    Input = new List<string>()
    {
        "Hello",
        "World"
    }
};
Lastly, pass the EmbeddingRequest object to the CreateEmbeddingsAsync() method:
var embeddedResponse = await mistralClient.CreateEmbeddingsAsync(embeddings);
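Each input string maps to one embedding vector in the response. Assuming the response exposes the vectors under a `Data` collection with an `Embedding` property (names assumed from the REST API's JSON shape; verify against the SDK's response types), they could be consumed like this:

```csharp
var embeddedResponse = await mistralClient.CreateEmbeddingsAsync(embeddings);

// One embedding per input string, in the same order as the Input list.
foreach (var item in embeddedResponse.Data)
{
    Console.WriteLine($"Embedding with {item.Embedding.Count()} components");
}
```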
| Product | Compatible and additional computed target framework versions |
| --- | --- |
| .NET | net8.0 is compatible. Computed: net5.0, net5.0-windows, net6.0, net6.0-android, net6.0-ios, net6.0-maccatalyst, net6.0-macos, net6.0-tvos, net6.0-windows, net7.0, net7.0-android, net7.0-ios, net7.0-maccatalyst, net7.0-macos, net7.0-tvos, net7.0-windows, net8.0-android, net8.0-browser, net8.0-ios, net8.0-maccatalyst, net8.0-macos, net8.0-tvos, net8.0-windows |
| .NET Core | Computed: netcoreapp2.0, netcoreapp2.1, netcoreapp2.2, netcoreapp3.0, netcoreapp3.1 |
| .NET Standard | netstandard2.0 is compatible. Computed: netstandard2.1 |
| .NET Framework | Computed: net461, net462, net463, net47, net471, net472, net48, net481 |
| MonoAndroid | Computed: monoandroid |
| MonoMac | Computed: monomac |
| MonoTouch | Computed: monotouch |
| Tizen | Computed: tizen40, tizen60 |
| Xamarin.iOS | Computed: xamarinios |
| Xamarin.Mac | Computed: xamarinmac |
| Xamarin.TVOS | Computed: xamarintvos |
| Xamarin.WatchOS | Computed: xamarinwatchos |
Dependencies
- .NETStandard 2.0
  - System.Text.Json (>= 8.0.0)
- net8.0
  - No dependencies.
NuGet packages
This package is not used by any NuGet packages.
GitHub repositories
This package is not used by any popular GitHub repositories.
1.0.3 - 2024-01-15
- Renamed parameter `SafeMode` to `SafePrompt` - the Mistral REST API was throwing a 422 since it has been made stricter
  (see https://discord.com/channels/1144547040454508606/1184444810279522374/1195108690353717369).
- Replaced instances of List<> with IEnumerable<> for improved performance.