Universal.OpenAI.Client 4.0.0

Installation

.NET CLI: dotnet add package Universal.OpenAI.Client --version 4.0.0
Package Manager: NuGet\Install-Package Universal.OpenAI.Client -Version 4.0.0
PackageReference: <PackageReference Include="Universal.OpenAI.Client" Version="4.0.0" />
Central package management: <PackageVersion Include="Universal.OpenAI.Client" Version="4.0.0" /> plus <PackageReference Include="Universal.OpenAI.Client" />
Paket CLI: paket add Universal.OpenAI.Client --version 4.0.0
F# Interactive: #r "nuget: Universal.OpenAI.Client, 4.0.0"
File-based apps: #:package Universal.OpenAI.Client@4.0.0
Cake addin: #addin nuget:?package=Universal.OpenAI.Client&version=4.0.0
Cake tool: #tool nuget:?package=Universal.OpenAI.Client&version=4.0.0
Universal.OpenAI.Client
A simple .NET client library for OpenAI's API, supporting commonly used features including Chat Completions, the Responses API, streaming, tool calling, structured outputs, and more.
Quick Start
using Universal.OpenAI.Client;
using Universal.OpenAI.Client.V1.Chat;
using var client = new OpenAIClient("your-api-key");
var response = await client.V1.Chat.CreateCompletionAsync(new CreateCompletionRequest
{
    Model = Models.Gpt4o,
    Messages = [
        new SystemMessage("You are a helpful assistant."),
        new UserMessage("Hello, how are you?")
    ]
});
Console.WriteLine(response.Choices[0].Message.Content);
Features
Chat Completions
Create chat completions with support for various message types:
var response = await client.V1.Chat.CreateCompletionAsync(new CreateCompletionRequest
{
    Model = Models.Gpt4o,
    Messages = [
        new SystemMessage("You are a helpful assistant."),
        new UserMessage("Tell me a joke."),
        new AssistantMessage("Why don't scientists trust atoms? Because they make up everything!"),
        new UserMessage("That's funny! Tell me another one.")
    ]
});
Streaming Responses
Get real-time streaming responses:
await foreach (var chunk in client.V1.Chat.CreateCompletionStreamingAsync(new CreateCompletionRequest
{
    Model = Models.Gpt4o,
    Messages = [new UserMessage("Write a short story.")],
    Stream = true
}))
{
    if (chunk.Choices?[0]?.Delta?.Content != null)
    {
        Console.Write(chunk.Choices[0].Delta.Content);
    }
}
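If you want the complete message rather than console output, you can accumulate the deltas as they arrive. A minimal sketch; the CollectAsync helper and the FakeDeltas demo source are our own illustration, not part of the library:

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;

// Concatenate non-null content deltas (e.g. the chunk.Choices[0].Delta.Content
// values from the streaming loop above) into the full message text.
static async Task<string> CollectAsync(IAsyncEnumerable<string?> deltas)
{
    var sb = new StringBuilder();
    await foreach (var d in deltas)
    {
        if (d != null) sb.Append(d);
    }
    return sb.ToString();
}

// Demo: a fake delta stream standing in for the real streaming response.
static async IAsyncEnumerable<string?> FakeDeltas()
{
    yield return "Hello, ";
    yield return null;   // chunks without content are skipped
    yield return "world!";
    await Task.CompletedTask;
}

Console.WriteLine(await CollectAsync(FakeDeltas())); // prints "Hello, world!"
```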
Structured JSON Output
Enforce JSON responses with custom schemas:
var response = await client.V1.Chat.CreateCompletionAsync(new CreateCompletionRequest
{
    Model = "gpt-4o-2024-08-06",
    Messages = [new UserMessage("Analyze this text for sentiment.")],
    ResponseFormat = ResponseFormat.FromJsonSchema("sentiment_analysis", new JsonSchema
    {
        Type = "object",
        Properties = new Dictionary<string, JsonSchema>
        {
            ["sentiment"] = new JsonSchema
            {
                Type = "string",
                Description = "The sentiment of the text",
                Enum = new[] { "positive", "negative", "neutral" }
            },
            ["confidence"] = new JsonSchema
            {
                Type = "number",
                Description = "Confidence score between 0 and 1"
            }
        },
        Required = new[] { "sentiment", "confidence" }
    })
});
Or use simple JSON object mode:
var response = await client.V1.Chat.CreateCompletionAsync(new CreateCompletionRequest
{
    Model = Models.Gpt4Turbo,
    Messages = [new UserMessage("Return a JSON object with weather data.")],
    ResponseFormat = ResponseFormat.JsonObject
});
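Once the model returns schema-constrained JSON, you can pull the fields out with Newtonsoft.Json (already a dependency of this package). A sketch against the sentiment_analysis schema above; the literal JSON string stands in for response.Choices[0].Message.Content to keep the example self-contained:

```csharp
using System;
using Newtonsoft.Json.Linq;

// In practice this string comes from response.Choices[0].Message.Content;
// a literal keeps the sketch runnable on its own.
var json = "{\"sentiment\":\"positive\",\"confidence\":0.93}";

var parsed = JObject.Parse(json);
var sentiment = (string)parsed["sentiment"];    // "positive" | "negative" | "neutral"
var confidence = (double)parsed["confidence"];  // 0..1 per the schema

Console.WriteLine($"{sentiment} ({confidence})");
```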
Function Calling & Tools
Use tools to extend the model's capabilities:
var response = await client.V1.Chat.CreateCompletionAsync(new CreateCompletionRequest
{
    Model = "gpt-4o-2024-08-06",
    Messages = [new UserMessage("What's the weather like in San Francisco?")],
    Tools = [
        new Tool
        {
            Type = ToolTypes.Function,
            Function = new Function
            {
                Name = "get_weather",
                Description = "Get current weather for a location",
                Parameters = new JsonSchema
                {
                    Type = "object",
                    Properties = new Dictionary<string, JsonSchema>
                    {
                        ["location"] = new JsonSchema
                        {
                            Type = "string",
                            Description = "The city and state"
                        }
                    },
                    Required = new[] { "location" }
                }
            }
        }
    ]
});
// Handle tool calls
if (response.Choices[0].Message is AssistantMessage assistantMsg &&
    assistantMsg.ToolCalls?.Length > 0)
{
    var toolCall = assistantMsg.ToolCalls[0];
    var weatherData = GetWeather(toolCall.Function.Arguments); // Your implementation

    // Continue the conversation with the tool's response
    var followUp = await client.V1.Chat.CreateCompletionAsync(new CreateCompletionRequest
    {
        Model = "gpt-4o-2024-08-06",
        Messages = [
            new UserMessage("What's the weather like in San Francisco?"),
            assistantMsg,
            new ToolMessage(weatherData, toolCall.Id)
        ]
    });
}
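GetWeather above is left to your implementation. A minimal sketch, assuming the tool arguments arrive as the JSON string defined by the get_weather schema and using Newtonsoft.Json (a dependency of this package); the returned weather values are hard-coded placeholders:

```csharp
using System;
using Newtonsoft.Json.Linq;

// Hypothetical tool implementation: parse the arguments JSON the model
// produced and return a JSON string for the ToolMessage content.
// A real implementation would call an actual weather service here.
static string GetWeather(string argumentsJson)
{
    var args = JObject.Parse(argumentsJson);
    var location = (string)args["location"] ?? "unknown";

    return new JObject
    {
        ["location"] = location,
        ["temperature_c"] = 18,   // placeholder value
        ["conditions"] = "sunny"  // placeholder value
    }.ToString(Newtonsoft.Json.Formatting.None);
}

Console.WriteLine(GetWeather("{\"location\":\"San Francisco, CA\"}"));
```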
Responses API
Use OpenAI's unified Responses API for enhanced capabilities:
var response = await client.V1.ResponsesClient.CreateResponseAsync(new CreateResponseRequest
{
    Model = Models.Gpt4o,
    Input = "Tell me about the latest developments in AI."
});

if (response.Output.First() is MessageOutput messageOutput)
{
    var textContent = messageOutput.Content.OfType<OutputTextContent>().First();
    Console.WriteLine(textContent.Text);
}
Embeddings
Generate text embeddings:
var embeddings = await client.V1.Embeddings.CreateEmbeddingsAsync(new CreateEmbeddingsRequest
{
    Model = "text-embedding-3-small",
    Input = "The quick brown fox jumps over the lazy dog."
});
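Embedding vectors are typically compared with cosine similarity. A self-contained helper for that comparison (how you read the vectors off the embeddings response depends on the library's response type):

```csharp
using System;
using System.Collections.Generic;

// Cosine similarity between two equal-length embedding vectors:
// 1.0 means identical direction, 0.0 means orthogonal.
static double CosineSimilarity(IReadOnlyList<float> a, IReadOnlyList<float> b)
{
    if (a.Count != b.Count) throw new ArgumentException("Vector lengths must match.");

    double dot = 0, normA = 0, normB = 0;
    for (int i = 0; i < a.Count; i++)
    {
        dot += (double)a[i] * b[i];
        normA += (double)a[i] * a[i];
        normB += (double)b[i] * b[i];
    }
    return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
}

Console.WriteLine(CosineSimilarity(new float[] { 1, 0 }, new float[] { 0, 1 })); // prints 0
```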
Image Generation
Create images from text prompts:
var images = await client.V1.ImagesClient.CreateImageAsync(new CreateImageRequest
{
    Prompt = "A sunset over the mountains",
    Size = "1024x1024",
    Quality = "standard"
});
Error Handling
The library throws HttpException for API errors:
try
{
    var response = await client.V1.Chat.CreateCompletionAsync(request);
}
catch (HttpException ex)
{
    Console.WriteLine($"API Error: {ex.StatusCode} - {ex.Message}");
}
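Transient failures such as rate limits or server errors are often worth retrying. A hedged sketch of a generic retry helper with exponential backoff; the transient-error predicate is up to you (for example, ex is HttpException with a 429 or 5xx StatusCode), and the demo failing action is our own illustration:

```csharp
using System;
using System.Threading.Tasks;

// Generic retry with exponential backoff. The isTransient predicate decides
// which exceptions are worth retrying; anything else propagates immediately.
static async Task<T> WithRetryAsync<T>(
    Func<Task<T>> action,
    Func<Exception, bool> isTransient,
    int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return await action();
        }
        catch (Exception ex) when (isTransient(ex) && attempt < maxAttempts)
        {
            // 200 ms, 400 ms, 800 ms, ...
            await Task.Delay(TimeSpan.FromMilliseconds(100 * Math.Pow(2, attempt)));
        }
    }
}

// Demo: an action that fails twice, then succeeds.
var attempts = 0;
var result = await WithRetryAsync(
    () => ++attempts < 3
        ? Task.FromException<int>(new InvalidOperationException("transient"))
        : Task.FromResult(42),
    ex => ex is InvalidOperationException);
Console.WriteLine($"{result} after {attempts} attempts"); // prints "42 after 3 attempts"

// Usage with this library might look like:
// var response = await WithRetryAsync(
//     () => client.V1.Chat.CreateCompletionAsync(request),
//     ex => ex is HttpException);
```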
Disposal
The client implements IDisposable:
using var client = new OpenAIClient("your-api-key");
// Client will be automatically disposed
Advanced Usage
Raw Streaming Response
using var httpResponse = await client.V1.Chat.CreateCompletionStreamingRawAsync(request);
// Forward the raw HTTP response stream
Specific Clients
If you only need one API surface, you can construct a feature-specific client directly:
using var chatClient = new ChatClient(apiKey);
| Product | Compatible and additional computed target framework versions |
|---|---|
| .NET | net5.0 was computed. net5.0-windows was computed. net6.0 was computed. net6.0-android was computed. net6.0-ios was computed. net6.0-maccatalyst was computed. net6.0-macos was computed. net6.0-tvos was computed. net6.0-windows was computed. net7.0 was computed. net7.0-android was computed. net7.0-ios was computed. net7.0-maccatalyst was computed. net7.0-macos was computed. net7.0-tvos was computed. net7.0-windows was computed. net8.0 was computed. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed. net9.0 was computed. net9.0-android was computed. net9.0-browser was computed. net9.0-ios was computed. net9.0-maccatalyst was computed. net9.0-macos was computed. net9.0-tvos was computed. net9.0-windows was computed. net10.0 was computed. net10.0-android was computed. net10.0-browser was computed. net10.0-ios was computed. net10.0-maccatalyst was computed. net10.0-macos was computed. net10.0-tvos was computed. net10.0-windows was computed. |
| .NET Core | netcoreapp2.0 was computed. netcoreapp2.1 was computed. netcoreapp2.2 was computed. netcoreapp3.0 was computed. netcoreapp3.1 was computed. |
| .NET Standard | netstandard2.0 is compatible. netstandard2.1 was computed. |
| .NET Framework | net461 was computed. net462 was computed. net463 was computed. net47 was computed. net471 was computed. net472 was computed. net48 was computed. net481 was computed. |
| MonoAndroid | monoandroid was computed. |
| MonoMac | monomac was computed. |
| MonoTouch | monotouch was computed. |
| Tizen | tizen40 was computed. tizen60 was computed. |
| Xamarin.iOS | xamarinios was computed. |
| Xamarin.Mac | xamarinmac was computed. |
| Xamarin.TVOS | xamarintvos was computed. |
| Xamarin.WatchOS | xamarinwatchos was computed. |
Dependencies

.NETStandard 2.0
- Microsoft.Bcl.AsyncInterfaces (>= 9.0.9)
- Newtonsoft.Json (>= 13.0.4)
- Universal.Common.Json (>= 1.5.0)
- Universal.Common.Net.Http (>= 5.1.0)
- Universal.Common.Serialization (>= 2.4.0)
Release Notes

Rewrote to organize into namespaces.
Implemented support for streaming chat completions.
Implemented partial support for the Responses API (no streaming support yet, and not all sub-objects have strongly typed equivalents).
Restored support for image generation and embeddings.
Updated available model constants.