tryAGI.OpenAI 4.2.1-dev.179 (prerelease)

```shell
dotnet add package tryAGI.OpenAI --version 4.2.1-dev.179
```
# OpenAI

## Features 🔥

- Fully generated C# SDK based on the official OpenAI OpenAPI specification using AutoSDK
- Same-day updates to support new features
- Updated and supported automatically if there are no breaking changes
- Contains a maintained list of constants such as current prices, models, and more
- Source generator to define functions natively through C# interfaces
- All modern .NET features - nullability, trimming, NativeAOT, etc.
- Supports .NET Framework / .NET Standard 2.0
- Supports all OpenAI API endpoints, including completions, chat, embeddings, images, assistants, and more
- Regularly tested for compatibility with popular custom providers like OpenRouter, DeepSeek, Ollama, LM Studio, and many others
- Microsoft.Extensions.AI: `IChatClient` and `IEmbeddingGenerator` support for OpenAI and all custom providers
## Documentation

Examples and documentation can be found here: https://tryagi.github.io/OpenAI/
## Usage

```csharp
using var api = new OpenAiApi("API_KEY");

string response = await api.Chat.CreateChatCompletionAsync(
    messages: ["Generate five random words."],
    model: ModelIdsSharedEnum.Gpt4oMini);
Console.WriteLine(response); // "apple, banana, cherry, date, elderberry"
```

Streaming:

```csharp
var enumerable = api.Chat.CreateChatCompletionAsStreamAsync(
    messages: ["Generate five random words."],
    model: ModelIdsSharedEnum.Gpt4oMini);
await foreach (string response in enumerable)
{
    Console.WriteLine(response);
}
```
It uses three implicit conversions:

- from `string` to `ChatCompletionRequestUserMessage` - it will always be converted to a user message
- from `ChatCompletionResponseMessage` to `string` - it will always contain the first choice's message content
- from `CreateChatCompletionStreamResponse` to `string` - it will always contain the first delta's content

You can still use the full response objects if you need more information; just replace `string response` with `var response`.
## Tools

```csharp
using OpenAI;
using CSharpToJsonSchema;

public enum Unit
{
    Celsius,
    Fahrenheit,
}

public class Weather
{
    public string Location { get; set; } = string.Empty;
    public double Temperature { get; set; }
    public Unit Unit { get; set; }
    public string Description { get; set; } = string.Empty;
}

[GenerateJsonSchema(Strict = true)] // false by default. You can't use parameters with default values in Strict mode.
public interface IWeatherFunctions
{
    [Description("Get the current weather in a given location")]
    public Task<Weather> GetCurrentWeatherAsync(
        [Description("The city and state, e.g. San Francisco, CA")] string location,
        Unit unit,
        CancellationToken cancellationToken = default);
}

public class WeatherService : IWeatherFunctions
{
    public Task<Weather> GetCurrentWeatherAsync(string location, Unit unit = Unit.Celsius, CancellationToken cancellationToken = default)
    {
        return Task.FromResult(new Weather
        {
            Location = location,
            Temperature = 22.0,
            Unit = unit,
            Description = "Sunny",
        });
    }
}
```
```csharp
using var api = new OpenAiApi("API_KEY");

var service = new WeatherService();
var tools = service.AsTools().AsOpenAiTools();

var messages = new List<ChatCompletionRequestMessage>
{
    "You are a helpful weather assistant.".AsSystemMessage(),
    "What is the current temperature in Dubai, UAE in Celsius?".AsUserMessage(),
};
var model = ModelIdsSharedEnum.Gpt4oMini;

var result = await api.Chat.CreateChatCompletionAsync(
    messages,
    model: model,
    tools: tools);
var resultMessage = result.Choices.First().Message;
messages.Add(resultMessage.AsRequestMessage());

foreach (var call in resultMessage.ToolCalls)
{
    var json = await service.CallAsync(
        functionName: call.Function.Name,
        argumentsAsJson: call.Function.Arguments);
    messages.Add(json.AsToolMessage(call.Id));
}

result = await api.Chat.CreateChatCompletionAsync(
    messages,
    model: model,
    tools: tools);
resultMessage = result.Choices.First().Message;
messages.Add(resultMessage.AsRequestMessage());
```
```
> System:
You are a helpful weather assistant.

> User:
What is the current temperature in Dubai, UAE in Celsius?

> Assistant:
call_3sptsiHzKnaxF8bs8BWxPo0B:
GetCurrentWeather({"location":"Dubai, UAE","unit":"celsius"})

> Tool(call_3sptsiHzKnaxF8bs8BWxPo0B):
{"location":"Dubai, UAE","temperature":22,"unit":"celsius","description":"Sunny"}

> Assistant:
The current temperature in Dubai, UAE is 22°C with sunny weather.
```
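Under the hood, `service.CallAsync` simply routes the model's function name and JSON arguments to the matching method. A minimal hand-rolled sketch of that dispatch (the `GetCurrentWeather` handler here is a hypothetical standalone stand-in, not the generated one):

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Map function names to handlers that take the raw JSON arguments and
// return a JSON result, mirroring what the generated CallAsync does.
var handlers = new Dictionary<string, Func<string, string>>
{
    ["GetCurrentWeather"] = argumentsAsJson =>
    {
        using var doc = JsonDocument.Parse(argumentsAsJson);
        var location = doc.RootElement.GetProperty("location").GetString();
        // A fixed fake reading, like the WeatherService above.
        return JsonSerializer.Serialize(new { location, temperature = 22, description = "Sunny" });
    },
};

string Dispatch(string functionName, string argumentsAsJson) =>
    handlers.TryGetValue(functionName, out var handler)
        ? handler(argumentsAsJson)
        : throw new InvalidOperationException($"Unknown function: {functionName}");

var json = Dispatch("GetCurrentWeather", """{"location":"Dubai, UAE","unit":"celsius"}""");
Console.WriteLine(json);
```

The resulting JSON is what gets attached back to the conversation as a tool message.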
## Structured Outputs

```csharp
using OpenAI;

using var api = new OpenAiApi("API_KEY");

var response = await api.Chat.CreateChatCompletionAsAsync<Weather>(
    messages: ["Generate random weather."],
    model: ModelIdsSharedEnum.Gpt4oMini,
    jsonSerializerOptions: new JsonSerializerOptions
    {
        Converters = { new JsonStringEnumConverter() },
    });

// or (if you need the trimmable/NativeAOT version)
var response = await api.Chat.CreateChatCompletionAsAsync(
    jsonTypeInfo: SourceGeneratedContext.Default.Weather,
    messages: ["Generate random weather."],
    model: ModelIdsSharedEnum.Gpt4oMini);

// response.Value1 contains the structured output
// response.Value2 contains the CreateChatCompletionResponse object
```

```
Weather:
Location: San Francisco, CA
Temperature: 65
Unit: Fahrenheit
Description: Partly cloudy with a light breeze and occasional sunshine.

Raw Response:
{"Location":"San Francisco, CA","Temperature":65,"Unit":"Fahrenheit","Description":"Partly cloudy with a light breeze and occasional sunshine."}
```
Additional code for the trimmable/NativeAOT version:

```csharp
[JsonSourceGenerationOptions(Converters = [typeof(JsonStringEnumConverter<Unit>)])]
[JsonSerializable(typeof(Weather))]
public partial class SourceGeneratedContext : JsonSerializerContext;
```
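The source-generated context can be exercised without any API call. This sketch (redeclaring `Weather` and the context so it is self-contained) serializes and deserializes a `Weather` instance through `SourceGeneratedContext`, which is the same reflection-free path the AOT overload uses:

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

var weather = new Weather { Location = "Dubai, UAE", Temperature = 22.0, Unit = Unit.Celsius, Description = "Sunny" };

// Serialize and deserialize through the source-generated metadata,
// so no runtime reflection is needed (trimming/NativeAOT safe).
string json = JsonSerializer.Serialize(weather, SourceGeneratedContext.Default.Weather);
var roundTripped = JsonSerializer.Deserialize(json, SourceGeneratedContext.Default.Weather);

Console.WriteLine(json);
Console.WriteLine(roundTripped!.Unit); // the enum round-trips as a string thanks to the converter

public enum Unit { Celsius, Fahrenheit }

public class Weather
{
    public string Location { get; set; } = string.Empty;
    public double Temperature { get; set; }
    public Unit Unit { get; set; }
    public string Description { get; set; } = string.Empty;
}

[JsonSourceGenerationOptions(Converters = new[] { typeof(JsonStringEnumConverter<Unit>) })]
[JsonSerializable(typeof(Weather))]
public partial class SourceGeneratedContext : JsonSerializerContext;
```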
## Custom providers

```csharp
using OpenAI;

using var api = CustomProviders.GitHubModels("GITHUB_TOKEN");
using var api = CustomProviders.Azure("API_KEY", "ENDPOINT");
using var api = CustomProviders.DeepInfra("API_KEY");
using var api = CustomProviders.Groq("API_KEY");
using var api = CustomProviders.XAi("API_KEY");
using var api = CustomProviders.DeepSeek("API_KEY");
using var api = CustomProviders.Fireworks("API_KEY");
using var api = CustomProviders.OpenRouter("API_KEY");
using var api = CustomProviders.Together("API_KEY");
using var api = CustomProviders.Perplexity("API_KEY");
using var api = CustomProviders.SambaNova("API_KEY");
using var api = CustomProviders.Mistral("API_KEY");
using var api = CustomProviders.Codestral("API_KEY");
using var api = CustomProviders.Cerebras("API_KEY");
using var api = CustomProviders.Cohere("API_KEY");
using var api = CustomProviders.Ollama();
using var api = CustomProviders.LmStudio();
```
## Microsoft.Extensions.AI

The client natively implements `IChatClient` and `IEmbeddingGenerator<string, Embedding<float>>` from Microsoft.Extensions.AI, providing a unified interface across 15+ providers:

```csharp
using OpenAI;
using Microsoft.Extensions.AI;

// Works with OpenAI and all CustomProviders (Azure, DeepSeek, Groq, etc.)
using var client = new OpenAiClient("API_KEY");
// or: using var client = CustomProviders.Groq("API_KEY");

// IChatClient
IChatClient chatClient = client;
var response = await chatClient.GetResponseAsync(
    "Say hello!",
    new ChatOptions { ModelId = "gpt-4o-mini" });
Console.WriteLine(response.Messages[0].Text);

// Streaming
await foreach (var update in chatClient.GetStreamingResponseAsync(
    "Count to 5.",
    new ChatOptions { ModelId = "gpt-4o-mini" }))
{
    Console.Write(string.Concat(update.Contents.OfType<TextContent>().Select(c => c.Text)));
}

// IEmbeddingGenerator
IEmbeddingGenerator<string, Embedding<float>> generator = client;
var embeddings = await generator.GenerateAsync(
    ["Hello, world!"],
    new EmbeddingGenerationOptions { ModelId = "text-embedding-3-small" });
```
## Constants

All `TryGetXxx` methods return `null` if the value is not found.
There are also non-`Try` methods that throw an exception if the value is not found.
```csharp
using OpenAI;

// You can try to parse the enum from a string:
var model = ModelIdsSharedEnumExtensions.ToEnum("gpt-4o") ?? throw new Exception("Invalid model");

// Chat
var chatModel = ModelIdsSharedEnum.Gpt4oMini;
double? chatPriceInUsd = chatModel.TryGetPriceInUsd(
    inputTokens: 500,
    outputTokens: 500);
double? fineTunePriceInUsd = chatModel.TryGetFineTunePriceInUsd(
    trainingTokens: 500,
    inputTokens: 500,
    outputTokens: 500);
int? contextLength = chatModel.TryGetContextLength(); // 128_000
int? outputLength = chatModel.TryGetOutputLength(); // 16_000

// Embeddings
var embeddingModel = CreateEmbeddingRequestModel.TextEmbedding3Small;
int? maxInputTokens = embeddingModel.TryGetMaxInputTokens(); // 8191
double? embeddingPriceInUsd = embeddingModel.TryGetPriceInUsd(tokens: 500);

// Images
double? imagePriceInUsd = CreateImageRequestModel.DallE3.TryGetPriceInUsd(
    size: CreateImageRequestSize.x1024x1024,
    quality: CreateImageRequestQuality.Hd);

// Speech to Text
double? transcriptionPriceInUsd = CreateTranscriptionRequestModel.Whisper1.TryGetPriceInUsd(
    seconds: 60);

// Text to Speech
double? speechPriceInUsd = CreateSpeechRequestModel.Tts1Hd.TryGetPriceInUsd(
    characters: 1000);
```
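As a sanity check on what `TryGetPriceInUsd` computes: a chat request's price is just token counts multiplied by per-token rates. A standalone sketch with hypothetical rates (the real, current rates live in the package's constants):

```csharp
using System;

// Hypothetical per-million-token rates - NOT the package's real constants,
// which TryGetPriceInUsd keeps up to date.
const double inputPerMillionUsd = 0.15;
const double outputPerMillionUsd = 0.60;

double PriceInUsd(int inputTokens, int outputTokens) =>
    inputTokens / 1_000_000.0 * inputPerMillionUsd +
    outputTokens / 1_000_000.0 * outputPerMillionUsd;

double price = PriceInUsd(inputTokens: 500, outputTokens: 500);
Console.WriteLine($"{price:F6} USD"); // 0.000375 USD
```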
## Chat Completion

Send a simple chat completion request.

```csharp
using var client = new OpenAiClient(apiKey);

string response = await client.Chat.CreateChatCompletionAsync(
    new CreateChatCompletionRequest
    {
        Value2 = new CreateChatCompletionRequestVariant2
        {
            Messages = ["Generate five random words."],
            Model = "gpt-4o-mini",
        }
    });
Console.WriteLine(response);
```
## Chat Completion Streaming

Stream a chat completion response token by token.

```csharp
using var client = new OpenAiClient(apiKey);

var enumerable = client.Chat.CreateChatCompletionAsStreamAsync(
    new CreateChatCompletionRequest
    {
        Value2 = new CreateChatCompletionRequestVariant2
        {
            Messages = ["Generate five random words."],
            Model = "gpt-4o-mini",
        }
    });
await foreach (string response in enumerable)
{
    Console.Write(response);
}
```
## Chat With Vision

Send an image to the model for analysis.

```csharp
using var client = new OpenAiClient(apiKey);

CreateChatCompletionResponse response = await client.Chat.CreateChatCompletionAsync(
    new CreateChatCompletionRequest
    {
        Value2 = new CreateChatCompletionRequestVariant2
        {
            Messages = [
                "Please describe the following image.",
                H.Resources.images_dog_and_cat_png.AsBytes().AsUserMessage(mimeType: "image/png"),
            ],
            Model = "gpt-4o-mini",
        }
    });
Console.WriteLine(response.Choices[0].Message.Content);
```
## JSON Response Format

Request a response in JSON format.

```csharp
using var client = new OpenAiClient(apiKey);

string response = await client.Chat.CreateChatCompletionAsync(
    new CreateChatCompletionRequest
    {
        Value2 = new CreateChatCompletionRequestVariant2
        {
            Messages = ["Generate five random words as json."],
            Model = "gpt-4o-mini",
            ResponseFormat = new ResponseFormatJsonObject
            {
                Type = ResponseFormatJsonObjectType.JsonObject,
            },
        }
    });
Console.WriteLine(response);
```
## Structured Outputs

Get structured JSON responses using a C# type as the schema.

```csharp
using var client = new OpenAiClient(apiKey);

var response = await client.Chat.CreateChatCompletionAsAsync<WordsResponse>(
    messages: ["Generate five random words as json."],
    model: "gpt-4o-mini");

Console.WriteLine("Words:");
foreach (var word in response.Value1!.Words)
{
    Console.WriteLine(word);
}
```
## Structured Outputs (AOT)

Get structured JSON responses using a `JsonTypeInfo` for AOT/trimming compatibility.

```csharp
using var client = new OpenAiClient(apiKey);

var response = await client.Chat.CreateChatCompletionAsAsync(
    jsonTypeInfo: SourceGeneratedContext.Default.WordsResponse,
    messages: ["Generate five random words."],
    model: "gpt-4o-mini");

Console.WriteLine("Words:");
foreach (var word in response.Value1!.Words)
{
    Console.WriteLine(word);
}
```
## Embeddings

Create a text embedding vector.

```csharp
using var client = new OpenAiClient(apiKey);

var response = await client.Embeddings.CreateEmbeddingAsync(
    input: "Hello, world",
    model: CreateEmbeddingRequestModel.TextEmbedding3Small);

foreach (var data in response.Data.ElementAt(0).Embedding1)
{
    Console.WriteLine($"{data}");
}
```
## Image Generation

Generate an image from a text prompt.

```csharp
using var client = new OpenAiClient(apiKey);

var response = await client.Images.CreateImageAsync(
    prompt: "a white siamese cat",
    model: CreateImageRequestModel.GptImage1Mini,
    n: 1,
    quality: CreateImageRequestQuality.Low,
    size: CreateImageRequestSize.x1024x1024,
    outputFormat: CreateImageRequestOutputFormat.Png);

var base64 = response.Data?.ElementAt(0).B64Json;
Console.WriteLine($"Generated image ({base64?.Length} base64 chars)");
```
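The `B64Json` payload is just base64-encoded image bytes, so saving it is a decode plus a file write. A standalone sketch (using a tiny placeholder payload in place of the real response so it runs without an API call):

```csharp
using System;
using System.IO;

// In the example above, `base64` would come from response.Data[0].B64Json.
// A four-byte placeholder (the PNG magic prefix) keeps this sketch standalone.
string base64 = Convert.ToBase64String(new byte[] { 0x89, 0x50, 0x4E, 0x47 });

string path = Path.Combine(Path.GetTempPath(), "generated-image.png");
File.WriteAllBytes(path, Convert.FromBase64String(base64));
Console.WriteLine($"Wrote {new FileInfo(path).Length} bytes to {path}");
```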
## Text To Speech

Convert text to speech audio using streaming.

```csharp
using var client = new OpenAiClient(apiKey);

using var memoryStream = new MemoryStream();
await foreach (var streamEvent in client.Audio.CreateSpeechAsync(
    model: CreateSpeechRequestModel.Gpt4oMiniTts,
    input: "Hello! This is a text-to-speech test.",
    voice: (VoiceIdsShared)VoiceIdsSharedEnum.Alloy,
    responseFormat: CreateSpeechRequestResponseFormat.Mp3,
    speed: 1.0,
    streamFormat: CreateSpeechRequestStreamFormat.Sse))
{
    if (streamEvent.SpeechAudioDelta is { } delta)
    {
        byte[] chunk = Convert.FromBase64String(delta.Audio);
        memoryStream.Write(chunk, 0, chunk.Length);
    }
}

byte[] audio = memoryStream.ToArray();
Console.WriteLine($"Generated {audio.Length} bytes of audio.");
```
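The streaming loop above only base64-decodes each `SpeechAudioDelta` and appends it to a buffer. That reassembly logic can be exercised standalone with fake deltas:

```csharp
using System;
using System.IO;
using System.Linq;

byte[] original = Enumerable.Range(0, 10).Select(i => (byte)i).ToArray();

// Simulate the SSE stream: audio arrives as base64-encoded chunks.
string[] deltas =
{
    Convert.ToBase64String(original, 0, 5),
    Convert.ToBase64String(original, 5, 5),
};

using var memoryStream = new MemoryStream();
foreach (var delta in deltas)
{
    byte[] chunk = Convert.FromBase64String(delta);
    memoryStream.Write(chunk, 0, chunk.Length);
}

byte[] audio = memoryStream.ToArray();
Console.WriteLine($"Reassembled {audio.Length} bytes"); // Reassembled 10 bytes
```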
## List Models

List all available models.

```csharp
using var client = new OpenAiClient(apiKey);

var models = await client.Models.ListModelsAsync();
foreach (var model in models.Data)
{
    Console.WriteLine(model.Id);
}
```
## Moderation

Check text for policy violations using the moderation endpoint.

```csharp
using var client = new OpenAiClient(apiKey);

var response = await client.Moderations.CreateModerationAsync(
    input: "Hello, world",
    model: CreateModerationRequestModel.OmniModerationLatest);
Console.WriteLine($"Flagged: {response.Results.First().Flagged}");
```
## MEAI Chat Completion

Use the Microsoft.Extensions.AI `IChatClient` interface for chat completions.

```csharp
using var client = new OpenAiClient(apiKey);
// using Meai = Microsoft.Extensions.AI;
Meai.IChatClient chatClient = client;

var messages = new List<Meai.ChatMessage>
{
    new(Meai.ChatRole.User, "Say hello in exactly 3 words."),
};
var response = await chatClient.GetResponseAsync(
    messages,
    new Meai.ChatOptions { ModelId = "gpt-4o-mini" });
Console.WriteLine(response.Messages[0].Text);
```
## MEAI Chat Streaming

Stream a chat completion using the Microsoft.Extensions.AI `IChatClient` interface.

```csharp
using var client = new OpenAiClient(apiKey);
// using Meai = Microsoft.Extensions.AI;
Meai.IChatClient chatClient = client;

var messages = new List<Meai.ChatMessage>
{
    new(Meai.ChatRole.User, "Count from 1 to 5."),
};
await foreach (var update in chatClient.GetStreamingResponseAsync(
    messages,
    new Meai.ChatOptions { ModelId = "gpt-4o-mini" }))
{
    var text = string.Concat(update.Contents.OfType<Meai.TextContent>().Select(c => c.Text));
    if (!string.IsNullOrEmpty(text))
    {
        Console.Write(text);
    }
}
```
## MEAI Tool Calling

Use function/tool calling via the Microsoft.Extensions.AI `IChatClient` interface.

```csharp
using var client = new OpenAiClient(apiKey);
// using Meai = Microsoft.Extensions.AI;
Meai.IChatClient chatClient = client;

var tool = Meai.AIFunctionFactory.Create(
    (string city) => city switch
    {
        "Paris" => "22C, sunny",
        "London" => "15C, cloudy",
        _ => "Unknown",
    },
    name: "GetWeather",
    description: "Gets the current weather for a city");

var chatOptions = new Meai.ChatOptions
{
    ModelId = "gpt-4o-mini",
    Tools = [tool],
};
var messages = new List<Meai.ChatMessage>
{
    new(Meai.ChatRole.User, "What's the weather in Paris? Respond with the temperature only."),
};

// First turn: get the tool call
var response = await chatClient.GetResponseAsync(
    (IEnumerable<Meai.ChatMessage>)messages, chatOptions);
var functionCall = response.Messages
    .SelectMany(m => m.Contents)
    .OfType<Meai.FunctionCallContent>()
    .First();

// Execute the tool and add the result
var toolResult = await tool.InvokeAsync(
    functionCall.Arguments is { } args
        ? new Meai.AIFunctionArguments(args)
        : null);
messages.AddRange(response.Messages);
messages.Add(new Meai.ChatMessage(Meai.ChatRole.Tool,
    new Meai.AIContent[]
    {
        new Meai.FunctionResultContent(functionCall.CallId, toolResult),
    }));

// Second turn: get the final response
var finalResponse = await chatClient.GetResponseAsync(
    (IEnumerable<Meai.ChatMessage>)messages, chatOptions);
Console.WriteLine(finalResponse.Messages[0].Text);
```
## MEAI Embeddings

Generate embeddings using the Microsoft.Extensions.AI `IEmbeddingGenerator` interface.

```csharp
using var client = new OpenAiClient(apiKey);
// using Meai = Microsoft.Extensions.AI;
Meai.IEmbeddingGenerator<string, Meai.Embedding<float>> generator = client;

var result = await generator.GenerateAsync(
    new List<string> { "Hello, world!" },
    new Meai.EmbeddingGenerationOptions
    {
        ModelId = "text-embedding-3-small",
    });
Console.WriteLine($"Embedding dimension: {result[0].Vector.Length}");
```
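Embedding vectors are typically compared with cosine similarity. A self-contained sketch (the sample vectors are made up; a real comparison would use vectors like `result[0].Vector` and `result[1].Vector` from the generator above):

```csharp
using System;

static double CosineSimilarity(ReadOnlySpan<float> a, ReadOnlySpan<float> b)
{
    // dot(a, b) / (|a| * |b|): 1 means same direction, 0 means orthogonal.
    double dot = 0, normA = 0, normB = 0;
    for (int i = 0; i < a.Length; i++)
    {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
}

float[] reference = { 0.1f, 0.2f, 0.3f };
float[] sameDirection = { 0.2f, 0.4f, 0.6f }; // proportional, so similarity ~1
float[] other = { 0.3f, -0.2f, 0.1f };

Console.WriteLine(CosineSimilarity(reference, sameDirection).ToString("F3"));
Console.WriteLine(CosineSimilarity(reference, other).ToString("F3"));
```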
## Support

- Priority place for bugs: https://github.com/tryAGI/OpenAI/issues
- Priority place for ideas and general questions: https://github.com/tryAGI/OpenAI/discussions
- Discord: https://discord.gg/Ca2xhfBf3v
## Acknowledgments

This project is supported by JetBrains through the Open Source Support Program.
## Compatibility and dependencies

| Product | Compatible frameworks |
|---|---|
| .NET | net10.0 (compatible); net10.0-android, net10.0-browser, net10.0-ios, net10.0-maccatalyst, net10.0-macos, net10.0-tvos, net10.0-windows (computed) |

Dependencies (net10.0):

- CSharpToJsonSchema (>= 3.10.1)
- Microsoft.Extensions.AI.Abstractions (>= 10.4.0)
- Tiktoken (>= 2.2.0)
### NuGet packages (3)

Showing the top 3 NuGet packages that depend on tryAGI.OpenAI:

| Package | Description |
|---|---|
| LangChain.Providers.OpenAI | OpenAI API LLM and Chat model provider. |
| Anyscale | SDK for Anyscale Endpoint that makes it easy and cheap to use LLama 2 |
| LangChain.Serve.OpenAI | LangChain Serve as OpenAI SDK compatible API |

### GitHub repositories (1)

Showing the top 1 popular GitHub repository that depends on tryAGI.OpenAI:

| Repository | Description |
|---|---|
| tryAGI/LangChain | C# implementation of LangChain. We try to be as close to the original as possible in terms of abstractions, but are open to new entities. |