OpenAI-DotNet
8.0.0
See the version list below for details.
dotnet add package OpenAI-DotNet --version 8.0.0
NuGet\Install-Package OpenAI-DotNet -Version 8.0.0
<PackageReference Include="OpenAI-DotNet" Version="8.0.0" />
paket add OpenAI-DotNet --version 8.0.0
#r "nuget: OpenAI-DotNet, 8.0.0"
// Install OpenAI-DotNet as a Cake Addin
#addin nuget:?package=OpenAI-DotNet&version=8.0.0

// Install OpenAI-DotNet as a Cake Tool
#tool nuget:?package=OpenAI-DotNet&version=8.0.0
OpenAI-DotNet
A simple C# .NET client library for OpenAI to use through their RESTful API. Independently developed, this is not an official library and I am not affiliated with OpenAI. An OpenAI API account is required.
Forked from OpenAI-API-dotnet. More context on Roger Pincombe's blog.
This repository is available to transfer to the OpenAI organization if they so choose to accept it.
Requirements
- This library targets .NET 6.0 and above.
- It should work across console apps, WinForms, WPF, ASP.NET, etc.
- It should also work across Windows, Linux, and macOS.
Getting started
Install from NuGet
Install the OpenAI-DotNet package from NuGet. Here's how via command line:
Install-Package OpenAI-DotNet
Looking to use OpenAI-DotNet in the Unity Game Engine? Check out our unity package on OpenUPM.
Documentation
Check out our new api docs!
https://rageagainstthepixel.github.io/OpenAI-DotNet 🆕
Table of Contents
- Authentication 🆕 ⚠️ 🚧
- OpenAIClient
- Azure OpenAI
- OpenAI API Proxy
- Models
- Assistants 🆕 ⚠️ 🚧
- List Assistants
- Create Assistant
- Retrieve Assistant
- Modify Assistant
- Delete Assistant
- Assistant Streaming 🆕
- Threads 🆕 ⚠️ 🚧
- Vector Stores 🆕
- Chat
- Audio
- Images ⚠️ 🚧
- Files
- Fine Tuning
- Batches 🆕
- Embeddings
- Moderations
Authentication
There are 3 ways to provide your API keys, in order of precedence:
[!WARNING] We recommend using environment variables to load the API key instead of having it hard coded in your source. Hard coding the key is not recommended for production; use it only for accepting user credentials, local testing, and quick start scenarios.
- Pass keys directly with constructor ⚠️
- Load key from configuration file
- Use System Environment Variables
You use the OpenAIAuthentication when you initialize the API as shown:
Pass keys directly with constructor
[!WARNING] We recommend using environment variables to load the API key instead of having it hard coded in your source. Hard coding the key is not recommended for production; use it only for accepting user credentials, local testing, and quick start scenarios.
using var api = new OpenAIClient("sk-apiKey");
Or create an OpenAIAuthentication object manually:
using var api = new OpenAIClient(new OpenAIAuthentication("sk-apiKey", "org-yourOrganizationId", "proj_yourProjectId"));
Load key from configuration file
Attempts to load api keys from a configuration file, by default .openai in the current directory, optionally traversing up the directory tree or looking in the user's home directory. To create a configuration file, create a new text file named .openai containing your credentials in one of the following formats:
[!NOTE] Organization and project id entries are optional.
Json format
{
"apiKey": "sk-aaaabbbbbccccddddd",
"organizationId": "org-yourOrganizationId",
"projectId": "proj_yourProjectId"
}
Deprecated format
OPENAI_API_KEY=sk-aaaabbbbbccccddddd
OPENAI_ORGANIZATION_ID=org-yourOrganizationId
OPENAI_PROJECT_ID=proj_yourProjectId
You can also load the configuration file directly from a known path by calling static methods in OpenAIAuthentication:
- Loads the default .openai config in the specified directory:

using var api = new OpenAIClient(OpenAIAuthentication.LoadFromDirectory("path/to/your/directory"));

- Loads the configuration file from a specific path. The file does not need to be named .openai as long as it conforms to the json format:

using var api = new OpenAIClient(OpenAIAuthentication.LoadFromPath("path/to/your/file.json"));
Use System Environment Variables
Use your system's environment variables to specify an api key and organization:

- Use OPENAI_API_KEY for your api key.
- Use OPENAI_ORGANIZATION_ID to specify an organization.
- Use OPENAI_PROJECT_ID to specify a project.
using var api = new OpenAIClient(OpenAIAuthentication.LoadFromEnv());
Handling OpenAIClient and HttpClient Lifecycle
OpenAIClient implements IDisposable to manage the lifecycle of the resources it uses, including HttpClient. When you initialize OpenAIClient, it will create an internal HttpClient instance if one is not provided. This internal HttpClient is disposed of when OpenAIClient is disposed of. If you provide an external HttpClient instance to OpenAIClient, you are responsible for managing its disposal.

- If OpenAIClient creates its own HttpClient, it will also take care of disposing it when you dispose OpenAIClient.
- If an external HttpClient is passed to OpenAIClient, it will not be disposed of by OpenAIClient. You must manage the disposal of the HttpClient yourself.

Be sure to dispose of OpenAIClient appropriately to release resources promptly and to prevent potential memory or resource leaks in your application.

Typical usage with an internal HttpClient:
using var api = new OpenAIClient();
Custom HttpClient (which you must dispose of yourself):
using var customHttpClient = new HttpClient();
// set custom http client properties here
using var api = new OpenAIClient(client: customHttpClient);
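If you keep a single client alive for the lifetime of your app, you may also want to configure the underlying handler per Microsoft's HttpClient guidance (see the 6.8.1 release notes below); a minimal sketch using standard .NET APIs:

var handler = new SocketsHttpHandler
{
    // Recycle pooled connections periodically so DNS changes are picked up.
    PooledConnectionLifetime = TimeSpan.FromMinutes(15)
};
using var customHttpClient = new HttpClient(handler); // HttpClient disposes the handler it owns
using var api = new OpenAIClient(client: customHttpClient);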
Azure OpenAI
You can also choose to use Microsoft's Azure OpenAI deployments. You can find the required information in the Azure Playground by clicking the View Code button, which reveals a URL like this:
https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/chat/completions?api-version={api-version}
- your-resource-name - The name of your Azure OpenAI Resource.
- deployment-id - The deployment name you chose when you deployed the model.
- api-version - The API version to use for this operation. This follows the YYYY-MM-DD format.
To set up the client to use your deployment, you'll need to pass OpenAIClientSettings into the client constructor.
var auth = new OpenAIAuthentication("sk-apiKey");
var settings = new OpenAIClientSettings(resourceName: "your-resource-name", deploymentId: "deployment-id", apiVersion: "api-version");
using var api = new OpenAIClient(auth, settings);
Azure Active Directory Authentication
Authenticate with MSAL as usual and get an access token, then use the access token when creating your OpenAIAuthentication. Be sure to set useActiveDirectoryAuthentication to true when creating your OpenAIClientSettings.
Tutorial: Desktop app that calls web APIs: Acquire a token
// get your access token using any of the MSAL methods
var accessToken = result.AccessToken;
var auth = new OpenAIAuthentication(accessToken);
var settings = new OpenAIClientSettings(resourceName: "your-resource", deploymentId: "deployment-id", apiVersion: "api-version", useActiveDirectoryAuthentication: true);
using var api = new OpenAIClient(auth, settings);
OpenAI API Proxy
Using either the OpenAI-DotNet or com.openai.unity packages directly in your front-end app may expose your API keys and other sensitive information. To mitigate this risk, it is recommended to set up an intermediate API that makes requests to OpenAI on behalf of your front-end app. This library can be utilized for both front-end and intermediary host configurations, ensuring secure communication with the OpenAI API.
Front End Example
In the front end example, you will need to securely authenticate your users using your preferred OAuth provider. Once the user is authenticated, exchange your custom auth token with your API key on the backend.
Follow these steps:
- Set up a new project using either the OpenAI-DotNet or com.openai.unity packages.
- Authenticate users with your OAuth provider.
- After successful authentication, create a new OpenAIAuthentication object and pass in the custom token with the prefix sess-.
- Create a new OpenAIClientSettings object and specify the domain where your intermediate API is located.
- Pass your new auth and settings objects to the OpenAIClient constructor when you create the client instance.
Here's an example of how to set up the front end:
var authToken = await LoginAsync();
var auth = new OpenAIAuthentication($"sess-{authToken}");
var settings = new OpenAIClientSettings(domain: "api.your-custom-domain.com");
using var api = new OpenAIClient(auth, settings);
This setup allows your front end application to securely communicate with your backend that will be using the OpenAI-DotNet-Proxy, which then forwards requests to the OpenAI API. This ensures that your OpenAI API keys and other sensitive information remain secure throughout the process.
Back End Example
In this example, we demonstrate how to set up and use OpenAIProxyStartup
in a new ASP.NET Core web app. The proxy server will handle authentication and forward requests to the OpenAI API, ensuring that your API keys and other sensitive information remain secure.
- Create a new ASP.NET Core minimal web API project.
- Add the OpenAI-DotNet-Proxy nuget package to your project.
  - Powershell install: Install-Package OpenAI-DotNet-Proxy
  - Manually editing .csproj: <PackageReference Include="OpenAI-DotNet-Proxy" />
- Create a new class that inherits from AbstractAuthenticationFilter and override the ValidateAuthentication method. This will implement the IAuthenticationFilter that you will use to check user session tokens against your internal server.
- In Program.cs, create a new proxy web application by calling the OpenAIProxyStartup.CreateWebApplication method, passing your custom AuthenticationFilter as a type argument.
- Create OpenAIAuthentication and OpenAIClientSettings as you would normally, with your API keys, org id, or Azure settings.
public partial class Program
{
private class AuthenticationFilter : AbstractAuthenticationFilter
{
// NOTE: TestUserToken is a placeholder; substitute your own custom issued token validation.
private const string TestUserToken = "sess-aaaabbbbbccccddddd";
public override void ValidateAuthentication(IHeaderDictionary request)
{
// You will need to implement your own class to properly test
// custom issued tokens you've setup for your end users.
if (!request.Authorization.ToString().Contains(TestUserToken))
{
throw new AuthenticationException("User is not authorized");
}
}
public override async Task ValidateAuthenticationAsync(IHeaderDictionary request)
{
await Task.CompletedTask; // remote resource call
// You will need to implement your own class to properly test
// custom issued tokens you've setup for your end users.
if (!request.Authorization.ToString().Contains(TestUserToken))
{
throw new AuthenticationException("User is not authorized");
}
}
}
public static void Main(string[] args)
{
var auth = OpenAIAuthentication.LoadFromEnv();
var settings = new OpenAIClientSettings(/* your custom settings if using Azure OpenAI */);
using var openAIClient = new OpenAIClient(auth, settings);
OpenAIProxyStartup.CreateWebApplication<AuthenticationFilter>(args, openAIClient).Run();
}
}
Once you have set up your proxy server, your end users can now make authenticated requests to your proxy api instead of directly to the OpenAI API. The proxy server will handle authentication and forward requests to the OpenAI API, ensuring that your API keys and other sensitive information remain secure.
Models
List and describe the various models available in the API. You can refer to the Models documentation to understand what models are available and the differences between them.
Also checkout model endpoint compatibility to understand which models work with which endpoints.
To specify a custom model not pre-defined in this library:
var model = new Model("model-id");
The Models API is accessed via OpenAIClient.ModelsEndpoint
List models
Lists the currently available models, and provides basic information about each one such as the owner and availability.
using var api = new OpenAIClient();
var models = await api.ModelsEndpoint.GetModelsAsync();
foreach (var model in models)
{
Console.WriteLine(model.ToString());
}
Retrieve model
Retrieves a model instance, providing basic information about the model such as the owner and permissions.
using var api = new OpenAIClient();
var model = await api.ModelsEndpoint.GetModelDetailsAsync("gpt-4o");
Console.WriteLine(model.ToString());
Delete Fine Tuned Model
Delete a fine-tuned model. You must have the Owner role in your organization.
using var api = new OpenAIClient();
var isDeleted = await api.ModelsEndpoint.DeleteFineTuneModelAsync("your-fine-tuned-model");
Assert.IsTrue(isDeleted);
Assistants
[!WARNING] Beta Feature. API subject to breaking changes.
Build assistants that can call models and use tools to perform tasks.
The Assistants API is accessed via OpenAIClient.AssistantsEndpoint
List Assistants
Returns a list of assistants.
using var api = new OpenAIClient();
var assistantsList = await api.AssistantsEndpoint.ListAssistantsAsync();
foreach (var assistant in assistantsList.Items)
{
Console.WriteLine($"{assistant} -> {assistant.CreatedAt}");
}
Create Assistant
Create an assistant with a model and instructions.
using var api = new OpenAIClient();
var request = new CreateAssistantRequest(Model.GPT4o);
var assistant = await api.AssistantsEndpoint.CreateAssistantAsync(request);
Retrieve Assistant
Retrieves an assistant.
using var api = new OpenAIClient();
var assistant = await api.AssistantsEndpoint.RetrieveAssistantAsync("assistant-id");
Console.WriteLine($"{assistant} -> {assistant.CreatedAt}");
Modify Assistant
Modifies an assistant.
using var api = new OpenAIClient();
var createRequest = new CreateAssistantRequest(Model.GPT4_Turbo);
var assistant = await api.AssistantsEndpoint.CreateAssistantAsync(createRequest);
var modifyRequest = new CreateAssistantRequest(Model.GPT4o);
var modifiedAssistant = await api.AssistantsEndpoint.ModifyAssistantAsync(assistant.Id, modifyRequest);
// OR AssistantExtension for easier use!
var modifiedAssistantEx = await assistant.ModifyAsync(modifyRequest);
Delete Assistant
Delete an assistant.
using var api = new OpenAIClient();
var isDeleted = await api.AssistantsEndpoint.DeleteAssistantAsync("assistant-id");
// OR AssistantExtension for easier use!
var isDeleted = await assistant.DeleteAsync();
Assert.IsTrue(isDeleted);
Assistant Streaming
[!NOTE] Assistant stream events can be easily added to existing thread calls by passing an Action<IServerSentEvent> streamEventHandler callback to any existing method that supports streaming.
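For example, a minimal sketch that attaches a handler to a run created on an existing thread (the assistant and thread objects are assumed to have been created as shown in the Threads examples below):

// assumes an assistant and thread created as shown in the Threads examples below
var run = await thread.CreateRunAsync(assistant, streamEvent =>
{
    Console.WriteLine(streamEvent.ToJsonString());
});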
Threads
Create Threads that Assistants can interact with.
The Threads API is accessed via OpenAIClient.ThreadsEndpoint
Create Thread
Create a thread.
using var api = new OpenAIClient();
var thread = await api.ThreadsEndpoint.CreateThreadAsync();
Console.WriteLine($"Create thread {thread.Id} -> {thread.CreatedAt}");
Create Thread and Run
Create a thread and run it in one request.
See also: Thread Runs
using var api = new OpenAIClient();
var assistant = await api.AssistantsEndpoint.CreateAssistantAsync(
new CreateAssistantRequest(
name: "Math Tutor",
instructions: "You are a personal math tutor. Answer questions briefly, in a sentence or less.",
model: Model.GPT4o));
var messages = new List<Message> { "I need to solve the equation `3x + 11 = 14`. Can you help me?" };
var threadRequest = new CreateThreadRequest(messages);
var run = await assistant.CreateThreadAndRunAsync(threadRequest);
Console.WriteLine($"Created thread and run: {run.ThreadId} -> {run.Id} -> {run.CreatedAt}");
Create Thread and Run Streaming
Create a thread and run it in one request while streaming events.
using var api = new OpenAIClient();
var tools = new List<Tool>
{
Tool.GetOrCreateTool(typeof(WeatherService), nameof(WeatherService.GetCurrentWeatherAsync))
};
var assistantRequest = new CreateAssistantRequest(tools: tools, instructions: "You are a helpful weather assistant. Use the appropriate unit based on geographical location.");
var assistant = await api.AssistantsEndpoint.CreateAssistantAsync(assistantRequest);
ThreadResponse thread = null;
async void StreamEventHandler(IServerSentEvent streamEvent)
{
switch (streamEvent)
{
case ThreadResponse threadResponse:
thread = threadResponse;
break;
case RunResponse runResponse:
if (runResponse.Status == RunStatus.RequiresAction)
{
var toolOutputs = await assistant.GetToolOutputsAsync(runResponse);
foreach (var toolOutput in toolOutputs)
{
Console.WriteLine($"Tool Output: {toolOutput}");
}
await runResponse.SubmitToolOutputsAsync(toolOutputs, StreamEventHandler);
}
break;
default:
Console.WriteLine(streamEvent.ToJsonString());
break;
}
}
var run = await assistant.CreateThreadAndRunAsync("I'm in Kuala-Lumpur, please tell me what's the temperature now?", StreamEventHandler);
run = await run.WaitForStatusChangeAsync();
var messages = await thread.ListMessagesAsync();
foreach (var response in messages.Items.Reverse())
{
Console.WriteLine($"{response.Role}: {response.PrintContent()}");
}
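The WeatherService referenced in this example (and in the tool examples below) is not part of the library; a minimal sketch of what such a class might look like, assuming the FunctionAttribute accepts a description string (exact attribute signatures may differ, check the api docs):

public static class WeatherService
{
    [Function("Gets the current weather for a location.")]
    public static async Task<string> GetCurrentWeatherAsync(string location, string unit = "celsius")
    {
        // Stand-in for a real weather service call.
        await Task.CompletedTask;
        return $"The current weather in {location} is 20 degrees {unit}.";
    }
}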
Retrieve Thread
Retrieves a thread.
using var api = new OpenAIClient();
var thread = await api.ThreadsEndpoint.RetrieveThreadAsync("thread-id");
// OR if you simply wish to get the latest state of a thread
thread = await thread.UpdateAsync();
Console.WriteLine($"Retrieve thread {thread.Id} -> {thread.CreatedAt}");
Modify Thread
Modifies a thread.
Note: Only the metadata can be modified.
using var api = new OpenAIClient();
var thread = await api.ThreadsEndpoint.CreateThreadAsync();
var metadata = new Dictionary<string, string>
{
{ "key", "custom thread metadata" }
};
thread = await api.ThreadsEndpoint.ModifyThreadAsync(thread.Id, metadata);
// OR use extension method for convenience!
thread = await thread.ModifyAsync(metadata);
Console.WriteLine($"Modify thread {thread.Id} -> {thread.Metadata["key"]}");
Delete Thread
Delete a thread.
using var api = new OpenAIClient();
var isDeleted = await api.ThreadsEndpoint.DeleteThreadAsync("thread-id");
// OR use extension method for convenience!
var isDeleted = await thread.DeleteAsync();
Assert.IsTrue(isDeleted);
Thread Messages
Create messages within threads.
List Thread Messages
Returns a list of messages for a given thread.
using var api = new OpenAIClient();
var messageList = await api.ThreadsEndpoint.ListMessagesAsync("thread-id");
// OR use extension method for convenience!
var messageList = await thread.ListMessagesAsync();
foreach (var message in messageList.Items)
{
Console.WriteLine($"{message.Id}: {message.Role}: {message.PrintContent()}");
}
Create Thread Message
Create a message.
using var api = new OpenAIClient();
var thread = await api.ThreadsEndpoint.CreateThreadAsync();
var request = new CreateMessageRequest("Hello world!");
var message = await api.ThreadsEndpoint.CreateMessageAsync(thread.Id, request);
// OR use extension method for convenience!
var message = await thread.CreateMessageAsync("Hello World!");
Console.WriteLine($"{message.Id}: {message.Role}: {message.PrintContent()}");
Retrieve Thread Message
Retrieve a message.
using var api = new OpenAIClient();
var message = await api.ThreadsEndpoint.RetrieveMessageAsync("thread-id", "message-id");
// OR use extension methods for convenience!
var message = await thread.RetrieveMessageAsync("message-id");
var message = await message.UpdateAsync();
Console.WriteLine($"{message.Id}: {message.Role}: {message.PrintContent()}");
Modify Thread Message
Modify a message.
Note: Only the message metadata can be modified.
using var api = new OpenAIClient();
var metadata = new Dictionary<string, string>
{
{ "key", "custom message metadata" }
};
var message = await api.ThreadsEndpoint.ModifyMessageAsync("thread-id", "message-id", metadata);
// OR use extension method for convenience!
var message = await message.ModifyAsync(metadata);
Console.WriteLine($"Modify message metadata: {message.Id} -> {message.Metadata["key"]}");
Thread Runs
Represents an execution run on a thread.
List Thread Runs
Returns a list of runs belonging to a thread.
using var api = new OpenAIClient();
var runList = await api.ThreadsEndpoint.ListRunsAsync("thread-id");
// OR use extension method for convenience!
var runList = await thread.ListRunsAsync();
foreach (var run in runList.Items)
{
Console.WriteLine($"[{run.Id}] {run.Status} | {run.CreatedAt}");
}
Create Thread Run
Create a run.
using var api = new OpenAIClient();
var assistant = await api.AssistantsEndpoint.CreateAssistantAsync(
new CreateAssistantRequest(
name: "Math Tutor",
instructions: "You are a personal math tutor. Answer questions briefly, in a sentence or less.",
model: Model.GPT4o));
var thread = await api.ThreadsEndpoint.CreateThreadAsync();
var message = await thread.CreateMessageAsync("I need to solve the equation `3x + 11 = 14`. Can you help me?");
var run = await thread.CreateRunAsync(assistant);
Console.WriteLine($"[{run.Id}] {run.Status} | {run.CreatedAt}");
Create Thread Run Streaming
Create a run and stream the events.
using var api = new OpenAIClient();
var assistant = await api.AssistantsEndpoint.CreateAssistantAsync(
new CreateAssistantRequest(
name: "Math Tutor",
instructions: "You are a personal math tutor. Answer questions briefly, in a sentence or less. Your responses should be formatted in JSON.",
model: Model.GPT4o,
responseFormat: ChatResponseFormat.Json));
var thread = await api.ThreadsEndpoint.CreateThreadAsync();
var message = await thread.CreateMessageAsync("I need to solve the equation `3x + 11 = 14`. Can you help me?");
var run = await thread.CreateRunAsync(assistant, streamEvent =>
{
Console.WriteLine(streamEvent.ToJsonString());
});
var messages = await thread.ListMessagesAsync();
foreach (var response in messages.Items.Reverse())
{
Console.WriteLine($"{response.Role}: {response.PrintContent()}");
}
Retrieve Thread Run
Retrieves a run.
using var api = new OpenAIClient();
var run = await api.ThreadsEndpoint.RetrieveRunAsync("thread-id", "run-id");
// OR use extension method for convenience!
var run = await thread.RetrieveRunAsync("run-id");
var run = await run.UpdateAsync();
Console.WriteLine($"[{run.Id}] {run.Status} | {run.CreatedAt}");
Modify Thread Run
Modifies a run.
Note: Only the metadata can be modified.
using var api = new OpenAIClient();
var metadata = new Dictionary<string, string>
{
{ "key", "custom run metadata" }
};
var run = await api.ThreadsEndpoint.ModifyRunAsync("thread-id", "run-id", metadata);
// OR use extension method for convenience!
var run = await run.ModifyAsync(metadata);
Console.WriteLine($"Modify run {run.Id} -> {run.Metadata["key"]}");
Thread Submit Tool Outputs to Run
When a run has the status requires_action and required_action.type is submit_tool_outputs, this endpoint can be used to submit the outputs from the tool calls once they're all completed. All outputs must be submitted in a single request.
[!NOTE] See Create Thread and Run Streaming example on how to stream tool output events.
using var api = new OpenAIClient();
var tools = new List<Tool>
{
// Use a predefined tool
Tool.Retrieval, Tool.CodeInterpreter,
// Or create a tool from a type and the name of the method you want to use for function calling
Tool.GetOrCreateTool(typeof(WeatherService), nameof(WeatherService.GetCurrentWeatherAsync)),
// Pass in an instance of an object to call a method on it
Tool.GetOrCreateTool(api.ImagesEndPoint, nameof(ImagesEndpoint.GenerateImageAsync)),
// Define Func<,> callbacks
Tool.FromFunc("name_of_func", () => { /* callback function */ }),
Tool.FromFunc<string, string, string>("func_with_multiple_params", (arg1, arg2) => { /* logic that calculates return value */ return $"{arg1} {arg2}"; })
};
var assistantRequest = new CreateAssistantRequest(tools: tools, instructions: "You are a helpful weather assistant. Use the appropriate unit based on geographical location.");
var testAssistant = await api.AssistantsEndpoint.CreateAssistantAsync(assistantRequest);
var run = await testAssistant.CreateThreadAndRunAsync("I'm in Kuala-Lumpur, please tell me what's the temperature now?");
// waiting while run is Queued and InProgress
run = await run.WaitForStatusChangeAsync();
// Invoke all of the tool call functions and return the tool outputs.
var toolOutputs = await testAssistant.GetToolOutputsAsync(run.RequiredAction.SubmitToolOutputs.ToolCalls);
foreach (var toolOutput in toolOutputs)
{
Console.WriteLine($"tool call output: {toolOutput.Output}");
}
// submit the tool outputs
run = await run.SubmitToolOutputsAsync(toolOutputs);
// waiting while run in Queued and InProgress
run = await run.WaitForStatusChangeAsync();
var messages = await run.ListMessagesAsync();
foreach (var message in messages.Items.OrderBy(response => response.CreatedAt))
{
Console.WriteLine($"{message.Role}: {message.PrintContent()}");
}
List Thread Run Steps
Returns a list of run steps belonging to a run.
using var api = new OpenAIClient();
var runStepList = await api.ThreadsEndpoint.ListRunStepsAsync("thread-id", "run-id");
// OR use extension method for convenience!
var runStepList = await run.ListRunStepsAsync();
foreach (var runStep in runStepList.Items)
{
Console.WriteLine($"[{runStep.Id}] {runStep.Status} {runStep.CreatedAt} -> {runStep.ExpiresAt}");
}
Retrieve Thread Run Step
Retrieves a run step.
using var api = new OpenAIClient();
var runStep = await api.ThreadsEndpoint.RetrieveRunStepAsync("thread-id", "run-id", "step-id");
// OR use extension method for convenience!
var runStep = await run.RetrieveRunStepAsync("step-id");
var runStep = await runStep.UpdateAsync();
Console.WriteLine($"[{runStep.Id}] {runStep.Status} {runStep.CreatedAt} -> {runStep.ExpiresAt}");
Cancel Thread Run
Cancels a run that is in_progress.
using var api = new OpenAIClient();
var isCancelled = await api.ThreadsEndpoint.CancelRunAsync("thread-id", "run-id");
// OR use extension method for convenience!
var isCancelled = await run.CancelAsync();
Assert.IsTrue(isCancelled);
Vector Stores
Vector stores are used to store files for use by the file_search tool.
The Vector Stores API is accessed via OpenAIClient.VectorStoresEndpoint
List Vector Stores
Returns a list of vector stores.
using var api = new OpenAIClient();
var vectorStores = await api.VectorStoresEndpoint.ListVectorStoresAsync();
foreach (var vectorStore in vectorStores.Items)
{
Console.WriteLine(vectorStore);
}
Create Vector Store
Create a vector store.
using var api = new OpenAIClient();
var createVectorStoreRequest = new CreateVectorStoreRequest("test-vector-store");
var vectorStore = await api.VectorStoresEndpoint.CreateVectorStoreAsync(createVectorStoreRequest);
Console.WriteLine(vectorStore);
Retrieve Vector Store
Retrieves a vector store.
using var api = new OpenAIClient();
var vectorStore = await api.VectorStoresEndpoint.GetVectorStoreAsync("vector-store-id");
Console.WriteLine(vectorStore);
Modify Vector Store
Modifies a vector store.
using var api = new OpenAIClient();
var metadata = new Dictionary<string, object> { { "Test", DateTime.UtcNow } };
var vectorStore = await api.VectorStoresEndpoint.ModifyVectorStoreAsync("vector-store-id", metadata: metadata);
Console.WriteLine(vectorStore);
Delete Vector Store
Delete a vector store.
using var api = new OpenAIClient();
var isDeleted = await api.VectorStoresEndpoint.DeleteVectorStoreAsync("vector-store-id");
Assert.IsTrue(isDeleted);
Vector Store Files
Vector store files represent files inside a vector store.
List Vector Store Files
Returns a list of vector store files.
using var api = new OpenAIClient();
var files = await api.VectorStoresEndpoint.ListVectorStoreFilesAsync("vector-store-id");
foreach (var file in files.Items)
{
Console.WriteLine(file);
}
Create Vector Store File
Create a vector store file by attaching a file to a vector store.
using var api = new OpenAIClient();
var file = await api.VectorStoresEndpoint.CreateVectorStoreFileAsync("vector-store-id", "file-id", new ChunkingStrategy(ChunkingStrategyType.Static));
Console.WriteLine(file);
Retrieve Vector Store File
Retrieves a vector store file.
using var api = new OpenAIClient();
var file = await api.VectorStoresEndpoint.GetVectorStoreFileAsync("vector-store-id", "vector-store-file-id");
Console.WriteLine(file);
Delete Vector Store File
Delete a vector store file. This will remove the file from the vector store but the file itself will not be deleted. To delete the file, use the delete file endpoint.
using var api = new OpenAIClient();
var isDeleted = await api.VectorStoresEndpoint.DeleteVectorStoreFileAsync("vector-store-id", "file-id");
Assert.IsTrue(isDeleted);
Vector Store File Batches
Vector store file batches represent operations to add multiple files to a vector store.
Create Vector Store File Batch
Create a vector store file batch.
using var api = new OpenAIClient();
var files = new List<string> { "file_id_1","file_id_2" };
var vectorStoreFileBatch = await api.VectorStoresEndpoint.CreateVectorStoreFileBatchAsync("vector-store-id", files);
Console.WriteLine(vectorStoreFileBatch);
Retrieve Vector Store File Batch
Retrieves a vector store file batch.
using var api = new OpenAIClient();
var vectorStoreFileBatch = await api.VectorStoresEndpoint.GetVectorStoreFileBatchAsync("vector-store-id", "vector-store-file-batch-id");
// you can also use convenience methods!
vectorStoreFileBatch = await vectorStoreFileBatch.UpdateAsync();
vectorStoreFileBatch = await vectorStoreFileBatch.WaitForStatusChangeAsync();
List Files In Vector Store Batch
Returns a list of vector store files in a batch.
using var api = new OpenAIClient();
var files = await api.VectorStoresEndpoint.ListVectorStoreBatchFilesAsync("vector-store-id", "vector-store-file-batch-id");
foreach (var file in files.Items)
{
Console.WriteLine(file);
}
Cancel Vector Store File Batch
Cancel a vector store file batch. This attempts to cancel the processing of files in this batch as soon as possible.
using var api = new OpenAIClient();
var isCancelled = await api.VectorStoresEndpoint.CancelVectorStoreFileBatchAsync("vector-store-id", "vector-store-file-batch-id");
Chat
Given a chat conversation, the model will return a chat completion response.
The Chat API is accessed via OpenAIClient.ChatEndpoint
Chat Completions
Creates a completion for the chat message
using var api = new OpenAIClient();
var messages = new List<Message>
{
new Message(Role.System, "You are a helpful assistant."),
new Message(Role.User, "Who won the world series in 2020?"),
new Message(Role.Assistant, "The Los Angeles Dodgers won the World Series in 2020."),
new Message(Role.User, "Where was it played?"),
};
var chatRequest = new ChatRequest(messages, Model.GPT4o);
var response = await api.ChatEndpoint.GetCompletionAsync(chatRequest);
var choice = response.FirstChoice;
Console.WriteLine($"[{choice.Index}] {choice.Message.Role}: {choice.Message} | Finish Reason: {choice.FinishReason}");
Chat Streaming
using var api = new OpenAIClient();
var messages = new List<Message>
{
new Message(Role.System, "You are a helpful assistant."),
new Message(Role.User, "Who won the world series in 2020?"),
new Message(Role.Assistant, "The Los Angeles Dodgers won the World Series in 2020."),
new Message(Role.User, "Where was it played?"),
};
var chatRequest = new ChatRequest(messages);
var response = await api.ChatEndpoint.StreamCompletionAsync(chatRequest, partialResponse =>
{
Console.Write(partialResponse.FirstChoice.Delta.ToString());
});
var choice = response.FirstChoice;
Console.WriteLine($"[{choice.Index}] {choice.Message.Role}: {choice.Message} | Finish Reason: {choice.FinishReason}");
Or if using IAsyncEnumerable{T} (C# 8.0+):
using var api = new OpenAIClient();
var messages = new List<Message>
{
new Message(Role.System, "You are a helpful assistant."),
new Message(Role.User, "Who won the world series in 2020?"),
new Message(Role.Assistant, "The Los Angeles Dodgers won the World Series in 2020."),
new Message(Role.User, "Where was it played?"),
};
var cumulativeDelta = string.Empty;
var chatRequest = new ChatRequest(messages);
await foreach (var partialResponse in api.ChatEndpoint.StreamCompletionEnumerableAsync(chatRequest))
{
foreach (var choice in partialResponse.Choices.Where(choice => choice.Delta?.Content != null))
{
cumulativeDelta += choice.Delta.Content;
}
}
Console.WriteLine(cumulativeDelta);
Chat Tools
using var api = new OpenAIClient();
var messages = new List<Message>
{
new(Role.System, "You are a helpful weather assistant. Always prompt the user for their location."),
new Message(Role.User, "What's the weather like today?"),
};
foreach (var message in messages)
{
Console.WriteLine($"{message.Role}: {message}");
}
// Define the tools that the assistant is able to use:
// 1. Get a list of all the static methods decorated with FunctionAttribute
var tools = Tool.GetAllAvailableTools(includeDefaults: false, forceUpdate: true, clearCache: true);
// 2. Or define a custom list of tools (use one approach or the other, not both):
// var tools = new List<Tool>
// {
//     Tool.GetOrCreateTool(objectInstance, "TheNameOfTheMethodToCall"),
//     Tool.FromFunc("a_custom_name_for_your_function", () => { /* Some logic to run */ })
// };
var chatRequest = new ChatRequest(messages, tools: tools, toolChoice: "auto");
var response = await api.ChatEndpoint.GetCompletionAsync(chatRequest);
messages.Add(response.FirstChoice.Message);
Console.WriteLine($"{response.FirstChoice.Message.Role}: {response.FirstChoice} | Finish Reason: {response.FirstChoice.FinishReason}");
var locationMessage = new Message(Role.User, "I'm in Glasgow, Scotland");
messages.Add(locationMessage);
Console.WriteLine($"{locationMessage.Role}: {locationMessage.Content}");
chatRequest = new ChatRequest(messages, tools: tools, toolChoice: "auto");
response = await api.ChatEndpoint.GetCompletionAsync(chatRequest);
messages.Add(response.FirstChoice.Message);
if (response.FirstChoice.FinishReason == "stop")
{
Console.WriteLine($"{response.FirstChoice.Message.Role}: {response.FirstChoice} | Finish Reason: {response.FirstChoice.FinishReason}");
var unitMessage = new Message(Role.User, "Fahrenheit");
messages.Add(unitMessage);
Console.WriteLine($"{unitMessage.Role}: {unitMessage.Content}");
chatRequest = new ChatRequest(messages, tools: tools, toolChoice: "auto");
response = await api.ChatEndpoint.GetCompletionAsync(chatRequest);
}
// iterate over all tool calls and invoke them
foreach (var toolCall in response.FirstChoice.Message.ToolCalls)
{
Console.WriteLine($"{response.FirstChoice.Message.Role}: {toolCall.Function.Name} | Finish Reason: {response.FirstChoice.FinishReason}");
Console.WriteLine($"{toolCall.Function.Arguments}");
// Invokes function to get a generic json result to return for tool call.
var functionResult = await toolCall.InvokeFunctionAsync();
// If you know the return type and need additional processing, use the generic overload:
// var functionResult = await toolCall.InvokeFunctionAsync<string>();
messages.Add(new Message(toolCall, functionResult));
Console.WriteLine($"{Role.Tool}: {functionResult}");
}
// System: You are a helpful weather assistant.
// User: What's the weather like today?
// Assistant: Sure, may I know your current location? | Finish Reason: stop
// User: I'm in Glasgow, Scotland
// Assistant: GetCurrentWeather | Finish Reason: tool_calls
// {
// "location": "Glasgow, Scotland",
// "unit": "celsius"
// }
// Tool: The current weather in Glasgow, Scotland is 39°C.
Chat Vision
[!WARNING] Beta Feature. API subject to breaking changes.
using var api = new OpenAIClient();
var messages = new List<Message>
{
new Message(Role.System, "You are a helpful assistant."),
new Message(Role.User, new List<Content>
{
"What's in this image?",
new ImageUrl("https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg", ImageDetail.Low)
})
};
var chatRequest = new ChatRequest(messages, model: Model.GPT4o);
var response = await api.ChatEndpoint.GetCompletionAsync(chatRequest);
Console.WriteLine($"{response.FirstChoice.Message.Role}: {response.FirstChoice.Message.Content} | Finish Reason: {response.FirstChoice.FinishDetails}");
Chat Json Mode
[!WARNING] Beta Feature. API subject to breaking changes.
[!IMPORTANT]
- When using JSON mode, always instruct the model to produce JSON via some message in the conversation, for example via your system message. If you don't include an explicit instruction to generate JSON, the model may generate an unending stream of whitespace and the request may run continually until it reaches the token limit. To help ensure you don't forget, the API will throw an error if the string "JSON" does not appear somewhere in the context.
- The JSON in the message the model returns may be partial (i.e. cut off) if finish_reason is length, which indicates the generation exceeded max_tokens or the conversation exceeded the token limit. To guard against this, check finish_reason before parsing the response.
- JSON mode will not guarantee the output matches any specific schema, only that it is valid and parses without errors.
using var api = new OpenAIClient();
var messages = new List<Message>
{
new Message(Role.System, "You are a helpful assistant designed to output JSON."),
new Message(Role.User, "Who won the world series in 2020?"),
};
var chatRequest = new ChatRequest(messages, Model.GPT4o, responseFormat: ChatResponseFormat.Json);
var response = await api.ChatEndpoint.GetCompletionAsync(chatRequest);
foreach (var choice in response.Choices)
{
Console.WriteLine($"[{choice.Index}] {choice.Message.Role}: {choice} | Finish Reason: {choice.FinishReason}");
}
response.GetUsage();
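As the note above advises, check the finish reason before parsing the returned JSON; a minimal sketch using System.Text.Json (the "winner" property here is only an assumption based on the prompt):

var choice = response.FirstChoice;

if (choice.FinishReason == "length")
{
    // The generation hit the token limit, so the JSON may be cut off.
    throw new InvalidOperationException("Truncated JSON response; raise max_tokens or shorten the conversation.");
}

// JsonDocument.Parse throws a JsonException if the payload is malformed.
using var json = System.Text.Json.JsonDocument.Parse(choice.Message.Content.ToString());
Console.WriteLine(json.RootElement.GetProperty("winner").GetString());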
Audio
Converts audio into text.
The Audio API is accessed via OpenAIClient.AudioEndpoint
Create Speech
Generates audio from the input text.
using var api = new OpenAIClient();
var request = new SpeechRequest("Hello World!");
async Task ChunkCallback(ReadOnlyMemory<byte> chunk)
{
// TODO Implement audio playback as chunks arrive
await Task.CompletedTask;
}
var response = await api.AudioEndpoint.CreateSpeechAsync(request, ChunkCallback);
await File.WriteAllBytesAsync("../../../Assets/HelloWorld.mp3", response.ToArray());
Create Transcription
Transcribes audio into the input language.
using var api = new OpenAIClient();
using var request = new AudioTranscriptionRequest(Path.GetFullPath(audioAssetPath), language: "en");
var response = await api.AudioEndpoint.CreateTranscriptionTextAsync(request);
Console.WriteLine(response);
You can also get detailed information using verbose_json
to get timestamp granularities:
using var api = new OpenAIClient();
using var request = new AudioTranscriptionRequest(transcriptionAudio, responseFormat: AudioResponseFormat.Verbose_Json, timestampGranularity: TimestampGranularity.Word, temperature: 0.1f, language: "en");
var response = await api.AudioEndpoint.CreateTranscriptionTextAsync(request);
foreach (var word in response.Words)
{
Console.WriteLine($"[{word.Start}-{word.End}] \"{word.Word}\"");
}
Create Translation
Translates audio into English.
using var api = new OpenAIClient();
using var request = new AudioTranslationRequest(Path.GetFullPath(audioAssetPath));
var response = await api.AudioEndpoint.CreateTranslationTextAsync(request);
Console.WriteLine(response);
Images
Given a prompt and/or an input image, the model will generate a new image.
The Images API is accessed via OpenAIClient.ImagesEndpoint
Create Image
Creates an image given a prompt.
using var api = new OpenAIClient();
var request = new ImageGenerationRequest("A house riding a velociraptor", Model.DallE_3);
var imageResults = await api.ImagesEndPoint.GenerateImageAsync(request);
foreach (var image in imageResults)
{
Console.WriteLine(image);
// image == url or b64_string
}
Edit Image
Creates an edited or extended image given an original image and a prompt.
using var api = new OpenAIClient();
var request = new ImageEditRequest(imageAssetPath, maskAssetPath, "A sunlit indoor lounge area with a pool containing a flamingo", size: ImageSize.Small);
var imageResults = await api.ImagesEndPoint.CreateImageEditAsync(request);
foreach (var image in imageResults)
{
Console.WriteLine(image);
// image == url or b64_string
}
Create Image Variation
Creates a variation of a given image.
using var api = new OpenAIClient();
var request = new ImageVariationRequest(imageAssetPath, size: ImageSize.Small);
var imageResults = await api.ImagesEndPoint.CreateImageVariationAsync(request);
foreach (var image in imageResults)
{
Console.WriteLine(image);
// image == url or b64_string
}
Files
Files are used to upload documents that can be used with features like Fine-tuning.
The Files API is accessed via OpenAIClient.FilesEndpoint
List Files
Returns a list of files that belong to the user's organization.
using var api = new OpenAIClient();
var fileList = await api.FilesEndpoint.ListFilesAsync();
foreach (var file in fileList)
{
Console.WriteLine($"{file.Id} -> {file.Object}: {file.FileName} | {file.Size} bytes");
}
Upload File
Upload a file that can be used across various endpoints. The size of all the files uploaded by one organization can be up to 100 GB.
The size of individual files can be a maximum of 512 MB. See the Assistants Tools guide to learn more about the types of files supported. The Fine-tuning API only supports .jsonl files.
using var api = new OpenAIClient();
var file = await api.FilesEndpoint.UploadFileAsync("path/to/your/file.jsonl", FilePurpose.FineTune);
Console.WriteLine(file.Id);
Delete File
Delete a file.
using var api = new OpenAIClient();
var isDeleted = await api.FilesEndpoint.DeleteFileAsync(fileId);
Assert.IsTrue(isDeleted);
Retrieve File Info
Returns information about a specific file.
using var api = new OpenAIClient();
var file = await api.FilesEndpoint.GetFileInfoAsync(fileId);
Console.WriteLine($"{file.Id} -> {file.Object}: {file.FileName} | {file.Size} bytes");
Download File Content
Downloads the file content to the specified directory.
using var api = new OpenAIClient();
var downloadedFilePath = await api.FilesEndpoint.DownloadFileAsync(fileId, "path/to/your/save/directory");
Console.WriteLine(downloadedFilePath);
Assert.IsTrue(File.Exists(downloadedFilePath));
Fine Tuning
Manage fine-tuning jobs to tailor a model to your specific training data.
Related guide: Fine-tune models
The Fine Tuning API is accessed via OpenAIClient.FineTuningEndpoint
Create Fine Tune Job
Creates a job that fine-tunes a specified model from a given dataset.
Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete.
using var api = new OpenAIClient();
var fileId = "file-abc123";
var request = new CreateFineTuneRequest(fileId);
var job = await api.FineTuningEndpoint.CreateJobAsync(Model.GPT3_5_Turbo, request);
Console.WriteLine($"Started {job.Id} | Status: {job.Status}");
List Fine Tune Jobs
List your organization's fine-tuning jobs.
using var api = new OpenAIClient();
var jobList = await api.FineTuningEndpoint.ListJobsAsync();
foreach (var job in jobList.Items.OrderByDescending(job => job.CreatedAt))
{
Console.WriteLine($"{job.Id} -> {job.CreatedAt} | {job.Status}");
}
Retrieve Fine Tune Job Info
Gets info about the fine-tune job.
using var api = new OpenAIClient();
var job = await api.FineTuningEndpoint.GetJobInfoAsync(fineTuneJob);
Console.WriteLine($"{job.Id} -> {job.CreatedAt} | {job.Status}");
Cancel Fine Tune Job
Immediately cancel a fine-tune job.
using var api = new OpenAIClient();
var isCancelled = await api.FineTuningEndpoint.CancelFineTuneJobAsync(fineTuneJob);
Assert.IsTrue(isCancelled);
List Fine Tune Job Events
Get status updates for a fine-tuning job.
using var api = new OpenAIClient();
var eventList = await api.FineTuningEndpoint.ListJobEventsAsync(fineTuneJob);
Console.WriteLine($"{fineTuneJob.Id} -> status: {fineTuneJob.Status} | event count: {eventList.Events.Count}");
foreach (var @event in eventList.Items.OrderByDescending(@event => @event.CreatedAt))
{
Console.WriteLine($" {@event.CreatedAt} [{@event.Level}] {@event.Message}");
}
Batches
Create large batches of API requests for asynchronous processing. The Batch API returns completions within 24 hours for a 50% discount.
The Batches API is accessed via OpenAIClient.BatchEndpoint
List Batches
List your organization's batches.
using var api = new OpenAIClient();
var batches = await api.BatchEndpoint.ListBatchesAsync();
foreach (var batch in batches.Items)
{
Console.WriteLine(batch);
}
Create Batch
Creates and executes a batch from an uploaded file of requests
using var api = new OpenAIClient();
var batchRequest = new CreateBatchRequest("file-id", Endpoint.ChatCompletions);
var batch = await api.BatchEndpoint.CreateBatchAsync(batchRequest);
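The uploaded file referenced by "file-id" is a .jsonl file where each line describes a single request (fields per the OpenAI Batch API documentation; this sample line is illustrative only):

{"custom_id": "request-1", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}}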
Retrieve Batch
Retrieves a batch.
using var api = new OpenAIClient();
var batch = await api.BatchEndpoint.RetrieveBatchAsync("batch-id");
// you can also use convenience methods!
batch = await batch.UpdateAsync();
batch = await batch.WaitForStatusChangeAsync();
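Once the batch has completed, its results can be downloaded with the Files endpoint; a hedged sketch, assuming the batch response exposes Status and OutputFileId properties (check the api docs for the exact member names):

if (batch.Status == BatchStatus.Completed)
{
    // Download the results file produced by the batch (property names assumed above).
    var resultPath = await api.FilesEndpoint.DownloadFileAsync(batch.OutputFileId, "path/to/your/save/directory");
    Console.WriteLine(resultPath);
}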
Cancel Batch
Cancels an in-progress batch. The batch will be in status cancelling for up to 10 minutes, before changing to cancelled, where it will have partial results (if any) available in the output file.
using var api = new OpenAIClient();
var isCancelled = await api.BatchEndpoint.CancelBatchAsync(batch);
Assert.IsTrue(isCancelled);
Embeddings
Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
Related guide: Embeddings
The Embeddings API is accessed via OpenAIClient.EmbeddingsEndpoint
Create Embeddings
Creates an embedding vector representing the input text.
using var api = new OpenAIClient();
var response = await api.EmbeddingsEndpoint.CreateEmbeddingAsync("The food was delicious and the waiter...", Model.Embedding_Ada_002);
Console.WriteLine(response);
Moderations
Given an input text, outputs whether the model classifies it as violating OpenAI's content policy.
Related guide: Moderations
The Moderations API can be accessed via OpenAIClient.ModerationsEndpoint
Create Moderation
Classifies if text violates OpenAI's Content Policy.
using var api = new OpenAIClient();
var isViolation = await api.ModerationsEndpoint.GetModerationAsync("I want to kill them.");
Assert.IsTrue(isViolation);
Additionally, you can get the scores of a given input.
using var api = new OpenAIClient();
var response = await api.ModerationsEndpoint.CreateModerationAsync(new ModerationsRequest("I love you"));
Assert.IsNotNull(response);
Console.WriteLine(response.Results?[0]?.Scores?.ToString());
Product | Versions Compatible and additional computed target framework versions. |
---|---|
.NET | net6.0 is compatible. net6.0-android was computed. net6.0-ios was computed. net6.0-maccatalyst was computed. net6.0-macos was computed. net6.0-tvos was computed. net6.0-windows was computed. net7.0 was computed. net7.0-android was computed. net7.0-ios was computed. net7.0-maccatalyst was computed. net7.0-macos was computed. net7.0-tvos was computed. net7.0-windows was computed. net8.0 was computed. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed. |
Dependencies:
- net6.0: No dependencies.
NuGet packages (11)
Showing the top 5 NuGet packages that depend on OpenAI-DotNet:
Package | Description
---|---
OpenAI-DotNet-Proxy | A simple Proxy API gateway for OpenAI-DotNet to make authenticated requests from a front end application without exposing your API keys.
WK.OpenAiWrapper | Package Description
NewLeadAI | A client to interact with NewLead AI API
AICore.Core | Package Description
Universal.Tools.Core.Definitions.Extensions.OpenAI | Package Description
GitHub repositories (3)
Showing the top 3 popular GitHub repositories that depend on OpenAI-DotNet:
Repository | Description
---|---
dkgv/pinpoint | Keystroke launcher and personal command central. Alternative to Spotlight and Alfred for Windows. Alternative to Wox, PowerToys.
SlimeNull/OpenGptChat | An OpenAI Chat completion Client.
BoiHanny/vrcosc-magicchatbox | The ultimate companion, whether you're on desktop or in VR, we've got you covered with our handy integrations in a compact and modern UI
Version | Downloads | Last updated
---|---|---
8.4.1 | 2,666 | 11/15/2024 | |
8.4.0 | 227 | 11/15/2024 | |
8.3.0 | 30,874 | 9/19/2024 | |
8.2.5 | 4,813 | 9/14/2024 | |
8.2.4 | 702 | 9/14/2024 | |
8.2.2 | 14,796 | 8/19/2024 | |
8.2.1 | 338 | 8/19/2024 | |
8.2.0 | 592 | 8/18/2024 | |
8.1.2 | 9,456 | 8/9/2024 | |
8.1.1 | 38,793 | 6/30/2024 | |
8.1.0 | 8,951 | 6/21/2024 | |
8.0.3 | 1,193 | 6/16/2024 | |
8.0.2 | 1,646 | 6/15/2024 | |
8.0.1 | 14,243 | 6/10/2024 | |
8.0.0 | 197 | 6/10/2024 | |
7.7.8 | 47,281 | 5/4/2024 | |
7.7.7 | 16,212 | 4/21/2024 | |
7.7.6 | 15,449 | 3/19/2024 | |
7.7.5 | 14,604 | 3/3/2024 | |
7.7.4 | 1,725 | 2/29/2024 | |
7.7.3 | 889 | 2/27/2024 | |
7.7.2 | 1,299 | 2/27/2024 | |
7.7.1 | 1,851 | 2/25/2024 | |
7.7.0 | 3,416 | 2/22/2024 | |
7.6.5 | 8,755 | 2/6/2024 | |
7.6.4 | 2,297 | 1/29/2024 | |
7.6.3 | 608 | 1/26/2024 | |
7.6.2 | 9,969 | 1/14/2024 | |
7.6.1 | 3,291 | 1/6/2024 | |
7.6.0 | 7,548 | 1/2/2024 | |
7.5.0 | 3,749 | 12/22/2023 | |
7.4.4 | 15,191 | 12/10/2023 | |
7.4.3 | 1,132 | 12/7/2023 | |
7.4.2 | 1,262 | 12/7/2023 | |
7.4.1 | 4,714 | 12/3/2023 | |
7.4.0 | 2,182 | 11/30/2023 | |
7.3.8 | 631 | 11/29/2023 | |
7.3.7 | 859 | 11/28/2023 | |
7.3.6 | 457 | 11/28/2023 | |
7.3.5 | 729 | 11/27/2023 | |
7.3.4 | 4,811 | 11/24/2023 | |
7.3.3 | 1,165 | 11/23/2023 | |
7.3.2 | 876 | 11/22/2023 | |
7.3.1 | 5,201 | 11/21/2023 | |
7.3.0 | 467 | 11/21/2023 | |
7.2.3 | 5,115 | 11/12/2023 | |
7.2.2 | 3,072 | 11/10/2023 | |
7.2.1 | 507 | 11/9/2023 | |
7.2.0 | 3,689 | 11/9/2023 | |
7.1.0 | 833 | 11/7/2023 | |
7.0.10 | 7,520 | 10/7/2023 | |
7.0.9 | 13,402 | 8/27/2023 | |
7.0.8 | 2,404 | 8/25/2023 | |
7.0.5 | 5,742 | 8/10/2023 | |
7.0.4 | 5,746 | 7/27/2023 | |
7.0.3 | 11,143 | 6/21/2023 | |
7.0.2 | 937 | 6/19/2023 | |
7.0.1 | 1,953 | 6/17/2023 | |
7.0.0 | 902 | 6/17/2023 | |
6.8.7 | 9,573 | 5/21/2023 | |
6.8.6 | 677 | 5/19/2023 | |
6.8.5 | 662 | 5/19/2023 | |
6.8.3 | 1,461 | 5/16/2023 | |
6.8.2 | 736 | 5/15/2023 | |
6.8.1 | 2,720 | 5/7/2023 | |
6.8.0 | 4,489 | 4/30/2023 | |
6.7.4 | 1,102 | 4/27/2023 | |
6.7.3 | 1,201 | 4/26/2023 | |
6.7.2 | 1,359 | 4/23/2023 | |
6.7.1 | 8,607 | 4/13/2023 | |
6.7.0 | 1,966 | 4/10/2023 | |
6.6.0 | 64,108 | 4/4/2023 | |
6.5.3 | 2,688 | 3/29/2023 | |
6.5.2 | 802 | 3/29/2023 | |
6.5.1 | 1,339 | 3/27/2023 | |
6.5.0 | 2,565 | 3/26/2023 | |
6.4.3 | 715 | 3/26/2023 | |
6.4.2 | 1,203 | 3/26/2023 | |
6.4.1 | 2,834 | 3/24/2023 | |
6.4.0 | 879 | 3/23/2023 | |
6.3.2 | 2,879 | 3/22/2023 | |
6.3.1 | 5,787 | 3/17/2023 | |
6.3.0 | 1,673 | 3/17/2023 | |
6.2.0 | 776 | 3/16/2023 | |
6.1.0 | 3,326 | 3/14/2023 | |
6.0.1 | 1,801 | 3/12/2023 | |
6.0.0 | 1,108 | 3/11/2023 | |
5.1.2 | 780 | 3/10/2023 | |
5.1.1 | 801 | 3/9/2023 | |
5.1.0 | 1,133 | 3/8/2023 | |
5.0.2 | 1,619 | 3/6/2023 | |
5.0.1 | 1,345 | 3/2/2023 | |
5.0.0 | 970 | 3/2/2023 | |
4.4.4 | 2,416 | 2/18/2023 | |
4.4.3 | 1,846 | 2/10/2023 | |
4.4.2 | 1,303 | 2/7/2023 | |
4.4.1 | 974 | 2/4/2023 | |
4.4.0 | 750 | 2/4/2023 | |
4.3.0 | 988 | 1/31/2023 | |
4.2.0 | 946 | 1/28/2023 | |
4.1.0 | 851 | 1/27/2023 | |
4.0.2 | 1,199 | 1/20/2023 | |
4.0.1 | 1,043 | 1/17/2023 | |
4.0.0 | 2,342 | 1/9/2023 | |
3.0.1 | 8,300 | 4/14/2022 | |
3.0.0 | 2,400 | 6/20/2021 | |
2.0.1 | 955 | 5/29/2021 | |
2.0.0 | 1,026 | 5/29/2021 | |
1.0.1 | 969 | 5/2/2021 | |
1.0.0 | 1,153 | 5/1/2021 |
Version 8.0.0
- Updated Assistants Beta v2
- Added support for specifying project id
- Added BatchEndpoint
- Added VectorStoresEndpoint
- Added Message ctor to specify a specific tool call id, function name, and content
- Renamed OpenAI.Images.ResponseFormat to OpenAI.Images.ImageResponseFormat
- Changed ThreadEndpoint.CancelRunAsync return type from RunResponse to bool
- Fixed Json defined Tools/Functions being improperly added to tool cache
- Added Tool.TryUnregisterTool to remove a tool from the cache
Version 7.7.8
- Updated OpenAIClientSettings ctor to allow for domain http protocol override (i.e. http://localhost:8080 or http://0.0.0.0:8080/)
- Made OpenAIClientSettings.BaseRequest public for easier access when implementing custom proxies.
- Made OpenAIClientSettings.IsAzureDeployment public for easier access when implementing custom proxies.
Version 7.7.7
- Updated static models list
- Added gpt-4-turbo
- Marked some models as deprecated since they are no longer available
- Added temperature to CreateRunRequest and CreateThreadAndRunRequest
- Fixed temperature to string conversion to be invariant culture for audio requests
- Fixed type checking built in function tool calls
- Fixed duplicate registration of function tool calls
Version 7.7.6
- Added support for Audio Transcription and Translation verbose json output
- Added support for timestamp granularities for segments and words
- Added AudioResponse
- Marked CreateTranscriptionAsync obsolete
- Added CreateTranscriptionTextAsync
- Added CreateTranscriptionJsonAsync
- Marked CreateTranslationAsync obsolete
- Added CreateTranslationTextAsync
- Added CreateTranslationJsonAsync
- Updated SpeechResponseFormat to include wav and pcm
Version 7.7.5
- Allow FunctionPropertyAttribute to be assignable to fields
- Updated Function schema generation
- Fall back to complex types, and use $ref for discovered types
- Fixed schema generation to properly assign unsigned integer types
Version 7.7.4
- Fixed Threads.RunResponse.WaitForStatusChangeAsync timeout
Version 7.7.3
- Updated ChatRequest toolChoice to only send type and name of function, reducing token usage
Version 7.7.2
- Added FunctionParameterAttribute to help better inform the feature how to format the Function json
Version 7.7.1
- More Function utilities and invoking methods
- Added FunctionPropertyAttribute to help better inform the feature how to format the Function json
- Added FromFunc<,> overloads for convenience
- Fixed invoke args sometimes being cast to the wrong type
- Added additional protections for static and instanced function calls
- Added additional tool utilities:
- Tool.ClearRegisteredTools
- Tool.IsToolRegistered(Tool)
- Tool.TryRegisterTool(Tool)
- Improved memory usage and performance by properly disposing http content and response objects
- Updated debug output to be formatted to json for easier reading and debugging
Version 7.7.0
- Added Tool call and Function call Utilities and helper methods
- Added FunctionAttribute to decorate methods to be used in function calling
- Chat.Message.ToolCalls can be directly invoked using Function.Invoke() or Function.InvokeAsync(CancellationToken)
- Assistant tool call outputs can be easily generated using assistant.GetToolOutputsAsync(run.RequiredAction.SubmitToolOutputs.ToolCalls)
- Check updated docs for more details and examples
- Fixed ChatRequest seed parameter not being set correctly
Version 7.6.5
- Updated api key prefix checks to only be enforced for OpenAI domain
Version 7.6.4
- Removed obsolete completions and edit endpoints
Version 7.6.3
- Added RetrieveFileStreamAsync method to Files.FilesEndpoint
- Added new Embedding Models
- Added Model.Dimensions property
- Added Threads.Run and Threads.RunStep Usage properties
- Added CodeInterpreter Outputs to RunStepDetails.ToolCalls
- Added Retrieval Outputs to RunStepDetails.ToolCalls
Version 7.6.2
- Fixed parameter name in Threads.CreateMessageRequest
- Added Stream overload to Threads.FileUploadRequest
Version 7.6.1
- Include Output in Threads.FunctionCall
Version 7.6.0
- Changed License to MIT
- Added OpenAI.Chat logprob parameters
- Added SourceLink references for debugging
- Added Docfx build workflow
Version 7.5.0
- Changed OpenAIClient to implement IDisposable.
- Disposing OpenAIClient is now required if you're not passing a custom HttpClient.
- If passing a custom HttpClient, it will need to be expressly disposed of after use.
- Updated Chat.Message.CopyFrom Content check from string.IsNullOrEmpty to null check.
Version 7.4.4
- Updated Docs
Version 7.4.3
- Updated FileResponse.Size int -> int?
Version 7.4.2
- Fixed missing Threads.Message.Content.ImageFile property.
- Marked OpenAI.Completions Obsolete
Version 7.4.1
- Fixed AssistantExtension.UploadFileAsync spelling error with file purpose.
Version 7.4.0
- Refactored OpenAI.Threads.LastRunError -> OpenAI.Error for more generic use in future.
- Fixed OpenAI.Threads.Annotations namespace
- Fixed OpenAI.Threads.ContextText namespace
Version 7.3.8
- Added Chat.Content.ctr overloads and implicit casting for easier usage
- Internal refactoring of FilesEndpoint.DeleteFileAsync (better status checking)
- Internal refactoring of FineTuningEndpoint to ensure we're properly setting response data
- Updated unit tests
- Updated docs
Version 7.3.7
- Fixed streaming with tools not being properly copied over
Version 7.3.6
- Fixed ArgumentOutOfRangeException when streaming chat completion response contains more than one tool
Version 7.3.5
- Added GetModerationChunkedAsync method in ModerationsEndpoint
- Fixed streaming function tool serialization
Version 7.3.4
- Fixed AudioTranslationRequest.Temperature type. int? -> float?
Version 7.3.3
- Fixed Threads.FileCitation json property name
Version 7.3.2
- Added detail parameter to ImageURL
Version 7.3.1
- Fixed json serialization settings when EnableDebug is disabled
Version 7.3.0
- Added AssistantsEndpoint
- Added ThreadsEndpoint
- Updated ImagesEndpoint return types to ImageResult list
- Updated FilesEndpoint.ListFilesAsync with optional purpose filter query parameter.
- Refactored list responses with a more generic ListQuery and ListResponse<TObject> pattern
- EventList -> ListResponse<EventResponse>
- FineTuneJobList -> ListResponse<FineTuneJobResponse>
- Standardized names for timestamps to have suffix: UnixTimeSeconds
- Standardized response class names (existing classes deprecated)
- FileData -> FileResponse
- CompletionResult -> CompletionResponse
- Event -> EventResponse
- FineTuneJob -> FineTuneJobResponse
Version 7.2.3
- Added support for reading rate limit information from the response headers
Version 7.2.2
- Fixed Image Generation for Azure
Version 7.2.1
- Fixed an NRE in the chat Message.ToString call when the dynamic content is a json element
- Removed improper set accessors for Function.Arguments and Function.Parameters properties
- Removed improper ChatResponse constructor
- Fixed unit test names
- Updated docs
Version 7.2.0
- Updated chat endpoint features (see the sketch below):
- json mode
- gpt-vision
- reproducible outputs
- tool functions
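
A minimal sketch of the 7.2.0 chat features; `ChatResponseFormat.Json` enables json mode, while the `responseFormat` and `seed` parameter names on the `ChatRequest` constructor are assumptions:

```csharp
using System.Collections.Generic;
using OpenAI;
using OpenAI.Chat;

using var api = new OpenAIClient();
var messages = new List<Message>
{
    new(Role.System, "Reply with a json object."),
    new(Role.User, "List three primary colors.")
};

// json mode plus a fixed seed for reproducible outputs
// (parameter names assumed).
var request = new ChatRequest(
    messages,
    model: "gpt-4-1106-preview",
    responseFormat: ChatResponseFormat.Json,
    seed: 42);

var response = await api.ChatEndpoint.GetCompletionAsync(request);
```
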
Version 7.1.0
- Convert Fine Tuning endpoint to latest (Breaking Change!)
- Added Text to Speech endpoint
- Updated Image endpoints with model parameters and support for Dall E 3
- Removed Model type checks; the API now handles errors
Version 7.0.10
- Fixed processing time string culture conversion when parsing double
Version 7.0.9
- Fixed Model delete permission Unauthorized Access check
Version 7.0.8
- Fixed AudioTranscriptionRequest.Temperature type. int? -> float?
- Updated Moderations Categories and Scores with new metrics
Version 7.0.5
- Fixed Message.Content serialization in Role.Function message history
Version 7.0.4
- Fixed ChatRequest forced function calls
Version 7.0.3
- Fixed chat streaming message copy from delta
Version 7.0.2
- Only set response header properties if they exist
- Removed OpenAIClient constructor overload
Version 7.0.1
- Fixed streaming chat functions
Version 7.0.0
- Added function calling to chat models
Version 6.8.7
- Added ToString and string operator to Moderation Scores
Version 6.8.6
- Populated finish reason in streaming chat final message content
Version 6.8.5
- Updated all method calls to take a Model as a string
Version 6.8.3
- Reverted BaseEndpoint.GetUrl changes
Version 6.8.2
- Misc internal fixes, formatting, and docs
Version 6.8.1
- Updated basic and chat completion choices to default to an empty string.
- Fixed Completions.CompletionResult.ToString first completion index lookup
- Updated the HttpClient creation to set the PooledConnectionLifetime property per Microsoft's recommendation (see the sketch below).
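
For context, this is the general pattern Microsoft recommends for long-lived HttpClient instances (a generic .NET sketch, not the library's internal code): recycling pooled connections periodically so DNS changes are picked up.

```csharp
using System;
using System.Net.Http;

// Recycle pooled connections periodically so a long-lived HttpClient
// picks up DNS changes, per Microsoft's guidance.
var handler = new SocketsHttpHandler
{
    PooledConnectionLifetime = TimeSpan.FromMinutes(15)
};
using var httpClient = new HttpClient(handler);
```
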
Version 6.8.0
- Removed Obsolete ChatPrompt
- ChatEndpoint.StreamCompletionAsync will now also raise an additional ChatResponse with the completed Message (see the sketch below)
- ChatEndpoint.StreamCompletionEnumerableAsync will now also raise an additional ChatResponse with the completed Message
- Refactored all streaming endpoints to use a new string extension for centralized parsing of event stream data
- Added optional parameter cancelJob to FineTuningEndpoint.StreamFineTuneEventsEnumerableAsync. Default is false.
- Added optional parameter cancelJob to FineTuningEndpoint.StreamFineTuneEventsAsync. Default is false.
- Added optional parameter deleteCachedFile to FileEndpoint.DownloadFileAsync. Default is false.
- Updated Completions.LogProbabilities.TopLogProbabilities to properly use immutable IReadOnlyList<IReadOnlyDictionary<string, double>>
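
A minimal sketch of the streaming behavior described above; the callback shape of `StreamCompletionAsync` is an assumption:

```csharp
using System;
using System.Collections.Generic;
using OpenAI;
using OpenAI.Chat;

using var api = new OpenAIClient();
var request = new ChatRequest(new List<Message> { new(Role.User, "Hello!") });

// Each partial delta is surfaced first; a final ChatResponse carrying the
// completed Message is now raised as well (callback shape assumed).
await api.ChatEndpoint.StreamCompletionAsync(request, response =>
{
    Console.Write(response.FirstChoice?.Delta?.Content);
});
```
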
Version 6.7.4
- Fixed Model.Permissions
- Added Model.CreatedAt
Version 6.7.3
- Added missing IDisposable to audio requests
Version 6.7.2
- Made it easier to specify a specific configuration file path
- Added optional author name property to chat message
- Added implicit string conversions to make ChatResponses easier to work with
- Updated Docs
Version 6.7.1
- Fixed parsing old env file format
- Fixed parsing missing ORGANIZATION env variables
- Fixed checking of CancellationToken.IsCancellationRequested in streaming endpoints
- Updated Docs
Version 6.7.0
- Deprecated ChatPrompt -> Message
- Added Role enum for Chat.Messages and Chat.Delta
- Updated ChatRequest constructor to use IEnumerable<Message> messages
- Updated ChatRequest.Messages to IReadOnlyList<Message>
- Updated unit tests
Version 6.6.0
- Added ResponseFormat to ImageGenerationRequests
- Refactored Image Requests with AbstractBaseImageRequest
Version 6.5.3
- Added missing ConfigureAwait to await calls
Version 6.5.2
- Updated SetResponseData to better reflect the difference between OpenAI and Azure responses.
- Updated ProcessingTime parsing from int to double
Version 6.5.1
- Removed Obsolete from EditEndpoint as it has now been fixed by OpenAI
Version 6.5.0
- Marked EditEndpoint Obsolete as codex and edit models have been removed
Version 6.4.3
- Fixed support for Azure Active Directory authentication for Azure OpenAI
Version 6.4.2
- Misc fixes and added validation for OpenAIClientSettings
- Updated docs
- Decoupled proxy version from main package
Version 6.4.1
- Added ImageEditRequest overloads for optional mask parameter
Version 6.4.0
- Moved OpenAI-DotNet-Proxy back into its own project and package
- Sealed a few classes that are not meant to be extended
Version 6.3.2
- Attempted to fix the dependency requirement for dotnet/runtime docker base images
- Made internal OpenAIClient constructor with HttpClient public
- Ensured only the appropriate headers are copied in the proxy
Version 6.3.1
- Fixed the API key requiring the sk- prefix with Azure OpenAI
Version 6.3.0
- Removed the standalone OpenAI-DotNet-Proxy package and merged it directly into the main package
Version 6.2.0
- Added OpenAI-DotNet-Proxy project and package.
- Added support for custom domains
- Updated unit tests
- Updated docs
Version 6.1.0
- Added support for gpt-4 models
Version 6.0.1
- Updated package info
- Updated docs
Version 6.0.0
- Added support for Azure OpenAI (see the sketch below)
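
A minimal sketch of pointing the client at an Azure OpenAI resource; the `OpenAIClientSettings` parameter names shown here are assumptions:

```csharp
using OpenAI;

// The OpenAIClientSettings parameter names are assumptions; they point the
// client at an Azure OpenAI resource and deployment instead of api.openai.com.
var auth = new OpenAIAuthentication("your-azure-api-key");
var settings = new OpenAIClientSettings(
    resourceName: "your-resource-name",
    deploymentId: "your-deployment-id",
    apiVersion: "2023-05-15");
using var api = new OpenAIClient(auth, settings);
```
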
Version 5.1.2
- Fixed an issue when deleting personal account fine-tuned models
Version 5.1.1
- Refactored Model validation
- Added additional default models
- Deprecated OpenAIClient.DefaultModel
- Implemented chat completion streaming
- Refactored immutable types
Version 5.1.0
- Added support for the Audio endpoint and Whisper models
- Audio speech to text
- Audio translation
Version 5.0.2
- Added support for multiple inputs in embeddings (see the sketch below)
- Added better model validation in all endpoints
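
A short sketch of passing multiple embedding inputs in one request; the exact method and parameter names on the embeddings endpoint are assumptions:

```csharp
using System.Collections.Generic;
using OpenAI;

using var api = new OpenAIClient();

// Passing a collection of inputs in a single request; the exact method and
// parameter names are assumptions.
var inputs = new List<string> { "first document", "second document" };
var result = await api.EmbeddingsEndpoint.CreateEmbeddingAsync(inputs, model: "text-embedding-ada-002");
```
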
Version 5.0.1
- Fixed chat parameters
Version 5.0.0
- Added Chat endpoint
Version 4.4.4
- ImageEditRequest mask is now optional as long as the texture has alpha transparency
- Added ImageVariationRequest constructor overload for memory stream images
- Updated AuthInfo parameter validation
- Renamed OPEN_AI_ORGANIZATION_ID -> OPENAI_ORGANIZATION_ID
Version 4.4.3
- Added OPEN_AI_ORGANIZATION_ID environment variable
- Deprecated Organization; use OrganizationId instead
Version 4.4.2
- Removed a useless assert
- Updated docs
Version 4.4.1
- Hotfix to CompletionsEndpoint to use IEnumerable<string>
- Hotfix to clean up Images endpoints
Version 4.4.0
- Renamed Choice.Logprobs -> Choice.LogProbabilities
- Renamed OpenAI.Completions.Logprobs -> OpenAI.Completions.LogProbabilities
- Renamed CompletionRequest parameter names:
- max_tokens -> maxTokens
- top_p -> topP
- Updated CompletionRequest to accept IEnumerable<string> values for prompts and stopSequences
- Refactored all endpoints to use new response validation extension
- Added CancellationToken to most endpoints that had long running operations