TimHanewich.Foundry 0.1.0

There is a newer version of this package available; see the version list below for details.

.NET CLI:
dotnet add package TimHanewich.Foundry --version 0.1.0

Package Manager Console (Visual Studio; uses the NuGet module's Install-Package):
NuGet\Install-Package TimHanewich.Foundry -Version 0.1.0

PackageReference (copy this XML node into the project file):
<PackageReference Include="TimHanewich.Foundry" Version="0.1.0" />

Central Package Management (copy the PackageVersion node into Directory.Packages.props, and reference the package without a version in the project file):
<PackageVersion Include="TimHanewich.Foundry" Version="0.1.0" />
<PackageReference Include="TimHanewich.Foundry" />

Paket:
paket add TimHanewich.Foundry --version 0.1.0

F# Interactive / Polyglot Notebooks (#r directive):
#r "nuget: TimHanewich.Foundry, 0.1.0"

C# file-based apps (.NET 10 preview 4 and later; place before any lines of code):
#:package TimHanewich.Foundry@0.1.0

Cake Addin:
#addin nuget:?package=TimHanewich.Foundry&version=0.1.0

Cake Tool:
#tool nuget:?package=TimHanewich.Foundry&version=0.1.0
TimHanewich.Foundry


A lightweight .NET library for interfacing with LLM deployments in Microsoft Foundry (formerly Azure AI Foundry)!

Example Use

Below are some examples of how to use this library:

Basic Prompting

The example below shows the basic setup and prompting process:

using TimHanewich.Foundry.OpenAI.Responses;

//Define the deployment
Deployment d = new Deployment();
d.Endpoint = "https://ai-testaistudio020597089470.openai.azure.com/openai/responses?api-version=2025-04-01-preview";
d.ApiKey = "Ax5hHeaVUqSipUxMkr...";

//Create a response request (uses the Responses API)
ResponseRequest rr = new ResponseRequest();
rr.Model = "gpt-5-mini-testing"; //the name of your particular deployment in Foundry
rr.Inputs.Add(new Message(Role.developer, "Talk like a cowboy.")); //system prompt
rr.Inputs.Add(new Message(Role.user, "Hi! Why is the sky blue?")); //user prompt

//Call to API service
Response r = d.CreateResponseAsync(rr).Result;

//Print response info
Console.WriteLine("Response ID: " + r.Id);
Console.WriteLine("Input tokens consumed: " + r.InputTokensConsumed.ToString());
Console.WriteLine("Output tokens consumed: " + r.OutputTokensConsumed.ToString());
foreach (Exchange exchange in r.Outputs) //loop through all outputs (output could be a message, function call, etc.)
{
    if (exchange is Message msg) //if this output is a Message
    {
        Console.WriteLine("Response: " + msg.Text);
    }
}

This will result in:

Response ID: resp_05c65468f65bdb3c006950294d66948196ac0afea12bfba22d
Input tokens consumed: 79
Output tokens consumed: 374
Response: Howdy partner — reckon the sky’s blue ‘cause of a little thing called Rayleigh scatterin’. Sunlight’s made up of all colors, but when it hits the tiny air molecules up yonder, the shorter wavelengths (blues and violets) get scattered much more than the longer reds. The amount scattered goes up steep-like with shorter wavelength (about 1 over wavelength to the fourth power), so blue light gets tossed around all over the place and fills the sky.

Now, you might ask why it don’t look violet if violet scatters even more — well, the Sun gives off less violet light, and our eyes ain’t as keen on violet, so blue wins out. At sunrise and sundown the sunlight travels through more air, scatterin’ away the blues and lettin’ the reds and oranges ride in, which is why them sunsets are fiery.
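Under the hood, the Deployment presumably POSTs a Responses API payload to the endpoint you configured, authenticating with the API key. For reference, the request above roughly corresponds to a body like the following; the field names come from the public OpenAI Responses API, and the exact wire format this library emits is an assumption:

```json
{
  "model": "gpt-5-mini-testing",
  "input": [
    { "role": "developer", "content": "Talk like a cowboy." },
    { "role": "user", "content": "Hi! Why is the sky blue?" }
  ]
}
```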

Follow-up Message

To continue the conversation with a follow-up message, set the PreviousResponseID property (the Responses API's previous_response_id), which carries the conversation history forward:

using TimHanewich.Foundry.OpenAI.Responses;

//Define the deployment
Deployment d = new Deployment();
d.Endpoint = "https://ai-testaistudio020597089470.openai.azure.com/openai/responses?api-version=2025-04-01-preview";
d.ApiKey = "Ax5hHeaVUqSipUxMkr...";

//Create a response request (uses the Responses API)
ResponseRequest rr = new ResponseRequest();
rr.Model = "gpt-5-mini-testing"; //the name of your particular deployment in Foundry
rr.PreviousResponseID = "resp_05c65468f65bdb3c006950294d66948196ac0afea12bfba22d"; //previous response ID specifies the conversation history
rr.Inputs.Add(new Message(Role.user, "I'm still not getting it. Can you explain it to me like I am 5 years old?")); //user message

//Call to API service
Response r = d.CreateResponseAsync(rr).Result;

//Print response info
Console.WriteLine("Response ID: " + r.Id);
Console.WriteLine("Input tokens consumed: " + r.InputTokensConsumed.ToString());
Console.WriteLine("Output tokens consumed: " + r.OutputTokensConsumed.ToString());
foreach (Exchange exchange in r.Outputs) //loop through all outputs (output could be a message, function call, etc.)
{
    if (exchange is Message msg) //if this output is a Message
    {
        Console.WriteLine("Response: " + msg.Text);
    }
}

This will result in:

Response ID: resp_05c65468f65bdb3c00695029d682908196a7ce63e0b59f62aa
Input tokens consumed: 285
Output tokens consumed: 497
Response: Well howdy, little pardner. Imagine sunlight is a big box o’ crayons with every color in it. When that light comes down through the air, tiny invisible things in the sky like to bump the colors around. The blue crayons are like tiny, bouncy marbles that get bumped and scattered every which way, so blue fills the whole sky for us to see. The red crayons are bigger and don’t get bounced around as much, so they mostly keep goin’ straight.

When the sun is risin’ or settin’, its light has to travel through lots more air, so most of the blue marbles get scattered away before they reach our eyes — that’s why the sky looks orange and red then. And don’t worry ‘bout violet; our eyes don’t see it as well, so blue looks brightest to us.

Function Calling

You can also use function calling (a.k.a. tool calling) like so:

using Newtonsoft.Json; //needed for Formatting when printing the tool call arguments below
using TimHanewich.Foundry.OpenAI.Responses;

//Define the deployment
Deployment d = new Deployment();
d.Endpoint = "https://ai-testaistudio020597089470.openai.azure.com/openai/responses?api-version=2025-04-01-preview";
d.ApiKey = "Ax5hHeaVUqSipUxMkr...";

//Create a response request (uses the Responses API)
ResponseRequest rr = new ResponseRequest();
rr.Model = "gpt-5-mini-testing"; //the name of your particular deployment in Foundry
rr.Inputs.Add(new Message(Role.user, "What is the weather in 98004?")); //user message

//Add the "CheckWeather" tool as a tool (function) the model has available to it
Tool CheckWeather = new Tool();
CheckWeather.Name = "CheckWeather";
CheckWeather.Description = "Check the weather for any zip code.";
CheckWeather.Parameters.Add(new ToolInputParameter("zip_code", "Zip code of the area you want to check the weather for"));
rr.Tools.Add(CheckWeather);

//Call to API service
Response r = d.CreateResponseAsync(rr).Result;

//Print response info
Console.WriteLine("Response ID: " + r.Id);
Console.WriteLine("Input tokens consumed: " + r.InputTokensConsumed.ToString());
Console.WriteLine("Output tokens consumed: " + r.OutputTokensConsumed.ToString());
foreach (Exchange exchange in r.Outputs) //loop through all outputs (output could be a message, function call, etc.)
{
    if (exchange is Message msg) //if this output is a Message
    {
        Console.WriteLine("Response: " + msg.Text);
    }
    else if (exchange is ToolCall tc) //if it is a tool call
    {
        Console.WriteLine();
        Console.WriteLine("Tool call received:");
        Console.WriteLine("Tool Name: " + tc.ToolName);
        Console.WriteLine("Tool Call ID: " + tc.CallId);
        Console.WriteLine("Arguments: " + tc.Arguments.ToString(Formatting.None));
    }
}

This will result in:

Response ID: resp_0c5335a67e04df960069502ab72a108194bffc93b794cc1a97
Input tokens consumed: 71
Output tokens consumed: 22

Tool call received:
Tool Name: CheckWeather
Tool Call ID: call_GYUF82w0DDdrV3Yf1YJo22OW
Arguments: {"zip_code":"98004"}
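Between receiving the tool call and sending its result back (next section), your code is responsible for actually running the tool. A minimal sketch of dispatching the call above, using System.Text.Json from the BCL to parse the arguments; the local CheckWeather implementation here is hypothetical and stands in for a real weather lookup:

```csharp
using System;
using System.Text.Json;

//Hypothetical local implementation of the CheckWeather tool;
//a real app would call an actual weather service here.
static string CheckWeather(string zipCode) =>
    "{\"temperature\": 72.4, \"humidity\": 55.4, \"precipitation_inches\": 2.4}";

//The arguments string as received in the tool call above
string arguments = "{\"zip_code\":\"98004\"}";

//Parse out the zip_code argument
string zip = JsonDocument.Parse(arguments).RootElement.GetProperty("zip_code").GetString()!;

//Dispatch to the local function and capture the result to send back
string result = CheckWeather(zip);
Console.WriteLine(result);
```

The result string, together with the tool call ID, is what you feed back to the model, as shown in the next section.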

Providing a Tool Call Output (Result)

After the model decides to make a tool call, you must execute the tool yourself and return the result to the model. Once you have the result, provide it like so:

using Newtonsoft.Json; //needed for Formatting when printing the tool call arguments below
using TimHanewich.Foundry.OpenAI.Responses;

//Define the deployment
Deployment d = new Deployment();
d.Endpoint = "https://ai-testaistudio020597089470.openai.azure.com/openai/responses?api-version=2025-04-01-preview";
d.ApiKey = "Ax5hHeaVUqSipUxMkr...";

//Create a response request (uses the Responses API)
ResponseRequest rr = new ResponseRequest();
rr.Model = "gpt-5-mini-testing"; //the name of your particular deployment in Foundry
rr.PreviousResponseID = "resp_0c5335a67e04df960069502ab72a108194bffc93b794cc1a97"; //previous response ID (the response that contained the tool call) - this provides conversational history!

//Add the results of the "CheckWeather" tool 
rr.Inputs.Add(new ToolCallOutput("call_GYUF82w0DDdrV3Yf1YJo22OW", "{'temperature': 72.4, 'humidity': 55.4, 'precipitation_inches': 2.4}"));

//Add the "CheckWeather" tool as a tool (function) the model has available to it
Tool CheckWeather = new Tool();
CheckWeather.Name = "CheckWeather";
CheckWeather.Description = "Check the weather for any zip code.";
CheckWeather.Parameters.Add(new ToolInputParameter("zip_code", "Zip code of the area you want to check the weather for"));
rr.Tools.Add(CheckWeather);

//Call to API service
Response r = d.CreateResponseAsync(rr).Result;

//Print response info
Console.WriteLine("Response ID: " + r.Id);
Console.WriteLine("Input tokens consumed: " + r.InputTokensConsumed.ToString());
Console.WriteLine("Output tokens consumed: " + r.OutputTokensConsumed.ToString());
foreach (Exchange exchange in r.Outputs) //loop through all outputs (output could be a message, function call, etc.)
{
    if (exchange is Message msg) //if this output is a Message
    {
        Console.WriteLine("Response: " + msg.Text);
    }
    else if (exchange is ToolCall tc) //if it is a tool call
    {
        Console.WriteLine();
        Console.WriteLine("Tool call received:");
        Console.WriteLine("Tool Name: " + tc.ToolName);
        Console.WriteLine("Tool Call ID: " + tc.CallId);
        Console.WriteLine("Arguments: " + tc.Arguments.ToString(Formatting.None));
    }
}

Not shown above: you can also specify whether each parameter is required when declaring it as a ToolInputParameter.

This will result in:

Response ID: resp_0c5335a67e04df960069502bbd47f88194810ea8e510dba891
Input tokens consumed: 180
Output tokens consumed: 91
Response: Here’s the current weather for ZIP code 98004:

- Temperature: 72.4°F
- Humidity: 55.4%
- Precipitation (recent/accumulated): 2.4 inches

If you’d like a forecast (hourly or 7-day), current conditions summary (wind, sky description), or conversion to °C, tell me which and I’ll pull that for you.

Getting Structured Outputs ("JSON Mode")

You can request a structured output, as JSON, like so:

using TimHanewich.Foundry.OpenAI.Responses;

//Define the deployment
Deployment d = new Deployment();
d.Endpoint = "https://ai-testaistudio020597089470.openai.azure.com/openai/responses?api-version=2025-04-01-preview";
d.ApiKey = "Ax5hHeaVUqSipUxMkr...";

//Create a response request (uses the Responses API)
ResponseRequest rr = new ResponseRequest();
rr.Model = "gpt-5-mini-testing"; //the name of your particular deployment in Foundry
rr.Inputs.Add(new Message(Role.user, "Parse out the first and last name and provide it to me in JSON like this format, as an example: {'first': 'Ron', 'last': 'Weasley'}.\n\n'Hi, my name is Harold Gargon.'"));
rr.RequestedFormat = ResponseFormat.JsonObject; //specify you want a JSON object output ('JSON mode')

//Call to API service
Response r = d.CreateResponseAsync(rr).Result;

//Print response info
Console.WriteLine("Response ID: " + r.Id);
Console.WriteLine("Input tokens consumed: " + r.InputTokensConsumed.ToString());
Console.WriteLine("Output tokens consumed: " + r.OutputTokensConsumed.ToString());
foreach (Exchange exchange in r.Outputs) //loop through all outputs (output could be a message, function call, etc.)
{
    if (exchange is Message msg) //if this output is a Message
    {
        Console.WriteLine("Response: " + msg.Text);
    }
}

This will result in the following:

Response ID: resp_0e2cc18156f6cc050069502ce6bcd48195ba166413e74a5a6d
Input tokens consumed: 52
Output tokens consumed: 212
Response: {"first": "Harold", "last": "Gargon"}
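Because the message text in JSON mode is an ordinary JSON string, you can parse it with any JSON library. A quick sketch with System.Text.Json from the BCL, using the response text from the example above:

```csharp
using System;
using System.Text.Json;

//The JSON text returned in msg.Text above
string text = "{\"first\": \"Harold\", \"last\": \"Gargon\"}";

//Parse and pull out the fields
JsonDocument doc = JsonDocument.Parse(text);
string first = doc.RootElement.GetProperty("first").GetString()!;
string last = doc.RootElement.GetProperty("last").GetString()!;
Console.WriteLine(first + " " + last); //prints "Harold Gargon"
```

Note that JSON mode guarantees syntactically valid JSON, not any particular shape, so defensive checks (e.g. TryGetProperty) are wise in real code.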

Note: the OpenAI Responses API also supports the json_schema format, in which you can specify an exact schema the output must conform to, but that is not supported in this library yet!
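For reference, in the raw Responses API the json_schema format is requested via the text.format field of the request body, roughly like the fragment below for the name-parsing example; the field names are from the public API documentation, and since this library does not expose json_schema yet, you cannot produce this through ResponseRequest today:

```json
{
  "text": {
    "format": {
      "type": "json_schema",
      "name": "person_name",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {
          "first": { "type": "string" },
          "last": { "type": "string" }
        },
        "required": ["first", "last"],
        "additionalProperties": false
      }
    }
  }
}
```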


Target Frameworks

net10.0 is compatible and included in the package; platform-specific targets (net10.0-android, net10.0-browser, net10.0-ios, net10.0-maccatalyst, net10.0-macos, net10.0-tvos, net10.0-windows) are computed as compatible.


Version Downloads Last Updated
0.4.1 86 2/25/2026
0.4.0 107 1/3/2026
0.3.0 100 1/3/2026
0.2.0 99 1/2/2026
0.1.0 99 12/27/2025