Mscc.GenerativeAI 1.9.5

Install options for this version:

- .NET CLI: `dotnet add package Mscc.GenerativeAI --version 1.9.5`
- Package Manager Console (Visual Studio): `NuGet\Install-Package Mscc.GenerativeAI -Version 1.9.5`
- PackageReference (project file): `<PackageReference Include="Mscc.GenerativeAI" Version="1.9.5" />`
- Paket CLI: `paket add Mscc.GenerativeAI --version 1.9.5`
- F# Interactive / Polyglot Notebooks: `#r "nuget: Mscc.GenerativeAI, 1.9.5"`
- Cake Addin: `#addin nuget:?package=Mscc.GenerativeAI&version=1.9.5`
- Cake Tool: `#tool nuget:?package=Mscc.GenerativeAI&version=1.9.5`

Gemini AI Client for .NET and ASP.NET Core


Access and integrate the Gemini API into your .NET applications. The packages support both Google AI Studio and Google Cloud Vertex AI.

| Name | Package |
| --- | --- |
| Client for .NET | Mscc.GenerativeAI |
| Client for ASP.NET (Core) | Mscc.GenerativeAI.Web |
| Client for .NET using Google API Client Library | Mscc.GenerativeAI.Google |
| Client for Microsoft.Extensions.AI and Semantic Kernel | Mscc.GenerativeAI.Microsoft |

Read more about Mscc.GenerativeAI.Web and how to add it to your ASP.NET (Core) web applications. Read more about Mscc.GenerativeAI.Google.

Install the package 🖥️

Install the package Mscc.GenerativeAI from NuGet. You can install it from the command line with the .NET CLI or via the NuGet Package Manager Console, or you can add it directly to your .NET project file.

Add the package using the dotnet command line tool in your .NET project folder.

> dotnet add package Mscc.GenerativeAI

When working with Visual Studio, use the NuGet Package Manager Console to install the package Mscc.GenerativeAI.

PM> Install-Package Mscc.GenerativeAI

Alternatively, add the following line to your .csproj file.

  <ItemGroup>
    <PackageReference Include="Mscc.GenerativeAI" Version="1.9.5" />
  </ItemGroup>

You can then add code like the following to your sources whenever you need to access any Gemini API provided by Google. This package works for both Google AI (Google AI Studio) and Google Cloud Vertex AI.
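For example, a minimal sketch of the Google AI path (the same client types are used in the sections below; replace the API key with your own):

```csharp
using Mscc.GenerativeAI;

// Minimal sketch: create the Google AI client with an API key and send a prompt.
var googleAI = new GoogleAI(apiKey: "your API key");
var model = googleAI.GenerativeModel(model: Model.Gemini15Pro);

var response = await model.GenerateContent("Write a haiku about NuGet packages.");
Console.WriteLine(response.Text);
```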

Features (as per Gemini analysis) ✦

The package is a C# library for interacting with Google's generative AI models, specifically the Gemini models. It provides functionality to:

  • List available models: This allows users to see which models are available for use.
  • Get information about a specific model: This provides details about a specific model, such as its capabilities and limitations.
  • Generate content: This allows users to send prompts to a model and receive generated text in response.
  • Generate content stream: This allows users to receive a stream of generated text from a model, which can be useful for real-time applications.
  • Generate a grounded answer: This allows users to ask questions and receive answers that are grounded in provided context.
  • Generate embeddings: This allows users to convert text into numerical representations that can be used for tasks like similarity search.
  • Count tokens: This allows users to estimate the cost of using a model by counting the number of tokens in a prompt or response.
  • Start a chat session: This allows users to have a back-and-forth conversation with a model.
  • Create tuned models: This allows users to provide samples for tuning an existing model. Currently, only the text-bison-001 and gemini-1.0-pro-001 models are supported for tuning.
  • File API: This allows users to upload large files and use them with Gemini 1.5.

The package also defines various helper classes and enums to represent different aspects of the Gemini API, such as model names, request parameters, and response data.
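A rough sketch combining a few of these operations. The ListModels and CountTokens calls and their response members are assumptions based on the feature list above; the rest mirrors the examples further down this page:

```csharp
using Mscc.GenerativeAI;

var apiKey = "your_api_key";
IGenerativeAI genAi = new GoogleAI(apiKey);
var model = genAi.GenerativeModel(Model.Gemini15Pro);

// List the models available to this API key.
var models = await model.ListModels();
foreach (var m in models)
{
    Console.WriteLine(m.Name);
}

// Count the tokens of a prompt before sending it.
var tokens = await model.CountTokens("Write a story about a magic backpack.");
Console.WriteLine($"Tokens: {tokens.TotalTokens}");
```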

Authentication use cases 👥

The package supports the following use cases to authenticate.

| API | Authentication | Remarks |
| --- | --- | --- |
| Google AI | Authentication with an API key | |
| Google AI | Authentication with OAuth | required for tuned models |
| Vertex AI | Authentication with Application Default Credentials (ADC) | |
| Vertex AI | Authentication with Credentials by Metadata Server | requires access to a metadata server |
| Vertex AI | Authentication with OAuth | using Mscc.GenerativeAI.Google |
| Vertex AI | Authentication with Service Account | using Mscc.GenerativeAI.Google |

The choice of API and authentication mode mainly affects how the model is instantiated.
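Both backends implement the IGenerativeAI interface used in later examples, so the calling code can stay the same regardless of the authentication mode. A sketch (the switch flag and placeholder values are illustrative only):

```csharp
using Mscc.GenerativeAI;

var useVertexAi = false;                      // flip to switch backends (illustrative only)
var apiKey = "your_api_key";
var projectId = "your_google_project_id";
var region = "us-central1";

IGenerativeAI genAi;
if (useVertexAi)
    genAi = new VertexAI(projectId: projectId, region: region);
else
    genAi = new GoogleAI(apiKey: apiKey);

// For Vertex AI you still need to set model.AccessToken, as shown below.
var model = genAi.GenerativeModel(model: Model.Gemini15Pro);
```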

Getting Started 🚀

Use of the Gemini API is almost identical for Google AI and Vertex AI. The major difference is how you instantiate the model that handles your prompt.

Using Environment variables

In the cloud, most settings are configured via environment variables (EnvVars). Their ease of configuration, widespread support, and simplicity make them an attractive option.

| Variable Name | Description |
| --- | --- |
| GOOGLE_AI_MODEL | The name of the model to use (default is Model.Gemini15Pro) |
| GOOGLE_API_KEY | The API key generated in Google AI Studio |
| GOOGLE_PROJECT_ID | Project ID in Google Cloud to access the APIs |
| GOOGLE_REGION | Region in Google Cloud (default is us-central1) |
| GOOGLE_ACCESS_TOKEN | The access token required to use models running in Vertex AI |
| GOOGLE_APPLICATION_CREDENTIALS | Path to the application credentials file |
| GOOGLE_WEB_CREDENTIALS | Path to a web credentials file |

Using any environment variable provides simplified access to a model.

using Mscc.GenerativeAI;

var model = new GenerativeModel();
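For local experiments you can also set the variables in-process before the first call; in hosted environments you would configure them on the platform instead. A sketch (constant names as used elsewhere in this package):

```csharp
using Mscc.GenerativeAI;

// Set the variables for this process only; the parameterless constructor reads them.
Environment.SetEnvironmentVariable("GOOGLE_API_KEY", "your_api_key");
Environment.SetEnvironmentVariable("GOOGLE_AI_MODEL", Model.Gemini15Flash);

var model = new GenerativeModel();
var response = await model.GenerateContent("Hello!");
Console.WriteLine(response.Text);
```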

Choose an API and authentication mode

Google AI with an API key

using Mscc.GenerativeAI;
// Google AI with an API key
var googleAI = new GoogleAI(apiKey: "your API key");
var model = googleAI.GenerativeModel(model: Model.Gemini15Pro);

// Original approach, still valid.
// var model = new GenerativeModel(apiKey: "your API key", model: Model.GeminiPro);

Google AI with OAuth. Use gcloud auth application-default print-access-token to get the access token.

using Mscc.GenerativeAI;
// Google AI with OAuth. Use `gcloud auth application-default print-access-token` to get the access token.
var model = new GenerativeModel(model: Model.GeminiPro);
model.AccessToken = accessToken;

Vertex AI with OAuth. Use gcloud auth application-default print-access-token to get the access token.

using Mscc.GenerativeAI;
// Vertex AI with OAuth. Use `gcloud auth application-default print-access-token` to get the access token.
var vertex = new VertexAI(projectId: projectId, region: region);
var model = vertex.GenerativeModel(model: Model.Gemini15Pro);
model.AccessToken = accessToken;

The ConfigurationFixture type in the test project implements multiple options to retrieve sensitive information, such as the API key or access token.

Using Google AI Gemini API

Working with Google AI in your application requires an API key. Get an API key from Google AI Studio.

using Mscc.GenerativeAI;

var apiKey = "your_api_key";
var prompt = "Write a story about a magic backpack.";

var model = new GenerativeModel(apiKey: apiKey, model: Model.GeminiPro);

var response = await model.GenerateContent(prompt);
Console.WriteLine(response.Text);
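The feature list above also mentions streaming. A minimal sketch, assuming GenerateContentStream exposes the partial responses as an async stream:

```csharp
using Mscc.GenerativeAI;

var apiKey = "your_api_key";
var prompt = "Write a story about a magic backpack.";
var model = new GenerativeModel(apiKey: apiKey, model: Model.Gemini15Pro);

// Print partial results as they arrive instead of waiting for the complete answer.
await foreach (var chunk in model.GenerateContentStream(prompt))
{
    Console.Write(chunk.Text);
}
```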

Using Vertex AI Gemini API

Use of Vertex AI requires an account on Google Cloud and a project with billing and the Vertex AI API enabled.

using Mscc.GenerativeAI;

var projectId = "your_google_project_id"; // the ID of a project, not its name.
var region = "us-central1";     // see documentation for available regions.
var accessToken = "your_access_token";      // use `gcloud auth application-default print-access-token` to get it.
var prompt = "Write a story about a magic backpack.";

var vertex = new VertexAI(projectId: projectId, region: region);
var model = vertex.GenerativeModel(model: Model.Gemini15Pro);
model.AccessToken = accessToken;

var response = await model.GenerateContent(prompt);
Console.WriteLine(response.Text);

More examples 🪄

Supported models are accessible via the Model class. Since release 0.9.0 there is also support for the earlier PaLM 2 models and their functionality.

Use system instruction

The model can be injected with a system instruction that applies to all subsequent requests. The following example shows how to instruct the model to respond like a pirate.

var apiKey = "your_api_key";
var systemInstruction = new Content("You are a friendly pirate. Speak like one.");
var prompt = "Good morning! How are you?";
IGenerativeAI genAi = new GoogleAI(apiKey);
var model = genAi.GenerativeModel(Model.Gemini15ProLatest, systemInstruction: systemInstruction);
var request = new GenerateContentRequest(prompt);

var response = await model.GenerateContent(request);
Console.WriteLine(response.Text);

The response might look similar to this:

Ahoy there, matey! I be doin' finer than a freshly swabbed poop deck on this fine mornin', how about yerself?  
Shimmer me timbers, it's good to see a friendly face!  
What brings ye to these here waters?

Use Grounding with Google Search

Grounding with Google Search can be enabled in two ways. The simplest is to toggle the boolean property UseGrounding, like so.

var apiKey = "your_api_key";
var prompt = "What is the current Google stock price?";
var genAi = new GoogleAI(apiKey);
var model = genAi.GenerativeModel("gemini-1.5-pro-002");
model.UseGrounding = true;

var response = await model.GenerateContent(prompt);
Console.WriteLine(response.Text);

If you would like more control over the Google Search retrieval parameters, use the following approach.

var apiKey = "your_api_key";
var prompt = "Who won Wimbledon this year?";
IGenerativeAI genAi = new GoogleAI(apiKey);
var model = genAi.GenerativeModel("gemini-1.5-pro-002",
    tools: [new Tool { GoogleSearchRetrieval = 
        new(DynamicRetrievalConfigMode.ModeUnspecified, 0.06f) }]);

var response = await model.GenerateContent(prompt);
Console.WriteLine(response.Text);

In either case, the returned Candidates items have an additional property GroundingMetadata, which provides the details of the Google Search-based grounding.
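A sketch of checking for that metadata on the response (the inner shape of GroundingMetadata is not spelled out here and may vary):

```csharp
// Sketch: check each candidate for attached grounding details.
if (response.Candidates != null)
{
    foreach (var candidate in response.Candidates)
    {
        if (candidate.GroundingMetadata != null)
        {
            Console.WriteLine("This candidate was grounded via Google Search.");
        }
    }
}
```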

Text-and-image input

using Mscc.GenerativeAI;

var apiKey = "your_api_key";
var prompt = "Parse the time and city from the airport board shown in this image into a list, in Markdown";
var model = new GenerativeModel(apiKey: apiKey, model: Model.GeminiVisionPro);
var request = new GenerateContentRequest(prompt);
await request.AddMedia("https://raw.githubusercontent.com/mscraftsman/generative-ai/refs/heads/main/tests/Mscc.GenerativeAI/payload/timetable.png");

var response = await model.GenerateContent(request);
Console.WriteLine(response.Text);

The InlineData part is supported by both Google AI and Vertex AI, whereas the FileData part is restricted to Vertex AI.

Chat conversations

Gemini enables you to have freeform conversations across multiple turns. You can interact with Gemini Pro using a single-turn prompt and response or chat with it in a multi-turn, continuous conversation, even for code understanding and generation.

using Mscc.GenerativeAI;

var apiKey = "your_api_key";
var model = new GenerativeModel(apiKey: apiKey);    // using default model: gemini-1.5-pro
var chat = model.StartChat();   // optionally pass a previous history in the constructor.

// Instead of discarding you could also use the response and access `response.Text`.
_ = await chat.SendMessage("Hello, fancy brainstorming about IT?");
_ = await chat.SendMessage("In one sentence, explain how a computer works to a young child.");
_ = await chat.SendMessage("Okay, how about a more detailed explanation to a high schooler?");
_ = await chat.SendMessage("Lastly, give a thorough definition for a CS graduate.");

// A chat session keeps every response in its history.
chat.History.ForEach(c => Console.WriteLine($"{c.Role}: {c.Text}"));

// Last request/response pair can be removed from the history.
var latest = chat.Rewind();
Console.WriteLine($"{latest.Sent} - {latest.Received}");
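Each call also returns the reply directly, so you can use it without going through the history:

```csharp
// Using the returned response instead of discarding it.
var reply = await chat.SendMessage("And how would you sum all of that up in one sentence?");
Console.WriteLine(reply.Text);
```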

Use Gemini 1.5 with large files

With Gemini 1.5 you can create multimodal prompts supporting large files.

The following example uploads one or more files via File API and the created File URIs are used in the GenerateContent call to generate text.

using Mscc.GenerativeAI;

var apiKey = "your_api_key";
var prompt = "Make a short story from the media resources. The media resources are:";
IGenerativeAI genAi = new GoogleAI(apiKey);
var model = genAi.GenerativeModel(Model.Gemini15Pro);

// Upload your large image(s).
// Instead of discarding it, you could also keep the upload response for later use.
var filePath = Path.Combine(Environment.CurrentDirectory, "verylarge.png");
var displayName = "My very large image";
_ = await model.UploadMedia(filePath, displayName);

// Create the prompt with references to File API resources.
var request = new GenerateContentRequest(prompt);
var files = await model.ListFiles();
foreach (var file in files.Where(x => x.MimeType.StartsWith("image/")))
{
    Console.WriteLine($"File: {file.Name}");
    request.AddMedia(file);
}
var response = await model.GenerateContent(request);
Console.WriteLine(response.Text);

Read more about Gemini 1.5: Our next-generation model, now available for Private Preview in Google AI Studio.

Create a tuned model

The Gemini API lets you tune models on your own data. Since it is your data and your tuned models, this requires stricter access controls than API keys can provide.

Before you can create a tuned model, you'll need to set up OAuth for your project.

using Mscc.GenerativeAI;

var projectId = "your_google_project_id"; // the ID of a project, not its name.
var accessToken = "your_access_token";      // use `gcloud auth application-default print-access-token` to get it.
var model = new GenerativeModel(apiKey: null, model: Model.Gemini10Pro001)
{
    AccessToken = accessToken, ProjectId = projectId
};
var parameters = new HyperParameters() { BatchSize = 2, LearningRate = 0.001f, EpochCount = 3 };
var dataset = new List<TuningExample>
{    
    new() { TextInput = "1", Output = "2" },
    new() { TextInput = "3", Output = "4" },
    new() { TextInput = "-3", Output = "-2" },
    new() { TextInput = "twenty two", Output = "twenty three" },
    new() { TextInput = "two hundred", Output = "two hundred one" },
    new() { TextInput = "ninety nine", Output = "one hundred" },
    new() { TextInput = "8", Output = "9" },
    new() { TextInput = "-98", Output = "-97" },
    new() { TextInput = "1,000", Output = "1,001" },
    new() { TextInput = "thirteen", Output = "fourteen" },
    new() { TextInput = "seven", Output = "eight" },
};
var request = new CreateTunedModelRequest(Model.Gemini10Pro001, 
    "Simply autogenerated Test model",
    dataset,
    parameters);

var response = await model.CreateTunedModel(request);
Console.WriteLine($"Name: {response.Name}");
Console.WriteLine($"Model: {response.Metadata.TunedModel} (Steps: {response.Metadata.TotalSteps})");

(This is still work in progress but operational. A future release will provide types to simplify the create request.)
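A sketch of how the tuned model could be used once training has finished, assuming the model name from the metadata can be passed like any other model name (this is an assumption, not a confirmed API shape):

```csharp
// Hypothetical follow-up: address the tuned model by the name returned in the metadata.
// OAuth access token and project ID are still required, as for the tuning request above.
var tunedModel = new GenerativeModel(apiKey: null, model: response.Metadata.TunedModel)
{
    AccessToken = accessToken,
    ProjectId = projectId
};
var answer = await tunedModel.GenerateContent("five hundred");
Console.WriteLine(answer.Text);   // expected to follow the tuned pattern, e.g. "five hundred one"
```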

Tuned models appear in your Google AI Studio library.


Read more about Tune Gemini Pro in Google AI Studio or with the Gemini API.

More samples

The folders samples and tests contain more examples.

Troubleshooting ⚡

Sometimes you might get authentication errors, HTTP 403 (Forbidden), especially while working with OAuth-based authentication. You can fix this by re-authenticating through ADC.

gcloud config set project "$PROJECT_ID"

gcloud auth application-default set-quota-project "$PROJECT_ID"
gcloud auth application-default login

Make sure that the required APIs have been enabled.

# ENABLE APIs
gcloud services enable aiplatform.googleapis.com

For long-running streaming requests you may get an HttpIOException: The response ended prematurely while waiting for the next frame from the server. (ResponseEnded). The root cause lies in the .NET runtime, and the solution is to upgrade to the latest version of the .NET runtime. If you cannot upgrade, you can disable dynamic window sizing as a workaround, either by setting the environment variable DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP2FLOWCONTROL_DISABLEDYNAMICWINDOWSIZING

DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP2FLOWCONTROL_DISABLEDYNAMICWINDOWSIZING=true

or setting an AppContext switch:

AppContext.SetSwitch("System.Net.SocketsHttpHandler.Http2FlowControl.DisableDynamicWindowSizing", true);

Several issues regarding this problem have been reported on GitHub.

Using the tests 🧩

The repository contains a number of test cases for Google AI and Vertex AI. You will find them in the tests folder; they are part of the GenerativeAI solution. To run the tests, either enter the relevant information into appsettings.json, create a new appsettings.user.json file with the same JSON structure in the tests folder, or define the following environment variables:

  • GOOGLE_API_KEY
  • GOOGLE_PROJECT_ID
  • GOOGLE_REGION
  • GOOGLE_ACCESS_TOKEN (optional: if absent, gcloud auth application-default print-access-token is executed)

The test cases should provide more insights and use cases on how to use the Mscc.GenerativeAI package in your .NET projects.

Feedback ✨

For support and feedback, kindly create an issue in the https://github.com/mscraftsman/generative-ai repository.

License 📜

This project is licensed under the Apache-2.0 License - see the LICENSE file for details.

Citation 📚

If you use Mscc.GenerativeAI in your research project, kindly cite it as follows:

@misc{Mscc.GenerativeAI,
  author = {Kirstätter, J and MSCraftsman},
  title = {Mscc.GenerativeAI - Gemini AI Client for .NET and ASP.NET Core},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  note = {https://github.com/mscraftsman/generative-ai}
}

Created by Jochen Kirstätter.

Compatible and additional computed target framework versions:

| Product | Compatible | Computed |
| --- | --- | --- |
| .NET | net6.0, net8.0, net9.0 | net5.0, net5.0-windows, net6.0-android, net6.0-ios, net6.0-maccatalyst, net6.0-macos, net6.0-tvos, net6.0-windows, net7.0, net7.0-android, net7.0-ios, net7.0-maccatalyst, net7.0-macos, net7.0-tvos, net7.0-windows, net8.0-android, net8.0-browser, net8.0-ios, net8.0-maccatalyst, net8.0-macos, net8.0-tvos, net8.0-windows |
| .NET Core | | netcoreapp2.0, netcoreapp2.1, netcoreapp2.2, netcoreapp3.0, netcoreapp3.1 |
| .NET Standard | netstandard2.0 | netstandard2.1 |
| .NET Framework | net472 | net461, net462, net463, net47, net471, net48, net481 |
| MonoAndroid | | monoandroid |
| MonoMac | | monomac |
| MonoTouch | | monotouch |
| Tizen | | tizen40, tizen60 |
| Xamarin.iOS | | xamarinios |
| Xamarin.Mac | | xamarinmac |
| Xamarin.TVOS | | xamarintvos |
| Xamarin.WatchOS | | xamarinwatchos |

Learn more about Target Frameworks and .NET Standard.

NuGet packages (4)

Showing the top 4 NuGet packages that depend on Mscc.GenerativeAI:

| Package | Description |
| --- | --- |
| Mscc.GenerativeAI.Google | Gemini AI Client for .NET |
| Mscc.GenerativeAI.Web | A client for ASP.NET Core designed to consume Gemini AI. |
| fsEnsemble | Package Description |
| Mscc.GenerativeAI.Microsoft | Gemini AI Client for .NET |

GitHub repositories (1)

Showing the top 1 popular GitHub repositories that depend on Mscc.GenerativeAI:

| Repository | Description |
| --- | --- |
| JasonBock/Rocks | A mocking library based on the Compiler APIs (Roslyn + Mocks) |

| Version | Downloads | Last updated |
| --- | --- | --- |
| 1.9.6 | 64 | 11/26/2024 |
| 1.9.5 | 143 | 11/22/2024 |
| 1.9.4 | 87 | 11/21/2024 |
| 1.9.3 | 171 | 11/20/2024 |
| 1.9.2 | 358 | 11/18/2024 |
| 1.9.1 | 148 | 11/13/2024 |
| 1.9.0 | 363 | 11/4/2024 |
| 1.8.3 | 224 | 11/1/2024 |
| 1.8.2 | 99 | 10/31/2024 |
| 1.8.1 | 243 | 10/30/2024 |
| 1.8.0 | 180 | 10/29/2024 |
| 1.7.0 | 322 | 10/14/2024 |
| 1.6.5 | 403 | 10/13/2024 |
| 1.6.4 | 601 | 10/9/2024 |
| 1.6.3 | 663 | 9/24/2024 |
| 1.6.2 | 129 | 9/19/2024 |
| 1.6.1 | 208 | 9/18/2024 |
| 1.6.0 | 778 | 8/29/2024 |
| 1.5.1 | 367 | 7/31/2024 |
| 1.5.0 | 2,427 | 5/15/2024 |
| 1.4.0 | 370 | 4/22/2024 |
| 1.3.0 | 123 | 4/18/2024 |
| 1.2.0 | 115 | 4/16/2024 |
| 1.1.4 | 153 | 4/15/2024 |
| 1.1.3 | 115 | 4/12/2024 |
| 1.1.2 | 104 | 4/11/2024 |
| 1.1.1 | 1,855 | 4/10/2024 |
| 1.1.0 | 97 | 4/9/2024 |
| 1.0.1 | 280 | 4/1/2024 |
| 1.0.0 | 103 | 3/30/2024 |
| 0.9.4 | 286 | 3/29/2024 |
| 0.9.3 | 204 | 3/28/2024 |
| 0.9.1 | 197 | 3/26/2024 |
| 0.9.0 | 209 | 3/23/2024 |
| 0.8.4 | 195 | 3/21/2024 |
| 0.8.3 | 256 | 3/20/2024 |
| 0.8.2 | 211 | 3/20/2024 |
| 0.8.1 | 225 | 3/20/2024 |
| 0.8.0 | 220 | 3/20/2024 |
| 0.7.2 | 117 | 3/18/2024 |
| 0.7.1 | 104 | 3/18/2024 |
| 0.7.0 | 115 | 3/15/2024 |
| 0.6.1 | 450 | 3/11/2024 |
| 0.6.0 | 120 | 3/11/2024 |
| 0.5.4 | 132 | 3/7/2024 |
| 0.5.3 | 152 | 3/7/2024 |
| 0.5.2 | 119 | 3/6/2024 |
| 0.5.1 | 130 | 3/5/2024 |
| 0.5.0 | 162 | 3/5/2024 |
| 0.4.5 | 208 | 3/3/2024 |
| 0.4.4 | 127 | 3/1/2024 |
| 0.4.3 | 124 | 3/1/2024 |
| 0.4.2 | 123 | 3/1/2024 |
| 0.4.1 | 119 | 2/29/2024 |
| 0.3.2 | 116 | 2/29/2024 |
| 0.3.1 | 106 | 2/29/2024 |
| 0.2.1 | 118 | 2/29/2024 |

# Changelog (Release Notes)

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html) (SemVer).

## [Unreleased]

### Added
- Feature suggestion: Retry mechanism ([#2](https://github.com/mscraftsman/generative-ai/issues/2))
- implement Automatic Function Call (AFC)
### Changed
### Fixed

## 1.9.5

### Added

- add model `gemini-exp-1121` - [#47](https://github.com/mscraftsman/generative-ai/issues/47) thanks to @doggy8088
- set package identifier as User-Agent and Google API Client

### Changed

- improve inheritance modifiers
- amend NuGet packages for .NET 6

## 1.9.4

### Added

- add LearnLM model `learnlm-1.5-pro-experimental`
- add Imagen3 on Google AI
- extend interface for Imagen Generation Model

### Changed

- guard initialisation of models/services (Google AI)

## 1.9.3

### Changed

- overwrite HTTP handling of API key
- mark properties as optional
- deserialize and return chat completions response

## 1.9.2

### Added

- add model `gemini-exp-1114` - [#45](https://github.com/mscraftsman/generative-ai/issues/45) thanks to @shankarvashist
- add models `gemini-1.5-flash-8b` and `gemini-1.5-flash-8b-latest`
- add services for `Chat`, `Embeddings`, and `OpenAI`
- add `EnableEnhancedCivicAnswers` property

## 1.9.1

### Changed

- update NuGet package(s)

## 1.9.0

### Added

- add .NET 9.0 targeting
- add feature: Interact with Vertex Tuned Models ([#36](https://github.com/mscraftsman/generative-ai/issues/36))
- add model/service for generated files
- add method(s) to call Predict endpoints

### Changed

- refactor handling of base URLs and API endpoints
- check request(s) for unsupported combination of options
- update method to batch embeddings

## 1.8.3

### Added

- add Grounding with Google Search
- add `ModelVersion` property

## 1.8.2

### Added

- new NuGet package `Mscc.GenerativeAI.Microsoft` leveraging Microsoft.Extensions.AI abstractions to build a unified AI client

### Changed

- set role for embedding request

### Fixed

- fix endpoint method of `text-embedding-004`

## 1.8.1

### Added

- add logs with LogLevel using the Standard logging in .NET ([#6](https://github.com/mscraftsman/generative-ai/issues/6)) - thanks @doggy8088

### Changed

- improve regarding XMLdoc, typos, and non-nullable properties

### Fixed

- fix Application Default Credentials (ADC) being loaded automatically even when using API key auth [#9](https://github.com/mscraftsman/generative-ai/issues/9)
- fix Exception thrown in Google App Engine [#26](https://github.com/mscraftsman/generative-ai/issues/26)

## 1.8.0

### Added

- add context caching: https://ai.google.dev/gemini-api/docs/caching
- add code execution: https://ai.google.dev/gemini-api/docs/code-execution
- add model `gemini-1.5-flash-8b-001`
- add Logprobs handling
- add required model name and optional cached content to request

### Changed

- sanitize name of cached content
- extend list of supported MIME types
- extend `FinishReason`
- extend `VideoMetadata`

### Fixed

- disable HTTP/3 (Quic) due to issue [#34](https://github.com/mscraftsman/generative-ai/issues/34)

## 1.7.0

### Added

- add methods using File API to class `GoogleAI`
- mark methods using File API as obsolete/deprecated
- add types and functionality for `CachedContents`
- add types and functionality for `GeneratedFile`
- add extension methods for `GeneratedFiles` and `CachedContents`
- add more XMLdoc

### Changed

- change access modifier of some properties

## 1.6.5

### Added

- add properties `State`, `Error`, and `VideoMetadata` to type `FileResource`. [#33](https://github.com/mscraftsman/generative-ai/issues/33)
- overload method of `UploadMedia` to support stream types ([#38](https://github.com/mscraftsman/generative-ai/issues/38))

### Changed

- use of using expression to dispose `FileStream` after upload [#35](https://github.com/mscraftsman/generative-ai/pull/37) - thanks @rsmithsa
- enhance returned error information [#33](https://github.com/mscraftsman/generative-ai/issues/33)
- update enums according to $discovery
- sync target frameworks among projects

## 1.6.4

### Changed

- upgrade NuGet packages
- housekeeping

## 1.6.3

### Added

- add model `gemini-1.5-pro-002`
- add model `gemini-1.5-flash-002`
- add experimental model `gemini-1.5-flash-8b-exp-0924`

## 1.6.2

### Added

- add RequestOptions to override default values
- add ResponseSchema for JSON response mode

### Changed

- change default model to Gemini 1.5
- [.NET] use HTTP/1.1 or higher protocol

## 1.6.1

### Added

- add Imagen 3 model `imagen-3.0-generate-001`
- add Imagen 3 model `imagen-3.0-fast-generate-001`

## 1.6.0

### Added

- add tuning model `gemini-1.5-pro-001`
- add tuning model `gemini-1.5-flash-001`
- add tuning model `gemini-1.5-flash-001-tuning`
- add experimental model `gemini-1.5-pro-exp-0801`
- add experimental model `gemini-1.5-pro-exp-0827`
- add experimental model `gemini-1.5-flash-exp-0827`
- add experimental model `gemini-1.5-flash-8b-exp-0827`

### Changed

- removed targeting for .NET 7 (end of support)
- re-linked constant `Model.Gemini15Pro`
- re-linked constant `Model.Gemini15Flash`

## 1.5.2

### Added

- add model `gemini-1.5-flash-001`
- add model `gemini-1.5-flash-001-tuning`

## 1.5.1

### Changed

- Update System.Text.Json to 8.0.4

## 1.5.0

### Added

- add model `gemini-1.5-flash-latest`

## 1.4.0

### Added

- implement Imagen 2 model (Vertex AI)
- implement Image Captioning (Vertex AI)
- implement Visual question and answering (VQA)
- add tests for `ImageGenerationModel` and `ImageTextModel`

### Changed

- refactor constant mimetype
- improve XML doc
- move types to subfolder

## 1.3.0

### Added

- implement Server-Sent Events (SSE)
- add enum `FunctionCallingMode`
- implement type `ToolConfig`
- add model `gemini-1.0-pro-vision-001`
- implement exception for max file upload size
- expose `Timeout` property

### Changed

- rename method `UploadMedia` to `UploadFile` (in sync with other SDKs)
- rename `TaskType` Unspecified property
- refactor `FileResource.SizeBytes` to long data type (int64)
- refactor response type of `ListFiles` (discovery)
- streaming responses using the SSE format now work for models other than gemini-pro (original limitation)
- specify default values for `pageSize`
- refactor constants to external file
- add and amend enum identifiers
- add and amend XML doc

## 1.2.0

### Added

- use TLS 1.2 protocol (.NET Fx)
- troubleshooting for streaming HttpIOException (.NET runtime issue)

### Changed

- improve writing of model name
- refactor Content type used for SystemInstruction
- update tests regarding Content type

### Fixed

- fix response checking in ChatSession

## 1.1.4

### Added

- new values in enum FinishReason
- new enum HarmBlockMethod

### Changed

- improve enums (ref: Google.Cloud.AIPlatform.V1)
- improve response in SSE format
- update samples to latest NuGet package

## 1.1.3

### Changed

- improve Grounding for Google Search and Vertex AI Search

### Fixed

- system instruction is an instance of content, not a list of same.

## 1.1.2

### Added

- test cases for FinishReason.MaxTokens

### Changed

- improve accessor of response.Text
- upgrade NuGet packages dependencies

## 1.1.1

### Fixed

- upload via File API (Display name was missing)

## 1.1.0

### Added

- implement JSON mode
- implement Grounding for Google Search and Vertex AI Search
- implement system instructions
- add model `text-embedding-004`
- add model `gemini-1.0-pro-002`
- add Audio / File API support

### Changed

- add tools collection
- generate XML docs

### Fixed

## 1.0.1

### Added

- implement part type of VideoMetadata
- enable Server Sent Events (SSE) for `gemini-1.0-pro`
- add models Gemini 1.5 Pro (FC patch, PIv5 and DI) and Gemini 1.0 Ultra

### Changed

- improve XML documentation
- remove/reduce snake_case JSON elements

## 1.0.0

### Added

- implement File API to support large files
- full support of Gemini 1.5 and Gemini 1.0 Ultra

### Changed

- improve XML documentation

## 0.9.4

### Added

- implement patching of tuned models in .NET Framework
- guard for unsupported features or API backend
- expose GetModel on IGenerative

### Changed

- extend XML documentation

### Fixed

- Assigning an API_KEY using model.ApiKey is not working ([#20](https://github.com/mscraftsman/generative-ai/issues/20))

## 0.9.3

### Changed

- apply default config and settings to request

### Fixed

- Fix a bug in Initialize_Model() test by Will @doggy8088 ([#13](https://github.com/mscraftsman/generative-ai/issues/13))
- Fixes ContentResponse class issue by Will @doggy8088 ([#16](https://github.com/mscraftsman/generative-ai/issues/16))
- ignore Text member in ContentResponse by Will @doggy8088 ([#14](https://github.com/mscraftsman/generative-ai/issues/14))

## 0.9.2

### Added

- models of Gemini 1.5 and Gemini 1.0 Ultra
- tests for Gemini 1.5 and Gemini 1.0 Ultra

## 0.9.1

### Added

- add interface IGenerativeAI
- simplify image/media handling in requests
- extend generateAnswer feature
- more tests for Gemini Pro Vision model
- add exceptions from API reference

### Changed

- improve creation of generative model in Google AI class
- SafetySettings can be easier and less error-prone. ([#8](https://github.com/mscraftsman/generative-ai/issues/8))
- remove _useHeaderApiKey ([#10](https://github.com/mscraftsman/generative-ai/issues/10))

## 0.9.0

### Added

- compatibility methods for PaLM models

### Changed
### Fixed

## 0.8.4

### Added

- missing comments and better explanations
- add GoogleAI type ([#3](https://github.com/mscraftsman/generative-ai/issues/3))
- read environment variables in GoogleAI and VertexAI

## 0.8.3

### Added

- simplify creation of tuned model

### Fixed

- check of model for Tuning, Answering and Embedding

## 0.8.2

### Added

- ability to rewind chat history
- access text of content response easier

### Changed

- improve handling of chat history (streaming)

## 0.8.1

### Changed

- access modifier to avoid ambiguous type reference (ClientSecrets)

## 0.8.0

### Added

- implement tuned model patching (.NET 6 and higher only)
- implement transfer of ownership of tuned model
- implement batched Embeddings
- query string parameters to list models (pagination and filter support)
- type documentation
- generate a grounded answer
- constants for method names/endpoints
- enumeration of state of created tuned model

### Changed

- text prompts have `user` role assigned
- improve Embeddings
- refactor types according to API reference
- extend type documentation
- improve .NET targeting of source code

## 0.7.2

### Added

- delete tuned model

### Changed

- method to list models supports both - regular and tuned - model types

## 0.7.1

### Added

- implement model tuning (works with stable models only)
  - `text-bison-001`
  - `gemini-1.0-pro-001`
- tests for model tuning

### Changed

- improved authentication regarding API key or OAuth/ADC
- added scope https://www.googleapis.com/auth/generative-language.tuning
- harmonized version among NuGet packages
- provide empty response on Safety stop
- merged regular and tuned ModelResponse

## 0.7.0

### Added

- use Environment Variables for default values (parameterless constructor)
- support of .env file

### Changed

- improve Function Calling
- improve Chat streaming
- improve Embeddings

## 0.6.1

### Added

- implement Function Calling

## 0.6.0

### Added

- implement streaming of content
- support of HTTP/3 protocol
- specify JSON order of properties

### Changed

- improve handling of config and settings

## 0.5.4

### Added

- implement Embeddings
- brief sanity check on model selection

### Changed

- refactor handling of parts
- ⛳ allow configuration, safety settings and tools for Chat

## 0.5.3

### Added

- Implement Chat

## 0.5.2

### Added

- Use of enumerations

### Changed

- Correct JSON conversion of SafetySettings

## 0.5.1

### Added

- Handle GenerationConfig, SafetySettings and Tools

### Changed

- Append streamGenerateContent

## 0.5.0

### Added

- Add NuGet package Mscc.GenerativeAI.Web for use with ASP.NET Core.

### Changed

- Refactor folder structure

## 0.4.5

### Changed

- Extend methods

## 0.4.4

### Added

- Automate package build process

## 0.4.3

### Added

- Add x-goog-api-key header

## 0.4.2

### Changed

- Minor correction

## 0.4.1

### Added

- Add OAuth to Google AI

## 0.3.2

### Changed

- Improve package attributes

## 0.3.1

### Added

- Add methods ListModels and GetModel

## 0.2.1

### Added

- Initial Release

## 0.1.2

### Changed

- Update README.md