Google_Gemini 0.10.2-dev.44

This is a prerelease version of Google_Gemini. There is a newer prerelease version of this package available; see the version list below for details.

.NET CLI:
dotnet add package Google_Gemini --version 0.10.2-dev.44

Package Manager (intended for the Package Manager Console in Visual Studio, as it uses the NuGet module's version of Install-Package):
NuGet\Install-Package Google_Gemini -Version 0.10.2-dev.44

PackageReference (for projects that support PackageReference, copy this XML node into the project file):
<PackageReference Include="Google_Gemini" Version="0.10.2-dev.44" />

Central Package Management (CPM): copy the following into the solution Directory.Packages.props file to version the package, then reference it from the project file.
Directory.Packages.props:
<PackageVersion Include="Google_Gemini" Version="0.10.2-dev.44" />
Project file:
<PackageReference Include="Google_Gemini" />

Paket CLI:
paket add Google_Gemini --version 0.10.2-dev.44

Script & Interactive (the #r directive can be used in F# Interactive and Polyglot Notebooks; copy it into the interactive tool or script source):
#r "nuget: Google_Gemini, 0.10.2-dev.44"

File-based apps (the #:package directive can be used in C# file-based apps starting in .NET 10 preview 4; place it in a .cs file before any lines of code):
#:package Google_Gemini@0.10.2-dev.44

Cake Addin:
#addin nuget:?package=Google_Gemini&version=0.10.2-dev.44&prerelease

Cake Tool:
#tool nuget:?package=Google_Gemini&version=0.10.2-dev.44&prerelease

Google.Gemini

License: MIT

Features 🔥

  • Fully generated C# SDK based on the official Google Gemini OpenAPI specification, using AutoSDK
  • Same-day updates to support new features
  • Updated and supported automatically as long as there are no breaking changes
  • All modern .NET features: nullability, trimming, NativeAOT, etc.
  • Supports .NET Framework / .NET Standard 2.0
  • Microsoft.Extensions.AI IChatClient and IEmbeddingGenerator support

Usage

using Google.Gemini;

using var client = new GeminiClient(apiKey);

Microsoft.Extensions.AI

The SDK implements IChatClient and IEmbeddingGenerator:

using Google.Gemini;
using Microsoft.Extensions.AI;

// IChatClient
IChatClient chatClient = new GeminiClient(apiKey);
var response = await chatClient.GetResponseAsync(
    [new ChatMessage(ChatRole.User, "Hello!")],
    new ChatOptions { ModelId = "gemini-2.0-flash" });

// IEmbeddingGenerator
IEmbeddingGenerator<string, Embedding<float>> generator = new GeminiClient(apiKey);
var embeddings = await generator.GenerateAsync(
    ["Hello, world!"],
    new EmbeddingGenerationOptions { ModelId = "gemini-embedding-001" });
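The embeddings come back as plain float vectors (`Embedding<float>.Vector` is a `ReadOnlyMemory<float>` in Microsoft.Extensions.AI), so similarity scoring needs no extra dependencies. A minimal cosine-similarity helper sketch:

```csharp
using System;

static class VectorMath
{
    // Cosine similarity between two equal-length vectors:
    // dot(a, b) / (|a| * |b|), in [-1, 1] for non-zero vectors.
    public static double CosineSimilarity(ReadOnlySpan<float> a, ReadOnlySpan<float> b)
    {
        if (a.Length != b.Length)
            throw new ArgumentException("Vectors must have the same length.");

        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
    }
}
```

With the generated embeddings above: `VectorMath.CosineSimilarity(embeddings[0].Vector.Span, embeddings[1].Vector.Span)`.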

Live API (Real-time Voice/Video)

The SDK supports the Gemini Live API for real-time bidirectional voice and video interactions over WebSocket:

using Google.Gemini;

using var client = new GeminiClient(apiKey);

// Connect to the Live API
await using var session = await client.ConnectLiveAsync(new LiveSetupConfig
{
    Model = "models/gemini-2.5-flash-native-audio-latest",
    GenerationConfig = new GenerationConfig
    {
        ResponseModalities = [GenerationConfigResponseModalitie.Audio],
    },
});

// Send text and receive audio responses
await session.SendTextAsync("Hello, how are you?");

await foreach (var message in session.ReadEventsAsync())
{
    // Audio data in message.ServerContent.ModelTurn.Parts[].InlineData
    if (message.ServerContent?.TurnComplete == true)
        break;
}
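The inline audio parts are raw 24kHz 16-bit mono PCM, so persisting them for playback only takes a standard 44-byte RIFF/WAVE header. A sketch (how you accumulate the per-message byte chunks into `pcm` is up to your receive loop):

```csharp
using System;
using System.IO;

static class WavWriter
{
    // Wraps raw 16-bit little-endian mono PCM in a minimal RIFF/WAVE container.
    public static void WritePcmToWav(string path, byte[] pcm, int sampleRate = 24000)
    {
        const short channels = 1;
        const short bitsPerSample = 16;
        int byteRate = sampleRate * channels * bitsPerSample / 8;
        short blockAlign = channels * bitsPerSample / 8;

        using var writer = new BinaryWriter(File.Create(path));
        writer.Write("RIFF"u8);            // RIFF chunk ID
        writer.Write(36 + pcm.Length);     // total file size minus 8
        writer.Write("WAVE"u8);
        writer.Write("fmt "u8);            // format sub-chunk
        writer.Write(16);                  // fmt chunk size
        writer.Write((short)1);            // audio format: PCM
        writer.Write(channels);
        writer.Write(sampleRate);
        writer.Write(byteRate);
        writer.Write(blockAlign);
        writer.Write(bitsPerSample);
        writer.Write("data"u8);            // data sub-chunk
        writer.Write(pcm.Length);
        writer.Write(pcm);
    }
}
```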

Voice selection and speech config:

await using var session = await client.ConnectLiveAsync(new LiveSetupConfig
{
    Model = "models/gemini-2.5-flash-native-audio-latest",
    GenerationConfig = new GenerationConfig
    {
        ResponseModalities = [GenerationConfigResponseModalitie.Audio],
        SpeechConfig = new SpeechConfig
        {
            VoiceConfig = new VoiceConfig
            {
                PrebuiltVoiceConfig = new PrebuiltVoiceConfig
                {
                    VoiceName = "Kore", // Aoede, Charon, Fenrir, Kore, Puck, etc.
                },
            },
        },
    },
});

Multi-turn conversation:

// Send conversation history before triggering a response
await session.SendClientContentAsync(
    turns:
    [
        new Content
        {
            Role = "user",
            Parts = [new Part { Text = "My name is Alice" }],
        },
        new Content
        {
            Role = "model",
            Parts = [new Part { Text = "Nice to meet you, Alice!" }],
        },
        new Content
        {
            Role = "user",
            Parts = [new Part { Text = "What's my name?" }],
        },
    ],
    turnComplete: true);

System instruction (customize model behavior):

await using var session = await client.ConnectLiveAsync(new LiveSetupConfig
{
    Model = "models/gemini-2.5-flash-native-audio-latest",
    GenerationConfig = new GenerationConfig
    {
        ResponseModalities = [GenerationConfigResponseModalitie.Audio],
    },
    SystemInstruction = new Content
    {
        Parts = [new Part { Text = "You are a friendly pirate. Always respond in pirate speak." }],
    },
});

Tool calling:

var config = new LiveSetupConfig
{
    Model = "models/gemini-2.5-flash-native-audio-latest",
    GenerationConfig = new GenerationConfig
    {
        ResponseModalities = [GenerationConfigResponseModalitie.Audio],
    },
    Tools = [new Tool { FunctionDeclarations = [myFunction] }],
};

await using var session = await client.ConnectLiveAsync(config);
await session.SendTextAsync("What's the weather in London?");

await foreach (var message in session.ReadEventsAsync())
{
    if (message.ToolCall is { } toolCall)
    {
        // Handle function call and send response
        await session.SendToolResponseAsync([new FunctionResponse
        {
            Name = toolCall.FunctionCalls![0].Name,
            Id = toolCall.FunctionCalls[0].Id,
            Response = new { temperature = "15C" },
        }]);
    }

    // Tool calls cancelled due to user interruption
    if (message.ToolCallCancellation is { } cancellation)
    {
        Console.WriteLine($"Tool calls cancelled: {string.Join(", ", cancellation.Ids!)}");
    }

    if (message.ServerContent?.TurnComplete == true)
        break;
}
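The `myFunction` referenced above is a function declaration. The exact property and type names come from the generated SDK; the sketch below is an assumption that they mirror the REST API's FunctionDeclaration/Schema shape (Name, Description, Parameters), so check the generated types for the real names:

```csharp
// Assumption: FunctionDeclaration/Schema mirror the REST API schema;
// verify member names against the generated SDK before use.
var myFunction = new FunctionDeclaration
{
    Name = "get_weather",
    Description = "Returns the current weather for a city.",
    Parameters = new Schema
    {
        Type = SchemaType.Object,
        Properties = new Dictionary<string, Schema>
        {
            ["city"] = new Schema { Type = SchemaType.String, Description = "City name" },
        },
        Required = ["city"],
    },
};
```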

Session resumption (reconnect without losing context):

var config = new LiveSetupConfig
{
    Model = "models/gemini-2.5-flash-native-audio-latest",
    GenerationConfig = new GenerationConfig
    {
        ResponseModalities = [GenerationConfigResponseModalitie.Audio],
    },
    SessionResumption = new LiveSessionResumptionConfig(),
};

await using var session1 = await client.ConnectLiveAsync(config);
// ... interact ...
var handle = session1.LastSessionResumptionHandle;

// Later, reconnect with the handle
var config2 = new LiveSetupConfig
{
    // ... same config ...
    SessionResumption = new LiveSessionResumptionConfig { Handle = handle },
};
await using var session2 = await client.ConnectLiveAsync(config2);

Output transcription (get text alongside audio responses):

var config = new LiveSetupConfig
{
    Model = "models/gemini-2.5-flash-native-audio-latest",
    GenerationConfig = new GenerationConfig
    {
        ResponseModalities = [GenerationConfigResponseModalitie.Audio],
    },
    OutputAudioTranscription = new LiveOutputAudioTranscription(),
};

await using var session = await client.ConnectLiveAsync(config);
await session.SendTextAsync("Tell me a joke");

await foreach (var message in session.ReadEventsAsync())
{
    // Text transcription of the audio response
    if (message.ServerContent?.OutputTranscription?.Text is { } text)
        Console.Write(text);

    if (message.ServerContent?.TurnComplete == true)
        break;
}

Send audio/video:

// Send PCM audio (16-bit, 16kHz, little-endian, mono)
await session.SendAudioAsync(pcmBytes);

// Send audio with custom MIME type
await session.SendAudioAsync(audioBytes, "audio/pcm;rate=24000");

// Send video frame
await session.SendVideoAsync(jpegBytes, "image/jpeg");

// Stream video frames in a loop
foreach (var frame in videoFrames)
{
    await session.SendVideoAsync(frame, "image/jpeg");
    await Task.Delay(100); // ~10 fps
}
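If you don't have a microphone capture pipeline handy, you can synthesize test input. The Live API's default input format is 16-bit little-endian mono PCM at 16kHz; a sketch generating a 440Hz test tone in that format:

```csharp
using System;

static class TestAudio
{
    // Generates 16-bit little-endian mono PCM at the given sample rate
    // (16kHz is the Live API's default input format).
    public static byte[] SineWavePcm(double frequencyHz, double seconds, int sampleRate = 16000)
    {
        int samples = (int)(seconds * sampleRate);
        var pcm = new byte[samples * 2];
        for (int i = 0; i < samples; i++)
        {
            double t = (double)i / sampleRate;
            short sample = (short)(Math.Sin(2 * Math.PI * frequencyHz * t) * short.MaxValue * 0.5);
            pcm[i * 2] = (byte)(sample & 0xFF);            // low byte first (little-endian)
            pcm[i * 2 + 1] = (byte)((sample >> 8) & 0xFF); // high byte second
        }
        return pcm;
    }
}
```

Usage: `await session.SendAudioAsync(TestAudio.SineWavePcm(440, 1.0));`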

<details> <summary><b>Advanced features</b> (compression, interruption, usage, GoAway, audio round-trip)</summary>

Context window compression (for longer sessions):

var config = new LiveSetupConfig
{
    Model = "models/gemini-2.5-flash-native-audio-latest",
    GenerationConfig = new GenerationConfig
    {
        ResponseModalities = [GenerationConfigResponseModalitie.Audio],
    },
    ContextWindowCompression = new LiveContextWindowCompression
    {
        SlidingWindow = new LiveSlidingWindow
        {
            TargetTokens = 1024, // tokens to retain after compression
        },
    },
};

Interruption handling (user speaks during model response):

await foreach (var message in session.ReadEventsAsync())
{
    if (message.ServerContent?.Interrupted == true)
    {
        // Model response was cut short — user started speaking
        Console.WriteLine("Model interrupted by user input");
    }

    if (message.ServerContent?.ModelTurn?.Parts is { } parts)
    {
        foreach (var part in parts)
        {
            // Process audio/text parts (may be partial if interrupted)
            if (part.InlineData?.Data is { } audioData)
                PlayAudio(audioData);
        }
    }

    if (message.ServerContent?.TurnComplete == true)
        break;
}

Usage metadata (track token consumption):

await foreach (var message in session.ReadEventsAsync())
{
    if (message.UsageMetadata is { } usage)
    {
        Console.WriteLine($"Prompt tokens: {usage.PromptTokenCount}");
        Console.WriteLine($"Response tokens: {usage.CandidatesTokenCount}");
        Console.WriteLine($"Total tokens: {usage.TotalTokenCount}");
    }

    if (message.ServerContent?.TurnComplete == true)
        break;
}

GoAway handling (graceful session migration):

await foreach (var message in session.ReadEventsAsync())
{
    if (message.GoAway is { } goAway)
    {
        // Server is closing soon — reconnect using session resumption
        Console.WriteLine($"Server closing in {goAway.TimeLeft}, reconnecting...");
        break; // dispose session and reconnect with resumption handle
    }

    if (message.ServerContent?.TurnComplete == true)
        break;
}

Audio round-trip (send and receive audio):

var config = new LiveSetupConfig
{
    Model = "models/gemini-2.5-flash-native-audio-latest",
    GenerationConfig = new GenerationConfig
    {
        ResponseModalities = [GenerationConfigResponseModalitie.Audio],
    },
};

await using var session = await client.ConnectLiveAsync(config);

// Send PCM audio (16-bit, 16kHz, little-endian, mono)
await session.SendAudioAsync(pcmBytes);

// Signal end of user turn
await session.SendClientContentAsync(turns: [], turnComplete: true);

// Receive audio response
await foreach (var message in session.ReadEventsAsync())
{
    if (message.ServerContent?.ModelTurn?.Parts is { } parts)
    {
        foreach (var part in parts)
        {
            if (part.InlineData?.Data is { } audioData)
            {
                // audioData is base64-decoded PCM audio (24kHz)
                PlayAudio(audioData);
            }
        }
    }

    if (message.ServerContent?.TurnComplete == true)
        break;
}

</details>

Embedding Models

Model                        Dimensions       Description
gemini-embedding-001         768 (default)    Stable text embedding model
gemini-embedding-2-preview   3072 (default)   Latest multimodal model: text, images, video, audio, PDFs; supports Matryoshka dimensions

The SDK defaults to gemini-embedding-001. For best retrieval quality, use gemini-embedding-2-preview (note: embedding spaces are incompatible between the two models). See Google's embedding guide for details.
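Matryoshka-trained embeddings can be shortened client-side: keep the leading components and L2-normalize the result, trading a little quality for smaller storage. A sketch of that step:

```csharp
using System;

static class Matryoshka
{
    // Keeps the first `dims` components and L2-normalizes the result.
    // Only valid for Matryoshka-trained models such as gemini-embedding-2-preview.
    public static float[] Truncate(ReadOnlySpan<float> embedding, int dims)
    {
        var result = embedding[..dims].ToArray();
        double norm = 0;
        foreach (var v in result) norm += v * v;
        var scale = (float)(1.0 / Math.Sqrt(norm));
        for (int i = 0; i < result.Length; i++) result[i] *= scale;
        return result;
    }
}
```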

API Version

This SDK targets the v1beta API, which is the full-featured version used by Google's own SDKs (Python, JS, Go). The v1 (stable) API only exposes ~30 of the 70+ available endpoints and lacks critical features like tool calling, file upload, context caching, and grounding.

Support

Priority place for bugs: https://github.com/tryAGI/Google_Generative_AI/issues
Priority place for ideas and general questions: https://github.com/tryAGI/Google_Generative_AI/discussions
Discord: https://discord.gg/Ca2xhfBf3v

Acknowledgments


This project is supported by JetBrains through the Open Source Support Program.


This project is supported by CodeRabbit through the Open Source Support Program.

Compatible and additional computed target framework versions:
.NET: net10.0 is compatible; net10.0-android, net10.0-browser, net10.0-ios, net10.0-maccatalyst, net10.0-macos, net10.0-tvos, and net10.0-windows were computed.
Learn more about Target Frameworks and .NET Standard.

NuGet packages (1)

The top NuGet package that depends on Google_Gemini:

LangChain: LangChain meta-package with the most used things.

GitHub repositories

This package is not used by any popular GitHub repositories.

Version        Downloads   Last Updated
0.10.2-dev.85 2 3/22/2026
0.10.2-dev.64 27 3/20/2026
0.10.2-dev.58 28 3/20/2026
0.10.2-dev.57 27 3/20/2026
0.10.2-dev.55 24 3/19/2026
0.10.2-dev.54 26 3/19/2026
0.10.2-dev.53 22 3/19/2026
0.10.2-dev.52 22 3/19/2026
0.10.2-dev.51 28 3/19/2026
0.10.2-dev.44 34 3/19/2026
0.10.2-dev.43 27 3/19/2026
0.10.2-dev.42 23 3/19/2026
0.10.2-dev.41 22 3/19/2026
0.10.2-dev.40 26 3/19/2026
0.10.2-dev.39 26 3/19/2026
0.10.2-dev.38 27 3/19/2026
0.10.2-dev.37 25 3/19/2026
0.10.2-dev.36 27 3/19/2026
0.10.2-dev.35 23 3/19/2026
0.10.1 94 3/18/2026