Google_Gemini 0.10.2-dev.58

This is a prerelease version of Google_Gemini.

Installation

.NET CLI:

dotnet add package Google_Gemini --version 0.10.2-dev.58

Package Manager Console (this command uses the NuGet module's version of Install-Package, so it is intended for the Package Manager Console in Visual Studio):

NuGet\Install-Package Google_Gemini -Version 0.10.2-dev.58

PackageReference (for projects that support it, copy this XML node into the project file):

<PackageReference Include="Google_Gemini" Version="0.10.2-dev.58" />

Central Package Management (CPM): copy this node into the solution's Directory.Packages.props file to version the package:

<PackageVersion Include="Google_Gemini" Version="0.10.2-dev.58" />

then reference it without a version in the project file:

<PackageReference Include="Google_Gemini" />

Paket:

paket add Google_Gemini --version 0.10.2-dev.58

F# Interactive / Polyglot Notebooks (copy the #r directive into the interactive tool or the source of the script):

#r "nuget: Google_Gemini, 0.10.2-dev.58"

C# file-based apps (available starting in .NET 10 preview 4; place before any lines of code in the .cs file):

#:package Google_Gemini@0.10.2-dev.58

Cake Addin:

#addin nuget:?package=Google_Gemini&version=0.10.2-dev.58&prerelease

Cake Tool:

#tool nuget:?package=Google_Gemini&version=0.10.2-dev.58&prerelease

Google.Gemini

License: MIT

Features 🔥

  • Fully generated C# SDK based on the official Google.Gemini OpenAPI specification using AutoSDK
  • Same-day updates to support new features
  • Updated and supported automatically as long as there are no breaking changes
  • All modern .NET features: nullability, trimming, NativeAOT, etc.
  • Supports .NET Framework / .NET Standard 2.0
  • Microsoft.Extensions.AI IChatClient and IEmbeddingGenerator support

Usage

using Google.Gemini;

using var client = new GeminiClient(apiKey);
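
For example, a minimal single-turn text generation call (the model id `gemini-2.0-flash` is an assumption; substitute any model returned by the List Models example):

```csharp
using Google.Gemini;

using var client = new GeminiClient(apiKey);

// Single-turn text generation; same call shape as the Generate Content sample.
var response = await client.ModelsGenerateContentAsync(
    modelsId: "gemini-2.0-flash", // assumed model id
    contents:
    [
        new Content
        {
            Role = "user",
            Parts = [new Part { Text = "Hello!" }],
        },
    ]);

Console.WriteLine(response.Candidates?[0].Content?.Parts?[0].Text);
```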

Microsoft.Extensions.AI

The SDK implements IChatClient and IEmbeddingGenerator:

using Google.Gemini;
using Microsoft.Extensions.AI;

// IChatClient
IChatClient chatClient = new GeminiClient(apiKey);
var response = await chatClient.GetResponseAsync(
    [new ChatMessage(ChatRole.User, "Hello!")],
    new ChatOptions { ModelId = "gemini-2.0-flash" });

// IEmbeddingGenerator
IEmbeddingGenerator<string, Embedding<float>> generator = new GeminiClient(apiKey);
var embeddings = await generator.GenerateAsync(
    ["Hello, world!"],
    new EmbeddingGenerationOptions { ModelId = "gemini-embedding-001" });

Live API (Real-time Voice/Video)

The SDK supports the Gemini Live API for real-time bidirectional voice and video interactions over WebSocket:

using Google.Gemini;

using var client = new GeminiClient(apiKey);

// Connect to the Live API
await using var session = await client.ConnectLiveAsync(new LiveSetupConfig
{
    Model = "models/gemini-2.5-flash-native-audio-latest",
    GenerationConfig = new GenerationConfig
    {
        ResponseModalities = [GenerationConfigResponseModalitie.Audio],
    },
});

// Send text and receive audio responses
await session.SendTextAsync("Hello, how are you?");

await foreach (var message in session.ReadEventsAsync())
{
    // Audio data in message.ServerContent.ModelTurn.Parts[].InlineData
    if (message.ServerContent?.TurnComplete == true)
        break;
}

Voice selection and speech config:

await using var session = await client.ConnectLiveAsync(new LiveSetupConfig
{
    Model = "models/gemini-2.5-flash-native-audio-latest",
    GenerationConfig = new GenerationConfig
    {
        ResponseModalities = [GenerationConfigResponseModalitie.Audio],
        SpeechConfig = new SpeechConfig
        {
            VoiceConfig = new VoiceConfig
            {
                PrebuiltVoiceConfig = new PrebuiltVoiceConfig
                {
                    VoiceName = "Kore", // Aoede, Charon, Fenrir, Kore, Puck, etc.
                },
            },
        },
    },
});

Multi-turn conversation:

// Send conversation history before triggering a response
await session.SendClientContentAsync(
    turns:
    [
        new Content
        {
            Role = "user",
            Parts = [new Part { Text = "My name is Alice" }],
        },
        new Content
        {
            Role = "model",
            Parts = [new Part { Text = "Nice to meet you, Alice!" }],
        },
        new Content
        {
            Role = "user",
            Parts = [new Part { Text = "What's my name?" }],
        },
    ],
    turnComplete: true);

System instruction (customize model behavior):

await using var session = await client.ConnectLiveAsync(new LiveSetupConfig
{
    Model = "models/gemini-2.5-flash-native-audio-latest",
    GenerationConfig = new GenerationConfig
    {
        ResponseModalities = [GenerationConfigResponseModalitie.Audio],
    },
    SystemInstruction = new Content
    {
        Parts = [new Part { Text = "You are a friendly pirate. Always respond in pirate speak." }],
    },
});

Tool calling:

var config = new LiveSetupConfig
{
    Model = "models/gemini-2.5-flash-native-audio-latest",
    GenerationConfig = new GenerationConfig
    {
        ResponseModalities = [GenerationConfigResponseModalitie.Audio],
    },
    Tools = [new Tool { FunctionDeclarations = [myFunction] }],
};

await using var session = await client.ConnectLiveAsync(config);
await session.SendTextAsync("What's the weather in London?");

await foreach (var message in session.ReadEventsAsync())
{
    if (message.ToolCall is { } toolCall)
    {
        // Handle function call and send response
        await session.SendToolResponseAsync([new FunctionResponse
        {
            Name = toolCall.FunctionCalls![0].Name,
            Id = toolCall.FunctionCalls[0].Id,
            Response = new { temperature = "15C" },
        }]);
    }

    // Tool calls cancelled due to user interruption
    if (message.ToolCallCancellation is { } cancellation)
    {
        Console.WriteLine($"Tool calls cancelled: {string.Join(", ", cancellation.Ids!)}");
    }

    if (message.ServerContent?.TurnComplete == true)
        break;
}

Session resumption (reconnect without losing context):

var config = new LiveSetupConfig
{
    Model = "models/gemini-2.5-flash-native-audio-latest",
    GenerationConfig = new GenerationConfig
    {
        ResponseModalities = [GenerationConfigResponseModalitie.Audio],
    },
    SessionResumption = new LiveSessionResumptionConfig(),
};

await using var session1 = await client.ConnectLiveAsync(config);
// ... interact ...
var handle = session1.LastSessionResumptionHandle;

// Later, reconnect with the handle
var config2 = new LiveSetupConfig
{
    // ... same config ...
    SessionResumption = new LiveSessionResumptionConfig { Handle = handle },
};
await using var session2 = await client.ConnectLiveAsync(config2);

Output transcription (get text alongside audio responses):

var config = new LiveSetupConfig
{
    Model = "models/gemini-2.5-flash-native-audio-latest",
    GenerationConfig = new GenerationConfig
    {
        ResponseModalities = [GenerationConfigResponseModalitie.Audio],
    },
    OutputAudioTranscription = new LiveOutputAudioTranscription(),
};

await using var session = await client.ConnectLiveAsync(config);
await session.SendTextAsync("Tell me a joke");

await foreach (var message in session.ReadEventsAsync())
{
    // Text transcription of the audio response
    if (message.ServerContent?.OutputTranscription?.Text is { } text)
        Console.Write(text);

    if (message.ServerContent?.TurnComplete == true)
        break;
}

Send audio/video:

// Send PCM audio (16-bit, 16kHz, little-endian, mono)
await session.SendAudioAsync(pcmBytes);

// Send audio with custom MIME type
await session.SendAudioAsync(audioBytes, "audio/pcm;rate=24000");

// Send video frame
await session.SendVideoAsync(jpegBytes, "image/jpeg");

// Stream video frames in a loop
foreach (var frame in videoFrames)
{
    await session.SendVideoAsync(frame, "image/jpeg");
    await Task.Delay(100); // ~10 fps
}

Auto-reconnect on GoAway (resilient sessions):

// ResilientLiveSession automatically reconnects when the server sends GoAway
await using var session = await client.ConnectResilientLiveAsync(new LiveSetupConfig
{
    Model = "models/gemini-2.5-flash-native-audio-latest",
    GenerationConfig = new GenerationConfig
    {
        ResponseModalities = [GenerationConfigResponseModalitie.Audio],
    },
});

session.GoAwayReceived += (sender, goAway) =>
    Console.WriteLine($"Server closing in {goAway.TimeLeft}, reconnecting...");
session.Reconnected += (sender, _) =>
    Console.WriteLine("Reconnected successfully!");

await session.SendTextAsync("Hello!");

// Events keep flowing transparently across reconnections
await foreach (var message in session.ReadEventsAsync())
{
    if (message.ServerContent?.TurnComplete == true)
        break;
}

Input audio transcription (get text for audio you send):

var config = new LiveSetupConfig
{
    Model = "models/gemini-2.5-flash-native-audio-latest",
    GenerationConfig = new GenerationConfig
    {
        ResponseModalities = [GenerationConfigResponseModalitie.Audio],
    },
    InputAudioTranscription = new LiveInputAudioTranscription(),
};

await using var session = await client.ConnectLiveAsync(config);
await session.SendAudioAsync(pcmBytes);
await session.SendClientContentAsync(turns: [], turnComplete: true);

await foreach (var message in session.ReadEventsAsync())
{
    // Text transcription of the audio you sent
    if (message.ServerContent?.InputTranscription?.Text is { } text)
        Console.Write($"[You said: {text}]");

    if (message.ServerContent?.TurnComplete == true)
        break;
}

<details> <summary><b>Advanced features</b> (compression, interruption, usage, GoAway, audio round-trip)</summary>

Context window compression (for longer sessions):

var config = new LiveSetupConfig
{
    Model = "models/gemini-2.5-flash-native-audio-latest",
    GenerationConfig = new GenerationConfig
    {
        ResponseModalities = [GenerationConfigResponseModalitie.Audio],
    },
    ContextWindowCompression = new LiveContextWindowCompression
    {
        SlidingWindow = new LiveSlidingWindow
        {
            TargetTokens = 1024, // tokens to retain after compression
        },
    },
};

Interruption handling (user speaks during model response):

await foreach (var message in session.ReadEventsAsync())
{
    if (message.ServerContent?.Interrupted == true)
    {
        // Model response was cut short — user started speaking
        Console.WriteLine("Model interrupted by user input");
    }

    if (message.ServerContent?.ModelTurn?.Parts is { } parts)
    {
        foreach (var part in parts)
        {
            // Process audio/text parts (may be partial if interrupted)
            if (part.InlineData?.Data is { } audioData)
                PlayAudio(audioData);
        }
    }

    if (message.ServerContent?.TurnComplete == true)
        break;
}

Usage metadata (track token consumption):

await foreach (var message in session.ReadEventsAsync())
{
    if (message.UsageMetadata is { } usage)
    {
        Console.WriteLine($"Prompt tokens: {usage.PromptTokenCount}");
        Console.WriteLine($"Response tokens: {usage.CandidatesTokenCount}");
        Console.WriteLine($"Total tokens: {usage.TotalTokenCount}");
    }

    if (message.ServerContent?.TurnComplete == true)
        break;
}

GoAway handling (graceful session migration):

await foreach (var message in session.ReadEventsAsync())
{
    if (message.GoAway is { } goAway)
    {
        // Server is closing soon — reconnect using session resumption
        Console.WriteLine($"Server closing in {goAway.TimeLeft}, reconnecting...");
        break; // dispose session and reconnect with resumption handle
    }

    if (message.ServerContent?.TurnComplete == true)
        break;
}

Audio round-trip (send and receive audio):

var config = new LiveSetupConfig
{
    Model = "models/gemini-2.5-flash-native-audio-latest",
    GenerationConfig = new GenerationConfig
    {
        ResponseModalities = [GenerationConfigResponseModalitie.Audio],
    },
};

await using var session = await client.ConnectLiveAsync(config);

// Send PCM audio (16-bit, 16kHz, little-endian, mono)
await session.SendAudioAsync(pcmBytes);

// Signal end of user turn
await session.SendClientContentAsync(turns: [], turnComplete: true);

// Receive audio response
await foreach (var message in session.ReadEventsAsync())
{
    if (message.ServerContent?.ModelTurn?.Parts is { } parts)
    {
        foreach (var part in parts)
        {
            if (part.InlineData?.Data is { } audioData)
            {
                // audioData is base64-decoded PCM audio (24kHz)
                PlayAudio(audioData);
            }
        }
    }

    if (message.ServerContent?.TurnComplete == true)
        break;
}

</details>

Chat Client Five Random Words Streaming

using var client = new GeminiClient(apiKey);
var modelId = GetGenerateContentModelId();

try
{
    IChatClient chatClient = client;
    var updates = chatClient.GetStreamingResponseAsync(
        messages:
        [
            new ChatMessage(ChatRole.User, "Generate 5 random words.")
        ],
        options: new ChatOptions
        {
            ModelId = modelId,
        });

    var deltas = new List<string>();
    await foreach (var update in updates)
    {
        if (!string.IsNullOrWhiteSpace(update.Text))
        {
            deltas.Add(update.Text);
        }
    }

    // In streaming mode, rate limiting may not throw ApiException but instead
    // return empty/truncated data. Treat empty results as inconclusive.
    if (deltas.Count == 0)
    {
        return;
    }

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}

Chat Client Five Random Words

using var client = new GeminiClient(apiKey);
var modelId = GetGenerateContentModelId();

try
{
    IChatClient chatClient = client;
    var response = await chatClient.GetResponseAsync(
        messages:
        [
            new ChatMessage(ChatRole.User, "Generate 5 random words.")
        ],
        options: new ChatOptions
        {
            ModelId = modelId,
        });

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}

Chat Client Get Service Returns Chat Client Metadata

using var client = CreateTestClient();
IChatClient chatClient = client;

var metadata = chatClient.GetService<ChatClientMetadata>();

Chat Client Get Service Returns Null For Unknown Key

using var client = CreateTestClient();
IChatClient chatClient = client;

var result = chatClient.GetService<ChatClientMetadata>(serviceKey: "unknown");

Chat Client Get Service Returns Self

using var client = CreateTestClient();
IChatClient chatClient = client;

var self = chatClient.GetService<GeminiClient>();

Chat Client Tool Calling Multi Turn

using var client = new GeminiClient(apiKey);
var modelId = GetGenerateContentModelId();

try
{
    var getWeatherTool = AIFunctionFactory.Create(
        (string location) => $"The weather in {location} is 72°F and sunny.",
        name: "get_weather",
        description: "Gets the current weather for a given location.");

    IChatClient chatClient = client;
    var messages = new List<ChatMessage>
    {
        new(ChatRole.User, "What is the weather in Paris?"),
    };

    // First turn: model requests tool call
    var response = await chatClient.GetResponseAsync(
        messages,
        new ChatOptions
        {
            ModelId = modelId,
            Tools = [getWeatherTool],
        });

    var functionCall = response.Messages
        .SelectMany(m => m.Contents)
        .OfType<FunctionCallContent>()
        .FirstOrDefault();

    // Verify thought signature is preserved on function call content
    // (Gemini API requires it to be echoed back in subsequent turns)
    if (functionCall!.AdditionalProperties?.TryGetValue("gemini.thoughtSignature", out var sig) == true)
    {
    }

    // Add assistant message with function call and tool result
    messages.AddRange(response.Messages);
    var toolResult = await getWeatherTool.InvokeAsync(
        functionCall.Arguments is { } args ? new AIFunctionArguments(args) : null);
    messages.Add(new ChatMessage(ChatRole.Tool,
    [
        new FunctionResultContent(functionCall.CallId, toolResult),
    ]));

    // Second turn: model should produce a final text response
    // (this verifies the thought signature round-trip works — the API
    // rejects requests with missing thought signatures)
    var finalResponse = await chatClient.GetResponseAsync(
        messages,
        new ChatOptions
        {
            ModelId = modelId,
            Tools = [getWeatherTool],
        });

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}

Chat Client Tool Calling Single Turn

using var client = new GeminiClient(apiKey);
var modelId = GetGenerateContentModelId();

try
{
    var getWeatherTool = AIFunctionFactory.Create(
        (string location) => $"The weather in {location} is 72°F and sunny.",
        name: "get_weather",
        description: "Gets the current weather for a given location.");

    IChatClient chatClient = client;
    var response = await chatClient.GetResponseAsync(
        [
            new ChatMessage(ChatRole.User, "What is the weather in Paris?")
        ],
        new ChatOptions
        {
            ModelId = modelId,
            Tools = [getWeatherTool],
        });

    // The model should request a tool call
    var functionCall = response.Messages
        .SelectMany(m => m.Contents)
        .OfType<FunctionCallContent>()
        .FirstOrDefault();

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}

Chat Client Tool Calling Streaming

using var client = new GeminiClient(apiKey);
var modelId = GetGenerateContentModelId();

try
{
    var getWeatherTool = AIFunctionFactory.Create(
        (string location) => $"The weather in {location} is 72°F and sunny.",
        name: "get_weather",
        description: "Gets the current weather for a given location.");

    IChatClient chatClient = client;
    var updates = chatClient.GetStreamingResponseAsync(
        [
            new ChatMessage(ChatRole.User, "What is the weather in Paris?")
        ],
        new ChatOptions
        {
            ModelId = modelId,
            Tools = [getWeatherTool],
        });

    // Collect all streaming updates
    var functionCalls = new List<FunctionCallContent>();
    await foreach (var update in updates)
    {
        functionCalls.AddRange(update.Contents.OfType<FunctionCallContent>());
    }

    // In streaming mode, rate limiting may not throw ApiException but instead
    // return empty/truncated data. Treat empty results as inconclusive.
    if (functionCalls.Count == 0)
    {
        return;
    }

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}

Count Tokens Multiple Messages

using var client = new GeminiClient(apiKey);
var modelId = GetGenerateContentModelId();

try
{
    var response = await client.ModelsCountTokensAsync(
        modelsId: modelId,
        contents:
        [
            new Content
            {
                Role = "user",
                Parts = [new Part { Text = "What is the meaning of life?" }],
            },
            new Content
            {
                Role = "model",
                Parts = [new Part { Text = "The meaning of life is a philosophical question." }],
            },
            new Content
            {
                Role = "user",
                Parts = [new Part { Text = "Can you elaborate?" }],
            },
        ]);

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}

Count Tokens Simple Text

using var client = new GeminiClient(apiKey);
var modelId = GetGenerateContentModelId();

try
{
    var response = await client.ModelsCountTokensAsync(
        modelsId: modelId,
        contents:
        [
            new Content
            {
                Parts = [new Part { Text = "Hello, world! This is a test of token counting." }],
            },
        ]);

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}

Edit Image Simple Edit

using var client = new GeminiClient(apiKey);

try
{
    // First generate an image to edit
    var original = await client.GenerateImageAsync(
        prompt: "A plain white background",
        imageSize: "1K");

    var edited = await client.EditImageAsync(
        prompt: "Add a red circle in the center",
        imageData: original.ImageData!,
        mimeType: original.MimeType ?? "image/png",
        imageSize: "1K");

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}

Embedding Generator Batch Input

using var client = new GeminiClient(apiKey);
var modelId = GetEmbeddingModelId();

try
{
    IEmbeddingGenerator<string, Embedding<float>> generator = client;
    var result = await generator.GenerateAsync(
        values: ["Hello, world!", "Goodbye, world!"],
        options: new EmbeddingGenerationOptions
        {
            ModelId = modelId,
        });

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}

Embedding Generator Get Service Returns Embedding Generator Metadata

using var client = CreateTestClient();
IEmbeddingGenerator<string, Embedding<float>> generator = client;

var metadata = generator.GetService<EmbeddingGeneratorMetadata>();

Embedding Generator Get Service Returns Null For Unknown Key

using var client = CreateTestClient();
IEmbeddingGenerator<string, Embedding<float>> generator = client;

var result = generator.GetService<EmbeddingGeneratorMetadata>(serviceKey: "unknown");

Embedding Generator Get Service Returns Self

using var client = CreateTestClient();
IEmbeddingGenerator<string, Embedding<float>> generator = client;

var self = generator.GetService<GeminiClient>();

Embedding Generator Single Input

using var client = new GeminiClient(apiKey);
var modelId = GetEmbeddingModelId();

try
{
    IEmbeddingGenerator<string, Embedding<float>> generator = client;
    var result = await generator.GenerateAsync(
        values: ["Hello, world!"],
        options: new EmbeddingGenerationOptions
        {
            ModelId = modelId,
        });

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}

Embedding Generator Task Type

using var client = new GeminiClient(apiKey);
var modelId = GetEmbeddingModelId();

try
{
    IEmbeddingGenerator<string, Embedding<float>> generator = client;

    // Use RETRIEVAL_QUERY task type for search queries
    var queryResult = await generator.GenerateAsync(
        values: ["How do I reset my password?"],
        options: new EmbeddingGenerationOptions
        {
            ModelId = modelId,
            AdditionalProperties = new AdditionalPropertiesDictionary
            {
                ["TaskType"] = "RETRIEVAL_QUERY",
            },
        });

    // Use RETRIEVAL_DOCUMENT task type with a Title for documents
    var docResult = await generator.GenerateAsync(
        values: ["To reset your password, go to Settings > Security > Change Password."],
        options: new EmbeddingGenerationOptions
        {
            ModelId = modelId,
            AdditionalProperties = new AdditionalPropertiesDictionary
            {
                ["TaskType"] = "RETRIEVAL_DOCUMENT",
                ["Title"] = "Password Reset Guide",
            },
        });

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}

File Management

using var client = new GeminiClient(apiKey);

var response = await client.FilesListAsync();

// Returns a valid (possibly empty) response even if no files have been uploaded
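
To enumerate the results, a sketch along these lines should work (the `Files` collection and `Name` property names are assumptions, inferred from the ListFiles response shape by analogy with `ListModelsResponse.Models`):

```csharp
// List uploaded files and print their resource names
var response = await client.FilesListAsync();

foreach (var file in response.Files ?? []) // "Files" property name is an assumption
{
    Console.WriteLine(file.Name);
}
```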

Generate Content

using var client = new GeminiClient(apiKey);
var modelId = GetGenerateContentModelId();

try
{
    Console.WriteLine($"Using model: {modelId}");

    GenerateContentResponse response = await client.ModelsGenerateContentAsync(
        modelsId: modelId,
        contents: [
            new Content
            {
                Parts = [
                    new Part
                    {
                        Text = "Generate 5 random words",
                    },
                ],
                Role = "user",
            },
        ],
        generationConfig: new GenerationConfig(),
        safetySettings: new List<SafetySetting>());

    Console.WriteLine(response.Candidates?[0].Content?.Parts?[0].Text);

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}

Generate Image Simple Prompt

using var client = new GeminiClient(apiKey);

try
{
    var result = await client.GenerateImageAsync(
        prompt: "A simple red circle on a white background",
        imageSize: "1K");

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}

Generate Image With Aspect Ratio

using var client = new GeminiClient(apiKey);

try
{
    var result = await client.GenerateImageAsync(
        prompt: "A landscape with mountains",
        imageSize: "1K",
        aspectRatio: "16:9");

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}

Generate Image With References Single Reference

using var client = new GeminiClient(apiKey);

try
{
    // First generate a reference image
    var reference = await client.GenerateImageAsync(
        prompt: "A simple red square",
        imageSize: "1K");

    var result = await client.GenerateImageWithReferencesAsync(
        prompt: "Create a similar shape but in blue",
        referenceImages: [(reference.ImageData!, reference.MimeType ?? "image/png")],
        imageSize: "1K");

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}

Generate Video Simple Prompt

using var client = new GeminiClient(apiKey);

try
{
    var result = await client.GenerateVideoAsync(
        prompt: "A serene beach at sunset with gentle waves");

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.BadRequest)
{
    // VIDEO modality may not be supported in generateContent endpoint yet
}

Generate Video From Image Simple Animation

using var client = new GeminiClient(apiKey);

try
{
    var image = await client.GenerateImageAsync(
        prompt: "A still landscape with mountains and a lake",
        imageSize: "1K");

    var result = await client.GenerateVideoFromImageAsync(
        prompt: "Animate the clouds moving slowly across the sky",
        imageData: image.ImageData!,
        mimeType: image.MimeType ?? "image/png");

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.BadRequest)
{
}

Interpolate Frames Two Images

using var client = new GeminiClient(apiKey);

try
{
    var startFrame = await client.GenerateImageAsync(
        prompt: "A red circle on a white background",
        imageSize: "1K");

    var endFrame = await client.GenerateImageAsync(
        prompt: "A blue square on a white background",
        imageSize: "1K");

    var result = await client.InterpolateFramesAsync(
        startFrame: startFrame.ImageData!,
        endFrame: endFrame.ImageData!,
        prompt: "Smoothly transition between the two shapes");

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.BadRequest)
{
}

List Models

using var client = new GeminiClient(apiKey);

ListModelsResponse response = await client.ModelsListAsync();

foreach (var model in response.Models ?? [])
{
    Console.WriteLine(model.Name);
}

Speak Different Voice

using var client = new GeminiClient(apiKey);

try
{
    var result = await client.SpeakAsync(
        text: "Say calmly: Testing with a different voice. This should sound professional.",
        voiceName: "Kore");

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.BadRequest &&
                              ex.Message.Contains("only be used for TTS"))
{
}

Speak Simple Text

using var client = new GeminiClient(apiKey);

try
{
    var result = await client.SpeakAsync(
        text: "Say cheerfully: Hello, this is a test of text to speech. Have a wonderful day!",
        voiceName: "Puck");

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.BadRequest &&
                              ex.Message.Contains("only be used for TTS"))
{
}

Transcribe Generated Audio

using var client = new GeminiClient(apiKey);

try
{
    // First generate audio to transcribe
    var audio = await client.SpeakAsync(
        text: "Read aloud: The quick brown fox jumps over the lazy dog.");

    var transcription = await client.TranscribeAsync(
        audioData: audio.AudioData!,
        mimeType: audio.MimeType ?? "audio/wav");

}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.TooManyRequests)
{
}
catch (ApiException ex) when (ex.StatusCode is System.Net.HttpStatusCode.BadRequest &&
                              ex.Message.Contains("only be used for TTS"))
{
}

Embedding Models

| Model                      | Dimensions     | Description                                                                                       |
|----------------------------|----------------|---------------------------------------------------------------------------------------------------|
| gemini-embedding-001       | 768 (default)  | Stable text embedding model                                                                         |
| gemini-embedding-2-preview | 3072 (default) | Latest multimodal model: text, images, video, audio, PDFs. Supports Matryoshka dimensions          |

The SDK defaults to gemini-embedding-001. For best retrieval quality, use gemini-embedding-2-preview (note: embedding spaces are incompatible between the two models). See Google's embedding guide for details.
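
Since gemini-embedding-2-preview supports Matryoshka (truncatable) dimensions, you can request smaller vectors through the standard `EmbeddingGenerationOptions.Dimensions` property from Microsoft.Extensions.AI. A sketch, assuming the SDK maps this option to the Gemini output dimensionality parameter:

```csharp
using Google.Gemini;
using Microsoft.Extensions.AI;

IEmbeddingGenerator<string, Embedding<float>> generator = new GeminiClient(apiKey);

// Request 768-dimension vectors from the 3072-dimension model
var result = await generator.GenerateAsync(
    ["Hello, world!"],
    new EmbeddingGenerationOptions
    {
        ModelId = "gemini-embedding-2-preview",
        Dimensions = 768, // assumes the SDK forwards this to the API
    });
```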

API Version

This SDK targets the v1beta API, which is the full-featured version used by Google's own SDKs (Python, JS, Go). The v1 (stable) API only exposes ~30 of the 70+ available endpoints and lacks critical features like tool calling, file upload, context caching, and grounding.

Support

Priority place for bugs: https://github.com/tryAGI/Google_Generative_AI/issues
Priority place for ideas and general questions: https://github.com/tryAGI/Google_Generative_AI/discussions
Discord: https://discord.gg/Ca2xhfBf3v

Acknowledgments


This project is supported by JetBrains through the Open Source Support Program.


This project is supported by CodeRabbit through the Open Source Support Program.

Target frameworks

net10.0 is compatible. The following target frameworks are computed as compatible: net10.0-android, net10.0-browser, net10.0-ios, net10.0-maccatalyst, net10.0-macos, net10.0-tvos, net10.0-windows.

NuGet packages (1)

The following NuGet package depends on Google_Gemini:

  • LangChain: LangChain meta-package with the most used things.
