Google_Gemini 0.10.2-dev.57
See the version list below for details.
dotnet add package Google_Gemini --version 0.10.2-dev.57
Google.Gemini
Features 🔥
- Fully generated C# SDK based on the official Google Gemini OpenAPI specification using AutoSDK
- Same-day updates to support new features
- Updated and released automatically if there are no breaking changes
- All modern .NET features: nullability, trimming, NativeAOT, etc.
- Supports .NET Framework / .NET Standard 2.0
- Microsoft.Extensions.AI IChatClient and IEmbeddingGenerator support
Usage
using Google.Gemini;
using var client = new GeminiClient(apiKey);
Microsoft.Extensions.AI
The SDK implements IChatClient and IEmbeddingGenerator:
using Google.Gemini;
using Microsoft.Extensions.AI;
// IChatClient
IChatClient chatClient = new GeminiClient(apiKey);
var response = await chatClient.GetResponseAsync(
[new ChatMessage(ChatRole.User, "Hello!")],
new ChatOptions { ModelId = "gemini-2.0-flash" });
// IEmbeddingGenerator
IEmbeddingGenerator<string, Embedding<float>> generator = new GeminiClient(apiKey);
var embeddings = await generator.GenerateAsync(
["Hello, world!"],
new EmbeddingGenerationOptions { ModelId = "gemini-embedding-001" });
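Streaming is also available through the same Microsoft.Extensions.AI abstraction. This sketch uses the standard IChatClient.GetStreamingResponseAsync method and assumes the same apiKey as above:

```csharp
using Google.Gemini;
using Microsoft.Extensions.AI;

// Stream the response incrementally via the standard abstraction
IChatClient chatClient = new GeminiClient(apiKey);
await foreach (var update in chatClient.GetStreamingResponseAsync(
    [new ChatMessage(ChatRole.User, "Write a haiku about the sea.")],
    new ChatOptions { ModelId = "gemini-2.0-flash" }))
{
    Console.Write(update.Text);
}
```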
Live API (Real-time Voice/Video)
The SDK supports the Gemini Live API for real-time bidirectional voice and video interactions over WebSocket:
using Google.Gemini;
using var client = new GeminiClient(apiKey);
// Connect to the Live API
await using var session = await client.ConnectLiveAsync(new LiveSetupConfig
{
Model = "models/gemini-2.5-flash-native-audio-latest",
GenerationConfig = new GenerationConfig
{
ResponseModalities = [GenerationConfigResponseModalitie.Audio],
},
});
// Send text and receive audio responses
await session.SendTextAsync("Hello, how are you?");
await foreach (var message in session.ReadEventsAsync())
{
// Audio data in message.ServerContent.ModelTurn.Parts[].InlineData
if (message.ServerContent?.TurnComplete == true)
break;
}
Voice selection and speech config:
await using var session = await client.ConnectLiveAsync(new LiveSetupConfig
{
Model = "models/gemini-2.5-flash-native-audio-latest",
GenerationConfig = new GenerationConfig
{
ResponseModalities = [GenerationConfigResponseModalitie.Audio],
SpeechConfig = new SpeechConfig
{
VoiceConfig = new VoiceConfig
{
PrebuiltVoiceConfig = new PrebuiltVoiceConfig
{
VoiceName = "Kore", // Aoede, Charon, Fenrir, Kore, Puck, etc.
},
},
},
},
});
Multi-turn conversation:
// Send conversation history before triggering a response
await session.SendClientContentAsync(
turns:
[
new Content
{
Role = "user",
Parts = [new Part { Text = "My name is Alice" }],
},
new Content
{
Role = "model",
Parts = [new Part { Text = "Nice to meet you, Alice!" }],
},
new Content
{
Role = "user",
Parts = [new Part { Text = "What's my name?" }],
},
],
turnComplete: true);
System instruction (customize model behavior):
await using var session = await client.ConnectLiveAsync(new LiveSetupConfig
{
Model = "models/gemini-2.5-flash-native-audio-latest",
GenerationConfig = new GenerationConfig
{
ResponseModalities = [GenerationConfigResponseModalitie.Audio],
},
SystemInstruction = new Content
{
Parts = [new Part { Text = "You are a friendly pirate. Always respond in pirate speak." }],
},
});
Tool calling:
var config = new LiveSetupConfig
{
Model = "models/gemini-2.5-flash-native-audio-latest",
GenerationConfig = new GenerationConfig
{
ResponseModalities = [GenerationConfigResponseModalitie.Audio],
},
Tools = [new Tool { FunctionDeclarations = [myFunction] }],
};
await using var session = await client.ConnectLiveAsync(config);
await session.SendTextAsync("What's the weather in London?");
await foreach (var message in session.ReadEventsAsync())
{
if (message.ToolCall is { } toolCall)
{
// Handle function call and send response
await session.SendToolResponseAsync([new FunctionResponse
{
Name = toolCall.FunctionCalls![0].Name,
Id = toolCall.FunctionCalls[0].Id,
Response = new { temperature = "15C" },
}]);
}
// Tool calls cancelled due to user interruption
if (message.ToolCallCancellation is { } cancellation)
{
Console.WriteLine($"Tool calls cancelled: {string.Join(", ", cancellation.Ids!)}");
}
if (message.ServerContent?.TurnComplete == true)
break;
}
Session resumption (reconnect without losing context):
var config = new LiveSetupConfig
{
Model = "models/gemini-2.5-flash-native-audio-latest",
GenerationConfig = new GenerationConfig
{
ResponseModalities = [GenerationConfigResponseModalitie.Audio],
},
SessionResumption = new LiveSessionResumptionConfig(),
};
await using var session1 = await client.ConnectLiveAsync(config);
// ... interact ...
var handle = session1.LastSessionResumptionHandle;
// Later, reconnect with the handle
var config2 = new LiveSetupConfig
{
// ... same config ...
SessionResumption = new LiveSessionResumptionConfig { Handle = handle },
};
await using var session2 = await client.ConnectLiveAsync(config2);
Output transcription (get text alongside audio responses):
var config = new LiveSetupConfig
{
Model = "models/gemini-2.5-flash-native-audio-latest",
GenerationConfig = new GenerationConfig
{
ResponseModalities = [GenerationConfigResponseModalitie.Audio],
},
OutputAudioTranscription = new LiveOutputAudioTranscription(),
};
await using var session = await client.ConnectLiveAsync(config);
await session.SendTextAsync("Tell me a joke");
await foreach (var message in session.ReadEventsAsync())
{
// Text transcription of the audio response
if (message.ServerContent?.OutputTranscription?.Text is { } text)
Console.Write(text);
if (message.ServerContent?.TurnComplete == true)
break;
}
Send audio/video:
// Send PCM audio (16-bit, 16kHz, little-endian, mono)
await session.SendAudioAsync(pcmBytes);
// Send audio with custom MIME type
await session.SendAudioAsync(audioBytes, "audio/pcm;rate=24000");
// Send video frame
await session.SendVideoAsync(jpegBytes, "image/jpeg");
// Stream video frames in a loop
foreach (var frame in videoFrames)
{
await session.SendVideoAsync(frame, "image/jpeg");
await Task.Delay(100); // ~10 fps
}
Auto-reconnect on GoAway (resilient sessions):
// ResilientLiveSession automatically reconnects when the server sends GoAway
await using var session = await client.ConnectResilientLiveAsync(new LiveSetupConfig
{
Model = "models/gemini-2.5-flash-native-audio-latest",
GenerationConfig = new GenerationConfig
{
ResponseModalities = [GenerationConfigResponseModalitie.Audio],
},
});
session.GoAwayReceived += (sender, goAway) =>
Console.WriteLine($"Server closing in {goAway.TimeLeft}, reconnecting...");
session.Reconnected += (sender, _) =>
Console.WriteLine("Reconnected successfully!");
await session.SendTextAsync("Hello!");
// Events keep flowing transparently across reconnections
await foreach (var message in session.ReadEventsAsync())
{
if (message.ServerContent?.TurnComplete == true)
break;
}
Input audio transcription (get text for audio you send):
var config = new LiveSetupConfig
{
Model = "models/gemini-2.5-flash-native-audio-latest",
GenerationConfig = new GenerationConfig
{
ResponseModalities = [GenerationConfigResponseModalitie.Audio],
},
InputAudioTranscription = new LiveInputAudioTranscription(),
};
await using var session = await client.ConnectLiveAsync(config);
await session.SendAudioAsync(pcmBytes);
await session.SendClientContentAsync(turns: [], turnComplete: true);
await foreach (var message in session.ReadEventsAsync())
{
// Text transcription of the audio you sent
if (message.ServerContent?.InputTranscription?.Text is { } text)
Console.Write($"[You said: {text}]");
if (message.ServerContent?.TurnComplete == true)
break;
}
<details> <summary><b>Advanced features</b> (compression, interruption, usage, GoAway, audio round-trip)</summary>
Context window compression (for longer sessions):
var config = new LiveSetupConfig
{
Model = "models/gemini-2.5-flash-native-audio-latest",
GenerationConfig = new GenerationConfig
{
ResponseModalities = [GenerationConfigResponseModalitie.Audio],
},
ContextWindowCompression = new LiveContextWindowCompression
{
SlidingWindow = new LiveSlidingWindow
{
TargetTokens = 1024, // tokens to retain after compression
},
},
};
Interruption handling (user speaks during model response):
await foreach (var message in session.ReadEventsAsync())
{
if (message.ServerContent?.Interrupted == true)
{
// Model response was cut short — user started speaking
Console.WriteLine("Model interrupted by user input");
}
if (message.ServerContent?.ModelTurn?.Parts is { } parts)
{
foreach (var part in parts)
{
// Process audio/text parts (may be partial if interrupted)
if (part.InlineData?.Data is { } audioData)
PlayAudio(audioData);
}
}
if (message.ServerContent?.TurnComplete == true)
break;
}
Usage metadata (track token consumption):
await foreach (var message in session.ReadEventsAsync())
{
if (message.UsageMetadata is { } usage)
{
Console.WriteLine($"Prompt tokens: {usage.PromptTokenCount}");
Console.WriteLine($"Response tokens: {usage.CandidatesTokenCount}");
Console.WriteLine($"Total tokens: {usage.TotalTokenCount}");
}
if (message.ServerContent?.TurnComplete == true)
break;
}
GoAway handling (graceful session migration):
await foreach (var message in session.ReadEventsAsync())
{
if (message.GoAway is { } goAway)
{
// Server is closing soon — reconnect using session resumption
Console.WriteLine($"Server closing in {goAway.TimeLeft}, reconnecting...");
break; // dispose session and reconnect with resumption handle
}
if (message.ServerContent?.TurnComplete == true)
break;
}
Audio round-trip (send and receive audio):
var config = new LiveSetupConfig
{
Model = "models/gemini-2.5-flash-native-audio-latest",
GenerationConfig = new GenerationConfig
{
ResponseModalities = [GenerationConfigResponseModalitie.Audio],
},
};
await using var session = await client.ConnectLiveAsync(config);
// Send PCM audio (16-bit, 16kHz, little-endian, mono)
await session.SendAudioAsync(pcmBytes);
// Signal end of user turn
await session.SendClientContentAsync(turns: [], turnComplete: true);
// Receive audio response
await foreach (var message in session.ReadEventsAsync())
{
if (message.ServerContent?.ModelTurn?.Parts is { } parts)
{
foreach (var part in parts)
{
if (part.InlineData?.Data is { } audioData)
{
// audioData is base64-decoded PCM audio (24kHz)
PlayAudio(audioData);
}
}
}
if (message.ServerContent?.TurnComplete == true)
break;
}
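The PlayAudio call in the snippets above is left to the application. One simple option is to wrap the received 24 kHz PCM in a minimal WAV header so any audio player can open it; SavePcmAsWav below is an illustrative helper, not part of the SDK:

```csharp
// Illustrative helper (not part of the SDK): wraps raw 16-bit mono PCM
// in a minimal RIFF/WAV header so any audio player can open the file.
static void SavePcmAsWav(byte[] pcm, string path, int sampleRate = 24000)
{
    const short bitsPerSample = 16;
    const short channels = 1;
    int byteRate = sampleRate * channels * bitsPerSample / 8;

    using var writer = new BinaryWriter(File.Create(path));
    writer.Write("RIFF"u8); writer.Write(36 + pcm.Length);
    writer.Write("WAVE"u8); writer.Write("fmt "u8);
    writer.Write(16);               // fmt chunk size
    writer.Write((short)1);         // PCM format
    writer.Write(channels);
    writer.Write(sampleRate);
    writer.Write(byteRate);
    writer.Write((short)(channels * bitsPerSample / 8)); // block align
    writer.Write(bitsPerSample);
    writer.Write("data"u8); writer.Write(pcm.Length);
    writer.Write(pcm);
}
```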
</details>
Embedding Models
| Model | Dimensions | Description |
|---|---|---|
| gemini-embedding-001 | 768 (default) | Stable text embedding model |
| gemini-embedding-2-preview | 3072 (default) | Latest multimodal model: text, images, video, audio, PDFs. Supports Matryoshka dimensions |
The SDK defaults to gemini-embedding-001. For best retrieval quality, use gemini-embedding-2-preview (note: embedding spaces are incompatible between the two models). See Google's embedding guide for details.
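Matryoshka-style truncation can be requested through the standard EmbeddingGenerationOptions.Dimensions property from Microsoft.Extensions.AI. Which dimension counts a given model accepts is an assumption to verify against Google's embedding guide:

```csharp
using Google.Gemini;
using Microsoft.Extensions.AI;

IEmbeddingGenerator<string, Embedding<float>> generator = new GeminiClient(apiKey);

// Request truncated (Matryoshka) embeddings; supported dimension values
// depend on the model, so check Google's embedding guide.
var embeddings = await generator.GenerateAsync(
    ["Hello, world!"],
    new EmbeddingGenerationOptions
    {
        ModelId = "gemini-embedding-2-preview",
        Dimensions = 768,
    });
```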
API Version
This SDK targets the v1beta API, which is the full-featured version used by Google's own SDKs (Python, JS, Go). The v1 (stable) API only exposes ~30 of the 70+ available endpoints and lacks critical features like tool calling, file upload, context caching, and grounding.
Support
Priority place for bugs: https://github.com/tryAGI/Google_Generative_AI/issues
Priority place for ideas and general questions: https://github.com/tryAGI/Google_Generative_AI/discussions
Discord: https://discord.gg/Ca2xhfBf3v
Acknowledgments
This project is supported by JetBrains through the Open Source Support Program.
This project is supported by CodeRabbit through the Open Source Support Program.
| Product | Compatible and additional computed target framework versions |
|---|---|
| .NET | net10.0 is compatible. net10.0-android, net10.0-browser, net10.0-ios, net10.0-maccatalyst, net10.0-macos, net10.0-tvos, and net10.0-windows were computed. |
Dependencies (net10.0):
- Microsoft.Extensions.AI.Abstractions (>= 10.4.0)
NuGet packages (1)
Showing the top 1 NuGet packages that depend on Google_Gemini:
| Package | Description |
|---|---|
| LangChain | LangChain meta-package with the most used things |
GitHub repositories
This package is not used by any popular GitHub repositories.
| Version | Downloads | Last Updated |
|---|---|---|
| 0.10.2-dev.85 | 24 | 3/22/2026 |
| 0.10.2-dev.64 | 27 | 3/20/2026 |
| 0.10.2-dev.58 | 28 | 3/20/2026 |
| 0.10.2-dev.57 | 27 | 3/20/2026 |
| 0.10.2-dev.55 | 24 | 3/19/2026 |
| 0.10.2-dev.54 | 26 | 3/19/2026 |
| 0.10.2-dev.53 | 22 | 3/19/2026 |
| 0.10.2-dev.52 | 22 | 3/19/2026 |
| 0.10.2-dev.51 | 28 | 3/19/2026 |
| 0.10.2-dev.44 | 34 | 3/19/2026 |
| 0.10.2-dev.43 | 27 | 3/19/2026 |
| 0.10.2-dev.42 | 23 | 3/19/2026 |
| 0.10.2-dev.41 | 22 | 3/19/2026 |
| 0.10.2-dev.40 | 26 | 3/19/2026 |
| 0.10.2-dev.39 | 26 | 3/19/2026 |
| 0.10.2-dev.38 | 27 | 3/19/2026 |
| 0.10.2-dev.37 | 25 | 3/19/2026 |
| 0.10.2-dev.36 | 27 | 3/19/2026 |
| 0.10.2-dev.35 | 23 | 3/19/2026 |
| 0.10.1 | 99 | 3/18/2026 |