AIAgentSharp.Gemini 1.0.11

AIAgentSharp.Gemini

Google Gemini integration for AIAgentSharp: build LLM-powered agents on Google Gemini models.

Features

  • Google Gemini API Integration: Full support for Google's Gemini API
  • Function Calling: Native support for Gemini function calling
  • Configurable Models: Support for Gemini 1.5 Pro, Gemini 1.5 Flash, Gemini 1.0 Pro, and more
  • Advanced Configuration: Comprehensive settings for temperature, tokens, and more
  • Enterprise Support: Custom endpoint support
  • Error Handling: Robust error handling with retry logic
  • Logging: Comprehensive logging for debugging and monitoring
  • Multi-modal Support: Support for text and image inputs

Installation

dotnet add package AIAgentSharp.Gemini

Quick Start

Basic Usage

using AIAgentSharp;
using AIAgentSharp.Agents;
using AIAgentSharp.Gemini;
using AIAgentSharp.StateStores;

// Create Gemini client
var llm = new GeminiLlmClient("your-gemini-api-key");

// Create agent
var store = new MemoryAgentStateStore();
var agent = new Agent(llm, store);

// Run agent
var result = await agent.RunAsync("my-agent", "Hello, how are you?", new List<ITool>());
Console.WriteLine(result.FinalOutput);
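
The examples in this README hardcode the API key for brevity. In practice you will usually read it from the environment; the sketch below assumes an environment variable named GEMINI_API_KEY, which is an illustrative choice rather than a name the library requires.

// Read the API key from the environment instead of hardcoding it.
// GEMINI_API_KEY is an illustrative variable name.
var apiKey = Environment.GetEnvironmentVariable("GEMINI_API_KEY")
             ?? throw new InvalidOperationException("GEMINI_API_KEY is not set.");

var llm = new GeminiLlmClient(apiKey);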

Advanced Configuration

// Create optimized configuration for agent reasoning
var config = GeminiConfiguration.CreateForAgentReasoning();

// Or create custom configuration
var customConfig = new GeminiConfiguration
{
    Model = "gemini-1.5-pro",
    Temperature = 0.2f,
    MaxTokens = 6000,
    EnableFunctionCalling = true,
    MaxRetries = 5,
    RequestTimeout = TimeSpan.FromMinutes(3)
};

// Create client with configuration
var llm = new GeminiLlmClient("your-gemini-api-key", customConfig);

Enterprise Usage

var config = new GeminiConfiguration
{
    Model = "gemini-1.5-pro",
    ApiBaseUrl = "https://your-custom-endpoint.com/v1",
    EnableFunctionCalling = true
};

var llm = new GeminiLlmClient("your-api-key", config);

Configuration Options

Model Selection

// Cost-effective reasoning
var config = GeminiConfiguration.CreateForCostEfficiency(); // Uses gemini-1.5-flash

// Balanced performance
var config = GeminiConfiguration.CreateForAgentReasoning(); // Uses gemini-1.5-pro

// Maximum capability
var config = GeminiConfiguration.CreateForCreativeTasks(); // Uses gemini-1.5-pro
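
The factory methods return a regular GeminiConfiguration, so the settings shown elsewhere in this README can still be adjusted after the call. A minimal sketch, assuming the preset's properties remain settable:

// Start from the cost-efficient preset, then tighten it further.
var config = GeminiConfiguration.CreateForCostEfficiency();
config.MaxTokens = 1500;     // shorter responses than the preset default
config.Temperature = 0.0f;   // fully deterministic output

var llm = new GeminiLlmClient("your-gemini-api-key", config);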

Temperature Settings

var config = new GeminiConfiguration
{
    Temperature = 0.0f,  // Most deterministic
    // Temperature = 0.1f,  // Good for reasoning (default)
    // Temperature = 0.7f,  // Balanced creativity
    // Temperature = 1.0f,  // More creative
};

Token Management

var config = new GeminiConfiguration
{
    MaxTokens = 2000,    // Shorter responses, lower cost
    // MaxTokens = 4000,  // Default
    // MaxTokens = 8000,  // Longer responses, higher cost
};

Function Calling

The Gemini client supports native function calling:

// Create tools
var tools = new List<ITool>
{
    new WeatherTool(),
    new CalculatorTool()
};

// Function calling is automatically enabled when tools are provided
var result = await agent.RunAsync("agent-id", "What's 2+2 and the weather in NYC?", tools);

Multi-modal Support

The Gemini client supports both text and image inputs:

// Text-only conversation
var result = await agent.RunAsync("agent-id", "Summarize what this agent can do", tools);

// Multi-modal conversation (when supported by the model)
// Note: Multi-modal support depends on the specific Gemini model used

Error Handling

The client includes robust error handling:

try
{
    var result = await agent.RunAsync("agent-id", "Hello", tools);
}
catch (InvalidOperationException ex)
{
    Console.WriteLine($"Gemini API error: {ex.Message}");
}
catch (OperationCanceledException)
{
    Console.WriteLine("Request was cancelled or timed out");
}
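
Because the client already retries transient failures internally (see MaxRetries), call-site retries are optional. If you want an extra guard around a whole agent run, a simple sketch with a fixed pause between attempts could look like this:

// Optional call-site guard on top of the client's built-in retries.
for (var attempt = 1; attempt <= 3; attempt++)
{
    try
    {
        var result = await agent.RunAsync("agent-id", "Hello", tools);
        Console.WriteLine(result.FinalOutput);
        break;  // success, stop retrying
    }
    catch (InvalidOperationException ex) when (attempt < 3)
    {
        Console.WriteLine($"Gemini API error (attempt {attempt}): {ex.Message}");
        await Task.Delay(TimeSpan.FromSeconds(2));  // brief pause before the next attempt
    }
}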

Logging

Enable detailed logging for debugging:

var logger = new ConsoleLogger();
var llm = new GeminiLlmClient("your-api-key", logger: logger);

// Or use your own logger implementation
var customLogger = new MyCustomLogger();
var llm = new GeminiLlmClient("your-api-key", logger: customLogger);

Performance Optimization

Cost Optimization

var config = GeminiConfiguration.CreateForCostEfficiency();
// - Uses gemini-1.5-flash (cheaper)
// - Lower max tokens
// - Fewer retries
// - Shorter timeout

Speed Optimization

var config = new GeminiConfiguration
{
    Model = "gemini-1.5-flash",  // Fastest model
    MaxTokens = 2000,            // Shorter responses
    RequestTimeout = TimeSpan.FromMinutes(1),  // Shorter timeout
    MaxRetries = 2               // Fewer retries
};

Quality Optimization

var config = new GeminiConfiguration
{
    Model = "gemini-1.5-pro",    // Highest quality
    Temperature = 0.0f,          // Most deterministic
    MaxTokens = 8000,            // Longer responses
    RequestTimeout = TimeSpan.FromMinutes(5)  // Longer timeout
};

Supported Models

  • gemini-1.5-pro: Most capable model, best for complex reasoning and multi-modal tasks
  • gemini-1.5-flash: Fast and cost-effective, good for most tasks
  • gemini-1.0-pro: Legacy model, still effective for simple tasks
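
If you switch models depending on the task at hand, a small helper keeps the choice in one place. The mapping below is an assumption based on the trade-offs listed above, not something the package provides:

// Map a rough task profile to one of the model names listed above.
string PickModel(bool needsComplexReasoning) =>
    needsComplexReasoning ? "gemini-1.5-pro" : "gemini-1.5-flash";

var config = new GeminiConfiguration
{
    Model = PickModel(needsComplexReasoning: true)
};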

Implementation Notes

This package uses direct HTTP API calls to Google's Gemini API. The implementation includes:

  • Direct HTTP client implementation
  • JSON parsing for function calls
  • Comprehensive error handling
  • Retry logic with exponential backoff (see the sketch after this list)
  • Full support for all Gemini API features
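
For reference, the exponential-backoff pattern named above has roughly the shape sketched below. This illustrates the general technique only; it is not the package's internal code, and the transient-error filter (HttpRequestException) is an assumption.

// General shape of retry with exponential backoff (illustrative only).
async Task<T> RetryWithBackoffAsync<T>(Func<Task<T>> operation, int maxRetries = 3)
{
    for (var attempt = 0; ; attempt++)
    {
        try
        {
            return await operation();
        }
        catch (HttpRequestException) when (attempt < maxRetries)
        {
            // Wait 1s, 2s, 4s, ... before trying again.
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
        }
    }
}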

Dependencies

  • AIAgentSharp: Core agent framework
  • System.Text.Json: JSON serialization
  • System.Net.Http: HTTP client functionality

License

This package is licensed under the MIT License - see the LICENSE file for details.

Target Frameworks

net8.0 is the compatible target framework included in the package. The net9.0 and net10.0 targets, along with the platform-specific variants of net8.0 through net10.0 (android, browser, ios, maccatalyst, macos, tvos, windows), are computed as compatible.

Version History

Version   Downloads   Last Updated
1.0.11    112         8/22/2025
1.0.10    97          8/22/2025
1.0.9     125         8/20/2025
1.0.8     127         8/18/2025