Google_GenerativeAI 2.1.5

.NET CLI:
    dotnet add package Google_GenerativeAI --version 2.1.5

Package Manager Console (Visual Studio):
    NuGet\Install-Package Google_GenerativeAI -Version 2.1.5

PackageReference (for projects that support it, copy into the project file):
    <PackageReference Include="Google_GenerativeAI" Version="2.1.5" />

Paket:
    paket add Google_GenerativeAI --version 2.1.5

F# Interactive / Polyglot Notebooks:
    #r "nuget: Google_GenerativeAI, 2.1.5"

Cake (as an Addin):
    #addin nuget:?package=Google_GenerativeAI&version=2.1.5

Cake (as a Tool):
    #tool nuget:?package=Google_GenerativeAI&version=2.1.5

Google GenerativeAI (Gemini) 🌟

License: MIT

Introduction πŸ“–

Unofficial C# .NET SDK for Google GenerativeAI (Gemini Pro), based on the REST APIs.
This new version is a complete rewrite of the previous SDK, designed to improve performance, flexibility, and ease of use. It integrates seamlessly with LangChain.net, providing convenient methods for JSON-based interactions and function calling with Google Gemini models.

Highlights of this release include:

  1. Complete Rewrite – The SDK has been entirely rebuilt for improved reliability and maintainability.
  2. LangChain.net Support πŸš€ – Enables you to directly use this SDK within LangChain.net workflows.
  3. Enhanced JSON Mode πŸ› οΈ – Includes straightforward methods to handle Google Gemini’s JSON mode.
  4. Function Calling with Code Generator πŸ§‘β€πŸ’» – Simplifies function calling by providing a source generator that creates argument classes and extension methods automatically.
  5. Multi-Modal Functionality 🎨🎡 – Provides methods to easily incorporate text, images, and other data for multimodal operations with Google Gemini.
  6. Vertex AI Support πŸŒ₯️ – Introducing direct support for Vertex AI, including multiple authentication methods such as OAuth, Service Account, and ADC (Application Default Credentials).
  7. Multimodal Live API πŸŽ›οΈ – Enables real-time interaction with multimodal content (text, images, audio) for dynamic and responsive applications.
  8. New Packages πŸ“¦ – Modularizes features to help you tailor the SDK to your needs:
  • Google_GenerativeAI.Tools – Provides function tooling and code generation using tryAgi CSharpToJsonSchema. Ideal for scenarios where you need to define functions and automate their JSON schema generation.
  • Google_GenerativeAI.Auth – Offers various Google authentication mechanisms, including OAuth, Service Account, and Application Default Credentials (ADC). Streamlines credential management.
  • Google_GenerativeAI.Microsoft – Implements the IChatClient interface from Microsoft.Extensions.AI, enabling seamless integration with Microsoft’s AI ecosystem and services.
  • Google_GenerativeAI.Web – Contains extension methods to integrate GenerativeAI into .NET web applications, simplifying setup for web projects that utilize Gemini models.
  • Google_GenerativeAI.Live – Enables Google Multimodal Live API integration for advanced realtime communication in .NET applications.

By merging the best of the old version with these new capabilities, the SDK provides a smoother developer experience and a wide range of features to leverage Google Gemini.


Usage πŸ’‘

Use this library to access Google Gemini (Generative AI) models easily. You can start by installing the NuGet package and obtaining the necessary API key from your Google account.


Quick Start πŸš€

Below are two common ways to initialize and use the SDK. For a full list of supported approaches, please refer to our wiki page.


1. Using Google AI 🌐

  1. Obtain an API Key πŸ”‘
    Visit Google AI Studio and generate your API key.

  2. Install the NuGet Package πŸ“¦
    You can install the package via NuGet Package Manager:

    Install-Package Google_GenerativeAI
    

    Or using the .NET CLI:

    dotnet add package Google_GenerativeAI
    
  3. Initialize GoogleAI βš™οΈ
    Provide the API key when creating an instance of the GoogleAI class:

    var googleAI = new GoogleAI("Your_API_Key");
    
  4. Obtain a GenerativeModel πŸ€–
    Create a generative model using a model name (for example, "models/gemini-1.5-flash"):

    var model = googleAI.CreateGenerativeModel("models/gemini-1.5-flash");
    
  5. Generate Content ✍️
    Call the GenerateContentAsync method to get a response:

    var response = await model.GenerateContentAsync("How is the weather today?");
    Console.WriteLine(response.Text());
    
  6. Full Code at a Glance πŸ–ΌοΈ

    var apiKey = "YOUR_GOOGLE_API_KEY";
    var googleAI = new GoogleAI(apiKey);
    
    var googleModel = googleAI.CreateGenerativeModel("models/gemini-1.5-flash");
    var googleResponse = await googleModel.GenerateContentAsync("How is the weather today?");
    Console.WriteLine("Google AI Response:");
    Console.WriteLine(googleResponse.Text());
    Console.WriteLine();
    

2. Using Vertex AI 🌟

  1. Install the Google Cloud SDK (CLI) πŸ› 
    By default, Vertex AI uses Application Default Credentials (ADC). Follow Google’s official instructions to install and set up the Google Cloud CLI.

  2. Initialize VertexAI βš™οΈ
    Once the SDK is set up locally, create an instance of the VertexAI class:

    var vertexAI = new VertexAI();
    
  3. Obtain a GenerativeModel πŸ€–
    Just like with GoogleAI, choose a model name and create the generative model:

    var vertexModel = vertexAI.CreateGenerativeModel("models/gemini-1.5-flash");
    
  4. Generate Content ✍️
    Use the GenerateContentAsync method to produce text:

    var response = await vertexModel.GenerateContentAsync("Hello from Vertex AI!");
    Console.WriteLine(response.Text());
    
  5. Full Code at a Glance πŸ‘€

    var vertexAI = new VertexAI(); // uses the Google Cloud CLI's ADC to obtain an access token
    var vertexModel = vertexAI.CreateGenerativeModel("models/gemini-1.5-flash");
    var vertexResponse = await vertexModel.GenerateContentAsync("Hello from Vertex AI!");
    Console.WriteLine("Vertex AI Response:");
    Console.WriteLine(vertexResponse.Text());
    

Chat Mode πŸ’¬

For multi-turn, conversational use cases, you can start a chat session by calling the StartChat method on an instance of GenerativeModel. You can use any of the previously mentioned initialization methods (environment variables, direct constructor, configuration files, ADC, service accounts, etc.) to set up credentials for your AI service first. Then you would:

  1. Create a GenerativeModel instance (e.g., via googleAI.CreateGenerativeModel(...) or vertexAI.CreateGenerativeModel(...)).
  2. Call StartChat() on the generated model to initialize a conversation.
  3. Use GenerateContentAsync(...) to exchange messages in the conversation.

Below is an example using the model name "gemini-1.5-flash":

// Example: Starting a chat session with a Google AI GenerativeModel

// 1) Initialize your AI instance (GoogleAI) with credentials or environment variables
var googleAI = new GoogleAI("YOUR_GOOGLE_API_KEY");

// 2) Create a GenerativeModel using the model name "gemini-1.5-flash"
var generativeModel = googleAI.CreateGenerativeModel("models/gemini-1.5-flash");

// 3) Start a chat session from the GenerativeModel
var chatSession = generativeModel.StartChat();

// 4) Send and receive messages
var firstResponse = await chatSession.GenerateContentAsync("Welcome to the Gemini 1.5 Flash chat!");
Console.WriteLine("First response: " + firstResponse.Text());

// Continue the conversation
var secondResponse = await chatSession.GenerateContentAsync("How can you help me with my AI development?");
Console.WriteLine("Second response: " + secondResponse.Text());

The same approach applies if you’re using Vertex AI:

// Example: Starting a chat session with a Vertex AI GenerativeModel

// 1) Initialize your AI instance (VertexAI) using one of the available authentication methods
var vertexAI = new VertexAI(); 

// 2) Create a GenerativeModel using "gemini-1.5-flash"
var generativeModel = vertexAI.CreateGenerativeModel("models/gemini-1.5-flash");

// 3) Start a chat
var chatSession = generativeModel.StartChat();

// 4) Send a chat message and read the response
var response = await chatSession.GenerateContentAsync("Hello from Vertex AI Chat using Gemini 1.5 Flash!");
Console.WriteLine(response.Text());

Each conversation preserves the context from previous messages, making it ideal for multi-turn or multi-step reasoning tasks. For more details, please check our Wiki.

Streaming 🌊

The GenerativeAI SDK supports streaming responses, allowing you to receive and process parts of the model's output as they become available, rather than waiting for the entire response to be generated. This is particularly useful for long-running generation tasks or for creating more responsive user interfaces.

  • StreamContentAsync(): Use this method for streaming text responses. It returns an IAsyncEnumerable<GenerateContentResponse>, which you can iterate over using await foreach.

Example (StreamContentAsync()):

using GenerativeAI;

// ... (Assume model is already initialized) ...

var prompt = "Write a long story about a cat.";
await foreach (var chunk in model.StreamContentAsync(prompt))
{
    Console.Write(chunk.Text()); // Print each chunk as it arrives
}
Console.WriteLine(); // Newline after the complete response

Multimodal Capabilities with Overloaded GenerateContentAsync Methods 🌐

Google Gemini models can work with more than just text – they can handle images, audio, and videos too! This opens up a lot of possibilities for developers. The GenerativeAI SDK makes it super easy to use these features.

Below are several examples showcasing how to incorporate files into your AI prompts:

  1. Directly providing a local file path.
  2. Referencing a remote file with its MIME type.
  3. Creating a request object to add multiple files (local or remote).

1. Generating Content with a Local File πŸ“‚

If you have a file available locally, simply pass in the file path:

// Generate content from a local file (e.g., an image)
var response = await geminiModel.GenerateContentAsync(
    "Describe the details in this uploaded image",
    @"C:\path\to\local\image.jpg"
);

Console.WriteLine(response.Text());

2. Generating Content with a Remote File 🌎

When your file is hosted remotely, provide the file URI and its corresponding MIME type:

// Generate content from a remote file (e.g., a PDF)
var response = await geminiModel.GenerateContentAsync(
    "Summarize the information in this PDF document",
    "https://example.com/path/to/sample.pdf",
    "application/pdf"
);

Console.WriteLine(response.Text());

3. Initializing a Request and Attaching Files πŸ“‹

For granular control, you can create a GenerateContentRequest, set a prompt, and attach one or more files (local or remote) before calling GenerateContentAsync:

// Create a request with a text prompt
var request = new GenerateContentRequest();
request.AddText("Describe what's in this document");

// Attach a local file
request.AddInlineFile(@"C:\files\example.png");

// Attach a remote file with its MIME type
request.AddRemoteFile("https://example.com/path/to/sample.pdf", "application/pdf");

// Generate the content with attached files
var response = await geminiModel.GenerateContentAsync(request);
Console.WriteLine(response.Text());

With these overloads and request-based approaches, you can seamlessly integrate additional file-based context into your prompts, enabling richer answers and unlocking more advanced AI-driven workflows.


Easy JSON Handling πŸ“

The GenerativeAI SDK makes it simple to work with JSON data from Gemini. There are several ways to do this; the most common are:
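The snippets in this section deserialize into a SampleJsonClass, which isn't defined here; any plain C# class with public properties works. A minimal hypothetical shape (the property names below are illustrative assumptions, not part of the SDK):

```csharp
// Hypothetical POCO for the JSON examples; the property names are
// illustrative - any serializable class with public properties works.
public class SampleJsonClass
{
    public string Title { get; set; }
    public string Summary { get; set; }
    public int Rating { get; set; }
}
```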

1. Automatic JSON Handling:

  • Use GenerateObjectAsync<T> to directly get the deserialized object:

    var myObject = await model.GenerateObjectAsync<SampleJsonClass>(request);
    
  • Use GenerateContentAsync and then ToObject<T> to deserialize the response:

    var response = await model.GenerateContentAsync<SampleJsonClass>(request);
    var myObject = response.ToObject<SampleJsonClass>();
    
  • Request: Use the UseJsonMode<T> extension method when creating your GenerateContentRequest. This tells the SDK to expect a JSON response of the specified type.

    var request = new GenerateContentRequest();
    request.UseJsonMode<SampleJsonClass>();
    request.AddText("Give me a really good response.");
    

2. Manual JSON Parsing:

  • Request: Create a standard GenerateContentRequest.

    var request = new GenerateContentRequest();
    request.AddText("Give me some JSON.");
    

    or

    var request = new GenerateContentRequest();
    request.GenerationConfig = new GenerationConfig
    {
        ResponseMimeType = "application/json",
        ResponseSchema = new SampleJsonClass()
    };
    request.AddText("Give me a really good response.");
    
  • Response: Use ExtractJsonBlocks() to get the raw JSON blocks from the response, and then use ToObject<T> to deserialize them.

    var response = await model.GenerateContentAsync(request);
    var jsonBlocks = response.ExtractJsonBlocks();
    var myObjects = jsonBlocks.Select(block => block.ToObject<SampleJsonClass>());
    

These options give you flexibility in how you handle JSON data with the GenerativeAI SDK.

Read the wiki for more options.

Gemini Tools and Function Calling πŸ› οΈ

The GenerativeAI SDK provides built-in tools to enhance Gemini's capabilities, including Google Search, Google Search Retrieval, and Code Execution. These tools allow Gemini to interact with the outside world and perform actions beyond generating text.

1. Inbuilt Tools (GoogleSearch, GoogleSearchRetrieval, and Code Execution):

You can easily enable or disable these tools by setting the corresponding properties on the GenerativeModel:

  • UseGoogleSearch: Enables or disables the Google Search tool.
  • UseGrounding: Enables or disables the Google Search Retrieval tool (often used for grounding responses in factual information).
  • UseCodeExecutionTool: Enables or disables the Code Execution tool.
// Example: Enabling Google Search and Code Execution
var model = new GenerativeModel(apiKey: "YOUR_API_KEY");
model.UseGoogleSearch = true;
model.UseCodeExecutionTool = true;

// Example: Disabling all inbuilt tools (on a separate model instance)
var model2 = new GenerativeModel(apiKey: "YOUR_API_KEY");
model2.UseGoogleSearch = false;
model2.UseGrounding = false;
model2.UseCodeExecutionTool = false;

2. Function Calling πŸ”§

Function calling lets you integrate custom functionality with Gemini by defining functions it can call. This requires the Google_GenerativeAI.Tools package.

  • Setup:

    1. Define an interface for your functions, using the [GenerateJsonSchema()] attribute.
    2. Implement the interface.
    3. Create tools and calls using AsTools() and AsCalls().
    4. Create a GenericFunctionTool instance.
    5. Add the tool to your GenerativeModel with AddFunctionTool().
  • FunctionCallingBehaviour: Customize behavior (e.g., auto-calling, error handling) using the GenerativeModel's FunctionCallingBehaviour property:

    • FunctionEnabled (default: true): Enables/disables function calling.
    • AutoCallFunction (default: true): Gemini automatically calls functions.
    • AutoReplyFunction (default: true): Gemini automatically generates responses after function calls.
    • AutoHandleBadFunctionCalls (default: false): Attempts to handle errors from incorrect calls.

// Install-Package Google_GenerativeAI.Tools
using System.ComponentModel; // for [Description]
using GenerativeAI;
using GenerativeAI.Tools;

[GenerateJsonSchema()]
public interface IWeatherFunctions // Simplified Interface
{
    [Description("Get the current weather")]
    Weather GetCurrentWeather(string location);
}

public class WeatherService : IWeatherFunctions
{  // ... (Implementation - see full example in wiki) ...
    public Weather GetCurrentWeather(string location)
      =>  new Weather
        {
            Location = location,
            Temperature = 30.0,
            Unit = Unit.Celsius,
            Description = "Sunny",
        };
}

// --- Usage ---
var service = new WeatherService();
var tools = service.AsTools();
var calls = service.AsCalls();
var tool = new GenericFunctionTool(tools, calls);
var model = new GenerativeModel(apiKey: "YOUR_API_KEY");
model.AddFunctionTool(tool);
//Example for FunctionCallingBehaviour
model.FunctionCallingBehaviour = new FunctionCallingBehaviour { AutoCallFunction = false }; // Example

var result = await model.GenerateContentAsync("Weather in SF?");
Console.WriteLine(result.Text());
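The Weather type and Unit enum referenced above aren't shown in the snippet; based on the properties it assigns (Location, Temperature, Unit, Description), a minimal sketch could be:

```csharp
// Minimal supporting types for the function-calling example above.
public enum Unit { Celsius, Fahrenheit }

public class Weather
{
    public string Location { get; set; }
    public double Temperature { get; set; }
    public Unit Unit { get; set; }
    public string Description { get; set; }
}
```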

For more details and options, see the wiki.

Multimodal Live API πŸŽ›οΈ

The Google_GenerativeAI SDK now conveniently supports the Google Multimodal Live API through the Google_GenerativeAI.Live package. This module enables real-time, interactive conversations with Gemini models by leveraging WebSockets for text and audio data exchange. It’s ideally suited for building live, multimodal experiences, such as chat or voice-enabled applications.

Key Features

The Google_GenerativeAI.Live package provides a comprehensive implementation of the Multimodal Live API, offering:

  • Real-time Communication: Enables two-way transmission of text and audio data for live conversational experiences.
  • Modality Support: Allows model responses in multiple formats, including text and audio, depending on your configuration.
  • Asynchronous Operations: Fully asynchronous API ensures non-blocking calls for data transmission and reception.
  • Event-driven Design: Exposes events for key stages of interaction, including connection status, message reception, and audio streaming.
  • Audio Handling: Built-in support for streaming audio, with configurability for sample rates and headers.
  • Custom Tool Integration: Allows extending functionality by integrating custom tools directly into the interaction.
  • Robust Error Handling: Manages errors gracefully, along with reconnection support.
  • Flexible Configuration: Supports customizing generation configurations, safety settings, and system instructions before establishing a connection.

How to Get Started

To leverage the Multimodal Live API in your project, you’ll need to install the Google_GenerativeAI.Live NuGet package and create a MultiModalLiveClient. Here’s a quick overview:

Installation

Install the Google_GenerativeAI.Live package via NuGet:

Install-Package Google_GenerativeAI.Live
Example Usage

With the MultiModalLiveClient, interacting with the Multimodal Live API is simple:

using GenerativeAI.Live;

public async Task RunLiveConversationAsync()
{
    var client = new MultiModalLiveClient(
        platformAdapter: new GoogleAIPlatformAdapter(), 
        modelName: "gemini-1.5-flash-exp", 
        generationConfig: new GenerationConfig { ResponseModalities = { Modality.TEXT, Modality.AUDIO } }, 
        safetySettings: null, 
        systemInstruction: "You are a helpful assistant."
    );

    client.Connected += (s, e) => Console.WriteLine("Connected!");
    client.TextChunkReceived += (s, e) => Console.WriteLine($"Text chunk: {e.TextChunk}");
    client.AudioChunkReceived += (s, e) => Console.WriteLine($"Audio received: {e.Buffer.Length} bytes");
    
    await client.ConnectAsync();

    await client.SentTextAsync("Hello, Gemini! What's the weather like?");
    await client.SendAudioAsync(audioData: new byte[] { /* audio bytes */ }, audioContentType: "audio/pcm; rate=16000");

    Console.ReadKey();
    await client.DisconnectAsync();
}

Events

The MultiModalLiveClient provides various events to plug into for real-time updates during interaction:

  • Connected: Triggered when the connection is successfully established.
  • Disconnected: Triggered when the connection ends gracefully or abruptly.
  • MessageReceived: Raised when any data (text or audio) is received.
  • TextChunkReceived: Triggered when chunks of text are received in real time.
  • AudioChunkReceived: Triggered when audio chunks are streamed from Gemini.
  • AudioReceiveCompleted: Triggered when a complete audio response is received.
  • ErrorOccurred: Raised when an error occurs during interaction or connection.
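Building on the client from the example above, the remaining events can be wired up the same way. The handler bodies below only log that each event fired; they are a sketch, since the event-args members aren't documented here:

```csharp
// Subscribe to the remaining lifecycle events on an existing client.
// Handler bodies are illustrative; inspect the event-args types in the
// Google_GenerativeAI.Live package for the data each event carries.
client.Disconnected += (s, e) => Console.WriteLine("Disconnected.");
client.MessageReceived += (s, e) => Console.WriteLine("Message received.");
client.AudioReceiveCompleted += (s, e) => Console.WriteLine("Audio response complete.");
client.ErrorOccurred += (s, e) => Console.WriteLine("An error occurred.");
```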

For more details and examples, refer to the wiki.

Semantic Search Retrieval (RAG) with Google AQA πŸ”Ž

The Google_GenerativeAI library makes implementing Retrieval-Augmented Generation (RAG) incredibly easy. RAG combines the strengths of Large Language Models (LLMs) with the precision of information retrieval. Instead of relying solely on the LLM's pre-trained knowledge, a RAG system first retrieves relevant information from a knowledge base (a "corpus" of documents) and then uses that information to augment the LLM's response. This allows the LLM to generate more accurate, factual, and context-aware answers.

This library leverages Google's Attributed Question Answering (AQA) model, which is specifically designed for semantic search and question answering. AQA excels at understanding the intent behind a question and finding the most relevant passages within a corpus to answer it. Key features include:

  • Semantic Understanding: AQA goes beyond simple keyword matching. It understands the meaning of the query and the documents.
  • Attribution: AQA provides an "Answerable Probability" score, indicating its confidence in the retrieved answer.
  • Easy Integration: The Google_GenerativeAI library provides a simple API to create corpora, add documents, and perform semantic searches.
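As a rough sketch of what a RAG flow with AQA looks like, here is hypothetical code; the method and type names below are illustrative only, not the SDK's actual API (see the wiki for the real corpus and retrieval methods):

```csharp
// Hypothetical RAG flow - the names below are illustrative, not the
// SDK's actual API; consult the wiki for the real corpus/retrieval calls.

// 1) Build a knowledge base: create a corpus and add documents.
var corpus = await retriever.CreateCorpusAsync("product-docs");
await retriever.AddDocumentAsync(corpus, "Returns are accepted within 30 days of purchase.");

// 2) Ask a question; AQA retrieves the most relevant passages and answers.
var answer = await aqaModel.GenerateAnswerAsync(corpus, "How long is the return window?");

// 3) Use the attribution score to decide whether to trust the answer.
if (answer.AnswerableProbability > 0.5)
    Console.WriteLine(answer.Text());
else
    Console.WriteLine("The corpus doesn't confidently answer this question.");
```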

For a step-by-step tutorial on implementing Semantic Search Retrieval with Google AQA, see the wiki page.

Coming Soon

The following features are planned for future releases of the GenerativeAI SDK:

  • Semantic Search Retrieval (RAG πŸ”Ž): Use Gemini as a Retrieval-Augmented Generation (RAG) system, allowing it to incorporate information from external sources into its responses. (Released on 20th Feb, 2025)
  • Image Generation 🎨: Generate images with imagen from text prompts, expanding Gemini's capabilities beyond text and code.
  • Multimodal Live API πŸŽ›οΈ: Bidirectional multimodal live chat with Gemini 2.0 Flash. (Added on 22nd Feb, 2025)
  • Model Tuning πŸŽ›οΈ: Customize Gemini models to better suit your specific needs and data.

Credits πŸ™Œ

Thanks to HavenDV for the LangChain.net SDK.


Explore the Wiki πŸ“š

Dive deeper into the GenerativeAI SDK! The wiki is your comprehensive resource for:

  • Detailed Guides πŸ”: Step-by-step tutorials on various features and use cases.
  • Advanced Usage πŸ› οΈ: Learn about advanced configuration options, error handling, and best practices.
  • Complete Code Examples πŸ’»: Find ready-to-run code snippets and larger project examples.

We encourage you to explore the wiki to unlock the full potential of the GenerativeAI SDK! πŸš€


Feel free to open an issue or submit a pull request if you encounter any problems or want to propose improvements! Your feedback helps us continue to refine and expand this SDK.

Compatible and additional computed target framework versions:

  • .NET: net6.0, net7.0, net8.0 and net9.0 are compatible; net5.0 and the platform-specific TFMs (android, ios, maccatalyst, macos, tvos, windows, browser) are computed.
  • .NET Standard: netstandard2.0 is compatible; netstandard2.1 is computed.
  • .NET Framework: net462 is compatible; net461 and net463 through net481 are computed.
  • .NET Core: netcoreapp2.0 through netcoreapp3.1 are computed.
  • Mono (Android/Mac/Touch), Xamarin (iOS/Mac/TVOS/WatchOS) and Tizen (4.0/6.0) targets are computed.

NuGet packages (7)

Showing the top 5 NuGet packages that depend on Google_GenerativeAI:

  • LangChain.Providers.Google – Google Gemini Chat model provider.
  • Google_GenerativeAI.Auth – Part of the Google_GenerativeAI SDK; provides various API authentication implementations.
  • Google_GenerativeAI.Tools – Provides tools and concrete implementations to facilitate function calling, including support for code generation.
  • Google_GenerativeAI.Web – Part of the Google_GenerativeAI SDK; provides .NET web application integration.
  • Google_GenerativeAI.Microsoft – Provides Microsoft.Extensions.AI integration for .NET applications.


Version Downloads Last updated
2.1.5 46 2/22/2025
2.1.4 36 2/22/2025
2.0.14 165 2/19/2025
2.0.11 231 2/18/2025
2.0.8 89 2/18/2025
2.0.7 208 2/17/2025
2.0.6 241 2/17/2025
2.0.4 218 2/16/2025
2.0.3 76 2/16/2025
2.0.2 149 2/16/2025
2.0.1 63 2/16/2025
2.0.0 136 2/16/2025
1.0.2 31,802 7/14/2024
1.0.1 33,931 6/6/2024
1.0.0 1,224 5/22/2024
0.1.20 2,057 4/29/2024
0.1.19 9,395 4/4/2024
0.1.18 280 4/3/2024
0.1.17 110 4/3/2024
0.1.16 115 4/3/2024
0.1.15 543 2/25/2024
0.1.14 2,539 2/24/2024
0.1.13 115 2/24/2024
0.1.12 8,283 12/19/2023
0.1.11 152 12/19/2023
0.1.10 118 12/19/2023
0.1.9 163 12/18/2023
0.1.7 106 12/18/2023
0.1.6 126 12/18/2023
0.1.5 135 12/18/2023
0.1.4 128 12/18/2023
0.1.3 127 12/18/2023
0.1.2 129 12/18/2023
0.1.1 155 12/18/2023
0.1.0 173 12/17/2023