UnderOcean 1.0.3


UnderOcean 🌊

A powerful .NET library for AI conversation management with Ollama integration, featuring intelligent Mixture of Experts (MoE) routing, embedding services, and seamless RAG capabilities.


🚀 Features

  • 🧠 Mixture of Experts (MoE) Routing: Intelligent query analysis and automatic routing to optimal processing modes
  • 📚 Retrieval-Augmented Generation (RAG): Advanced semantic search with hybrid scoring and smart context injection
  • 🛠️ Function Calling: Extensible tool system with automatic parameter extraction and validation
  • 💬 Conversation Management: Stateful conversation handling with context awareness
  • ⚡ Performance Optimization: Built-in monitoring, caching, and fallback strategies
  • 🔄 Hybrid Processing: Seamlessly combine RAG and tools for complex queries

📦 Installation

dotnet add package UnderOcean

šŸ—ļø Architecture

UnderOcean uses a sophisticated MoE architecture that automatically routes conversations based on context analysis. See ARCHITECTURE.md for detailed system design.

User Query → MoE Router → Route Selection → Processing Engine → Response
                ↓
        [Native|RAG|Tools|Hybrid]

🎯 Quick Start

1. Basic Setup

using Microsoft.Extensions.DependencyInjection;
using UnderOcean.Abstractions;
using UnderOcean.Services;

var services = new ServiceCollection();
services.AddHttpClient();
services.AddSingleton<IOllamaClient>(sp => new OllamaClient(sp.GetRequiredService<HttpClient>()));
services.AddSingleton<IEmbeddingService, EmbeddingService>();
services.AddSingleton<IMoERouter>(sp => new MoERouter(sp.GetRequiredService<IOllamaClient>()));
services.AddSingleton<IChatService, ChatService>();
services.AddSingleton<IConversationManager, ConversationManager>();

var provider = services.BuildServiceProvider();

2. Simple Conversation

var conversationManager = provider.GetRequiredService<IConversationManager>();

var request = new ConversationRequest(
    UserMessage: "Hello, how are you?",
    SystemPrompt: "You are a helpful assistant.",
    ConversationHistory: new List<ChatMessage>(),
    KnowledgeBase: new List<EmbeddingContext>(),
    AvailableTools: new List<ToolSchema>(),
    Mode: ConversationMode.Auto // Let MoE router decide
);

var response = await conversationManager.ProcessAsync(request, CancellationToken.None);
Console.WriteLine($"Response: {response.Response.Message.Content}");

📚 End-to-End RAG Integration

Step 1: Prepare Knowledge Base from Documents

var embeddingService = provider.GetRequiredService<IEmbeddingService>();
var conversationManager = provider.GetRequiredService<IConversationManager>();

// Load and process documents
var knowledgeBase = new List<EmbeddingContext>();

// Method 1: Load from text files
var filePaths = new[] { "docs/manual.txt", "docs/faq.txt" };
var documentsKB = await conversationManager.LoadKnowledgeBaseAsync(filePaths, CancellationToken.None);
knowledgeBase.AddRange(documentsKB);

// Method 2: Load from structured data (JSON/CSV)
var structuredData = await LoadStructuredDataAsync("data/products.json");
foreach (var item in structuredData)
{
    var text = $"Product: {item.Name}. Description: {item.Description}. Price: {item.Price}";
    var vector = await embeddingService.EmbedAsync(text, CancellationToken.None);
    knowledgeBase.Add(new EmbeddingContext { Text = text, Vector = vector });
}

// Method 3: Load from existing embeddings
var embeddingLines = await File.ReadAllLinesAsync("embeddings.jsonl");
foreach (var line in embeddingLines.Where(l => !string.IsNullOrWhiteSpace(l)))
{
    try
    {
        var jsonObj = JsonSerializer.Deserialize<JsonElement>(line);
        if (jsonObj.TryGetProperty("text", out var textElement) &&
            jsonObj.TryGetProperty("embedding", out var embeddingElement))
        {
            var text = textElement.GetString() ?? string.Empty;
            var embedding = embeddingElement.EnumerateArray()
                                          .Select(e => (float)e.GetDouble())
                                          .ToArray();
            knowledgeBase.Add(new EmbeddingContext { Text = text, Vector = embedding });
        }
    }
    catch (JsonException ex)
    {
        Console.WriteLine($"Skipping invalid JSON line: {ex.Message}");
    }
}

Console.WriteLine($"📚 Knowledge base loaded with {knowledgeBase.Count} entries");
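Method 2 above calls a LoadStructuredDataAsync helper that is not part of the library. A minimal sketch of one possible implementation, assuming the data file is a JSON array of objects with name, description, and price fields (the ProductRecord type is hypothetical):

```csharp
using System.Text.Json;

// Hypothetical record matching the fields used in Method 2 above
public sealed record ProductRecord(string Name, string Description, decimal Price);

public static class StructuredDataLoader
{
    // Deserializes a JSON array of product objects; property matching is
    // case-insensitive so "name"/"Name" both bind
    public static async Task<List<ProductRecord>> LoadStructuredDataAsync(string jsonPath)
    {
        await using var stream = File.OpenRead(jsonPath);
        var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
        return await JsonSerializer.DeserializeAsync<List<ProductRecord>>(stream, options)
               ?? new List<ProductRecord>();
    }
}
```

Adapt the record and the text template to your own schema; the only requirement is that each item ends up as a single descriptive string before embedding.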

Step 2: RAG-Enabled Conversation

var conversationHistory = new List<ChatMessage>();

// System prompt optimized for RAG
var systemPrompt = @"You are an intelligent assistant with access to a comprehensive knowledge base. 
When answering questions:
1. ALWAYS prioritize information from the knowledge base when available
2. Provide specific, detailed answers based on the retrieved context
3. If information is not in the knowledge base, clearly state this limitation
4. Cite relevant information without exposing technical details about retrieval";

// Interactive RAG session
while (true)
{
    Console.Write("\nYou: ");
    var userInput = Console.ReadLine();
    if (string.IsNullOrEmpty(userInput) || userInput.ToLower() == "quit") break;

    var request = new ConversationRequest(
        UserMessage: userInput,
        SystemPrompt: systemPrompt,
        ConversationHistory: conversationHistory.ToList(),
        KnowledgeBase: knowledgeBase, // 📚 Your knowledge base
        AvailableTools: new List<ToolSchema>(),
        Mode: ConversationMode.Auto // MoE will decide if RAG is needed
    );

    var response = await conversationManager.ProcessAsync(request, CancellationToken.None);
    
    Console.WriteLine($"\n🤖: {response.Response.Message.Content}");
    
    // Update conversation history
    conversationHistory.Add(new ChatMessage { Role = "user", Content = userInput });
    conversationHistory.Add(new ChatMessage { Role = "assistant", Content = response.Response.Message.Content });
}

Step 3: Advanced RAG with Forced Mode

// Force RAG mode for knowledge-intensive queries
var ragRequest = new ConversationRequest(
    UserMessage: "What are the technical specifications of our latest product?",
    SystemPrompt: systemPrompt,
    ConversationHistory: conversationHistory,
    KnowledgeBase: knowledgeBase,
    AvailableTools: new List<ToolSchema>(),
    Mode: ConversationMode.ForceRAG // 🔧 Force RAG processing
);

var ragResponse = await conversationManager.ProcessAsync(ragRequest, CancellationToken.None);

šŸ› ļø Function Calling Integration

Define Tools

var tools = new List<ToolSchema>
{
    new()
    {
        Function = new ToolFunction
        {
            Name = "GetWeather",
            Description = "Get current weather information for a specific city",
            Parameters = new
            {
                type = "object",
                properties = new
                {
                    city = new 
                    { 
                        type = "string", 
                        description = "The city name (required)" 
                    },
                    country = new 
                    { 
                        type = "string", 
                        description = "Country code (optional)" 
                    }
                },
                required = new[] { "city" }
            }
        }
    },
    new()
    {
        Function = new ToolFunction
        {
            Name = "SearchProducts",
            Description = "Search for products in the inventory",
            Parameters = new
            {
                type = "object",
                properties = new
                {
                    query = new { type = "string", description = "Search query" },
                    category = new { type = "string", description = "Product category filter" },
                    maxResults = new { type = "integer", description = "Maximum number of results" }
                },
                required = new[] { "query" }
            }
        }
    }
};

Tool-Enabled Conversation

var toolRequest = new ConversationRequest(
    UserMessage: "What's the weather like in Istanbul and show me winter jackets",
    SystemPrompt: "You are a helpful assistant with access to weather data and product search.",
    ConversationHistory: conversationHistory,
    KnowledgeBase: knowledgeBase,
    AvailableTools: tools, // 🛠️ Available function calls
    Mode: ConversationMode.Auto
);

var response = await conversationManager.ProcessAsync(toolRequest, CancellationToken.None);

// Check if tools were called
if (response.Response.Message.ToolCalls?.Any() == true)
{
    Console.WriteLine($"🔧 Tools called: {string.Join(", ", response.Response.Message.ToolCalls.Select(t => t.Function?.Name))}");
}

🔄 Hybrid Mode: RAG + Tools

var hybridRequest = new ConversationRequest(
    UserMessage: "Based on our product documentation, what's the weather-appropriate clothing for today in London?",
    SystemPrompt: "You have access to both product knowledge and weather data. Use both to provide comprehensive recommendations.",
    ConversationHistory: conversationHistory,
    KnowledgeBase: knowledgeBase,     // 📚 Product documentation
    AvailableTools: tools,           // 🛠️ Weather API
    Mode: ConversationMode.ForceHybrid // 🔄 Use both RAG and tools
);

var hybridResponse = await conversationManager.ProcessAsync(hybridRequest, CancellationToken.None);

āš™ļø Advanced Configuration

Custom Model Selection

// The MoE router automatically selects optimal models, but you can influence selection:
// Native: gpt-oss:20b (fast conversational)
// RAG: llama3.1:8b (balanced speed/quality)  
// Tools: qwen2.5:32b (strong function calling)
// Hybrid: llama3.1:8b (versatile)

Performance Monitoring

// Enable detailed performance logging
var request = new ConversationRequest(
    UserMessage: userInput,
    SystemPrompt: systemPrompt,
    ConversationHistory: conversationHistory,
    KnowledgeBase: knowledgeBase,
    AvailableTools: tools,
    Mode: ConversationMode.Auto
);

var stopwatch = Stopwatch.StartNew();
var response = await conversationManager.ProcessAsync(request, CancellationToken.None);
stopwatch.Stop();

Console.WriteLine($"ā±ļø Total processing time: {stopwatch.ElapsedMilliseconds}ms");
// Detailed timing information is logged to console automatically

Conversation Modes

public enum ConversationMode
{
    Auto,           // 🤖 Let MoE router decide (recommended)
    ForceNative,    // 💬 Force native conversation only
    ForceRAG,       // 📚 Force knowledge base retrieval
    ForceTools,     // 🛠️ Force function calling
    ForceHybrid     // 🔄 Force combined RAG + tools
}

📊 Core Components

  • ConversationManager: high-level orchestration (request processing, mode handling, knowledge base loading)
  • MoERouter: intelligent routing (SLM-based analysis, route decisions, model selection)
  • ChatService: core conversation logic (route execution, RAG processing, tool handling)
  • EmbeddingService: vector operations (text embedding, batch processing)
  • OllamaClient: API communication (model interaction, request/response handling)

šŸ› ļø Helper Methods

Loading Knowledge Base from Various Sources

// CSV files
public static async Task<List<EmbeddingContext>> LoadFromCsvAsync(string csvPath, IEmbeddingService embeddingService)
{
    var contexts = new List<EmbeddingContext>();
    var lines = await File.ReadAllLinesAsync(csvPath);
    
    foreach (var line in lines.Skip(1)) // Skip header row
    {
        // Note: a plain Split(',') breaks on quoted fields that contain commas;
        // use a proper CSV parser for production data
        var columns = line.Split(',');
        if (columns.Length >= 2)
        {
            var text = $"{columns[0]}: {columns[1]}"; // Combine relevant columns
            var vector = await embeddingService.EmbedAsync(text, CancellationToken.None);
            contexts.Add(new EmbeddingContext { Text = text, Vector = vector });
        }
    }
    
    return contexts;
}

// Database integration
public static async Task<List<EmbeddingContext>> LoadFromDatabaseAsync(string connectionString, IEmbeddingService embeddingService)
{
    var contexts = new List<EmbeddingContext>();
    // Your database loading logic here
    // Convert database records to text and generate embeddings
    return contexts;
}

📋 Requirements

  • .NET 8.0 or later
  • Ollama running locally or accessible via network
  • Required Ollama Models:
    • nomic-embed-text:v1.5 (embeddings)
    • qwen2.5:7b (routing decisions)
    • llama3.1:8b (RAG processing)
    • qwen2.5:32b (function calling)
    • gpt-oss:20b (native conversation)

Install Required Models

ollama pull nomic-embed-text:v1.5
ollama pull qwen2.5:7b
ollama pull llama3.1:8b
ollama pull qwen2.5:32b
ollama pull gpt-oss:20b

📈 Performance Tips

  1. Pre-compute Embeddings: Generate embeddings offline for large knowledge bases
  2. Batch Processing: Use batch embedding for multiple texts
  3. Conversation History Management: Limit history to recent messages for performance
  4. Model Warm-up: Keep models loaded in Ollama for faster response times
  5. Caching: Implement embedding caching for frequently accessed content
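Tip 5 (embedding caching) can be added without touching the library by wrapping the embedding service in a decorator. A minimal sketch, assuming only the EmbedAsync(text, cancellationToken) shape used throughout this README; the IEmbeddingService declaration below is an illustrative stand-in, not the library's actual interface:

```csharp
using System.Collections.Concurrent;

// Illustrative stand-in for the library's embedding interface (assumption)
public interface IEmbeddingService
{
    Task<float[]> EmbedAsync(string text, CancellationToken cancellationToken);
}

// Decorator that memoizes embeddings by exact text, so repeated
// knowledge-base entries are only embedded once
public sealed class CachingEmbeddingService : IEmbeddingService
{
    private readonly IEmbeddingService _inner;
    private readonly ConcurrentDictionary<string, Task<float[]>> _cache = new();

    public CachingEmbeddingService(IEmbeddingService inner) => _inner = inner;

    public Task<float[]> EmbedAsync(string text, CancellationToken cancellationToken) =>
        _cache.GetOrAdd(text, t => _inner.EmbedAsync(t, cancellationToken));
}
```

Registering it is then a one-line change: wrap the concrete service when adding it to the container. For long-running processes, consider an eviction policy so the cache does not grow without bound.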

🔧 Troubleshooting

Common Issues

Q: RAG is not retrieving relevant information

// Ensure your knowledge base texts are properly chunked and relevant
// Lower the similarity threshold if needed
// Check embedding quality with test queries
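One way to run the suggested embedding check is to score test pairs with cosine similarity directly; related texts should score well above unrelated ones. A self-contained sketch (plain C#, no library dependencies):

```csharp
public static class EmbeddingDiagnostics
{
    // Cosine similarity between two equal-length vectors; returns a value
    // in [-1, 1], where 1 means identical direction
    public static float CosineSimilarity(float[] a, float[] b)
    {
        float dot = 0f, normA = 0f, normB = 0f;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (MathF.Sqrt(normA) * MathF.Sqrt(normB));
    }
}
```

Embed a known question and the chunk that should answer it, then compare the score against your similarity threshold; if relevant pairs score low, the chunks are likely too long or too noisy.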

Q: Tools are not being called

// Verify tool schemas are properly formatted
// Check if the query clearly indicates need for external data
// Use ConversationMode.ForceTools to test tool functionality

Q: Poor routing decisions

// The MoE router learns from conversation context
// Provide clear, descriptive system prompts
// Use conversation history for better context

📄 License

MIT License - see LICENSE file for details.
