Mythosia.AI.Providers.Alibaba 1.2.0

Mythosia.AI.Providers.Alibaba

Package Summary

Mythosia.AI.Providers.Alibaba adds Alibaba Cloud / Qwen provider support for Mythosia.AI through QwenService.

It is intended for projects that want to keep using the common AIService abstraction while calling Qwen-compatible chat completion endpoints through DashScope, vLLM, or Ollama.

Features

  • Qwen chat completion support through QwenService
  • Streaming response support with token usage reporting (TokenUsage)
  • Function calling support
  • Shared Mythosia.AI conversation and message abstractions
  • Optional thinking-mode control for supported Qwen models
  • Compatible endpoint handling for DashScope, vLLM, and Ollama

Installation

dotnet add package Mythosia.AI.Providers.Alibaba

Model Catalog

The provider now ships a broader built-in model catalog covering the Qwen 3 and Qwen 3.5 model families.

service.ChangeModel(AlibabaModels.Qwen3_32B);
service.ChangeModel(AlibabaModels.Qwen3_5_27B);
service.ChangeModel(AlibabaModels.Qwen3_5_397B);

Thinking Mode Behavior

QwenService applies platform-specific thinking request formatting for Qwen 3-family models.

Platform    Thinking On                                    Thinking Off
DashScope   enable_thinking = true                         enable_thinking = false
vLLM        chat_template_kwargs.enable_thinking = true    chat_template_kwargs.enable_thinking = false
Ollama      reasoning.effort = "high"                      (parameter omitted)

When thinking is off, enable_thinking = false is sent explicitly to DashScope and vLLM, which prevents a server-side default from enabling thinking unintentionally.
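As a rough sketch, the per-platform formatting in the table above corresponds to request bodies like the following. The surrounding model and messages fields are the standard OpenAI-compatible chat-completion shape and are shown for context only; the exact payloads QwenService emits may include additional fields.

```jsonc
// DashScope, thinking on: enable_thinking is a top-level field
{
  "model": "qwen3-32b",
  "messages": [{ "role": "user", "content": "..." }],
  "enable_thinking": true
}

// vLLM, thinking on: the flag is routed through chat_template_kwargs
{
  "model": "qwen3-32b",
  "messages": [{ "role": "user", "content": "..." }],
  "chat_template_kwargs": { "enable_thinking": true }
}

// Ollama, thinking on: a reasoning-effort hint; omitted entirely when off
{
  "model": "qwen3:32b",
  "messages": [{ "role": "user", "content": "..." }],
  "reasoning": { "effort": "high" }
}
```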

Request-Scoped Reasoning Control

When you use the shared AIRequestProfile API from Mythosia.AI, QwenService can disable reasoning for a single call without changing the long-lived service configuration.

var answer = await service.GetCompletionAsync(
    "Summarize this policy without reasoning output.",
    new AIRequestProfile
    {
        DisableReasoning = true
    });

Quick Start with vLLM

using Mythosia.AI.Providers.Alibaba;

var httpClient = new HttpClient();
var service = new QwenService("http://localhost:8000", EndpointPlatform.Vllm, httpClient)
    .UseQwen3_32BModel();

var response = await service.GetCompletionAsync("Hello, Qwen!");
Console.WriteLine(response);

Quick Start with Ollama

using Mythosia.AI.Providers.Alibaba;

var httpClient = new HttpClient();
var service = new QwenService("http://localhost:11434", EndpointPlatform.Ollama, httpClient)
    .UseQwen3_32BModel();

var response = await service.GetCompletionAsync("Hello, Qwen!");
Console.WriteLine(response);

Configure Thinking Mode

using Mythosia.AI.Providers.Alibaba;

var httpClient = new HttpClient();
var service = new QwenService("http://localhost:11434", EndpointPlatform.Ollama, httpClient)
{
    ThinkingMode = QwenThinking.On
};

Using Quantized or Custom Model Names

Some Qwen deployments do not use the default public model identifier.

Examples:

  • Quantized variants such as qwen3:32b-q4_K_M
  • Custom deployment names from a gateway or self-hosted endpoint
  • Provider-specific aliases that differ from the built-in AlibabaModels constants

In those cases, keep the service configured normally and set ModelIdOverride to the exact deployed model name that your endpoint expects.

using Mythosia.AI.Providers.Alibaba;

var service = new QwenService("http://localhost:11434", EndpointPlatform.Ollama, httpClient)
{
    ThinkingMode = QwenThinking.On,
    ModelIdOverride = "qwen3:32b-q4_K_M"
};

var response = await service.GetCompletionAsync("Summarize this document.");

You can also combine a built-in base model selection with a different runtime model ID:

var service = new QwenService("http://localhost:8000", EndpointPlatform.Vllm, httpClient)
    .UseQwen3_32BModel();

service.ModelIdOverride = "my-qwen3-32b-awq";

var response = await service.GetCompletionAsync("Explain this code.");

This is useful when:

  • The displayed deployment name is different from the public Qwen model name
  • You are routing through Ollama, vLLM, or a custom proxy
  • You want to use a quantized build while keeping the general service configuration readable

How Model Names Behave on Ollama

When EndpointPlatform.Ollama is used, built-in model names are automatically converted to Ollama-style IDs.

Example:

  • qwen3-32b → qwen3:32b

If your Ollama model name is not the default converted name, set ModelIdOverride explicitly.

Streaming Example

var service = new QwenService("http://localhost:8000", EndpointPlatform.Vllm, httpClient)
    .UseQwen3_32BModel();

await foreach (var chunk in service.StreamAsync("Explain transformers simply."))
{
    if (!string.IsNullOrWhiteSpace(chunk.Content))
        Console.Write(chunk.Content);
}

Function Calling Example

var service = new QwenService("http://localhost:8000", EndpointPlatform.Vllm, httpClient)
    .UseQwen3_32BModel()
    .WithFunction(
        "get_weather",
        "Gets the current weather for a city",
        ("city", "City name", true),
        (string city) => $"Weather in {city}: sunny, 24°C");

var result = await service.GetCompletionAsync("What's the weather in Seoul?");

Notes

  • Use EndpointPlatform.DashScope for Alibaba Cloud DashScope endpoints (default)
  • Use EndpointPlatform.Vllm for OpenAI-compatible vLLM endpoints
  • Use EndpointPlatform.Ollama for local Ollama servers
  • Model selection can be changed with provider model constants or ModelIdOverride
  • For the shared core API surface and advanced features, see the main Mythosia.AI package documentation
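Since EndpointPlatform.DashScope is the default, a DashScope setup should mirror the vLLM and Ollama quick starts above. The sketch below is an assumption-laden illustration: the base URL and the bearer-token wiring via HttpClient are not confirmed by this package's documentation, so verify both against the DashScope docs and the main Mythosia.AI documentation before relying on them.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using Mythosia.AI.Providers.Alibaba;

// Assumption: credentials are supplied as a bearer token on the HttpClient;
// check the Mythosia.AI docs for the supported way to pass an API key.
var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
    "Bearer", Environment.GetEnvironmentVariable("DASHSCOPE_API_KEY"));

// Assumption: DashScope's OpenAI-compatible base URL; confirm in the DashScope docs.
var service = new QwenService(
    "https://dashscope.aliyuncs.com/compatible-mode/v1",
    EndpointPlatform.DashScope,
    httpClient)
    .UseQwen3_32BModel();

var response = await service.GetCompletionAsync("Hello, Qwen!");
Console.WriteLine(response);
```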

Target Frameworks

This package targets .NET Standard 2.1 (netstandard2.1). NuGet computes compatibility with all later frameworks, including .NET Core 3.0 and 3.1, .NET 5 through .NET 10 (and their platform-specific variants), Mono, Xamarin, and Tizen (tizen60).


Version History

Version   Downloads   Last Updated
1.2.0     43          3/30/2026
1.1.0     51          3/28/2026
1.0.2     63          3/24/2026
1.0.1     76          3/22/2026
1.0.0     124         3/15/2026

v1.2.0: Recompiled against Mythosia.AI v5.2.0 — binary compatible with IAIService interface (Abstractions split). No API changes.