# Mythosia.AI.Providers.Alibaba

## Package Summary
Mythosia.AI.Providers.Alibaba adds Alibaba Cloud / Qwen provider support for Mythosia.AI through QwenService.
It is intended for projects that want to keep using the common AIService abstraction while calling Qwen-compatible chat completion endpoints through DashScope, vLLM, or Ollama.
## Features

- Qwen chat completion support through `QwenService`
- Streaming response support with token usage reporting (`TokenUsage`)
- Function calling support
- Shared `Mythosia.AI` conversation and message abstractions
- Optional thinking-mode control for supported Qwen models
- Compatible endpoint handling for DashScope, vLLM, and Ollama
## Installation

```shell
dotnet add package Mythosia.AI.Providers.Alibaba
```
## Model Catalog

The provider now includes a broader built-in model catalog covering the Qwen 3 and Qwen 3.5 families.

```csharp
service.ChangeModel(AlibabaModels.Qwen3_32B);
service.ChangeModel(AlibabaModels.Qwen3_5_27B);
service.ChangeModel(AlibabaModels.Qwen3_5_397B);
```
## Thinking Mode Behavior

QwenService applies platform-specific thinking request formatting for Qwen 3-family models.

| Platform | Thinking On | Thinking Off |
|---|---|---|
| DashScope | `enable_thinking = true` | `enable_thinking = false` |
| vLLM | `chat_template_kwargs.enable_thinking = true` | `chat_template_kwargs.enable_thinking = false` |
| Ollama | `reasoning.effort = "high"` | (parameter omitted) |
When thinking is turned off, `enable_thinking = false` is sent explicitly to DashScope and vLLM, preventing server defaults from unintentionally enabling thinking.
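Illustratively, the table above corresponds to request bodies of roughly the following shape. This is a sketch for orientation only: everything other than the thinking-related fields named in the table is a generic placeholder, and the exact payloads QwenService emits may differ.

```jsonc
// DashScope: thinking controlled by a top-level flag
{ "model": "qwen3-32b", "messages": [ /* ... */ ], "enable_thinking": false }

// vLLM: the same flag is nested under chat_template_kwargs
{ "model": "qwen3-32b", "messages": [ /* ... */ ], "chat_template_kwargs": { "enable_thinking": false } }

// Ollama: thinking on sets a reasoning effort; thinking off omits the parameter entirely
{ "model": "qwen3:32b", "messages": [ /* ... */ ], "reasoning": { "effort": "high" } }
```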
## Request-Scoped Reasoning Control

When you are using the shared AIRequestProfile APIs from Mythosia.AI, QwenService can disable reasoning for a single call without changing the long-lived service configuration.

```csharp
var answer = await service.GetCompletionAsync(
    "Summarize this policy without reasoning output.",
    new AIRequestProfile
    {
        DisableReasoning = true
    });
```
## Quick Start with vLLM

```csharp
using Mythosia.AI.Providers.Alibaba;

var httpClient = new HttpClient();
var service = new QwenService("http://localhost:8000", EndpointPlatform.Vllm, httpClient)
    .UseQwen3_32BModel();

var response = await service.GetCompletionAsync("Hello, Qwen!");
Console.WriteLine(response);
```
## Quick Start with Ollama

```csharp
using Mythosia.AI.Providers.Alibaba;

var httpClient = new HttpClient();
var service = new QwenService("http://localhost:11434", EndpointPlatform.Ollama, httpClient)
    .UseQwen3_32BModel();

var response = await service.GetCompletionAsync("Hello, Qwen!");
Console.WriteLine(response);
```
## Configure Thinking Mode

```csharp
using Mythosia.AI.Providers.Alibaba;

var httpClient = new HttpClient();
var service = new QwenService("http://localhost:11434", EndpointPlatform.Ollama, httpClient)
{
    ThinkingMode = QwenThinking.On
};
```
## Using Quantized or Custom Model Names

Some Qwen deployments do not use the default public model identifier. Examples:

- Quantized variants such as `qwen3:32b-q4_K_M`
- Custom deployment names from a gateway or self-hosted endpoint
- Provider-specific aliases that differ from the built-in `AlibabaModels` constants
In those cases, keep the service configured normally and set ModelIdOverride to the exact deployed model name that your endpoint expects.
```csharp
using Mythosia.AI.Providers.Alibaba;

var httpClient = new HttpClient();
var service = new QwenService("http://localhost:11434", EndpointPlatform.Ollama, httpClient)
{
    ThinkingMode = QwenThinking.On,
    ModelIdOverride = "qwen3:32b-q4_K_M"
};

var response = await service.GetCompletionAsync("Summarize this document.");
```
You can also combine a built-in base model selection with a different runtime model ID:
```csharp
var httpClient = new HttpClient();
var service = new QwenService("http://localhost:8000", EndpointPlatform.Vllm, httpClient)
    .UseQwen3_32BModel();
service.ModelIdOverride = "my-qwen3-32b-awq";

var response = await service.GetCompletionAsync("Explain this code.");
```
This is useful when:
- The displayed deployment name is different from the public Qwen model name
- You are routing through Ollama, vLLM, or a custom proxy
- You want to use a quantized build while keeping the general service configuration readable
## How Model Names Behave on Ollama

When EndpointPlatform.Ollama is used, built-in model names are automatically converted to Ollama-style IDs.

Example: `qwen3-32b` → `qwen3:32b`

If your Ollama model name is not the default converted name, set ModelIdOverride explicitly.
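The conversion rule above can be sketched as a small helper. Note that `ToOllamaModelId` is a hypothetical function written here only to illustrate the documented `qwen3-32b` → `qwen3:32b` example; the actual conversion logic inside QwenService is not shown in this README and may differ.

```csharp
using System;

// Hypothetical sketch of the documented rule: replace the final '-'
// with ':' so "qwen3-32b" becomes the Ollama-style ID "qwen3:32b".
static string ToOllamaModelId(string builtInName)
{
    int lastDash = builtInName.LastIndexOf('-');
    return lastDash < 0
        ? builtInName // no dash: nothing to convert
        : builtInName.Substring(0, lastDash) + ":" + builtInName.Substring(lastDash + 1);
}

Console.WriteLine(ToOllamaModelId("qwen3-32b")); // prints "qwen3:32b"
```

If your deployed model uses a tag the rule cannot produce (for example a quantized variant), bypass the conversion entirely by setting ModelIdOverride as shown in the previous section.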
## Streaming Example

```csharp
var httpClient = new HttpClient();
var service = new QwenService("http://localhost:8000", EndpointPlatform.Vllm, httpClient)
    .UseQwen3_32BModel();

await foreach (var chunk in service.StreamAsync("Explain transformers simply."))
{
    if (!string.IsNullOrWhiteSpace(chunk.Content))
        Console.Write(chunk.Content);
}
```
## Function Calling Example

```csharp
var httpClient = new HttpClient();
var service = new QwenService("http://localhost:8000", EndpointPlatform.Vllm, httpClient)
    .UseQwen3_32BModel()
    .WithFunction(
        "get_weather",
        "Gets the current weather for a city",
        ("city", "City name", true),
        (string city) => $"Weather in {city}: sunny, 24°C");

var result = await service.GetCompletionAsync("What's the weather in Seoul?");
```
## Notes

- Use `EndpointPlatform.DashScope` for Alibaba Cloud DashScope endpoints (default)
- Use `EndpointPlatform.Vllm` for OpenAI-compatible vLLM endpoints
- Use `EndpointPlatform.Ollama` for local Ollama servers
- Model selection can be changed with provider model constants or `ModelIdOverride`
- For the shared core API surface and advanced features, see the main `Mythosia.AI` package documentation
## Documentation
- Main package: GitHub Repository
- Core package docs: Mythosia.AI Core Package
- Release notes: RELEASE_NOTES.md
## Compatible Frameworks

The package targets .NET Standard 2.1, so it runs on any framework that implements it:

| Product | Compatible versions |
|---|---|
| .NET | net5.0 through net10.0 (including all platform-specific TFMs: windows, android, ios, maccatalyst, macos, tvos, browser) |
| .NET Core | netcoreapp3.0, netcoreapp3.1 |
| .NET Standard | netstandard2.1 |
| Mono / Xamarin | MonoAndroid, MonoMac, MonoTouch, Xamarin.iOS, Xamarin.Mac, Xamarin.TVOS, Xamarin.WatchOS |
| Tizen | tizen60 |
## Dependencies

.NET Standard 2.1:

- Mythosia.AI (>= 5.2.0)
- TiktokenSharp (>= 1.2.1)
## Release Notes

v1.2.0: Recompiled against Mythosia.AI v5.2.0. Binary compatible with the IAIService interface (Abstractions split). No API changes.