YoloDotNet 4.1.0
See the version list below for details.

```shell
# .NET CLI
dotnet add package YoloDotNet --version 4.1.0

# Package Manager
NuGet\Install-Package YoloDotNet -Version 4.1.0

# Paket CLI
paket add YoloDotNet --version 4.1.0
```

```xml
<!-- PackageReference -->
<PackageReference Include="YoloDotNet" Version="4.1.0" />

<!-- Central package management (Directory.Packages.props + project file) -->
<PackageVersion Include="YoloDotNet" Version="4.1.0" />
<PackageReference Include="YoloDotNet" />
```

```shell
# Script & Interactive (F# / .NET Interactive)
#r "nuget: YoloDotNet, 4.1.0"

# File-based apps
#:package YoloDotNet@4.1.0

# Cake addin / tool
#addin nuget:?package=YoloDotNet&version=4.1.0
#tool nuget:?package=YoloDotNet&version=4.1.0
```
<img src="https://github.com/NickSwardh/YoloDotNet/assets/35733515/994287a9-556c-495f-8acf-1acae8d64ac0" height=24> YoloDotNet 🚀
Blazing-fast, production-ready YOLO inference for .NET
YoloDotNet is a modular, lightweight C# library for real-time computer vision and YOLO-based inference in .NET.
It provides high-performance inference for modern YOLO model families (YOLOv5u through YOLOv26, YOLO-World, YOLO-E, and RT-DETR), with explicit control over execution, memory, and preprocessing.
Built on .NET 8, ONNX Runtime, and SkiaSharp, YoloDotNet intentionally
avoids heavy computer vision frameworks such as OpenCV.
There is no Python runtime, no hidden preprocessing, and no implicit behavior —
only the components required for fast, predictable inference on Windows,
Linux, and macOS.
No Python. No magic. Just fast, deterministic YOLO — done properly for .NET.
⭐ Why YoloDotNet?
YoloDotNet is designed for developers who need:
- ✅ Pure .NET — no Python runtime, no scripts
- ✅ Real performance — CPU, CUDA / TensorRT, OpenVINO, CoreML, DirectML
- ✅ Explicit configuration — predictable accuracy and memory usage
- ✅ Production readiness — engine caching, long-running stability
- ✅ Large image support — not limited to toy resolutions
- ✅ Multiple vision tasks — detection, OBB, segmentation, pose, classification
Ideal for desktop apps, backend services, and real-time vision pipelines that require deterministic behavior and full control.
🆕 What’s New v4.1
- Added support for the YOLO26 model suite
- Added support for RT-DETR models
- Improved performance across all tasks, with reduced allocation pressure and lower per-frame latency
- Improved performance on video inference
- Relicensed to MIT
📖 Full release history: CHANGELOG.md
🚀 Quick Start
1️⃣ Install the core package
```shell
dotnet add package YoloDotNet
```
2️⃣ Install exactly one execution provider
```shell
# CPU (recommended starting point)
dotnet add package YoloDotNet.ExecutionProvider.Cpu

# Hardware-accelerated execution (choose one)
dotnet add package YoloDotNet.ExecutionProvider.Cuda
dotnet add package YoloDotNet.ExecutionProvider.OpenVino
dotnet add package YoloDotNet.ExecutionProvider.CoreML
dotnet add package YoloDotNet.ExecutionProvider.DirectML
```
💡 Note: The CUDA execution provider includes optional TensorRT acceleration.
No separate TensorRT package is required.
3️⃣ Run object detection
```csharp
using SkiaSharp;
using YoloDotNet;
using YoloDotNet.ExecutionProvider.Cpu;

// Create the inference session with the CPU execution provider
using var yolo = new Yolo(new YoloOptions
{
    ExecutionProvider = new CpuExecutionProvider("model.onnx")
});

// Load an image, run detection, then draw and save the results
using var image = SKBitmap.Decode("image.jpg");
var results = yolo.RunObjectDetection(image, confidence: 0.25, iou: 0.7);

image.Draw(results);
image.Save("result.jpg");
```
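Beyond drawing, each detection can be inspected programmatically. A minimal sketch of consuming the results; the property names `Label`, `Confidence`, and `BoundingBox` are assumptions based on recent YoloDotNet versions, so verify them against the result type shipped with your installed version:

```csharp
using SkiaSharp;
using YoloDotNet;
using YoloDotNet.ExecutionProvider.Cpu;

using var yolo = new Yolo(new YoloOptions
{
    ExecutionProvider = new CpuExecutionProvider("model.onnx")
});

using var image = SKBitmap.Decode("image.jpg");

// Iterate the detections instead of (or before) drawing them.
foreach (var detection in yolo.RunObjectDetection(image, confidence: 0.25, iou: 0.7))
{
    // Label, Confidence, and BoundingBox are assumed member names —
    // check the detection result type for your installed version.
    Console.WriteLine($"{detection.Label.Name}: {detection.Confidence:P1} @ {detection.BoundingBox}");
}
```

This is useful when the detections feed a downstream pipeline (tracking, counting, alerting) rather than an annotated image.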
You’re now running YOLO inference in pure C#.
💡 Important: Accuracy Depends on Configuration
YOLO inference accuracy is not automatic.
Preprocessing settings such as image resize mode, sampling method, and confidence/IoU thresholds must match how the model was trained.
These settings directly control the accuracy–performance tradeoff and should be treated as part of the model itself.
📖 Before tuning models or comparing results, read:
👉 Accuracy & Configuration Guide
Supported Tasks
| Classification | Object Detection | OBB Detection | Segmentation | Pose Estimation |
|---|---|---|---|---|
| <img src="https://user-images.githubusercontent.com/35733515/297393507-c8539bff-0a71-48be-b316-f2611c3836a3.jpg" width=300> | <img src="https://user-images.githubusercontent.com/35733515/273405301-626b3c97-fdc6-47b8-bfaf-c3a7701721da.jpg" width=300> | <img src="https://github.com/NickSwardh/YoloDotNet/assets/35733515/d15c5b3e-18c7-4c2c-9a8d-1d03fb98dd3c" width=300> | <img src="https://github.com/NickSwardh/YoloDotNet/assets/35733515/3ae97613-46f7-46de-8c5d-e9240f1078e6" width=300> | <img src="https://github.com/NickSwardh/YoloDotNet/assets/35733515/b7abeaed-5c00-4462-bd19-c2b77fe86260" width=300> |
| <sub>pexels.com</sub> | <sub>pexels.com</sub> | <sub>pexels.com</sub> | <sub>pexels.com</sub> | <sub>pexels.com</sub> |
📁 Demos
Hands-on examples are available in the demo folder, covering image inference, video streams, GPU acceleration, segmentation, and large-image workflows.
Execution Providers
| Provider | Windows | Linux | macOS | Documentation |
|---|---|---|---|---|
| CPU | ✅ | ✅ | ✅ | CPU README |
| CUDA / TensorRT | ✅ | ✅ | ❌ | CUDA README |
| OpenVINO | ✅ | ✅ | ❌ | OpenVINO README |
| CoreML | ❌ | ❌ | ✅ | CoreML README |
| DirectML | ✅ | ❌ | ❌ | DirectML README |
ℹ️ Only one execution provider package may be referenced.
Mixing providers will cause native runtime conflicts.
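In practice, a project file should reference the core package plus exactly one provider. A sketch of what that looks like (CUDA shown as an example; substitute the provider for your target hardware, and note the version numbers here are illustrative):

```xml
<ItemGroup>
  <PackageReference Include="YoloDotNet" Version="4.1.0" />
  <!-- Exactly one execution provider — mixing providers causes native runtime conflicts -->
  <PackageReference Include="YoloDotNet.ExecutionProvider.Cuda" Version="4.1.0" />
</ItemGroup>
```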
⚡ Performance Characteristics
YoloDotNet focuses on stable, low-overhead inference where runtime cost is dominated by the execution provider and model.
📊 Benchmarks: /test/YoloDotNet.Benchmarks
- Stable latency after warm-up
- Clean scaling from CPU → GPU → TensorRT
- Predictable allocation behavior
- Suitable for real-time and long-running services
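Because the first few inferences include one-time costs (session setup, graph optimization, kernel or engine initialization), latency measurements should discard a short warm-up phase. A minimal benchmarking sketch using the same API as the quick start (the warm-up and run counts are arbitrary choices, not library requirements):

```csharp
using System.Diagnostics;
using SkiaSharp;
using YoloDotNet;
using YoloDotNet.ExecutionProvider.Cpu;

using var yolo = new Yolo(new YoloOptions
{
    ExecutionProvider = new CpuExecutionProvider("model.onnx")
});
using var image = SKBitmap.Decode("image.jpg");

// Warm-up: absorb one-time initialization so it doesn't skew the numbers.
for (var i = 0; i < 5; i++)
    yolo.RunObjectDetection(image, confidence: 0.25, iou: 0.7);

// Measure steady-state latency over many runs.
const int runs = 100;
var sw = Stopwatch.StartNew();
for (var i = 0; i < runs; i++)
    yolo.RunObjectDetection(image, confidence: 0.25, iou: 0.7);
sw.Stop();

Console.WriteLine($"Avg latency: {sw.Elapsed.TotalMilliseconds / runs:F2} ms");
```

For rigorous comparisons across providers, prefer the BenchmarkDotNet-based suite in /test/YoloDotNet.Benchmarks over ad-hoc stopwatch timing.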
🚀 Modular Execution Providers
- Core package is provider-agnostic
- Execution providers are separate NuGet packages
- Native ONNX Runtime dependencies are isolated
Why this matters: fewer conflicts, predictable deployment, and production-safe behavior.
Support YoloDotNet
⭐ Star the repo
💬 Share feedback
🤝 Sponsor development
License
MIT License
Copyright (c) Niklas Swärd
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Why MIT?
YoloDotNet is designed as a low-level, production-grade YOLO inference engine. It does not include pretrained models, training pipelines, or hosted services.
The MIT license was chosen to maximize freedom and clarity for developers and organizations:
- No copyleft requirements
- No network-use clauses
- No restrictions on commercial or proprietary deployment
This makes YoloDotNet suitable for use in enterprise software, embedded systems, backend services, and closed-source products without licensing friction.
Model licensing is entirely separate and is determined by the source and terms of the ONNX models supplied by the user.
Model Licensing & Responsibility
YoloDotNet is licensed under the MIT License and provides an ONNX inference engine for YOLO models exported using Ultralytics YOLO tooling.
This project does not include, distribute, download, or bundle any pretrained models.
Users must supply their own ONNX models.
YOLO ONNX models produced using Ultralytics tooling are typically licensed under AGPL-3.0 or a separate commercial license from Ultralytics.
YoloDotNet does not impose, modify, or transfer any license terms related to user-supplied models.
Users are solely responsible for ensuring that their use of any model complies with the applicable license terms, including requirements related to commercial use, distribution, or network deployment.
| Product | Compatible and additional computed target framework versions |
|---|---|
| .NET | net8.0 is compatible. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed. net9.0 was computed. net9.0-android was computed. net9.0-browser was computed. net9.0-ios was computed. net9.0-maccatalyst was computed. net9.0-macos was computed. net9.0-tvos was computed. net9.0-windows was computed. net10.0 was computed. net10.0-android was computed. net10.0-browser was computed. net10.0-ios was computed. net10.0-maccatalyst was computed. net10.0-macos was computed. net10.0-tvos was computed. net10.0-windows was computed. |
Dependencies (net8.0):
- SkiaSharp (>= 3.119.1)
NuGet packages (7)
Showing the top 5 NuGet packages that depend on YoloDotNet:

- Snet.Yolo.Server — Recognition component: Server (Yolo multi-model management and intelligent recognition service: detection, oriented detection, classification, segmentation, pose)
- YoloDotNet.ExecutionProvider.Cpu — A fully portable CPU-based execution provider for YoloDotNet using ONNX Runtime’s built-in CPU backend. It requires no additional system-level dependencies and works out of the box on Windows, Linux, and macOS. Ideal for development, testing, CI environments, and production scenarios where GPU or NPU acceleration is unavailable. It integrates with YoloDotNet’s modular execution provider architecture introduced in v4.0 and supports all inference tasks, including object detection, segmentation, classification, pose estimation, and OBB detection.
- VL.YoloDotNet — YoloDotNet for VL
- YoloDotNet.ExecutionProvider.Cuda — CUDA and TensorRT execution provider for YoloDotNet, enabling GPU-accelerated inference on NVIDIA hardware using ONNX Runtime. It supports CUDA for general GPU acceleration and optional NVIDIA TensorRT integration for maximum performance, lower latency, and optimized engine execution. Designed for high-throughput and real-time inference workloads on Windows and Linux systems with supported NVIDIA GPUs, and fully compatible with the modular, execution-provider-agnostic architecture introduced in YoloDotNet v4.0.
- YoloDotNet.ExecutionProvider.OpenVino — Enables optimized inference using Intel® OpenVINO™ on supported Intel CPUs, integrated GPUs, and accelerators. It integrates ONNX Runtime with Intel OpenVINO to deliver high-performance, low-latency inference on Intel hardware across Windows and Linux. Ideal for CPU-focused deployments, edge systems, and environments where Intel hardware acceleration is preferred over CUDA-based solutions. Fully modular; only one execution provider should be referenced per project.
GitHub repositories (1)
Showing the top 1 popular GitHub repository that depends on YoloDotNet:

- Webreaper/Damselfly — Damselfly is a server-based photograph management app. Its goal is to index an extremely large collection of images and allow easy search and retrieval of those images using metadata such as IPTC keyword tags, as well as folder and file names. Damselfly includes support for object/face detection.
| Version | Downloads | Last Updated |
|---|---|---|
| 4.2.0 | 1,284 | 2/8/2026 |
| 4.1.0 | 703 | 1/17/2026 |
| 4.0.0 | 2,049 | 12/14/2025 |
| 3.1.1 | 8,000 | 7/31/2025 |
| 3.1.0 | 344 | 7/30/2025 |
| 3.0.0 | 781 | 7/15/2025 |
| 2.3.0 | 8,077 | 3/15/2025 |
| 2.2.0 | 10,052 | 10/13/2024 |
| 2.1.0 | 1,422 | 10/6/2024 |
| 2.0.0 | 3,754 | 7/12/2024 |
| 1.7.0 | 1,211 | 5/2/2024 |
| 1.6.0 | 690 | 4/4/2024 |
| 1.5.0 | 371 | 3/14/2024 |
| 1.4.0 | 2,381 | 3/6/2024 |
| 1.3.0 | 758 | 2/25/2024 |
| 1.2.0 | 420 | 2/5/2024 |
| 1.1.0 | 355 | 1/17/2024 |
| 1.0.0 | 534 | 12/8/2023 |
YoloDotNet v4.1 is a focused update that expands model compatibility and improves runtime performance.
This release adds support for the YOLOv26 model suite and RT-DETR models, extending coverage of modern YOLO-based architectures.
Performance has been improved across all tasks with reduced allocation pressure, lower per-frame latency, and more stable and efficient video inference.
YoloDotNet has also been relicensed to MIT, simplifying adoption and usage in commercial and open-source projects.