YoloDotNet 4.0.0

```shell
dotnet add package YoloDotNet --version 4.0.0
```
<img src="https://github.com/NickSwardh/YoloDotNet/assets/35733515/994287a9-556c-495f-8acf-1acae8d64ac0" height=24> YoloDotNet v4.0
YoloDotNet is a blazing-fast, fully featured C# library for real-time object detection, OBB, segmentation, classification, pose estimation — and tracking — using YOLOv5u–v12, YOLO-World, and YOLO-E models.
Built on .NET 8 and powered by ONNX Runtime, YoloDotNet delivers high-performance inference on Windows, Linux, and macOS, using pluggable execution providers including CPU, CUDA / TensorRT, OpenVINO, and CoreML.
Designed for both flexibility and speed, YoloDotNet supports image and video processing out of the box, including live streams, frame skipping, and fully customizable visualizations.
Execution Providers:
See the table below to choose the right one for your platform.
Supported Tasks:
| Classification | Object Detection | OBB Detection | Segmentation | Pose Estimation |
|---|---|---|---|---|
| <img src="https://user-images.githubusercontent.com/35733515/297393507-c8539bff-0a71-48be-b316-f2611c3836a3.jpg" width=300> | <img src="https://user-images.githubusercontent.com/35733515/273405301-626b3c97-fdc6-47b8-bfaf-c3a7701721da.jpg" width=300> | <img src="https://github.com/NickSwardh/YoloDotNet/assets/35733515/d15c5b3e-18c7-4c2c-9a8d-1d03fb98dd3c" width=300> | <img src="https://github.com/NickSwardh/YoloDotNet/assets/35733515/3ae97613-46f7-46de-8c5d-e9240f1078e6" width=300> | <img src="https://github.com/NickSwardh/YoloDotNet/assets/35733515/b7abeaed-5c00-4462-bd19-c2b77fe86260" width=300> |
| <sub>image from pexels.com</sub> | <sub>image from pexels.com</sub> | <sub>image from pexels.com</sub> | <sub>image from pexels.com</sub> | <sub>image from pexels.com</sub> |
🚀 YoloDotNet v4.0 — Modular Execution, Maximum Performance
ℹ️ Breaking Change (v4.0)
YoloDotNet v4.0 introduces a new modular architecture and is therefore a breaking update. The core YoloDotNet package is now completely execution-provider agnostic and no longer ships with any ONNX Runtime dependencies.
Execution providers (CPU, CUDA, OpenVINO, CoreML, etc.) are now separate NuGet packages and must be referenced explicitly.

What this means for you:
- You must install YoloDotNet and exactly one execution provider
- Execution provider selection and configuration code has changed
- Existing projects upgrading from v3.x will need to update their NuGet references and setup code
- The inference API remains familiar, but provider wiring is now explicit and platform-aware
The upside? Cleaner architecture, fewer native dependency conflicts, and a setup that behaves consistently across platforms.
What's new
✅ Fully modular execution providers
Execution providers have officially moved out and got their own place. YoloDotNet Core now focuses on what to run, not how to run it — leaving all the platform-specific heavy lifting to dedicated NuGet packages. The result? Cleaner projects, fewer native dependency tantrums, and behavior you can actually predict.

✅ New execution providers: OpenVINO & CoreML
Two shiny new execution providers join the lineup:
- Intel OpenVINO — Squeezes the most out of Intel CPUs and iGPUs on Windows and Linux
- Apple CoreML — Taps directly into Apple’s native ML stack for fast, hardware-accelerated inference on macOS

Plug them in, run inference, enjoy the speed — no code changes, no drama.
✅ Improved CUDA execution provider
Smarter and more explicit GPU handling for CUDA and TensorRT, so your NVIDIA hardware does what you expect — and not what it feels like doing today.

✅ Grayscale ONNX model support
Run inference on grayscale-only models when color is just wasted effort. Less data, less work, same results — because sometimes black and white is all you need.

🔄 Dependency updates
- SkiaSharp updated to 3.119.1
- ONNX Runtime (CUDA) updated to 1.23.2
Fresh bits, fewer bugs, better vibes.
Requirements
To use YoloDotNet, you need:
- A YOLO model exported to ONNX format (opset 17)
- The YoloDotNet core NuGet package
- Exactly one execution provider package suitable for your target platform
ONNX Model (opset 17)
All models — including custom-trained models — must be exported to ONNX format with opset 17 for best performance and compatibility.
For instructions on exporting YOLO models to ONNX, see:
https://docs.ultralytics.com/modes/export/#usage-examples
Note: Dynamic models are not supported.
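As a sketch of the export step, assuming the Ultralytics CLI is installed (the model file name here is only an example):

```shell
# Export a YOLO model to ONNX with opset 17 and a fixed input size.
# Dynamic-shape models are not supported by YoloDotNet, so leave dynamic off.
yolo export model=yolov8n.pt format=onnx opset=17 imgsz=640
```

The resulting `.onnx` file is what you pass to the execution provider when instantiating `Yolo`.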
Installation
Install YoloDotNet Core
```shell
dotnet add package YoloDotNet
```

Select an Execution Provider

| Execution Provider | Windows | Linux | macOS | Documentation |
|---|---|---|---|---|
| CPU | ✅ | ✅ | ✅ | CPU README |
| CUDA / TensorRT | ✅ | ✅ | ❌ | CUDA README |
| OpenVINO | ✅ | ✅ | ❌ | OpenVINO README |
| CoreML | ❌ | ❌ | ✅ | CoreML README |

ℹ️ Important limitation
Only one execution provider package may be referenced.
Execution providers ship their own ONNX Runtime native binaries, and mixing providers will overwrite shared DLLs and cause runtime errors.
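As a minimal sketch, a project file pairing the core package with a single provider might look like the following. The `YoloDotNet.ExecutionProvider.Cpu` package id appears elsewhere on this page; the provider package version matching the core version is an assumption — check NuGet for the actual versions:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- Core library: execution-provider agnostic, no native ONNX Runtime bits -->
    <PackageReference Include="YoloDotNet" Version="4.0.0" />
    <!-- Exactly ONE provider package; swap in the CUDA, OpenVINO, or CoreML
         provider instead, but never reference more than one at a time -->
    <PackageReference Include="YoloDotNet.ExecutionProvider.Cpu" Version="4.0.0" />
  </ItemGroup>
</Project>
```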
🚀 Quick Start: Dive into the Demos
Can’t wait to see YoloDotNet in action? The demo projects are the fastest way to get started and explore everything this library can do.
Each demo showcases:
- Classification
- Object Detection
- OBB Detection
- Segmentation
- Pose Estimation
Oh, and it doesn’t stop there — there’s a demo for real-time video inference too! Whether you’re analyzing local video files or streaming live, the demos have you covered.
Each demo is packed with inline comments to help you understand how everything works under the hood. From model setup, execution provider, preprocessing to video streaming and result rendering — it's all there.
Pro tip: For detailed configuration options and usage guidance, check out the comments in the demo source files.
Open the YoloDotNet.Demo projects, build, run, and start detecting like a pro. ✨
Bare Minimum — Get Up and Running in a Snap
Sometimes you just want to see the magic happen without the bells and whistles. Here’s the absolute simplest way to load a model, run inference on an image, and get your detected objects:
```csharp
using SkiaSharp;
using YoloDotNet;
using YoloDotNet.Enums;
using YoloDotNet.Models;
using YoloDotNet.Extensions;
using YoloDotNet.ExecutionProvider.Cpu;

public class Program
{
    // ⚠️ Note: The accuracy of inference results depends heavily on how you configure preprocessing and thresholds.
    // Make sure to read the README section "Accuracy Depends on Configuration":
    // https://github.com/NickSwardh/YoloDotNet/tree/master#%EF%B8%8F-accuracy-depends-on-configuration
    static void Main(string[] args)
    {
        // Instantiate yolo
        using var yolo = new Yolo(new YoloOptions
        {
            // Select an execution provider for your target system
            ExecutionProvider = new CpuExecutionProvider(model: "path/to/model.onnx"),

            // ...other options here
        });

        // Load image
        using var image = SKBitmap.Decode("image.jpg");

        // Run inference
        var results = yolo.RunObjectDetection(image, confidence: 0.25, iou: 0.7);

        image.Draw(results);      // Draw boxes and labels
        image.Save("result.jpg"); // Save – boom, done!
    }
}
```
That’s it! No fuss.
Of course, the real power lies in customizing the pipeline, streaming images/videos, or tweaking models… but this snippet gets you started in seconds.
Want more? Dive into the demos and source code for full examples, from video streams to segmentation and pose estimation.
⚠️ Accuracy Depends on Configuration
The accuracy of your results depends heavily on how you configure preprocessing and thresholds. Even with a correctly trained model, mismatched settings can cause accuracy loss. There is no one-size-fits-all configuration — optimal values depend on your dataset, how your model was trained, and your specific application needs.
🔑 Key Factors
- **Image Preprocessing & Resize Mode**
  - Controlled via `ImageResize`.
  - Must match the preprocessing used during training to ensure accurate detections.

  | Proportional dataset | Stretched dataset |
  |:------------:|:---------:|
  |<img width="160" height="107" alt="proportional" src="https://github.com/user-attachments/assets/e95a2c5c-8032-4dee-a05a-a73b062a4965" />|<img width="160" height="160" alt="stretched" src="https://github.com/user-attachments/assets/90fa31cf-89dd-41e4-871c-76ae3215171f" />|
  |Use `ImageResize.Proportional` (default) if the dataset images were not distorted during training and their aspect ratio was preserved.|Use `ImageResize.Stretched` if the dataset images were resized directly to the model’s input size (e.g., 640x640), ignoring the original aspect ratio.|

  - Important: Selecting the wrong resize mode can reduce detection accuracy.

- **Sampling Options**
  - Controlled via `SamplingOptions`.
  - Defines how pixel data is resampled when resizing (e.g., `Cubic`, `NearestNeighbor`, `Bilinear`). This choice has a direct impact on the accuracy of your detections, as different resampling methods can slightly alter object shapes and edges.
  - YoloDotNet default: `SamplingOptions = new SKSamplingOptions(SKFilterMode.Nearest, SKMipmapMode.None);`
  - 💡 Tip: Check the ResizeImage Benchmarks for examples of different `SamplingOptions` and to help you choose the best settings for your needs.

- **Confidence & IoU Thresholds**
  - Results are filtered based on thresholds you set during inference:
    - Confidence → Minimum probability for a detection to count.
    - IoU (Intersection-over-Union) → Overlap required to merge/suppress detections.
  - Too low → more false positives. Too high → more missed detections.
  - Fine-tune these values for your dataset and application.
💡 Tip: Start with the defaults, then adjust ImageResize, SamplingOptions, and Confidence/IoU thresholds based on your dataset for optimal detection results.
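To tie these settings together, here is a hedged configuration sketch. `ImageResize`, `SamplingOptions`, the `CpuExecutionProvider` constructor, and `RunObjectDetection` all appear in the snippets on this page; treat the exact property and enum names beyond those as assumptions to verify against the demo projects:

```csharp
using SkiaSharp;
using YoloDotNet;
using YoloDotNet.Enums;
using YoloDotNet.Models;
using YoloDotNet.ExecutionProvider.Cpu;

// A minimal, illustrative configuration sketch (not a definitive setup).
using var yolo = new Yolo(new YoloOptions
{
    ExecutionProvider = new CpuExecutionProvider(model: "path/to/model.onnx"),

    // Match the preprocessing used during training:
    // Proportional = aspect ratio preserved; Stretched = resized straight to e.g. 640x640.
    ImageResize = ImageResize.Proportional,

    // Resampling method used when resizing (the library default is shown here).
    SamplingOptions = new SKSamplingOptions(SKFilterMode.Nearest, SKMipmapMode.None)
});

// Thresholds are passed per inference call; tune them for your dataset.
using var image = SKBitmap.Decode("image.jpg");
var results = yolo.RunObjectDetection(image, confidence: 0.25, iou: 0.7);
```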
Make It Yours – Customize the Look
Want to give your detections a personal touch? Go ahead! If you're drawing bounding boxes on-screen, there’s full flexibility to style them just the way you like:
- Custom Colors – Use the built-in class-specific colors or define your own for every bounding box.
- Font Style & Size – Choose your favorite font, set the size, and even change the color for the labels.
- Custom Fonts – Yep, you can load your own font files to give your overlay a totally unique feel.
If that's not enough, check out the extension methods in the main YoloDotNet repository — a solid boilerplate for building even deeper customizations tailored exactly to your needs.
For practical examples on drawing and customization, don’t forget to peek at the demo project source code too!
Support YoloDotNet
YoloDotNet is the result of countless hours of development, testing, and continuous improvement — all offered freely to the community.
If you’ve found my project helpful, consider supporting its development. Your contribution helps cover the time and resources needed to keep the project maintained, updated, and freely available to everyone.
Support the project:
Whether it's a donation, sponsorship, or just spreading the word — every bit of support fuels the journey. Thank you for helping YoloDotNet grow! ❤️
References & Acknowledgements
https://github.com/ultralytics/ultralytics
https://github.com/sstainba/Yolov8.Net
https://github.com/mentalstack/yolov5-net
License
YoloDotNet is © 2023–2025 Niklas Swärd (GitHub)
Licensed under the GNU General Public License v3.0 or later.
See the LICENSE file for the full license text.
This software is provided “as is”, without warranty of any kind.
The author is not liable for any damages arising from its use.
| Product | Compatible frameworks |
|---|---|
| .NET | net8.0 is compatible. net9.0, net10.0, and the platform-specific target frameworks (android, browser, ios, maccatalyst, macos, tvos, windows) were computed. |

Dependencies (net8.0):
- SkiaSharp (>= 3.119.1)
NuGet packages (6)
Showing the top 5 NuGet packages that depend on YoloDotNet:

| Package | Description |
|---|---|
| Snet.Yolo.Server | Recognition component: Server (Yolo multi-model management and intelligent recognition service: detection, oriented detection, classification, segmentation, pose) |
| VL.YoloDotNet | YoloDotNet for VL |
| YoloDotNet.ExecutionProvider.Cpu | A fully portable CPU-based execution provider for YoloDotNet using ONNX Runtime’s built-in CPU backend. Requires no additional system-level dependencies and works out of the box on Windows, Linux, and macOS. Ideal for development, testing, CI environments, and production scenarios where GPU or NPU acceleration is unavailable. Supports all inference tasks, including object detection, segmentation, classification, pose estimation, and OBB detection. |
| YoloDotNet.ExecutionProvider.Cuda | CUDA and TensorRT execution provider for YoloDotNet, enabling GPU-accelerated inference on NVIDIA hardware using ONNX Runtime. Supports CUDA for general GPU acceleration and optional NVIDIA TensorRT integration for maximum performance, lower latency, and optimized engine execution. Designed for high-throughput and real-time inference workloads on Windows and Linux systems with supported NVIDIA GPUs. |
| YoloDotNet.ExecutionProvider.CoreML | Hardware-accelerated inference on macOS using Apple’s native Core ML framework via ONNX Runtime’s CoreML backend. No additional runtimes, drivers, or SDKs are required beyond macOS itself. Ideal for macOS applications that require fast, low-power inference using Apple’s native ML stack. |
GitHub repositories (1)
Showing the top 1 popular GitHub repositories that depend on YoloDotNet:

| Repository | Description |
|---|---|
| Webreaper/Damselfly | Damselfly is a server-based photograph management app. Its goal is to index an extremely large collection of images and allow easy search and retrieval of those images, using metadata such as IPTC keyword tags as well as folder and file names. Damselfly includes support for object/face detection. |
| Version | Downloads | Last Updated |
|---|---|---|
| 4.0.0 | 384 | 12/14/2025 |
| 3.1.1 | 6,773 | 7/31/2025 |
| 3.1.0 | 276 | 7/30/2025 |
| 3.0.0 | 696 | 7/15/2025 |
| 2.3.0 | 7,473 | 3/15/2025 |
| 2.2.0 | 9,571 | 10/13/2024 |
| 2.1.0 | 1,306 | 10/6/2024 |
| 2.0.0 | 3,613 | 7/12/2024 |
| 1.7.0 | 1,145 | 5/2/2024 |
| 1.6.0 | 634 | 4/4/2024 |
| 1.5.0 | 344 | 3/14/2024 |
| 1.4.0 | 2,137 | 3/6/2024 |
| 1.3.0 | 710 | 2/25/2024 |
| 1.2.0 | 384 | 2/5/2024 |
| 1.1.0 | 322 | 1/17/2024 |
| 1.0.0 | 488 | 12/8/2023 |
YoloDotNet v4.0 is a major release that introduces a new modular architecture and significantly improves cross-platform support and maintainability.
Execution providers have been fully decoupled from YoloDotNet Core. The core package is now execution-provider agnostic and no longer ships with any ONNX Runtime native dependencies. CPU, CUDA, OpenVINO, and CoreML execution providers are distributed as separate NuGet packages and must be referenced explicitly. Exactly one execution provider should be used per project.
This architectural change is a breaking update for users upgrading from v3.x, as project setup and execution provider configuration have changed. The upside is a much cleaner architecture, fewer native dependency conflicts, and predictable behavior across platforms.
New execution providers have been added in this release. Intel OpenVINO enables optimized inference on Intel CPUs and integrated GPUs on Windows and Linux. Apple CoreML enables native, hardware-accelerated inference on macOS using Apple’s Core ML framework.
The CUDA execution provider has been improved with more explicit GPU allocation, resulting in smoother and more predictable CUDA and TensorRT performance on NVIDIA hardware.
Support for grayscale-only ONNX models has been added, allowing inference on models exported specifically for single-channel input where color data is unnecessary.
Dependencies have been updated. SkiaSharp has been updated to version 3.119.1, and ONNX Runtime for CUDA has been updated to version 1.23.2.
For detailed installation instructions and execution provider setup, see the execution providers README in the repository.