DotTorch.Core
9.0.17
See the version list below for details.
Installation (choose the client you use):

- .NET CLI: `dotnet add package DotTorch.Core --version 9.0.17`
- Package Manager Console: `NuGet\Install-Package DotTorch.Core -Version 9.0.17`
- PackageReference: `<PackageReference Include="DotTorch.Core" Version="9.0.17" />`
- Central Package Management: `<PackageVersion Include="DotTorch.Core" Version="9.0.17" />` with `<PackageReference Include="DotTorch.Core" />`
- Paket CLI: `paket add DotTorch.Core --version 9.0.17`
- F# Interactive / Polyglot Notebooks: `#r "nuget: DotTorch.Core, 9.0.17"`
- File-based apps: `#:package DotTorch.Core@9.0.17`
- Cake Addin: `#addin nuget:?package=DotTorch.Core&version=9.0.17`
- Cake Tool: `#tool nuget:?package=DotTorch.Core&version=9.0.17`
DotTorch.Core: a modular core for tensor operations and automatic differentiation
DotTorch.Core is a modern, high-performance core library for multidimensional tensor operations and automatic differentiation on the .NET platform. This package is designed for developers involved in machine learning and scientific computing, providing a simple yet flexible API.
Key features:
- Support for multidimensional tensors with arbitrary shapes.
- Advanced broadcasting for operations and activation functions.
- Arithmetic operations: addition, multiplication, matrix multiplication, power.
- Popular activation functions: ReLU, Sigmoid, Tanh, SoftMax.
- Loss functions: MSE, CrossEntropy.
- Automatic differentiation with computation graph and backward pass support.
- Sum, mean, max, and min operations along specified axes.
- Efficient shape manipulation methods without data copying (reshape, view, slice).
- Comprehensive testing coverage ensuring stability and reliability.
DotTorch.Core enables you to build and train neural networks, implement complex computations, and extend functionality for specific tasks while leveraging the .NET ecosystem.
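The feature list above mentions automatic differentiation with a computation graph and backward pass. This page does not show DotTorch.Core's actual API, so here is a minimal, language-neutral Python sketch of the underlying idea (reverse-mode autodiff over scalars); the `Value` class and its method names are illustrative, not DotTorch.Core identifiers:

```python
# Minimal reverse-mode autodiff sketch (illustrative, not DotTorch.Core code).
# Each operation records its inputs and a local backward rule; backward()
# replays the graph in reverse topological order, applying the chain rule.

class Value:
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def rule():
            self.grad += out.grad       # d(a+b)/da = 1
            other.grad += out.grad      # d(a+b)/db = 1
        out._backward = rule
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def rule():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = rule
        return out

    def relu(self):
        out = Value(max(0.0, self.data), (self,))
        def rule():
            self.grad += (out.data > 0) * out.grad  # gradient passes only if active
        out._backward = rule
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x = Value(3.0)
y = Value(-2.0)
z = (x * y + x).relu()   # relu(3*-2 + 3) = relu(-3) = 0
z.backward()
print(z.data, x.grad)    # 0.0 0.0 (gradient blocked by the inactive ReLU)
```

Tensor-valued autodiff works the same way, with each operation's backward rule producing gradient tensors (and, under broadcasting, summing gradients over the broadcast axes).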
Product | Compatible and additional computed target framework versions
---|---
.NET | Compatible: net8.0, net9.0. Computed: net10.0, plus the -android, -browser, -ios, -maccatalyst, -macos, -tvos, and -windows variants of net8.0, net9.0, and net10.0.
Dependencies

- net8.0: No dependencies.
- net9.0: No dependencies.
NuGet packages (3)
Showing the top 3 NuGet packages that depend on DotTorch.Core:
DotTorch.Losses
DotTorch.Losses is a dedicated .NET 8/9 library providing a comprehensive set of loss functions for deep learning and machine learning tasks. It integrates seamlessly with DotTorch.Core, enabling robust automatic differentiation and efficient tensor operations. The initial 9.0.0 release introduces key loss primitives such as MSE, Cross-Entropy, Binary Cross-Entropy, Huber, KL Divergence, NLL, and Hinge Loss, with full support for broadcasting and reduction options.

DotTorch.Layers
DotTorch.Layers is a high-performance, modular neural-network layers library for .NET 8 and .NET 9. It includes core layers such as Linear, ReLU, Sequential, Dropout, Embedding, Sigmoid, SoftMax, Tanh, LeakyReLU, GELU, ELU, and Flatten. Recurrent layers (RNN, LSTM, GRU) and Transformer layers are also implemented, along with normalization layers: LayerNorm (currently not optimized) and BatchNorm (optimized, with LayerNorm-mode support). All layers integrate with the DotTorch.Core autograd system, enabling automatic differentiation and backpropagation, and are designed for ease of use, extensibility, and efficient execution on CPU and GPU.

DotTorch.Optimizers
DotTorch.Optimizers provides first-class implementations of gradient-based optimization algorithms for training neural networks in .NET 8 and .NET 9 environments. The library includes essential optimizers such as SGD, Momentum, RMSprop, Adam, and more. It is fully compatible with DotTorch.Core and supports dynamic computation graphs, automatic differentiation, and batched parameter updates. Optimizers can be integrated into training loops and customized for research and production use; the library is designed for extensibility, testability, and high-performance execution on CPU and GPU.
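The optimizer package above lists standard algorithms (SGD, Momentum, RMSprop, Adam) whose update rules are well established. As a neutral illustration of one of them, rather than DotTorch.Optimizers' actual API, here is the SGD-with-momentum update sketched in Python:

```python
def sgd_momentum_step(params, grads, velocity, lr=0.01, momentum=0.9):
    """One in-place SGD-with-momentum update: v = m*v + g; p = p - lr*v.

    Illustrative only; parameter and function names are not DotTorch identifiers.
    """
    for i in range(len(params)):
        velocity[i] = momentum * velocity[i] + grads[i]
        params[i] -= lr * velocity[i]
    return params, velocity

# One step on two parameters with fixed gradients.
params = [1.0, -2.0]
velocity = [0.0, 0.0]
grads = [0.5, -1.0]
sgd_momentum_step(params, grads, velocity, lr=0.1)
print(params)
```

The momentum term accumulates an exponential moving sum of past gradients, which damps oscillation and speeds progress along consistent descent directions.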
GitHub repositories
This package is not used by any popular GitHub repositories.
Version | Downloads | Last Updated
---|---|---
9.2.4 | 204 | 7/14/2025
9.2.3 | 256 | 7/14/2025
9.2.2 | 230 | 7/14/2025
9.2.1 | 238 | 7/13/2025
9.2.0 | 232 | 7/13/2025
9.1.0 | 195 | 7/13/2025
9.0.21 | 136 | 7/13/2025
9.0.20 | 198 | 7/13/2025
9.0.19 | 198 | 7/12/2025
9.0.18 | 169 | 7/12/2025
9.0.17 | 251 | 7/10/2025
9.0.16 | 311 | 7/10/2025
9.0.15 | 406 | 7/9/2025
9.0.14 | 395 | 7/9/2025
9.0.13 | 399 | 7/9/2025
9.0.12 | 395 | 7/9/2025
9.0.11 | 391 | 7/9/2025
9.0.10 | 396 | 7/9/2025
9.0.9 | 393 | 7/9/2025
9.0.8 | 397 | 7/9/2025
9.0.7 | 395 | 7/8/2025
9.0.6 | 392 | 7/8/2025
9.0.5 | 400 | 7/8/2025
9.0.4 | 398 | 7/8/2025
9.0.3 | 389 | 7/8/2025
9.0.2 | 393 | 7/8/2025
9.0.1 | 393 | 7/8/2025
9.0.0 | 434 | 7/8/2025
Release notes:
- Added `Tensor.Concat` and `Tensor.SetSlice`, both implemented through the backend.
- Moved `Tensor.Reshape` into `CpuBackend`.
- Slice overloads now route through `BackendRegistry`.
- Fixed `Tensor.Slice` behavior when indexing with a list of indices: `Slice(int[] indices)` now correctly returns a scalar, while `Slice(int axis, int start, int length)` returns slices. This resolves overload ambiguity and prevents incorrectly shaped results.
- Optimized `UnravelIndex` for high-frequency calls by reusing index arrays and reducing per-call computation, improving broadcasting efficiency.
- Optimized broadcasting operations (Add, Mul, Sub, Div, etc.) to reduce allocations and redundant stride calculations.
- Introduced a reusable index array `idx` to lower garbage-collection pressure and improve cache locality.
- Refactored the batched MatMul loop with precomputed strides and offsets for maximum performance.
- Rewrote `Transpose` using a new `UnravelIndex` overload that accepts strides and reuses the index array, minimizing allocations and boosting performance.
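The stride bookkeeping these notes describe is a standard technique: an element's flat offset is decomposed into per-axis indices by integer division with precomputed row-major strides. As a hedged Python sketch of the idea (function names here are illustrative, not DotTorch.Core's actual code), including the reusable index buffer the notes mention:

```python
def compute_strides(shape):
    """Row-major strides: the last axis has stride 1."""
    strides = [1] * len(shape)
    for axis in range(len(shape) - 2, -1, -1):
        strides[axis] = strides[axis + 1] * shape[axis + 1]
    return strides

def unravel_index(flat, strides, idx):
    """Fill the reusable buffer `idx` with per-axis indices for offset `flat`.

    Reusing one buffer across many calls avoids allocating a fresh array per
    element, which is the GC-pressure optimization the release notes describe.
    """
    for axis, stride in enumerate(strides):
        idx[axis] = flat // stride
        flat %= stride
    return idx

shape = (2, 3, 4)
strides = compute_strides(shape)       # [12, 4, 1]
idx = [0] * len(shape)                 # reusable index buffer
print(unravel_index(17, strides, idx)) # [1, 1, 1]
```

Broadcasting and transpose kernels call this decomposition once per output element, so hoisting the stride computation out of the loop and reusing `idx` removes per-element allocations entirely.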