DotTorch.Core 9.0.16

There is a newer version of this package available.
See the version list below for details.
dotnet add package DotTorch.Core --version 9.0.16
                    
NuGet\Install-Package DotTorch.Core -Version 9.0.16
                    
This command is intended to be used within the Package Manager Console in Visual Studio, as it uses the NuGet module's version of Install-Package.
<PackageReference Include="DotTorch.Core" Version="9.0.16" />
                    
For projects that support PackageReference, copy this XML node into the project file to reference the package.
For projects that support Central Package Management (CPM), copy these XML nodes into the solution Directory.Packages.props file and the project file, respectively, to version and reference the package.

Directory.Packages.props
<PackageVersion Include="DotTorch.Core" Version="9.0.16" />

Project file
<PackageReference Include="DotTorch.Core" />

paket add DotTorch.Core --version 9.0.16
                    
#r "nuget: DotTorch.Core, 9.0.16"
                    
The #r directive can be used in F# Interactive and Polyglot Notebooks. Copy this into the interactive tool or source code of the script to reference the package.
#:package DotTorch.Core@9.0.16
                    
The #:package directive can be used in C# file-based apps starting in .NET 10 preview 4. Copy this into a .cs file before any lines of code to reference the package.
#addin nuget:?package=DotTorch.Core&version=9.0.16
                    
Install as a Cake Addin
#tool nuget:?package=DotTorch.Core&version=9.0.16
                    
Install as a Cake Tool

DotTorch.Core: a modular core for working with tensors and automatic differentiation


DotTorch.Core is a modern, high-performance core library for multidimensional tensor operations and automatic differentiation on the .NET platform. This package is designed for developers involved in machine learning and scientific computing, providing a simple yet flexible API.

Key features:

  • Support for multidimensional tensors with arbitrary shapes.
  • Advanced broadcasting for operations and activation functions.
  • Arithmetic operations: addition, multiplication, matrix multiplication, power.
  • Popular activation functions: ReLU, Sigmoid, Tanh, SoftMax.
  • Loss functions: MSE, CrossEntropy.
  • Automatic differentiation with computation graph and backward pass support.
  • Sum, mean, max, and min operations along specified axes.
  • Efficient shape manipulation methods without data copying (reshape, view, slice).
  • Comprehensive testing coverage ensuring stability and reliability.

DotTorch.Core enables building and training neural networks, implementing complex computations, and easily extending functionality by leveraging the .NET ecosystem.
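
As a rough, illustrative sketch of how these features might fit together in code: only `Tensor.FromArray` and `Reshape` are confirmed elsewhere on this page (see the release notes below); the arithmetic, activation, loss, and autograd member names used here (`MatMul`, `Relu`, `Mse`, `RequiresGrad`, `Backward`, `Grad`) are assumptions made for illustration, not the package's documented API.

```csharp
using System;
using DotTorch.Core;

// Tiny "one weight matrix" example. Tensor.FromArray and Reshape come from the
// release notes below; members marked "assumed" are illustrative guesses only.
var x = Tensor.FromArray(new float[] { 1, 2, 3, 4 }, new[] { 2, 2 });
var w = Tensor.FromArray(new float[] { 0.5f, -0.5f, 0.25f, 0.75f }, new[] { 2, 2 });
var target = Tensor.FromArray(new float[] { 1, 0, 0, 1 }, new[] { 2, 2 });

w.RequiresGrad = true;            // assumed: mark the weights for gradient tracking

var hidden = x.MatMul(w);         // assumed: matrix multiplication (listed as a feature)
var activated = hidden.Relu();    // assumed: ReLU activation (listed as a feature)
var loss = activated.Mse(target); // assumed: MSE loss producing a scalar tensor

loss.Backward();                  // assumed: backward pass over the computation graph
Console.WriteLine(loss);          // gradients would now be available on w (e.g. w.Grad)
```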



Compatible and additional computed target framework versions

Product: .NET
Compatible: net8.0, net9.0
Computed: net8.0-android, net8.0-browser, net8.0-ios, net8.0-maccatalyst, net8.0-macos, net8.0-tvos, net8.0-windows, net9.0-android, net9.0-browser, net9.0-ios, net9.0-maccatalyst, net9.0-macos, net9.0-tvos, net9.0-windows, net10.0, net10.0-android, net10.0-browser, net10.0-ios, net10.0-maccatalyst, net10.0-macos, net10.0-tvos, net10.0-windows

Learn more about Target Frameworks and .NET Standard.

Dependencies (target frameworks included in the package)
  • net8.0: No dependencies.
  • net9.0: No dependencies.

NuGet packages (3)

Showing the top 3 NuGet packages that depend on DotTorch.Core:

DotTorch.Losses

DotTorch Losses is the dedicated .NET 8/9 library providing a comprehensive set of loss functions for deep learning and machine learning tasks. This package integrates seamlessly with DotTorch.Core, enabling robust automatic differentiation and efficient tensor operations. The initial 9.0.0 release introduces key loss primitives such as MSE, Cross-Entropy, Binary Cross-Entropy, Huber, KL Divergence, NLL, and Hinge Loss with full support for broadcasting and reduction options.

DotTorch.Layers

DotTorch Layers is a high-performance, modular neural network layers library for .NET 8 and .NET 9. It includes core layers such as Linear, ReLU, Sequential, Dropout, Embedding, Sigmoid, SoftMax, Tanh, LeakyReLU, GELU, ELU, and Flatten. Advanced recurrent layers like RNN, LSTM, and GRU are also implemented, along with powerful Transformer layers. The package features normalization layers: LayerNorm (currently not optimized) and BatchNorm (optimized, with LayerNorm mode support). All layers seamlessly integrate with the DotTorch.Core autograd system, enabling automatic differentiation and backpropagation. Designed for ease of use, extensibility, and efficient execution on CPU and GPU devices. This library supports modern .NET frameworks and follows best practices for maintainability and performance in machine learning model construction.

DotTorch.Optimizers

DotTorch.Optimizers provides first-class implementations of gradient-based optimization algorithms for training neural networks in .NET 8 and .NET 9 environments. The library includes essential optimizers such as SGD, Momentum, RMSprop, Adam, and more. It is fully compatible with DotTorch.Core and supports dynamic computation graphs, automatic differentiation, and batched parameter updates. Optimizers can be seamlessly integrated into training loops and customized for research and production use. Designed with extensibility, testability, and high-performance execution in mind, it empowers developers to efficiently train deep learning models on CPU and GPU.
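
Taken together, these package descriptions imply the usual layers, loss, and optimizer training loop on top of DotTorch.Core's autograd. The sketch below shows that composition under stated assumptions: the type names `Sequential`, `Linear`, `ReLU`, and `SGD` come from the descriptions above, the namespaces are assumed to match the package IDs, and every constructor and method signature (`Parameters`, `Forward`, `ZeroGrad`, `Step`, `Backward`, `MSELoss`) is assumed for illustration.

```csharp
using DotTorch.Core;
using DotTorch.Layers;      // type names below are taken from the package descriptions
using DotTorch.Losses;
using DotTorch.Optimizers;

// Hypothetical training loop; the signatures are not confirmed by this page and
// only mirror the common layers/loss/optimizer pattern the descriptions imply.
var model = new Sequential(
    new Linear(4, 8),
    new ReLU(),
    new Linear(8, 1));

var optimizer = new SGD(model.Parameters(), learningRate: 0.01f); // assumed constructor
var lossFn = new MSELoss();                                       // assumed loss type

var x = Tensor.FromArray(new float[] { 1, 2, 3, 4 }, new[] { 1, 4 });
var y = Tensor.FromArray(new float[] { 1 }, new[] { 1, 1 });

for (int epoch = 0; epoch < 100; epoch++)
{
    var prediction = model.Forward(x);        // assumed forward pass
    var loss = lossFn.Forward(prediction, y); // assumed loss evaluation

    optimizer.ZeroGrad();                     // assumed: clear accumulated gradients
    loss.Backward();                          // DotTorch.Core autograd backward pass
    optimizer.Step();                         // assumed: apply the parameter update
}
```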

GitHub repositories

This package is not used by any popular GitHub repositories.

Version  Downloads  Last Updated
9.2.4    203        7/14/2025
9.2.3    255        7/14/2025    (deprecated)
9.2.2    229        7/14/2025    (deprecated)
9.2.1    237        7/13/2025    (deprecated)
9.2.0    231        7/13/2025    (deprecated)
9.1.0    194        7/13/2025    (deprecated)
9.0.21   135        7/13/2025    (deprecated)
9.0.20   197        7/13/2025    (deprecated)
9.0.19   197        7/12/2025    (deprecated)
9.0.18   168        7/12/2025    (deprecated)
9.0.17   250        7/10/2025    (deprecated)
9.0.16   310        7/10/2025    (deprecated)
9.0.15   405        7/9/2025     (deprecated)
9.0.14   394        7/9/2025     (deprecated)
9.0.13   398        7/9/2025     (deprecated)
9.0.12   394        7/9/2025     (deprecated)
9.0.11   390        7/9/2025     (deprecated)
9.0.10   395        7/9/2025     (deprecated)
9.0.9    392        7/9/2025     (deprecated)
9.0.8    396        7/9/2025     (deprecated)
9.0.7    394        7/8/2025     (deprecated)
9.0.6    390        7/8/2025     (deprecated)
9.0.5    399        7/8/2025     (deprecated)
9.0.4    397        7/8/2025     (deprecated)
9.0.3    388        7/8/2025     (deprecated)
9.0.2    392        7/8/2025     (deprecated)
9.0.1    392        7/8/2025     (deprecated)
9.0.0    433        7/8/2025     (deprecated)

All versions marked (deprecated) are deprecated because they are no longer maintained and have critical bugs.

## Release Notes for DotTorch.Core 9.0.16

This release introduces significant improvements in tensor shape manipulation, slicing, and broadcasting with enhanced autograd support.

**Key features:**
- Reshape/View with automatic dimension inference using `-1`.
- Explicit tensor broadcasting via `BroadcastToShape`.
- Flexible slicing API: slice by axis, multiple axes, or explicit indices.
- Improved autograd tracking with correct `GradNode` propagation.
- Backend-agnostic design preserving device, dtype, and gradient requirements.

**Usage examples:**

```csharp
// Reshape with automatic dimension inference
var x = Tensor.FromArray(new float[] {1, 2, 3, 4}, new[] {4});
var y = x.Reshape(2, -1); // shape: [2, 2]

// Broadcast tensor
var a = Tensor.FromArray(new float[] {1, 2}, new[] {2});
var b = a.BroadcastToShape(new[] {2, 2});

// Slice by axis
var c = Tensor.FromArray(new float[] {1, 2, 3, 4}, new[] {2, 2});
var d = c.Slice((0, 1, 1)); // slice along axis 0, start=1, length=1

// Slice by indices, returns scalar tensor
var e = c.Slice(1, 0); // returns scalar tensor with value 3
```
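
The `GradNode` propagation and gradient-requirement points above are not covered by the examples, so here is a hedged sketch of what correct propagation through a view and a broadcast would look like. Only `Tensor.FromArray`, `Reshape`, and `BroadcastToShape` are taken from the examples above; `RequiresGrad`, `Sum`, `Backward`, and `Grad` are assumed names for the autograd surface.

```csharp
// Gradient flow through a view + broadcast chain (illustrative only).
var p = Tensor.FromArray(new float[] {1, 2, 3, 4}, new[] {4});
p.RequiresGrad = true;                                // assumed

var view = p.Reshape(4, -1);                          // shape [4, 1], no data copy
var broadcast = view.BroadcastToShape(new[] {4, 2});  // each value now appears twice

var total = broadcast.Sum();                          // assumed reduction to a scalar
total.Backward();                                     // assumed backward pass

// With correct GradNode propagation, p.Grad should be [2, 2, 2, 2]:
// every element of p contributes to two positions of the broadcast tensor.
```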
